LKML archives on lore.kernel.org

Post Syndicated from jake original https://lwn.net/Articles/758034/rss

A new archive of linux-kernel mailing list (LKML) posts going back to 1998 is now available at lore.kernel.org. It is based on
public-inbox (which we looked at back in February). Among other things, public-inbox allows retrieving the entire archive via Git: “Git clone URLs are provided at the bottom of each page. Note, that due to its volume, the LKML archive is sharded into multiple repositories, each roughly 1GB in size. In addition to cloning from lore.kernel.org, you may also access these repositories on git.kernel.org.” The full announcement, which includes information about a new Patchwork instance as well as ways to link into the new archive, can be found on kernel.org.

The Effects of Iran’s Telegram Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/the_effects_of_4.html

The Center for Human Rights in Iran has released a report outlining the effects of that country’s ban on Telegram, a secure messaging app used by about half of the country.

The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran’s elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.

From a Wired article:

Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.

It’s interesting that the analysis doesn’t really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.

Fancy making a motion-tracking eye in a jar?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/motion-tracking-eye-in-a-jar/

Using motion detection and a Raspberry Pi Zero W, Lukas Stratmann has produced this rather creepy moving eye in a jar. And with a little bit of, ahem, dissection, you can too!

Floating Eye in a Jar With Motion Tracking

Made for an Arts seminar I attended for my General Studies, i.e. classes not organized by the faculty for CompSci: “Interaktive Exponate entwickeln mit dem RaspberryPi” (translation: developing interactive exhibits with the RaspberryPi). Music: Rise by Meydän: CC-BY http://freemusicarchive.org/music/Meydan/For_Creators/Rise_1709 I embedded some neodymium magnets in a ping-pong ball that I’d cut open.

Eww!

We hear you. Among the Raspberry Pi projects we’ve shared on this blog, Lukas’s eye in a jar is definitely one of the eww-est. But the idea and the tech behind it are quite fascinating.

Here’s what we know…

Lukas hasn’t shared the code for his project online. But with a bit of sleuthing, we’re sure the Raspberry Pi community can piece it together.

What we do know is that the project uses a Raspberry Pi Zero W, a camera, some magnets, a servo, and a ping pong ball, with a couple of 3D-printed parts to keep everything in place. Lukas has explained:

I embedded some neodymium magnets in a ping-pong ball that I’d cut open. The magnets and weights (two 20 Euro cent coins) are held in place by a custom 3D-printed mount. Everything is glued in with hot glue, and I sealed the ping-pong ball with silicone sealant and painted it with acrylic paint.

Beneath the jar, a servo motor is connected to a second set of magnets. When the servo moves, these magnets cause the eyeball to move in tandem, by magnet magic.


Using a motion-tracking tutorial he found online, Lukas incorporated motion detection into his project, allowing the camera to track passers-by, and the Pi to direct the servo and eyeball.

Build your own eye in a jar

One of the best skills a maker can have is the ability to figure out how things work in order to recreate them. So if you’re up for the challenge, we’d love to see you try to build your own tribute to Lukas’s eye in a jar; the sketch below might get you started.
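
Here’s a minimal sketch of the motion-tracking half (our guess, not Lukas’s actual code, which isn’t public): it diffs successive Pi camera frames and nudges a servo toward whatever moved. The GPIO pin and the thresholds are assumptions you’d tune for your own build.

import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray
from gpiozero import Servo

servo = Servo(17)   # assumed GPIO pin for the magnet servo
camera = PiCamera(resolution=(320, 240), framerate=15)
raw = PiRGBArray(camera, size=(320, 240))

prev = None
for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    gray = frame.array.mean(axis=2)        # cheap grayscale conversion
    if prev is not None:
        moved = np.abs(gray - prev) > 25   # pixels that changed noticeably
        if moved.sum() > 200:              # ignore sensor noise
            xs = np.nonzero(moved)[1]      # columns where motion happened
            # map the motion's x-centroid (0..320) onto the servo range -1..1
            servo.value = (xs.mean() / 320.0) * 2 - 1
    prev = gray
    raw.truncate(0)                        # reset the buffer for the next frame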

And why stop there? Using magnets and servos with your Raspberry Pi opens up a world of projects, such as Bethanie’s amazing Harry Potter–inspired wizard chess set!


How would you use them in your builds?

The post Fancy making a motion-tracking eye in a jar? appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 50

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/06/22/timeshiftgrafanabuzz-1w-issue-50/

Welcome to TimeShift

TimeShift is turning 1 year old! I really hope you’ve enjoyed reading these weekly roundups as much as I’ve enjoyed writing them. This week we have news on the new Grafana v5.2.0-beta3 release, a bunch of plugin updates to share, and your regular dose of recent blog posts.
Have an article you’d like included in an upcoming TimeShift? Contact Us.

Latest Beta Release: Grafana 5.2.0-beta3

New Features

  • Dashboard: Import dashboard to folder #10796

Minor Updates

  • Build: All rpm packages should be signed #12359
  • Permissions: Important security fix for API keys with viewer role #12343
  • Dashboard: Fix so panel titles don’t wrap #11074
  • Dashboard: Prevent double-click when saving dashboard #11963
  • Dashboard: Autofocus the add-panel search filter #12189, thx @ryantxu
  • Units: W/m2 (energy), l/h (flow) and kPa (pressure) #11233, thx @flopp999
  • Units: Litre/min (flow) and milliLitre/min (flow) #12282, thx @flopp999
  • Alerting: Fix mobile notifications for Microsoft Teams alert notifier #11484, thx @manacker
  • Influxdb: Add support for mode function #12286
  • Cloudwatch: Fixes panic caused by bad timerange settings #12199
  • Auth Proxy: Whitelist proxy IP address instead of client IP address #10707
  • User Management: Make sure that a user always has a current org assigned #11076
  • Snapshots: Fix annotations not properly extracted, leading to incorrect rendering of annotations #12278
  • LDAP: Allow use of DN in group_search_filter_user_attribute and member_of #3132, thx @mmolnar
  • Graph: Fix legend decimals precision calculation #11792
  • Dashboard: Make sure to process panels in collapsed rows when exporting dashboard #12256

Please try the new beta release out and let us know what you think.

Cheezball Rising: Drawing a sprite

Post Syndicated from Eevee original https://eev.ee/blog/2018/06/21/cheezball-rising-drawing-a-sprite/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

source code • prebuilt ROMs (a week early for $4) • works best with mGBA

In this issue, I figure out how to draw a sprite. This part was hard.

Previously: figuring out how to put literally anything on the goddamn screen.

Recap

Welcome back! I’ve started cobbling together a Pygments lexer for RGBDS’s assembly flavor, so hopefully the code blocks are more readable, and will become more so over time.

When I left off last time, I had… um… this.

Vertical stripes of red, green, blue, and white

This is all on the background layer, which I mentioned before is a fixed grid of 8×8 tiles.

For anything that moves around freely, like the player, I need to use the object layer. So that’s an obvious place to go next.

Now, if you remember, I can define tiles by just writing to video RAM, and I define palettes with a goofy system involving writing them one byte at a time to the same magic address. You might expect defining objects to do some third completely different thing, and you’d be right!

Defining an object

Objects are defined in their own little chunk of RAM called OAM, for object attribute memory. They’re also made up of tiles, but each tile can be positioned at an arbitrary point on the screen.

OAM starts at $fe00 and each object takes four bytes — the y-coordinate, the x-coordinate, the tile number, and some flags — for a total of 160 bytes. There are some curiosities, like how the top left of the screen is (8, 16) rather than (0, 0), but I’ll figure out what’s up with that later. (I suppose if zeroes meant the upper left corner, there’d be a whole stack of tile 0 there all the time.)

Here’s the fun part: I can’t write directly to OAM? I guess??? Come to think of it, I don’t think the manual explicitly says I can’t, but it’s strongly implied. Hmm. I’ll look into that. But I didn’t at the time, so I’ll continue under the assumption that the following nonsense is necessary.

Because I “can’t” write directly, I need to use some shenanigans. First, I need something to write! This is an Anise game, so let’s go for Anise.

I’m on my laptop at this point without access to the source code for the LÖVE Anise game I started, so I have to rustle up a screenshot I took.

Cropped screenshot of Star Anise and some critters, all pixel art

Wait a second.

Even on the Game Boy Color, tiles are defined with two bits per pixel. That means an 8×8 tile has a maximum of four colors. For objects, the first color is transparent, so I really have three colors — which is exactly why most Game Boy Color protagonists have a main color, an outline/shadow color, and a highlight color.

Let’s check out that Anise in more detail.

Star Anise at 8×

Hm yes okay that’s more than three colors. I guess I’m going to need to draw some new sprites from scratch, somehow.

In the meantime, I optimistically notice that Star Anise’s body only uses three colors, and it’s 8×7! I could make a tile out of that! I painstakingly copy the pixels into a block of those backticks, which you can kinda see is his body if you squint a bit:

SECTION "Sprites", ROM0
ANISE_SPRITE:
    dw `00000000
    dw `00001333
    dw `00001323
    dw `10001233
    dw `01001333
    dw `00113332
    dw `00003002
    dw `00003002

The dw notation isn’t an opcode; it tells the assembler to put two literal bytes of data in the final ROM. A word of data. (Each row of a tile is two bytes, remember.)
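
To make that concrete, here’s the first non-blank row worked by hand (my arithmetic, not from the post): the Game Boy stores each row as a low-bitplane byte followed by a high-bitplane byte.

    ; `00001333  ->  pixel values 0,0,0,0,1,3,3,3
    ; low bit of each pixel:  %00001111 = $0f  (first byte)
    ; high bit of each pixel: %00000111 = $07  (second byte)
    dw `00001333    ; assembles to the bytes $0f, $07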

If you think about this too hard, you start to realize that both the data and code are just bytes, everything is arbitrary, and true meaning is found only in the way we perceive things rather than in the things themselves.

Note I didn’t specify an exact address for this section, so the linker will figure out somewhere to put it and make sure all the labels are right at the end.

Now I load this into tilespace, back in my main code:

    ; Define an object
    ld hl, $8800
    ld bc, ANISE_SPRITE
    REPT 16
    ld a, [bc]
    ld [hl+], a
    inc bc
    ENDR

This copies 16 bytes, starting from the ANISE_SPRITE label, to $8800.


Why $8800, not $8000? I’m so glad you asked!

There are actually three blocks of tile space, each with enough room for 128 tiles: one at $8000, one at $8800, and one at $9000. Object tiles always use the $8000 block followed by the $8800 block, whereas background tiles can use either $8000 + $8800 or $9000 + $8800. By default, background tiles use $8000 + $8800.

All of which is to say that I got very confused reading the manual (which spends like five pages explaining the above paragraph) and put the object tiles in the wrong place. Whoops. It’s fine; this just ends up being tile 128.

In my partial defense, looking at it now, I see the manual is wrong! Bit 4 of the LCD controller register ($ff40) controls whether the background uses tiles from $8000 + $8800 (1) or $9000 + $8800 (0). The manual says that this register defaults to $83, which has bit 4 off, suggesting that background tiles use $9000 + $8800 (i.e. start at $8800), but disassembly of the boot ROM shows that it actually defaults to $91, which has bit 4 on. Thanks a lot, Nintendo!

That was quite a diversion. Here’s a chart of where the dang tiles live. Note that the block at $8800 is always shared between objects and background tiles. Oh, and on the Game Boy Color, all three blocks are twice as big thanks to the magic of banking. I’ll get to banking… much later.

                            bit 4 ON (default)   bit 4 OFF
                            ------------------   ---------
$8000   obj tiles 0-127     bg tiles 0-127
$8800   obj tiles 128-255   bg tiles 128-255     bg tiles 128-255
$9000                                            bg tiles 0-127

Hokay. What else? I’m going to need a palette for this, and I don’t want to use that gaudy background palette. Actually, I can’t — the background and object layers have two completely separate sets of palettes.

Writing an object palette is exactly the same as writing a background palette, except with different registers.

    ; This should look pretty familiar
    ld a, %10000000
    ld [$ff6a], a

    ld bc, %0000000000000000  ; transparent
    ld a, c
    ld [$ff6b], a
    ld a, b
    ld [$ff6b], a
    ld bc, %0010110100100101  ; dark
    ld a, c
    ld [$ff6b], a
    ld a, b
    ld [$ff6b], a
    ld bc, %0100000111001101  ; med
    ld a, c
    ld [$ff6b], a
    ld a, b
    ld [$ff6b], a
    ld bc, %0100001000010001  ; white
    ld a, c
    ld [$ff6b], a
    ld a, b
    ld [$ff6b], a

Riveting!

I wrote out those colors by hand. The original dark color, for example, was #264a59. That uses eight bits per channel, but the Game Boy Color only supports five (a factor of 8 difference), so first I rounded each channel to the nearest 8 and got #284858. Swap the channels to get 58 48 28 and convert to binary (sans the trailing zeroes) to get 01011 01001 00101.

Note to self: probably write a macro or whatever so I can define colors like a goddamn human being. Also why am I not putting the colors in a ROM section too?
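
Something like this sketch would do it (a hypothetical helper, untested; the assembler does the channel math at build time, and note that >> 3 truncates rather than rounding to the nearest 8, so feed it pre-rounded values):

; Hypothetical macro, not from the post: emit one BGR555 palette word
; from three 8-bit channel values. \1, \2, \3 are R, G, B.
drgb: MACRO
    dw ((\3 >> 3) << 10) | ((\2 >> 3) << 5) | (\1 >> 3)
ENDM

DARK:
    drgb $28, $48, $58  ; the rounded #284858 from above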

Almost there. I still need to write out those four bytes that specify the tile and where it goes. I can’t actually write them to OAM yet, so I need some scratch space in working RAM.

SECTION "OAM Buffer", WRAM0[$C100]
oam_buffer:
    ds 4 * 40

The ds notation is another “data” variant, except it can take a size and reserves space for a whole string of data. Note that I didn’t put any actual data here — this section is in RAM, which only exists while the game is running, so there’d be nowhere to put data.

Also note that I gave an explicit address this time. The buffer has to start at an address ending in 00, for reasons that will become clear momentarily. The space from $c000 to $dfff is available as working RAM, and I chose $c100 for… reasons that will also become clear momentarily.

Now to write four bytes to it at runtime:

    ; Put an object on the screen
    ld hl, oam_buffer
    ; y-coord
    ld a, 64
    ld [hl+], a
    ; x-coord
    ld [hl+], a
    ; tile index
    ld a, 128
    ld [hl+], a
    ; attributes, including palette, which are all zero
    ld a, %00000000
    ld [hl+], a

(I tried writing directly to OAM on my first attempt. Nothing happened! Very exciting.)

But how to get this into OAM so it’ll actually show on-screen? For that, I need to do a DMA transfer.

DMA

DMA, or direct memory access, is one of those things the Game Boy programming manual seems to think everyone is already familiar with. It refers generally to features that allow some other hardware to access memory, without going through the CPU. In the case of the Game Boy, it’s used to copy data from working RAM to OAM. Only to OAM. It’s very specific.

Performing a DMA transfer is super easy! I write the high byte of the source address to the DMA register ($ff46), and then some magic happens, and 160 bytes from the source address appear in OAM. In other words:

    ld a, $c1       ; copy from $c100
    ld [$ff46], a   ; perform DMA transfer
    ; now $c100 through $c19f have been copied into OAM!

It’s almost too good to be true! And it is. There are some wrinkles.

First, the transfer takes some time, during which I almost certainly don’t want to be doing anything else.

Second, during the transfer, the CPU can only read from “high RAM” — $ff80 and higher. Wait, uh oh.

The usual workaround here is to copy a very short function into high RAM to perform the actual transfer and wait for it to finish, then call that instead of starting a transfer directly. Well, that sounds like a pain, so I break my rule of accounting for every byte and find someone else who’s done it. Conveniently enough, that post is by the author of the small template project I’ve been glancing at.

I end up with something like the following.

    ; Copy the little DMA routine into high RAM
    ld bc, DMA_BYTECODE
    ld hl, $ff80
    ; DMA routine is 13 bytes long
    REPT 13
    ld a, [bc]
    inc bc
    ld [hl+], a
    ENDR

; ...

SECTION "DMA Bytecode", ROM0
DMA_BYTECODE:
    db $F5, $3E, $C1, $EA, $46, $FF, $3E, $28, $3D, $20, $FD, $F1, $D9

That’s compiled assembly, written inline as bytes. Oh boy. The original code looks like:

    ; start the transfer, as shown above
    ld a, $c1
    ld [$ff46], a

    ; wait 160 cycles/microseconds, the time it takes for the
    ; transfer to finish; this works because 'dec' is 1 cycle
    ; and 'jr' is 3, for 4 cycles done 40 times
    ld      a, 40
loop:
    dec     a
    jr      nz, loop

    ; return
    ret

Now you can see why I used $c100 for my OAM buffer: because it’s the address this person used.

(Hm, the opcode reference I usually use seems to have all the timings multiplied by a factor of 4 without comment? Odd. The rgbds reference is correct.)

(Also, here’s a fun fact: the stack starts at $fffe and grows backwards. If it grows too big, the very first thing it’ll overwrite is this DMA routine! I bet that’ll have some fun effects.)

At this point I have a thought. (Okay, I had the thought a bit later, but it works better narratively if I have it now.) I’ve already demonstrated that the line between code and data is a bit fuzzy here. So why does this code need to be pre-assembled?

And a similar thought: why is the length hardcoded? Surely, we can do a little better. What if we shuffle things around a bit…

SECTION "init", ROM0[$0100]
    nop
    ; Jump to a named label instead of an address
    jp main

SECTION "main", ROM0[$0150]
; DMA copy routine, copied into high RAM at startup.
; Never actually called where it is.
dma_copy:
    ld a, $c1
    ld [$ff46], a
    ld a, 40
.loop:
    dec a
    jr nz, .loop
    ret
dma_copy_end:
    nop

main:
    ; ... all previous code is here now ...

    ; Copy the little DMA routine into high RAM
    ld bc, dma_copy
    ld hl, $ff80
    ; DMA routine is 13 bytes long
    REPT dma_copy_end - dma_copy
    ld a, [bc]
    inc bc
    ld [hl+], a
    ENDR

This is very similar to what I just had, except that the code is left as code, and its length is computed by having another label at the end — so I’m free to edit it later if I want to. It all ends up as bytes in the ROM, so the code ends up exactly the same as writing out the bytes with db. Come to think of it, I don’t even need to hardcode the $c1 there; I could replace it with oam_buffer >> 8 and avoid repeating myself.
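
That fix would look something like this (it works because oam_buffer has a fixed address, so the assembler can do the shift at build time):

dma_copy:
    ld a, oam_buffer >> 8   ; the assembler computes $c1; no magic number
    ld [$ff46], a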

(I put the code at $0150 because rgbasm is very picky about subtracting labels, and will only do it if they both have fixed positions. These two labels would be the same distance apart no matter where I put the section, but I guess rgbasm isn’t smart enough to realize that.)

I’m actually surprised that the author of the above post didn’t think to do this? Maybe it’s dirty even by assembly standards.

Timing, vblank, and some cool trickery

Okay, so, as I was writing that last section, I got really curious about whether and when I’m actually allowed to write to OAM. Or tile RAM, for that matter.

I consulted the Game Boy dev wiki, and the rules there match what’s in the manual, albeit with a chart that makes things a little clearer.

My understanding is as follows. The LCD draws the screen one row of pixels at a time, and each row has the following steps:

  1. Look through OAM to see if any sprites are on this row. OAM is inaccessible to the CPU.

  2. Draw the row. OAM, VRAM, and palettes are all inaccessible.

  3. Finish the row and continue on to the beginning of the next row. This takes a nonzero amount of time, called the horizontal blanking period, during which the CPU can access everything freely.

Once the LCD reaches the bottom, it continues to “draw” a number of faux rows below the bottom of the visible screen (vertical blanking), and the CPU can again do whatever it wants. Eventually it returns to the top-left corner to draw again, concluding a single frame. The entire process happens 59.7 times per second.

There’s one exception: DMA transfers can happen any time, but the LCD will simply not draw sprites during the transfer.

So I probably shouldn’t be writing to tiles and palettes willy-nilly. I suspect I got away with it because it happened in that first OAM-searching stage… and/or because I did it on emulators which are a bit more flexible than the original hardware.

In fact…

Same screenshot as above, but the first row of pixels is corrupt

I took this screenshot by loading the ROM I have so far, pausing it, resetting it, and then advancing a single frame. This is the very first frame my game shows. If you look closely at the first row of pixels, you can see they’re actually corrupt — they’re being drawn before I’ve set up the palette! You can even see each palette entry taking effect along the row.

This is very cool. It also means my current code would not work at all on actual hardware. I should probably just turn the screen off while I’m doing setup like this.

It’s interesting that only OAM gets a special workaround in the form of a DMA transfer — I imagine because sprites move around much more often than the tileset changes — but having the LCD stop drawing sprites in the meantime is quite a limitation. Surely, you’d only want to do a DMA transfer during vblank anyway? It is much faster than copying by hand, so I’ll still take it.

All of this is to say: I’m gonna need to care about vblanks.
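
(For reference, the standard idiom, which isn’t in my code yet, is to spin on the LY register at $ff44: it holds the scanline the LCD is currently drawing, and lines 144 through 153 are the vertical blanking period.)

wait_for_vblank:
    ld a, [$ff44]           ; LY: the scanline currently being drawn
    cp 144                  ; 144 is the first vblank line
    jr c, wait_for_vblank   ; loop while still in the visible area
    ret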


Incidentally, the presence of hblank is very cool and can be used for a number of neat effects, especially when combined with the Game Boy’s ability to call back into user code when the LCD reaches a specific row:

  • The GBC Zelda games use it for map scrolling. The status bar at the top is in one of the two background maps, and as soon as that finishes drawing, the game switches to the other one, which contains the world.

  • Those same games also use it for a horizontal wavy effect, both when warping around and when underwater — all they need to do is change the background layer’s x offset during each hblank!

  • The wiki points out that OAM could be written to in the middle of a screen update, thus bypassing the 40-object restriction: draw 40 objects on the top half of the screen, swap out OAM midway, and then the LCD will draw a different 40 on the bottom half!

  • I imagine you could also change palettes midway through a redraw and exceed the usual limit of 56 colors on screen at a time! No telling whether this sort of trick would work on an emulator, though.

I am very excited at the prospects here.

I’m also slightly terrified. I have a fixed amount of time between frames, and with the LCD as separate hardware, there’s no such thing as a slow frame. If I don’t finish, things go bad. And that time is measured in instructions — an ld always takes the same number of cycles! There’s no faster computer or reducing GC pressure. There’s just me. Yikes.

Back to drawing a sprite

I haven’t had a single new screenshot this entire post! This is ridiculous. All I want is to draw a thing to the screen.

I have some data in my OAM buffer. I have DMA set up. All I should need to do now is start a transfer.

    call $ff80

And… nothing. mGBA’s memory viewer confirms everything’s in the right place, but nothing’s on the screen.

Whoops! Remember that LCD controller register, and how it defaults to $91? Well, bit 1 is whether to show objects at all, and it defaults to off. So let’s fix that.

    ld a, %10010011  ; $91 plus bit 1, the object display flag
    ld [$ff40], a
The same gaudy background, but now with a partial Anise sprite on top

SUCCESS!

It doesn’t look like much, but it took a lot of flailing to get here, and I was overjoyed when I first saw it. The rest should be a breeze! Right?

To be continued

That doesn’t even get us all the way through commit 1b17c7, but this is already more than enough.

Next time: input, and moderately less eye-searing art!

New Collaborative Editing for Amazon WorkDocs – Powered by Hancom Thinkfree Office Online

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-collaborative-editing-for-amazon-workdocs-powered-by-hancom-thinkfree-office-online/

I’ve got some important news for Amazon WorkDocs users. As a result of our partnership with Hancom, you can now edit Microsoft Office documents in your browser without having to install any applications or connect with another web service. You can quickly create a document, share it with team members, and let them make changes and contribute to the finished product. Everyone can see changes in real-time as they work together, regardless of where they are located or what device they are using to access WorkDocs.

This feature is available at no extra charge and you can start using it as soon as your WorkDocs administrator enables it. Let’s take a tour!

Collaborative Editing
I start by creating a document, spreadsheet, or presentation using the New menu. I’ll create a document:

I can create and edit my document from the comfort of my web browser:

Then I save and rename it (a default name is generated using the creation time as a starting point):

Next, I share it with my colleague Manoj so that he can take a look and make any desired edits:

I can see his edits in real-time:

And I can see all of the participants in the collaborative editing session:

WorkDocs creates a new revision after all of the participants have exited the editing session.

I can also create new spreadsheets and presentations and edit existing ones! Here’s a new spreadsheet:

And here’s an existing presentation (I opened one from 2008 just for fun):

Now Available
This feature is available now in the US West (Oregon) Region and will become available in other regions in the next couple of weeks. It is available at no extra charge to all WorkDocs users.

Jeff;

Bottomley: Containers and Cloud Security

Post Syndicated from jake original https://lwn.net/Articles/757987/rss

On his blog, James Bottomley looks at the value proposition for various types of cloud deployments. In particular, he compares the vertical and horizontal attack profile (VAP and HAP) of four different models: separate servers, separate logins on a single server, virtual machines, and containers. He finds the container story to be compelling: “The total VAP here is identical to that of physical infrastructure. However, the Tenant component is much smaller (the kernel accounting for around 50% of all vulnerabilities). It is this reduction in the Tenant VAP that makes containers so appealing: the CSP [cloud service provider] is now responsible for monitoring and remediating about half of the physical system VAP which is a great improvement for the Tenant. Plus when the CSP remediates on the host, every container benefits at once, which is much better than having to crack open every virtual machine image to do it. Best of all, the Tenant images don’t have to be modified to benefit from these fixes, simply running on an updated CSP host is enough. However, the cost for this is that the HAP is the entire linux kernel syscall interface meaning the HAP is much larger than the hypervisor virtual infrastructure case because the latter benefits from interface narrowing to only the hypercalls (qualitatively, assuming the hypercall interface is ~30 calls and the syscall interface is ~300 calls, then the HAP is 10x larger in the container case than the hypervisor case); however, thanks to protections from the kernel namespace code, the HAP is less than the shared login server case. Best of all, from the Tenant point of view, this entire HAP cost is borne by the CSP, which makes this an incredible deal: not only does the Tenant get a significant reduction in their VAP but the CSP is hugely motivated to keep on top of all vulnerabilities in their part of the VAP and remediate very fast because of the business implications of a successful horizontal attack.”

The Everyday Sexism That I See In My Work

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2018/06/21/everyday-sexism.html

My friend, colleague, and boss, Karen Sandler,
yesterday tweeted
about one of the unfortunately sexist incidents
that she’s faced in her
life. This incident is a culmination of sexist incidents that Karen and I
have seen since we started working together. I describe below how these
events entice me to be complicit in sexist incidents, which I do my best to
actively resist.

Ultimately, this isn’t about me, Karen, or about a single situation, but
this is a great example of how sexist behaviors manipulate a situation and
put successful women leaders in no-win situations. If you read this tweet
(and additionally already knew about Software Freedom Conservancy where I
work)…


“#EveryDaySexism I'm Exec Director of a charity.  A senior tech exec is making his company's annual donation conditional on his speaking privately to a man who reports to me. I hope shining light on these situations erodes their power to build no-win situations for women leaders.” — Karen Sandler

… you’ve already guessed that I’m the male employee that this
executive meant. When I examine the situation, I can’t think of a single
reason this donor could want to speak to me that would not be more productive
if he instead spoke with Karen. Yet, the executive, who was previously well
briefed on the role changes at Conservancy, repeatedly insisted that the
donation was gated on a conversation with me.

Those who follow my and Karen’s work know that I was Conservancy’s first Executive Director.
Now, I
have a lower-ranking role
since Karen came to Conservancy.

Back in 2014, Karen and I collaboratively talked about what role would
make sense for her and me — and we made a choice together. We briefly
considered a co-Executive Director situation, but that arrangement has been
tried elsewhere and is typically not successful in the long term. Karen is
much better than me at the key jobs of a successful Executive Director.
Karen and I agreed she was better for the job than me. We took it to
Conservancy’s Board of Directors, and they moved my leadership role at
Conservancy to be honorary, and we named Karen the sole Executive Director.
Yes, I’m still nebulously a leader in the Free Software community (which I’m
of course glad about). But for Conservancy matters, and specifically donor
relations and major decisions about the organization, Karen is in charge.

Karen is an impressive leader and there is no one else that I’d want to
follow in my software freedom activism work. She’s the best Executive
Director that Conservancy could possibly have — by far. Everyone in
the community who works with us regularly knows this. Yet ever since Karen
was named our Executive Director, she faces everyday sexist behavior,
including people who seek to conscript me into participation in institutional
sexism. As outlined above, I was initially Executive Director of Conservancy,
and I was treated very differently than she is treated in similar situations,
even though the organization has grown significantly under her
leadership. More on that below, but first a few of the other everyday
examples of sexism I’ve witnessed with Karen:

Many times when we’re at conferences together, men who meet us assume
that Karen works for me until we explain our roles. This happens almost
every time both Karen and I are at the same conference, which is at least a
few times each year.

Another time: a journalist wrote an article about some of “Bradley’s
work” at Conservancy. We privately pointed out to the journalist how
strange it was that Karen was not mentioned in the article, and that it made
it sound like I was the only person doing this work at our organization. He
responded that because I was the “primary spokesperson”, it was
natural to credit me and not her. Karen in fact had been more recently giving
multiple keynotes on the topic, and had more speaking engagements than I did
in that year. One of those keynotes was just weeks before the article, and
it had been months since I’d given a talk or made any public statements. The
journalist fortunately did agree it was a mistake, but nevertheless couldn’t
rewrite the article.

Another time: we were leaked (reliable) information about a closed-door
meeting where some industry leaders were discussing Conservancy and its
work. The person who leaked us the information told us that multiple
participants kept talking only about me, not Karen’s work. When someone in
the meeting said “wait, isn’t Karen Sandler the Executive Director?”,
our source (who was giving us a real-time report over IRC) reported that
the (male) meeting coordinator literally said: “Oh sure, Karen
works there, but Bradley is their guiding light.” Karen had been
Executive Director for years at that point.

I consistently say in talks, and in public conversations, that Karen is my
boss. I literally use the word “boss”, so there is no
confusion nor ambiguity. I did it this week at a talk. But instead of
taking that as the fact that it is, many people make comments like “well,
Karen’s not really your boss, right; that’s just a thing you say?”. So,
I’m saying unequivocally here (surely not for the last time): I report to
Karen at Conservancy. She is in charge of Conservancy. She has the
authority to fire me. (I hope she won’t, of course :). She takes views and
opinions of our entire staff seriously but she sets the agenda and makes
the decisions about what work we do and how we do it. (It shows how bad
sexism is in our culture that Karen and I often have to explain in
intricate detail what it means for someone to be an Executive Director of
an organization.)

Interestingly but disturbingly, these incidents teach how institutional
sexism operates in practice. Every time I’m approached (which is often)
with some subtle situation where it makes Karen look like she’s not really
in charge, I’m given the opportunity to pump myself up, make myself look
more important, and gain more credibility and power. It is clear to me that
this comes at the expense of subtly denigrating Karen and that the
enticement is part of an institutionally sexist zero-sum game.

These situations are no-win. I know that in the recent situation, the
donation would be assured if I’d just agreed to a call right away without
Karen’s involvement. I didn’t do it, because that approach would make me
inherently complicit in institutional sexism.

These situations are sadly very common, particularly for women who are
banging cracks into the glass ceiling. For my part, I’m glad to help where
I can tell my side the story, because I think it’s essential for men to
assist and corroborate the fight against sexism in our industry without
mansplaining or white-knighting. I hope other men in technology will join
me and refuse to participate and support behavior that seeks to erode
women’s well-earned power in our community. When you are told that a woman
is in charge of a free software project, that a woman is the executive
director of the organization, or that a woman is the chair of the board,
take the fact at face value, treat that person as the one who is in charge
of that endeavor, and don’t (inadvertently or explicitly) undermine her
authority.

Best Practices for resizing and automatic scaling in Amazon EMR

Post Syndicated from Brandon Scheller original https://aws.amazon.com/blogs/big-data/best-practices-for-resizing-and-automatic-scaling-in-amazon-emr/

You can increase your savings by taking advantage of the dynamic scaling feature set available in Amazon EMR. The ability to scale the number of nodes in your cluster up and down on the fly is among the major features that make Amazon EMR elastic. You can take advantage of scaling in EMR by resizing your cluster down when you have little or no workload. You can also scale your cluster up to add processing power when the job gets too slow. This allows you to spend just enough to cover the cost of your job and little more.

Knowing the complex logic behind this feature can help you take advantage of it to save on cluster costs. In this post, I detail how EMR clusters resize, and I present some best practices for getting the maximum benefit and resulting cost savings for your own cluster through this feature.

EMR scaling is more complex than simply adding or removing nodes from the cluster. One common misconception is that scaling in Amazon EMR works exactly like Amazon EC2 scaling. With EC2 scaling, you can add/remove nodes almost instantly and without worry, but EMR has more complexity to it, especially when scaling a cluster down. This is because important data or jobs could be running on your nodes.

To prevent data loss, Amazon EMR scaling ensures that your node has no running Apache Hadoop tasks or unique data that could be lost before removing your node. It is worth considering this decommissioning delay when resizing your EMR cluster. By understanding and accounting for how this process works, you can avoid issues that have plagued others, such as slow cluster resizes and inefficient automatic scaling policies.

When an EMR cluster is scaled down, two different decommission processes are triggered on the nodes that will be terminated. The first process is the decommissioning of Hadoop YARN, which is the Hadoop resource manager. Hadoop tasks that are submitted to Amazon EMR generally run through YARN, so EMR must ensure that any running YARN tasks are complete before removing the node. If for some reason the YARN task is stuck, there is a configurable timeout to ensure that the decommissioning still finishes. When this timeout happens, the YARN task is terminated and is instead rescheduled to a different node so that the task can finish.

The second decommission process is that of the Hadoop Distributed File System or HDFS. HDFS stores data in blocks that are spread through the EMR cluster on any nodes that are running HDFS. When an HDFS node is decommissioning, it must replicate those data blocks to other HDFS nodes so that they are not lost when the node is terminated.

So how can you use this knowledge in Amazon EMR?

Tips for resizing clusters

The following are some issues to consider when resizing your clusters.

EMR clusters can use two types of nodes for Hadoop tasks: core nodes and task nodes. Core nodes host persistent data by running the HDFS DataNode process and run Hadoop tasks through YARN’s resource manager. Task nodes only run Hadoop tasks through YARN and DO NOT store data in HDFS.

When scaling down task nodes on a running cluster, expect a short delay for any running Hadoop task on the cluster to decommission. This allows you to get the best usage of your task node by not losing task progress through interruption. However, if your job allows for this interruption, you can adjust the one hour default timeout on the resize by adjusting the yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs property (in EMR 5.14) in yarn-site.xml. When this process times out, your task node is shut down regardless of any running tasks. This process is usually relatively quick, which makes it fast to scale down task nodes.
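
For example, a yarn-site.xml entry like the following shortens that timeout to ten minutes (the value here is purely illustrative; pick one that matches how long your tasks run):

<property>
  <name>yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs</name>
  <value>600</value>
</property>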

When you’re scaling down core nodes, Amazon EMR must also wait for HDFS to decommission to protect your data. HDFS can take a relatively long time to decommission. This is because HDFS block replication is throttled by design through configurations located in hdfs-site.xml. This in turn means that HDFS decommissioning is throttled. This protects your cluster from a spiked workload if a node goes down, but it slows down decommissioning. When scaling down a large number of core nodes, consider adjusting these configurations beforehand so that you can scale down more quickly.

For example, consider this exercise with HDFS and resizing speed.

The HDFS configurations, located in hdfs-site.xml, have some of the most significant impact on throttling block replication:

  • dfs.datanode.balance.bandwidthPerSec: Bandwidth for each node’s replication
  • dfs.namenode.replication.max-streams: Max streams running for block replication
  • dfs.namenode.replication.max-streams-hard-limit: Hard limit on max streams
  • dfs.datanode.balance.max.concurrent.moves: Number of threads used by the block balancer for pending moves
  • dfs.namenode.replication.work.multiplier.per.iteration: Used to determine the number of blocks to begin transfers immediately during each replication interval

(Beware when modifying: Changing these configurations improperly, especially on a cluster with high load, can seriously degrade cluster performance.)

Cluster resizing speed exercise

Modifying these configurations can speed up the decommissioning time significantly. Try the following exercise to see this difference for yourself.

  1. Create an EMR cluster with the following hardware configuration:
  • Master: 1 node – m3.xlarge
  • Core: 6 nodes – m3.xlarge
  2. Connect to the master node of your cluster using SSH (Secure Shell).

For more information, see Connect to the Master Node Using SSH in the Amazon EMR documentation.

  3. Load data into HDFS by using the following jobs:
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000000 /user/hadoop/data1/
$ s3-dist-cp --src s3://aws-bigdata-blog/artifacts/ClusterResize/smallfiles25k/ --dest  hdfs:///user/hadoop/data2/
  4. Edit your hdfs-site.xml configs:
$ sudo vim /etc/hadoop/conf/hdfs-site.xml

Then paste in the following configuration setup in the hdfs-site properties.

Disclaimer: These values are relatively high for example purposes and should not necessarily be used in production. Be sure to test config values for production clusters under load before modifying them.

<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>100m</value>
</property>

<property>
  <name>dfs.namenode.replication.max-streams</name>
  <value>100</value>
</property>

<property>
  <name>dfs.namenode.replication.max-streams-hard-limit</name>
  <value>200</value>
</property>

<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>500</value>
</property>

<property>
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>30</value>
</property>
  5. Resize your EMR cluster from six to five core nodes, and look in the EMR events tab to see how long the resize took.
  6. Repeat the previous steps without modifying the configurations, and check the difference in resize time.

While performing this exercise, I saw resize times drop from 45+ minutes (without config changes) to about 6 minutes (with modified hdfs-site configs). This exercise demonstrates how heavily HDFS is throttled under default configurations. Although removing these throttles is dangerous and performance using them should be tested first, they can significantly speed up decommissioning time and therefore resizing.

The following are some additional tips for resizing clusters:

  • Shrink resizing timeouts. You can configure EMR nodes in two ways: instance groups or instance fleets. For more information, see Create a Cluster with Instance Fleets or Uniform Instance Groups. EMR has implemented shrink resize timeouts when nodes are configured in instance fleets. This timeout prevents an instance fleet from attempting to resize forever if something goes wrong during the resize. It currently defaults to one day, so keep it in mind when you are resizing an instance fleet down.

If an instance fleet shrink request takes longer than one day, it finishes and pauses at however many instances are currently running. On the other hand, instance groups have no default shrink resize timeout. However, both types have the one-hour YARN timeout described earlier in the yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs property (in EMR 5.14) in yarn-site.xml.

  • Watch out for high frequency HDFS writes when resizing core nodes. If HDFS is receiving a lot of writes, it will modify a large number of blocks that require replication. This replication can interfere with the block replication from any decommissioning core nodes and significantly slow down the resizing process.

Setting up policies for automatic scaling

Although manual scaling is useful, a majority of the time cluster resizes are executed dynamically through Amazon EMR automatic scaling. Generally, the details of the automatic scaling policy must be tailored to the specific Hadoop job, so I won’t go into detail there. Instead, I provide some general guidelines for setting up your cluster’s auto scaling policies.

The following are some considerations when setting up your auto scaling policy.

Metrics for scaling

Choose the right metrics for your node types to trigger scaling. For example, scaling core nodes solely on the YARNMemoryAvailablePercentage metric doesn’t make sense. This is because you would be increasing/decreasing HDFS total size when really you only need more processing power. Scaling task nodes on HDFSUtilization also doesn’t make sense, because task nodes add no HDFS storage space. A common automatic scaling metric for core nodes is HDFSUtilization. Common automatic scaling metrics for task nodes include ContainerPendingRatio and YARNMemoryAvailablePercentage.

Note: Keep in mind that Amazon EMR currently requires HDFS, so you must have at least one core node in your cluster. Core nodes can also provide CPU and memory resources. But if you don’t need to scale HDFS, and you just need more CPU or memory resources for your job, we recommend that you use task nodes for that purpose.

Scaling core nodes

As described earlier, one of the two EMR node types in your cluster is the core node. Core nodes run HDFS, so they have a longer decommissioning delay. This means that they are slow to scale and should not be aggressively scaled. Only adding and removing a few core nodes at a time will help you avoid scaling issues. Unless you need the HDFS storage, scaling task nodes is usually a better option. If you find that you have to scale large numbers of core nodes, consider changing hdfs-site.xml configurations to allow faster decommission time and faster scale down.

Scaling task nodes

Task nodes don’t run HDFS, which makes them perfect for aggressively scaling with a dynamic job. When your Hadoop task has spikes of work between periods of downtime, this is the node type that you want to use.

You can set up task nodes with a very aggressive auto scaling policy, and they can be scaled up or down easily. If you don’t need HDFS space, you can use task nodes in your cluster.

Using Spot Instances

Automatic scaling is a perfect time to use EMR Spot Instance types. The tendency of Spot Instances to disappear and reappear makes them perfect for task nodes. Because these task nodes are already used to scale in and out aggressively, Spot Instances can have very little disadvantage here. However, for time-sensitive Hadoop tasks, On-Demand Instances might be prioritized for the guaranteed availability.

Scale-in vs. scale-out policies for core nodes

Don’t fall into the trap of making your scale-in policy the exact opposite of your scale-out policy, especially for core nodes. Many times, scaling in results in additional delays for decommissioning. Take this into account and allow your scale-in policy to be more forgiving than your scale-out policy. This means longer cooldowns and higher metric requirements to trigger resizes.

You can think of scale-out policies as easily triggered with a low cooldown and small node increments. Scale-in policies should be hard to trigger, with larger cooldowns and node increments.
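
As an illustration, an asymmetric pair of rules in an automatic scaling policy might look like the following sketch (the thresholds, cooldowns, and increments are placeholders chosen to show the asymmetry, not recommendations):

{
  "Constraints": { "MinCapacity": 2, "MaxCapacity": 20 },
  "Rules": [
    {
      "Name": "AggressiveScaleOut",
      "Action": { "SimpleScalingPolicyConfiguration": {
        "AdjustmentType": "CHANGE_IN_CAPACITY",
        "ScalingAdjustment": 4,
        "CoolDown": 300 } },
      "Trigger": { "CloudWatchAlarmDefinition": {
        "ComparisonOperator": "LESS_THAN",
        "EvaluationPeriods": 1,
        "MetricName": "YARNMemoryAvailablePercentage",
        "Namespace": "AWS/ElasticMapReduce",
        "Period": 300, "Statistic": "AVERAGE",
        "Threshold": 15, "Unit": "PERCENT" } }
    },
    {
      "Name": "ForgivingScaleIn",
      "Action": { "SimpleScalingPolicyConfiguration": {
        "AdjustmentType": "CHANGE_IN_CAPACITY",
        "ScalingAdjustment": -1,
        "CoolDown": 900 } },
      "Trigger": { "CloudWatchAlarmDefinition": {
        "ComparisonOperator": "GREATER_THAN",
        "EvaluationPeriods": 3,
        "MetricName": "YARNMemoryAvailablePercentage",
        "Namespace": "AWS/ElasticMapReduce",
        "Period": 300, "Statistic": "AVERAGE",
        "Threshold": 75, "Unit": "PERCENT" } }
    }
  ]
}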

Minimum nodes for core node auto scaling

One last thing to consider when scaling core nodes is the yarn.app.mapreduce.am.labels property located in yarn-site.xml. In Amazon EMR, yarn.app.mapreduce.am.labels is set to “CORE” by default, which means that the application master always runs on core nodes and not task nodes. This is to help prevent application failure in a scenario where Spot Instances are used for the task nodes.

This means that when setting a minimum number of core nodes, you should choose a number that is greater than or at least equal to the number of simultaneous application masters that you plan to have running on your cluster. If you want the application master to also run on task nodes, you should modify this property to include “TASK.” However, as a best practice, don’t set the yarn.app.mapreduce.am.labels property to TASK if Spot Instances are used for task nodes.
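
For reference, that change would be a yarn-site.xml entry along these lines (a sketch only; double-check the exact value syntax against the EMR documentation for your release):

<property>
  <name>yarn.app.mapreduce.am.labels</name>
  <value>CORE,TASK</value>
</property>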

Aggregating data using S3DistCp

Before wrapping up this post, I have one last piece of information to share about cluster resizing. When resizing core nodes, you might notice that HDFS decommissioning takes a very long time. Often this is the result of storing many small files in your cluster’s HDFS. Having many small files within HDFS (files smaller than the HDFS block size of 128 MB) adds lots of metadata overhead and can cause slowdowns in both decommissioning and Hadoop tasks.

Keeping your small files to a minimum by aggregating your data can help your cluster and jobs run more smoothly. For information about how to aggregate files, see the post Seven Tips for Using S3DistCp on Amazon EMR to Move Data Efficiently Between HDFS and Amazon S3.

Summary

In this post, you read about how Amazon EMR resizing logic works to protect your data and Hadoop tasks. I also provided some additional considerations for EMR resizing and automatic scaling. Keeping these practices in mind can help you maximize cluster savings by allowing you to use only the required cluster resources.

If you have questions or suggestions, please leave a comment below.

 


Additional Reading

If you found this post useful, be sure to check out Seven Tips for Using S3DistCp on Amazon EMR to Move Data Efficiently Between HDFS and Amazon S3 and Dynamically Scale Applications on Amazon EMR with Auto Scaling.

 


About the Author

Brandon Scheller is a software development engineer for Amazon EMR. His passion lies in developing and advancing the applications of the Hadoop ecosystem and working with the open source community. He enjoys mountaineering in the Cascades in his free time.

Computer Backup Awareness in 2018: Getting Better and Getting Worse

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/computer-backup-awareness-in-2018/

Backup Frequency - 10 Years of History

Back in June 2008, Backblaze launched our first Backup Awareness Survey. Beginning with that survey and each year since, we’ve asked the folks at The Harris Poll to conduct our annual survey. For the last 11 years now, they’ve asked the simple question, “How often do you backup all the data on your computer?” Let’s see what they’ve found.

First, a Little History

While we did the first survey in 2008, it wasn’t until 2009, after the second survey was conducted, that we declared June as Backup Awareness Month, making June 2018 the 10th anniversary of Backup Awareness Month. But, why June? You’re probably thinking that June is a good time to remind people about backing up their computers. It’s before summer vacations in the northern hemisphere and the onset of winter down under. In truth, back in 2008 Backblaze was barely a year old and the survey, while interesting, got pushed aside as we launched the first beta of our cloud backup product on June 4, 2008. When June 2009 rolled around, we had a little more time and two years worth of data. Thus, Backup Awareness Month was born (PS — the contest is over).

More People Are Backing Up, But…

Fast forward to June 2018, and the folks at The Harris Poll have diligently delivered another survey. You can see the details about the survey methodology at the end of this post. Here’s a high level look at the results over the last 11 years.
Computer Backup Frequency

The percentage of people backing up all the data on their computer has steadily increased over the years, from 62% in 2008 to 76% in 2018. That’s awesome, but at the other end of the time spectrum it’s not so pretty. The percentage of people backing up once a day or more is 5.5% in 2018. That’s the lowest percentage ever reported for daily backup. Wouldn’t it be nice if there were a program you could install on your computer that would back up all the data automatically?

Here’s how 2018 compares to 2008 for how often people back up all the data on their computers.

Computer Data Backup Frequency in 2008
Computer Data Backup Frequency in 2018

A lot has happened over the last 11 years in the world of computing, but at least people are taking backing up their computers a little more seriously. And that’s a good thing.

A Few Data Backup Facts

Each survey provides interesting insights into the attributes of backup fiends and backup slackers. Here are a few facts from the 2018 survey.

Men

  • 21% of American males have never backed up all the data on their computers.
  • 11% of American males, 18-34 years old, have never backed up all the data on their computers.
  • 33% of American males, 65 years and older, have never backed up all the data on their computers.

Women

  • 26% of American females have never backed up all the data on their computers.
  • 22% of American females, 18-34 years old, have never backed up all the data on their computers.
  • 36% of American females, 65 years and older, have never backed up all the data on their computers.

When we look at the four regions in the United States, we see that in 2018 the percentage of people who have backed up all the data on their computer at least once was about the same across regions. This was not the case back in 2012 as seen below:

Year   Northeast   South   Midwest   West
2012      67%        73%      65%      77%
2018      75%        78%      75%      76%

 

Looking Back

Here are links to our previous blog posts on our annual Backup Awareness Survey:

Survey Method:

The surveys cited in this post were conducted online within the United States by The Harris Poll on behalf of Backblaze as follows: June 5-7, 2018 among 2,035 U.S. adults, among whom 1,871 own a computer. May 19-23, 2017 among 2048 U.S. adults, May 13-17, 2016 among 2,012 U.S. adults, May 15-19, 2015 among 2,090 U.S. adults, June 2-4, 2014 among 2,037 U.S. adults, June 13–17, 2013 among 2,021 U.S. adults, May 31–June 4, 2012 among 2,209 U.S. adults, June 28–30, 2011 among 2,257 U.S. adults, June 3–7, 2010 among 2,071 U.S. adults, May 13–14, 2009 among 2,185 U.S. adults, and May 27–29, 2008 among 2,761 U.S. adults. In all surveys, respondents consisted of U.S. adult computer users (aged 18+). These online surveys were not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact Backblaze.

The 2018 Survey: Please note sample composition changed in the 2018 wave as new sample sources were introduced to ensure representativeness among all facets of the general population.

The post Computer Backup Awareness in 2018: Getting Better and Getting Worse appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

HackSpace magazine 8: Raspberry Pi <3 Arduino

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-8/

Arduino is officially brilliant. It’s the perfect companion for your Raspberry Pi, opening up new possibilities for robotics, drones and all sorts of physical computing projects. In HackSpace magazine issue 8, we’re taking a look at what’s going on on planet Arduino, and how it can make our world better.


This little board and its ecosystem are hugely important to the world of digital making. It’s affordable, it’s powerful, and it’s open hardware so you know that if you embed one of these in a project and the company goes bust tomorrow, the hardware will always be viable.

Arduino has helped power a new generation of digital makers, and now with a new team in charge, new boards and new software, it’s ready for the next generation.

Noisy toys

We get to speak to loads of fascinating people, but this month marks the first time we’ve ever met a science busker. Meet Stephen Summers, a former teacher who makes a mess with cornflour, water, and sound waves, all in the name of sharing the joy of physics.


Glass-blowing

While we love messing about with digital technologies, we’re also a big fan of good old-fashioned craft skills. And you can’t get much more old-fashioned than traditional glass-blowing. Join us as we attempt to turn red hot molten glass into a multicoloured object without burning ourselves or setting anything on fire.

Guitar synth

People are endlessly clever, inventive, and all-round brilliant. A fantastic example is Björk, the Icelandic musician whose work defies categorisation. Another is Matt Bradshaw, who has made a synthesiser that you play by strumming six metal strings with a plectrum to complete a circuit. Oh, and named it after Björk. Read all about it and get inspired to do something equally bonkers.


Machine learning

Do you have children? Do they leave the lights on all the time, causing you to shout, “THIS ISN’T BLACKPOOL FLAMING ILLUMINATIONS, YOU KNOW!” Well, now you can replace those children with an Arduino. With a bit of machine learning, the Arduino can train itself to turn the lights on and off at the right time, all the time. Plus they don’t cost as much as human children, so it’s a double win!

Dry ice cream

When the sun comes out in Blighty, it doesn’t hang around for long. So why wait for your domestic fridge to freeze your tasty dairy-based desserts, when you can add some solid carbon dioxide and freeze it in a flash? Follow our tutorial and you too can have tasty treats with the ironically warm glow that comes from using chemicals at -78°C.


And there’s more

We’ve filled the rest of the magazine with a robot orchestra, watch restoration, audio boards for Raspberry Pi, magical colour-changing wearables, and more. Get stuck in!



Get your copy of HackSpace magazine

If you like the sound of this month’s content, you can find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

And if you can’t get to the shops, fear not: you can subscribe from £4 an issue from our online shop. And if you’d rather try before you buy, you can always download the free PDF. Happy reading, and happy making!

The post HackSpace magazine 8: Raspberry Pi <3 Arduino appeared first on Raspberry Pi.

[$] Mentoring and diversity for Python

Post Syndicated from jake original https://lwn.net/Articles/757715/rss

A two-part session at the 2018 Python Language Summit tackled the core
developer diversity problem from two different angles. Victor Stinner
outlined some work he has been doing to mentor new developers on their path
toward joining the core development ranks; he has also been trying to
document that path. Mariatta Wijaya gave a very personal talk that
described the diversity problem while also providing some concrete action
items that the project and individuals could take to help make Python more
welcoming to minorities.
