
This IoT Pet Monitor barks back

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/iot-pet-monitor/

Jennifer Fox, founder of FoxBot Industries, uses a Raspberry Pi pet monitor to check the sound levels of her home while she is out, allowing her to keep track of when her dog Marley gets noisy or agitated, and to interact with the gorgeous furball accordingly.

Bark Back Project Demo

A quick overview and demo of the Bark Back, a project to monitor and interact with your pet. Check out the full tutorial here: https://learn.sparkfun.com/tutorials/bark-back-interactive-pet-monitor For any licensing requests please contact [email protected]

Marley, bark!

Using a Raspberry Pi 3, speakers, SparkFun’s MEMS microphone breakout board, and an analogue-to-digital converter (ADC), the IoT Pet Monitor is fairly easy to recreate, all thanks to Jennifer’s full tutorial on the FoxBot website.

Building the pet monitor

In a nutshell, once the Raspberry Pi and the appropriate bits and pieces are set up, you’ll need to sign up at CloudMQTT — it’s free if you select the Cute Cat account. CloudMQTT will create an invisible bridge between your home and wherever you are that isn’t home, so that you can check in on your pet monitor.

Screenshot CloudMQTT account set-up — IoT Pet Monitor Bark Back Raspberry Pi

Image c/o FoxBot Industries

Within the project code, you’ll be able to calculate the peak-to-peak amplitude of sound the microphone picks up. Then you can decide how noisy is too noisy when it comes to the occasional whine and bark of your beloved pup.
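As a rough sketch of that calculation (read_adc here is a stand-in for however your ADC library returns samples, and the threshold is an arbitrary starting point, not a value from the tutorial):

import random
import time

THRESHOLD = 4000  # "too noisy" cutoff in raw ADC units; tune for your dog

def read_adc():
    # Stand-in for the real ADC read; swap in your ADC library call here.
    return random.randint(0, 65535)

def peak_to_peak(duration=0.5):
    # Sample the microphone for a while and track the extremes.
    lo, hi = 65535, 0
    end = time.time() + duration
    while time.time() < end:
        sample = read_adc()
        lo = min(lo, sample)
        hi = max(hi, sample)
    return hi - lo

if peak_to_peak() > THRESHOLD:
    print("Too noisy!")  # play a sound from the preset list, report to CloudMQTT, etc.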

MEMS microphone breakout board — IoT Pet Monitor Bark Back Raspberry Pi

The MEMS microphone breakout board collects sound data and relays it back to the Raspberry Pi via the ADC.
Image c/o FoxBot Industries

Next you can import sounds to a preset song list that will be played back when the volume rises above your predefined threshold. As Jennifer states in the tutorial, the sounds can easily be recorded via apps such as Garageband, or even on your mobile phone.

Using the pet monitor

Whenever the Bark Back IoT Pet Monitor is triggered to play back audio, this information is fed to the CloudMQTT service, allowing you to see if anything is going on back home.

A sitting dog with a doll in its mouth — IoT Pet Monitor Bark Back Raspberry Pi

*incoherent coos of affection from Alex*
Image c/o FoxBot Industries

And as Jennifer recommends, an update of the project could include a camera or sensors to feed back more information about your home environment.

If you’ve created something similar, be sure to let us know in the comments. And if you haven’t, but you’re now planning to build your own IoT pet monitor, be sure to let us know in the comments. And if you don’t have a pet but just want to say hi…that’s right, be sure to let us know in the comments.

The post This IoT Pet Monitor barks back appeared first on Raspberry Pi.

Hacker House’s Zero W–powered automated gardener

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/hacker-house-automated-gardener/

Are the plants in your home or office looking somewhat neglected? Then build an automated gardener using a Raspberry Pi Zero W, with help from the team at Hacker House.

Make a Raspberry Pi Automated Gardener

See how we built it, including our materials, code, and supplemental instructions, on Hackster.io: https://www.hackster.io/hackerhouse/automated-indoor-gardener-a90907 With how busy our lives are, it’s sometimes easy to forget to pay a little attention to your thirsty indoor plants until it’s too late and you are left with a crusty pile of yellow carcasses.

Building an automated gardener

Tired of their plants looking a little too ‘crispy’, Hacker House have created an automated gardener using a Raspberry Pi Zero W alongside some 3D-printed parts, a 5v USB grow light, and a peristaltic pump.

Hacker House Automated Gardener Raspberry Pi

They designed and 3D printed a PLA casing for the project, allowing enough space within for the Raspberry Pi Zero W, the pump, and the added electronics including soldered wiring and two N-channel power MOSFETs. The MOSFETs serve to switch the light and the pump on and off.

Hacker House Automated Gardener Raspberry Pi

Due to the amount of power the light and pump need, the team replaced the Pi’s standard micro USB power supply with a 12v switching supply.

Coding an automated gardener

All the code for the project — a fairly basic Python script — is on the Hacker House GitHub repository. To fit it to your requirements, you may need to edit a few lines of the code, and Hacker House provides information on how to do this. You can also find more details of the build on the hackster.io project page.
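If you're curious what the switching side looks like before diving into the repository, here's a minimal sketch in Python; the pin numbers and timings are invented for illustration, not taken from the Hacker House script:

import time
import RPi.GPIO as GPIO

LIGHT_PIN = 23  # hypothetical GPIO pins driving the two MOSFET gates
PUMP_PIN = 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT)
GPIO.setup(PUMP_PIN, GPIO.OUT)

def water(seconds=5):
    # Pulse the peristaltic pump for a fixed dose.
    GPIO.output(PUMP_PIN, GPIO.HIGH)
    time.sleep(seconds)
    GPIO.output(PUMP_PIN, GPIO.LOW)

GPIO.output(LIGHT_PIN, GPIO.HIGH)  # grow light on
water()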

Hacker House Automated Gardener Raspberry Pi

While the project runs with preset timings, there’s no reason why you couldn’t upgrade it to be app-based, for example to set a watering schedule when you’re away on holiday.

To see more from the Hacker House team, be sure to follow them on YouTube. You can also check out some of their previous Raspberry Pi projects featured on our blog, such as the smartphone-connected door lock and gesture-controlled holographic visualiser.

Raspberry Pi and your home garden

Raspberry Pis make great babysitters for your favourite plants, both inside and outside your home. Here at Pi Towers, we have Bert, our Slack- and Twitter-connected potted plant who reminds us when he’s thirsty and in need of water.

Bert Plant on Twitter

I’m good. There’s plenty to drink!

And outside of the office, we’ve seen plenty of your vegetation-focused projects using Raspberry Pi for planting, monitoring or, well, commenting on social and political events within the media.

If you use a Raspberry Pi within your home gardening projects, we’d love to see how you’ve done it. So be sure to share a link with us either in the comments below, or via our social media channels.

 

The post Hacker House’s Zero W–powered automated gardener appeared first on Raspberry Pi.

[$] The rest of the 4.16 merge window

Post Syndicated from corbet original https://lwn.net/Articles/746791/rss

At the close of the 4.16 merge window, 11,746 non-merge changesets had been merged; that is 5,000 since last week’s summary. This merge window is thus a busy one, though not out of line with its predecessors — 4.14 had 11,500 changesets during its merge window, while 4.15 had 12,599. Quite a bit of that work is of the boring internal variety; over 600 of those changesets were device-tree updates, for example. But there was still a fair amount of interesting work merged in the second half of the 4.16 merge window; read on for the highlights.

Barbot 4: the bartending Grandfather clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/barbot-4/

Meet Barbot 4, the drink-dispensing Grandfather clock who knows when it’s time to party.

Barbot 4. Grandfather Time (first video of cocktail robot)

The first introduction to my latest barbot – this time made inside a grandfather clock. There is another video where I explain a bit about how it works, and I am happy to give more explanations. https://youtu.be/hdxV_KKH5MA This can make cocktails with up to 4 spirits and 4 mixers, and is controlled by voice, keyboard input, or a GUI, depending on which is easiest.

Barbot 4

Robert Prest’s Barbot 4 is a beverage dispenser loaded into an old Grandfather clock. There’s space in the back for your favourite spirits and mixers, and a Raspberry Pi controls servo motors that release the required measures of your favourite cocktail ingredients, according to preset recipes.

Barbot 4 Raspberry Pi drink-dispensing robot

The clock can hold four mixers and four spirits, and a human supervisor records these using Drinkydoodad, a friendly touchscreen interface. With information about its available ingredients and a library of recipes, Barbot 4 can create your chosen drink. Patrons control the system either with voice commands or with the touchscreen UI.
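To give a flavour of that recipe logic, here's a toy sketch; the ingredients, measures, and dispense routine are all invented for illustration, not taken from Robert's code:

RECIPES = {
    "gin and tonic": [("gin", 50), ("tonic", 100)],        # (ingredient, ml)
    "screwdriver": [("vodka", 50), ("orange juice", 100)],
}

# Which dispenser channel holds which bottle, as recorded via the touchscreen.
LOADED = {"gin": 0, "vodka": 1, "tonic": 4, "orange juice": 5}

def dispense(channel, ml):
    # Stand-in for the real servo control.
    print(f"channel {channel}: pouring {ml} ml")

def make(drink):
    for ingredient, ml in RECIPES[drink]:
        dispense(LOADED[ingredient], ml)

make("gin and tonic")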

Barbot 4 Raspberry Pi drink-dispensing robot

Robert has experimented with various components as this project has progressed. He has switched out peristaltic pumps in order to increase the flow of liquid, and adjusted the motors so that they can handle carbonated beverages. In the video, he highlights other quirks he hopes to address, like the fact that drinks tend to splash during pouring.

Barbot 4 Raspberry Pi drink-dispensing robot

As well as a Raspberry Pi, the build uses Arduinos. These control the light show, which can be adjusted according to your party-time lighting preferences.

An explanation of the build accompanies Robert’s second video. We’re hoping he’ll also release more details of Barbot 3, his suitcase-sized, portable Barbot, and of Doom Shot Bot, a bottle topper that pours a shot every time you die in the game DoomZ.

Automated bartending

Barbot 4 isn’t the first cocktail-dispensing Raspberry Pi bartender we’ve seen, though we have to admit that fitting it into a grandfather clock definitely makes it one of the quirkiest.

If you’ve built a similar project using a Raspberry Pi, we’d love to see it. Share your project in the comments, or tell us what drinks you’d ask Barbot to mix if you had your own at home.

The post Barbot 4: the bartending Grandfather clock appeared first on Raspberry Pi.

[$] 4.16 Merge window part 1

Post Syndicated from corbet original https://lwn.net/Articles/746129/rss

As of this writing, just over 6,700 non-merge changesets have been pulled
into the mainline repository for the 4.16 development cycle. Given that
there are a number of significant trees yet to be pulled, the early
indications are that 4.16 will be yet another busy development cycle. What
follows is a summary of the significant changes merged in the first half of
this merge window.

Game night 2: Detention, Viatoree, Paletta

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/16/game-night-2-detention-viatoree-paletta/

Game night continues with:

  • Detention
  • Viatoree
  • Paletta

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Detention

longish · inventory horror · jan 2017 · lin/mac/win · $12 on steam · website

“Inventory horror” is a hell of a genre.

I think this one came from a Twitter thread where glip asked for indie horror recommendations. It’s apparently well-known enough to have a Wikipedia article, but I hadn’t heard of it before.

I love love love the aesthetic here. It’s obviously 2Dish from a side view (though there’s plenty of parallax in a lot of places), and it’s all done with… papercraft? I think of it as papercraft. Everything is built out of painted chunks that look like they were cut out of paper. It’s most obvious when watching the protagonist move around; her legs and skirt swivel as she walks.

Less obvious are the occasional places where tiny details repeat in the background because a paper cutout was reused. I don’t bring that up as a dig on the art; on the contrary, I really liked noticing that once or twice. It made the world feel like it was made with a tileset (albeit with very large chunky tiles), like it’s slightly artificial. I’m used to seeing sidescrollers made from tiles, of course, but the tiles are usually colorful and cartoony pixel art; big gritty full-color tiles are unusual and eerie.

And that’s a good thing in a horror game! Detention’s setting is already slightly unreal, and it’s made all the more so by my Western perspective: it takes place in a Taiwanese school in the 60’s, a time when Taiwan was apparently under martial law. The Steam page tells you this, but I didn’t even know that much when we started playing, so I’d effectively been dropped somewhere on the globe and left to collect the details myself. Even figuring out we were in Taiwan (rather than mainland China) felt like an insight.

Thinking back, it was kind of a breath of fresh air. Games can be pretty heavy-handed about explaining the setting, but I never got that feeling from Detention. There’s more than enough context to get what’s going on, but there are no “stop and look at the camera while monologuing some exposition” moments. The developers are based in Taiwan, so it’s possible the setting is plenty familiar to them, and my perception of it is a complete accident. Either way, it certainly made an impact. Death of the author and whatnot, I suppose.

One thing in particular that stood out: none of the Chinese text in the environment is directly translated. The protagonist’s thoughts still give away what it says — “this is the nurse’s office” and the like — but that struck me as pretty different from simply repeating the text in English as though I were reading a sign in an RPG. The text is there, perfectly legible, but I can’t read it; I can only ask the protagonist to read it and offer her thoughts. It drives home that I’m experiencing the world through the eyes of the protagonist, who is their own person with their own impression of everything. Again, this is largely an emergent property of the game’s being designed in a culture that is not mine, but I’m left wondering how much thought went into this style of localization.

The game itself sees you wandering through a dark and twisted version of the protagonist’s school, collecting items and solving puzzles with them. There’s no direct combat, though some places feature a couple varieties of spirits called lingered which you have to carefully avoid. As the game progresses, the world starts to break down, alternating between increasingly abstract and increasingly concrete as we find out who the protagonist is and why she’s here.

The payoff is very personal and left a lasting impression… though as I look at the Wikipedia page now, it looks like the ending we got was the non-canon bad ending?! Well, hell. The bad ending is still great, then.

The whole game has a huge Silent Hill vibe, only without the combat and fog. Frankly, the genre might work better without combat; personal demons are more intimidating and meaningful when you can’t literally shoot them with a gun until they’re dead.

FINAL SCORE: 拾

Viatoree

short · platformer · sep 2013 · win · free on itch

I found this because @itchio tweeted about it, and the phrase “atmospheric platform exploration game” is the second most beautiful sequence of words in the English language.

The first paragraph on the itch.io page tells you the setup. That paragraph also contains more text than the entire game. In short: there are five things, and you need to find them. You can walk, jump, and extend your arms straight up to lift yourself to the ceiling. That’s it. No enemies, no shooting, no NPCs (more or less).

The result is, indeed, an atmospheric platform exploration game. The foreground is entirely 1-bit pixel art, save for the occasional white pixel to indicate someone’s eyes, and the background is only a few shades of the same purple hue. The game becomes less about playing and more about just looking at the environmental detail, appreciating how much texture the game manages to squeeze out of chunky colorless pixels. The world is still alive, too, much more so than most platformers; tiny critters appear here and there, doing some wandering of their own, completely oblivious to you.

The game is really short, but it… just… makes me happy. I’m happy that this can exist, that not only is it okay for someone to make a very compact and short game, but that the result can still resonate with me. Not everything needs to be a sprawling epic or ask me to dedicate hours of time. It takes a few tiny ideas, runs with them, does what it came to do, and ends there. I love games like this.

That sounds silly to write out, but it’s been hard to get into my head! I do like experimenting, but I also feel compelled to reach for the grandiose, and grandiose experiment sounds more like mad science than creative exploration. For whatever reason, Viatoree convinced me that it’s okay to do a small thing, in a way that no other jam game has. It was probably the catalyst that led me to make Roguelike Simulator, and I thank it for that.

Unfortunately, we collected four of the five macguffins before hitting upon a puzzle we couldn’t make heads or tails of. After about ten minutes of fruitless searching, I decided to abandon this one unfinished, rather than bore my couch partner to tears. Maybe I’ll go take another stab at it after I post this.

FINAL SCORE: ●●●●○

Paletta

medium · puzzle story · nov 2017 · win · free on itch

Paletta, another RPG Maker work, won second place in the month-long Indie Game Maker Contest 2017. Nice! Apparently MOOP came in fourth in the same jam; also nice! I guess that’s why both of them ended up on the itch front page.

The game is set in a world drained of color, and you have to go restore it. Each land contains one lost color, and each color gives you a corresponding spell, which is generally used for some light puzzle-solving in further lands. It’s a very cute and light-hearted game, and it actually does an impressive job of obscuring its RPG Maker roots.

The world feels a little small to me, despite having fairly spacious maps. The progression is pretty linear: you enter one land, talk to a small handful of NPCs, solve the one puzzle, get the color, and move on. I think all the areas were continuously connected, too, which may have thrown me off a bit — these areas are described as though they were vast regions, but they’re all a hundred feet wide and nestled right next to each other.

I love playing with color as a concept, and I wish the game had run further with it somehow. Rescuing a color does add some color back to the world, but at times it seemed like the color that reappeared was somewhat arbitrary? It’s not like you rescue green and now all the green is back. Thinking back on it now, I wonder if each rescued color actually changed a fixed set of sprites from gray to colorized? But it’s been a month (oops) and now I’m not sure.

I’m not trying to pick on the authors for the brevity of their jam game, which is also the first game they’ve ever finished. I enjoyed playing it and found it plenty charming! It just happens that this time, what left the biggest impression on me was a nebulous feeling that something was missing. I think that’s still plenty important to ponder.

FINAL SCORE: ❤️💛💚💙💜

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly to the API using the programming language of their choice; others want to solve a specific problem using ready-made software already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today; diverse technologies that customers are using day in and out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • Create an automatically “resetting” demo environment for the sales team.
  • Set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • Learn the basics of OAUTH and web single sign-on (SSO).
  • Archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and doing effective Google searches to fix problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language, and a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:

There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?

The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we”. We are truly a team.

We are honest with each other and our customers, and we communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company, and we care deeply about their success.

    If this all sounds like you:

    1. Send an email to [email protected] with the position in the subject line.
    2. Tell us a bit about your Sales Engineering experience.
    3. Include your resume.

    The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Random with care

    Post Syndicated from Eevee original https://eev.ee/blog/2018/01/02/random-with-care/

    Hi! Here are a few loose thoughts about picking random numbers.

    A word about crypto

    DON’T ROLL YOUR OWN CRYPTO

    This is all aimed at frivolous pursuits like video games. Hell, even video games where money is at stake should be deferring to someone who knows way more than I do. Otherwise you might find out that your deck shuffles in your poker game are woefully inadequate and some smartass is cheating you out of millions. (If your random number generator has fewer than 226 bits of state, it can’t even generate every possible shuffling of a deck of cards!)

    Use the right distribution

    Most languages have a random number primitive that spits out a number uniformly in the range [0, 1), and you can go pretty far with just that. But beware a few traps!

    Random pitches

    Say you want to pitch up a sound by a random amount, perhaps up to an octave. Your audio API probably has a way to do this that takes a pitch multiplier, where I say “probably” because that’s how the only audio API I’ve used works.

    Easy peasy. If 1 is unchanged and 2 is pitched up by an octave, then all you need is rand() + 1. Right?

    No! Pitch is exponential — within the same octave, the “gap” between C and C♯ is about half as big as the gap between B and the following C. If you pick a pitch multiplier uniformly, you’ll have a noticeable bias towards the higher pitches.

    One octave corresponds to a doubling of pitch, so if you want to pick a random note, you want 2 ** rand().

    Random directions

    For two dimensions, you can just pick a random angle with rand() * TAU.

    If you want a vector rather than an angle, or if you want a random direction in three dimensions, it’s a little trickier. You might be tempted to just pick a random point where each component is rand() * 2 - 1 (ranging from −1 to 1), but that’s not quite right. A direction is a point on the surface (or, equivalently, within the volume) of a sphere, and picking each component independently produces a point within the volume of a cube; the result will be a bias towards the corners of the cube, where there’s much more extra volume beyond the sphere.

    No? Well, just trust me. I don’t know how to make a diagram for this.

    Anyway, you could use the Pythagorean theorem a few times and make a huge mess of things, or it turns out there’s a really easy way that even works for two or four or any number of dimensions. You pick each coordinate from a Gaussian (normal) distribution, then normalize the resulting vector. In other words, using Python’s random module:

    import math
    import random

    def random_direction():
        x = random.gauss(0, 1)
        y = random.gauss(0, 1)
        z = random.gauss(0, 1)
        r = math.sqrt(x*x + y*y + z*z)
        return x/r, y/r, z/r
    

    Why does this work? I have no idea!

    Note that it is possible to get zero (or close to it) for every component, in which case the result is nonsense. You can re-roll all the components if necessary; just check that the magnitude (or its square) is less than some epsilon, which is equivalent to throwing away a tiny sphere at the center and shouldn’t affect the distribution.
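    For instance, a minimal sketch of that re-roll guard, with an arbitrary epsilon:

    import math
    import random

    def random_direction():
        while True:
            x = random.gauss(0, 1)
            y = random.gauss(0, 1)
            z = random.gauss(0, 1)
            # Re-roll if the vector is too close to zero to normalize safely.
            if x*x + y*y + z*z > 1e-12:
                break
        r = math.sqrt(x*x + y*y + z*z)
        return x/r, y/r, z/r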

    Beware Gauss

    Since I brought it up: the Gaussian distribution is a pretty nice one for choosing things in some range, where the middle is the common case and should appear more frequently.

    That said, I never use it, because it has one annoying drawback: the Gaussian distribution has no minimum or maximum value, so you can’t really scale it down to the range you want. In theory, you might get any value out of it, with no limit on scale.

    In practice, it’s astronomically rare to actually get such a value out. I did a hundred million trials just to see what would happen, and the largest value produced was 5.8.

    But, still, I’d rather not knowingly put extremely rare corner cases in my code if I can at all avoid it. I could clamp the ends, but that would cause unnatural bunching at the endpoints. I could reroll if I got a value outside some desired range, but I prefer to avoid rerolling when I can, too; after all, it’s still (astronomically) possible to have to reroll for an indefinite amount of time. (Okay, it’s really not, since you’ll eventually hit the period of your PRNG. Still, though.) I don’t bend over backwards here — I did just say to reroll when picking a random direction, after all — but when there’s a nicer alternative I’ll gladly use it.

    And lo, there is a nicer alternative! Enter the beta distribution. It always spits out a number in [0, 1], so you can easily swap it in for the standard normal function, but it takes two “shape” parameters α and β that alter its behavior fairly dramatically.

    With α = β = 1, the beta distribution is uniform, i.e. no different from rand(). As α increases, the distribution skews towards the right, and as β increases, the distribution skews towards the left. If α = β, the whole thing is symmetric with a hump in the middle. The higher either one gets, the more extreme the hump (meaning that value is far more common than any other). With a little fiddling, you can get a number of interesting curves.

    Screenshots don’t really do it justice, so there’s a little Wolfram widget (embedded in the original post) that lets you play with α and β live.

    Note that if α = 1, then 1 is a possible value; if β = 1, then 0 is a possible value. You probably want them both greater than 1, which clamps the endpoints to zero.

    Also, it’s possible to have either α or β or both be less than 1, but this creates very different behavior: the corresponding endpoints become poles.

    Anyway, something like α = β = 3 is probably close enough to normal for most purposes but already clamped for you. And you could easily replicate something like, say, NetHack’s incredibly bizarre rnz function.
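    Conveniently, Python’s random module ships this as random.betavariate, so no extra dependencies are needed. A tiny sketch of using it as a pre-clamped stand-in for a normal roll (the α = β = 3 and the output range are just example values):

    import random

    def beta_roll(lo, hi, alpha=3.0, beta=3.0):
        # Beta(3, 3) has a hump in the middle and hard endpoints,
        # so it acts like a clamped normal scaled into [lo, hi].
        return lo + (hi - lo) * random.betavariate(alpha, beta)

    print(beta_roll(0.5, 2.0))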

    Random frequency

    Say you want some event to have an 80% chance to happen every second. You (who am I kidding, I) might be tempted to do something like this:

    if random() < 0.8 * dt:
        do_thing()
    

    In an ideal world, dt is always the same and is equal to 1 / f, where f is the framerate. Replace that 80% with a variable, say P, and every tic you have a P / f chance to do the… whatever it is.

    Each second, f tics pass, so you’ll make this check f times. The chance that any check succeeds is the inverse of the chance that every check fails, which is \(1 - \left(1 - \frac{P}{f}\right)^f\).

    For P of 80% and a framerate of 60, that’s a total probability of 55.3%. Wait, what?

    Consider what happens if the framerate is 2. On the first tic, you roll 0.4 twice — but probabilities are combined by multiplying, and splitting work up by dt only works for additive quantities. You lose some accuracy along the way. If you’re dealing with something that multiplies, you need an exponent somewhere.
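    If you do want per-tic checks that compose correctly across framerates, here’s a minimal sketch of the exponent version, where P is the desired chance per second:

    # Chance that the event fires during a tic of length dt, chosen so that
    # f checks per second multiply back out to exactly P.
    if random() < 1 - (1 - P) ** dt:
        do_thing()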

    But in this case, maybe you don’t want that at all. Each separate roll you make might independently succeed, so it’s possible (but very unlikely) that the event will happen 60 times within a single second! Or 200 times, if that’s someone’s framerate.

    If you explicitly want something to have a chance to happen on a specific interval, you have to check on that interval. If you don’t have a gizmo handy to run code on an interval, it’s easy to do yourself with a time buffer:

    timer += dt
    # here, 1 is the "every 1 seconds"
    while timer > 1:
        timer -= 1
        if random() < 0.8:
            do_thing()
    

    Using while means rolls still happen even if you somehow skipped over an entire second.

    (For the curious, and the nerds who already noticed: the expression \(1 - \left(1 - \frac{P}{f}\right)^f\) converges to a specific value! As the framerate increases, it becomes a better and better approximation for \(1 - e^{-P}\), which for the example above is 0.551. Hey, 60 fps is pretty accurate — it’s just accurately representing something nowhere near what I wanted. Er, you wanted.)

    Rolling your own

    Of course, you can fuss with the classic [0, 1] uniform value however you want. If I want a bias towards zero, I’ll often just square it, or multiply two of them together. If I want a bias towards one, I’ll take a square root. If I want something like a Gaussian/normal distribution, but with clearly-defined endpoints, I might add together n rolls and divide by n. (The normal distribution is just what you get if you roll infinite dice and divide by infinity!)
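    Here’s a quick sketch of those tricks, using nothing but the standard library:

    import random

    r = random.random()
    biased_low = r * r           # squaring biases towards 0
    biased_high = r ** 0.5       # square root biases towards 1
    product = random.random() * random.random()  # stronger bias towards 0

    # Gaussian-ish with hard endpoints: average several rolls.
    n = 4
    bell_ish = sum(random.random() for _ in range(n)) / n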

    It’d be nice to be able to understand exactly what this will do to the distribution. Unfortunately, that requires some calculus, which this post is too small to contain, and which I didn’t even know much about myself until I went down a deep rabbit hole while writing, and which in many cases is straight up impossible to express directly.

    Here’s the non-calculus bit. A source of randomness is often graphed as a PDF — a probability density function. You’ve almost certainly seen a bell curve graphed, and that’s a PDF. They’re pretty nice, since they do exactly what they look like: they show the relative chance that any given value will pop out. On a bog standard bell curve, there’s a peak at zero, and of course zero is the most common result from a normal distribution.

    (Okay, actually, since the results are continuous, it’s vanishingly unlikely that you’ll get exactly zero — but you’re much more likely to get a value near zero than near any other number.)

    For the uniform distribution, which is what a classic rand() gives you, the PDF is just a straight horizontal line — every result is equally likely.


    If there were a calculus bit, it would go here! Instead, we can cheat. Sometimes. Mathematica knows how to work with probability distributions in the abstract, and there’s a free web version you can use. For the example of squaring a uniform variable, try this out:

    PDF[TransformedDistribution[u^2, u \[Distributed] UniformDistribution[{0, 1}]], u]
    

    (The \[Distributed] is a funny tilde that doesn’t exist in Unicode, but which Mathematica uses as a first-class operator. Also, press Shift+Enter to evaluate the line.)

    This will tell you that the distribution is… \(\frac{1}{2\sqrt{u}}\). Weird! You can plot it:

    Plot[%, {u, 0, 1}]
    

    (The % refers to the result of the last thing you did, so if you want to try several of these, you can just do Plot[PDF[…], u] directly.)

    The resulting graph shows that numbers around zero are, in fact, vastly — infinitely — more likely than anything else.

    What about multiplying two together? I can’t figure out how to get Mathematica to understand this, but a great amount of digging revealed that the answer is -ln x, and from there you can plot them both on Wolfram Alpha. They’re similar, though squaring has a much better chance of giving you high numbers than multiplying two separate rolls — which makes some sense, since if either of two rolls is a low number, the product will be even lower.

    What if you know the graph you want, and you want to figure out how to play with a uniform roll to get it? Good news! That’s a whole thing called inverse transform sampling. All you have to do is take an integral. Good luck!


    This is all extremely ridiculous. New tactic: Just Simulate The Damn Thing. You already have the code; run it a million times, make a histogram, and tada, there’s your PDF. That’s one of the great things about computers! Brute-force numerical answers are easy to come by, so there’s no excuse for producing something like rnz. (Though, be sure your histogram has sufficiently narrow buckets — I tried plotting one for rnz once and the weird stuff on the left side didn’t show up at all!)
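    A minimal version of that brute-force approach, for the squared-uniform example from earlier:

    import random

    BUCKETS = 20
    TRIALS = 1_000_000
    histogram = [0] * BUCKETS

    for _ in range(TRIALS):
        value = random.random() ** 2  # the transformation under test
        histogram[int(value * BUCKETS)] += 1

    for i, count in enumerate(histogram):
        print(f"{i / BUCKETS:4.2f} | {'#' * (count * 200 // TRIALS)}")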

    By the way, I learned something from futzing with Mathematica here! Taking the square root (to bias towards 1) gives a PDF that’s a straight diagonal line, nothing like the hyperbola you get from squaring (to bias towards 0). How do you get a straight line the other way? Surprise: \(1 - \sqrt{1 - u}\).

    Okay, okay, here’s the actual math

    I don’t claim to have a very firm grasp on this, but I had a hell of a time finding it written out clearly, so I might as well write it down as best I can. This was a great excuse to finally set up MathJax, too.

    Say \(u(x)\) is the PDF of the original distribution and \(u\) is a representative number you plucked from that distribution. For the uniform distribution, \(u(x) = 1\). Or, more accurately,

    $$
    u(x) = \begin{cases}
    1 & \text{ if } 0 \le x \lt 1 \\
    0 & \text{ otherwise }
    \end{cases}
    $$

    Remember that \(x\) here is a possible outcome you want to know about, and the PDF tells you the relative probability that a roll will be near it. This PDF spits out 1 for every \(x\), meaning every number between 0 and 1 is equally likely to appear.

    We want to do something to that PDF, which creates a new distribution, whose PDF we want to know. I’ll use my original example of \(f(u) = u^2\), which creates a new PDF \(v(x)\).

    The trick is that we need to work in terms of the cumulative distribution function for \(u\). Where the PDF gives the relative chance that a roll will be (“near”) a specific value, the CDF gives the relative chance that a roll will be less than a specific value.

    The conventions for this seem to be a bit fuzzy, and nobody bothers to explain which ones they’re using, which makes this all the more confusing to read about… but let’s write the CDF with a capital letter, so we have \(U(x)\). In this case, \(U(x) = x\), a straight 45° line (at least between 0 and 1). With the definition I gave, this should make sense. At some arbitrary point like 0.4, the value of the PDF is 1 (0.4 is just as likely as anything else), and the value of the CDF is 0.4 (you have a 40% chance of getting a number from 0 to 0.4).

    Calculus ahoy: the PDF is the derivative of the CDF, which means it measures the slope of the CDF at any point. For \(U(x) = x\), the slope is always 1, and indeed \(u(x) = 1\). See, calculus is easy.

    Okay, so, now we’re getting somewhere. What we want is the CDF of our new distribution, \(V(x)\). The CDF is defined as the probability that a roll \(v\) will be less than \(x\), so we can literally write:

    $$V(x) = P(v \le x)$$

    (This is why we have to work with CDFs, rather than PDFs — a PDF gives the chance that a roll will be “nearby,” whatever that means. A CDF is much more concrete.)

    What is \(v\), exactly? We defined it ourselves; it’s the do something applied to a roll from the original distribution, or \(f(u)\).

    $$V(x) = P\!\left(f(u) \le x\right)$$

    Now the first tricky part: we have to solve that inequality for \(u\), which means we have to do something, backwards to \(x\).

    $$V(x) = P\!\left(u \le f^{-1}(x)\right)$$

    Almost there! We now have a probability that \(u\) is less than some value, and that’s the definition of a CDF!

    $$V(x) = U\!\left(f^{-1}(x)\right)$$

    Hooray! Now to turn these CDFs back into PDFs, all we need to do is differentiate both sides and use the chain rule. If you never took calculus, don’t worry too much about what that means!

    $$v(x) = u\!\left(f^{-1}(x)\right)\left|\frac{d}{dx}f^{-1}(x)\right|$$

    Wait! Where did that absolute value come from? It takes care of whether \(f(x)\) increases or decreases. It’s the least interesting part here by far, so, whatever.

    There’s one more magical part here when using the uniform distribution — \(u(\dots)\) is always equal to 1, so that entire term disappears! (Note that this only works for a uniform distribution with a width of 1; PDFs are scaled so the entire area under them sums to 1, so if you had a rand() that could spit out a number between 0 and 2, the PDF would be \(u(x) = \frac{1}{2}\).)

    $$v(x) = \left|\frac{d}{dx}f^{-1}(x)\right|$$

    So for the specific case of modifying the output of rand(), all we have to do is invert, then differentiate. The inverse of \(f(u) = u^2\) is \(f^{-1}(x) = \sqrt{x}\) (no need for a ± since we’re only dealing with positive numbers), and differentiating that gives \(v(x) = \frac{1}{2\sqrt{x}}\). Done! This is also why square root comes out nicer; inverting it gives \(x^2\), and differentiating that gives \(2x\), a straight line.

    Incidentally, that method for turning a uniform distribution into any distribution — inverse transform sampling — is pretty much the same thing in reverse: integrate, then invert. For example, when I saw that taking the square root gave \(v(x) = 2x\), I naturally wondered how to get a straight line going the other way, \(v(x) = 2 - 2x\). Integrating that gives \(2x - x^2\), and then you can use the quadratic formula (or just ask Wolfram Alpha) to solve \(2x - x^2 = u\) for \(x\) and get \(f(u) = 1 - \sqrt{1 - u}\).
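    And since the theme of this post is to Just Simulate The Damn Thing, a quick empirical check of that last result:

    import random

    # f(u) = 1 - sqrt(1 - u) should produce the density 2 - 2x:
    # lots of values near 0, almost none near 1.
    samples = [1 - (1 - random.random()) ** 0.5 for _ in range(1_000_000)]
    below_half = sum(s < 0.5 for s in samples) / len(samples)
    print(below_half)  # ~0.75, the area under 2 - 2x between 0 and 0.5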

    Multiplying two rolls is a bit more complicated; you have to write out the CDF as an integral, and you end up doing a double integral and wow it’s a mess. The only thing I’ve retained is that you do a division somewhere, which then gets integrated, and that’s why it ends up as \(-\ln x\).

    And that’s quite enough of that! (Okay but having math in my blog is pretty cool and I will definitely be doing more of this, sorry, not sorry.)

    Random vs varied

    Sometimes, random isn’t actually what you want. We tend to use the word “random” casually to mean something more like chaotic, i.e., with no discernible pattern. But that’s not really random. In fact, given how good humans can be at finding incidental patterns, they aren’t all that unlikely! Consider that when you roll two dice, they’ll come up either the same or only one apart almost half the time. Coincidence? Well, yes.

    If you ask for randomness, you’re saying that any outcome — or series of outcomes — is acceptable, including five heads in a row or five tails in a row. Most of the time, that’s fine. Some of the time, it’s less fine, and what you really want is variety. Here are a couple examples and some fairly easy workarounds.

    NPC quips

    The nature of games is such that NPCs will eventually run out of things to say, at which point further conversation will give the player a short brush-off quip — a slight nod from the designer to the player that, hey, you hit the end of the script.

    Some NPCs have multiple possible quips and will give one at random. The trouble with this is that it’s very possible for an NPC to repeat the same quip several times in a row before abruptly switching to another one. With only a few options to choose from, getting the same option twice or thrice (especially across an entire game, which may have numerous NPCs) isn’t all that unlikely. The notion of an NPC quip isn’t very realistic to start with, but having someone repeat themselves and then abruptly switch to something else is especially jarring.

    The easy fix is to show the quips in order! Paradoxically, this is more consistently varied than choosing at random — the original “order” is likely to be meaningless anyway, and it already has the property that the same quip can never appear twice in a row.

    If you like, you can shuffle the list of quips every time you reach the end, but take care here — it’s possible that the last quip in the old order will be the same as the first quip in the new order, so you may still get a repeat. (Of course, you can just check for this case and swap the first quip somewhere else if it bothers you.)

    That last behavior is, in fact, the canonical way that Tetris chooses pieces — the game simply shuffles a list of all 7 pieces, gives those to you in shuffled order, then shuffles them again to make a new list once it’s exhausted. There’s no avoidance of duplicates, though, so you can still get two S blocks in a row, or even two S and two Z all clumped together, but no more than that. Some Tetris variants take other approaches, such as actively avoiding repeats even several pieces apart or deliberately giving you the worst piece possible.
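    A minimal shuffle-bag sketch, including the optional guard against a repeat across the reshuffle boundary:

    import random

    class ShuffleBag:
        def __init__(self, items):
            self.items = list(items)
            self.queue = []
            self.last = None

        def next(self):
            if not self.queue:
                self.queue = self.items[:]
                random.shuffle(self.queue)
                # Optional: don't repeat the last item across the reshuffle.
                # (pop() takes from the end, so queue[-1] is up next.)
                if len(self.queue) > 1 and self.queue[-1] == self.last:
                    self.queue[0], self.queue[-1] = self.queue[-1], self.queue[0]
            self.last = self.queue.pop()
            return self.last

    bag = ShuffleBag(["quip A", "quip B", "quip C"])
    print([bag.next() for _ in range(6)])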

    Random drops

    Random drops are often implemented as a flat chance each time. Maybe enemies have a 5% chance to drop health when they die. Legally speaking, over the long term, a player will see health drops for about 5% of enemy kills.

    Over the short term, they may be desperate for health and not survive to see the long term. So you may want to put a thumb on the scale sometimes. Games in the Metroid series, for example, have a somewhat infamous bias towards whatever kind of drop they think you need — health if your health is low, missiles if your missiles are low.

    I can’t give you an exact approach to use, since it depends on the game and the feeling you’re going for and the variables at your disposal. In extreme cases, you might want to guarantee a health drop from a tough enemy when the player is critically low on health. (Or if you’re feeling particularly evil, you could go the other way and deny the player health when they most need it…)
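    As one possible shape for that thumb on the scale (all numbers invented for illustration), here's a weighted drop table that leans towards whatever the player is short on:

    import random

    def pick_drop(health_fraction, ammo_fraction):
        # Base weight of 5, growing to 30 as the resource runs out.
        weights = {
            "nothing": 70,
            "health": 5 + 25 * (1 - health_fraction),
            "ammo": 5 + 25 * (1 - ammo_fraction),
        }
        roll = random.random() * sum(weights.values())
        for drop, weight in weights.items():
            roll -= weight
            if roll < 0:
                return drop

    print(pick_drop(health_fraction=0.2, ammo_fraction=1.0))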

    The problem becomes a little different, and worse, when the event that triggers the drop is relatively rare. The pathological case here would be something like a raid boss in World of Warcraft, which requires hours of effort from a coordinated group of people to defeat, and which has some tiny chance of dropping a good item that will go to only one of those people. This is why I stopped playing World of Warcraft at 60.

    Dialing it back a little bit gives us Enter the Gungeon, a roguelike where each room is a set of encounters and each floor only has a dozen or so rooms. Initially, you have a 1% chance of getting a reward after completing a room — but every time you complete a room and don’t get a reward, the chance increases by 9%, up to a cap of 80%. Once you get a reward, the chance resets to 1%.

    The natural question is: how frequently, exactly, can a player expect to get a reward? We could do math, or we could Just Simulate The Damn Thing.

    from collections import Counter
    import random
    
    histogram = Counter()
    
    TRIALS = 1000000
    chance = 1
    rooms_cleared = 0
    rewards_found = 0
    while rewards_found < TRIALS:
        rooms_cleared += 1
        if random.random() * 100 < chance:
            # Reward!
            rewards_found += 1
            histogram[rooms_cleared] += 1
            rooms_cleared = 0
            chance = 1
        else:
            chance = min(80, chance + 9)
    
    for gaps, count in sorted(histogram.items()):
        print(f"{gaps:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
    
      1 |   0.98%
      2 |   9.91% #########
      3 |  17.00% ################
      4 |  20.23% ####################
      5 |  19.21% ###################
      6 |  15.05% ###############
      7 |   9.69% #########
      8 |   5.07% #####
      9 |   2.09% ##
     10 |   0.63%
     11 |   0.12%
     12 |   0.03%
     13 |   0.00%
     14 |   0.00%
     15 |   0.00%
    

    We’ve got kind of a hilly distribution, skewed to the left, which is up in this histogram. Most of the time, a player should see a reward every three to six rooms, which is maybe twice per floor. It’s vanishingly unlikely to go through a dozen rooms without ever seeing a reward, so a player should see at least one per floor.

    Of course, this simulated a single continuous playthrough; when starting the game from scratch, your chance at a reward always starts fresh at 1%, the worst it can be. If you want to know about how many rewards a player will get on the first floor, hey, Just Simulate The Damn Thing.

      0 |   0.01%
      1 |  13.01% #############
      2 |  56.28% ########################################################
      3 |  27.49% ###########################
      4 |   3.10% ###
      5 |   0.11%
      6 |   0.00%
    

    Cool. Though, that’s assuming exactly 12 rooms; it might be worth changing that to pick at random in a way that matches the level generator.

    (Enter the Gungeon does some other things to skew probability, which is very nice in a roguelike where blind luck can make or break you. For example, if you kill a boss without having gotten a new gun anywhere else on the floor, the boss is guaranteed to drop a gun.)

    Critical hits

    I suppose this is the same problem as random drops, but backwards.

    Say you have a battle sim where every attack has a 6% chance to land a devastating critical hit. Presumably the same rules apply to both the player and the AI opponents.

    Consider, then, that the AI opponents have exactly the same 6% chance to ruin the player’s day. Consider also that this gives them an 0.4% chance to critical hit twice in a row. 0.4% doesn’t sound like much, but across an entire playthrough, it’s not unlikely that a player might see it happen and find it incredibly annoying.

    Perhaps it would be worthwhile to explicitly forbid AI opponents from getting consecutive critical hits.
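    A minimal sketch of that rule, using the 6% figure from above (note that the guard nudges the AI's effective crit rate slightly below 6%):

    import random

    CRIT_CHANCE = 0.06

    def make_crit_roller(forbid_consecutive):
        last_was_crit = False
        def roll():
            nonlocal last_was_crit
            if forbid_consecutive and last_was_crit:
                last_was_crit = False
                return False
            last_was_crit = random.random() < CRIT_CHANCE
            return last_was_crit
        return roll

    ai_crit = make_crit_roller(forbid_consecutive=True)       # AI opponents
    player_crit = make_crit_roller(forbid_consecutive=False)  # players keep pure rolls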

    In conclusion

    An emerging theme here has been to Just Simulate The Damn Thing. So consider Just Simulating The Damn Thing. Even a simple change to a random value can do surprising things to the resulting distribution, so unless you feel like differentiating the inverse function of your code, maybe test out any non-trivial behavior and make sure it’s what you wanted. Probability is hard to reason about.

    Kernel prepatch 4.15-rc3

    Post Syndicated from corbet original https://lwn.net/Articles/741066/rss

    The 4.15-rc3 kernel prepatch is out. “I’m not thrilled about how big the early 4.15 rc’s are, but rc3 is often the biggest rc because it’s still fairly early in the calming-down period, and yet people have had some time to start finding problems. That said, this rc3 is big even by rc3 standards. Not good.” 489 changesets were merged since 4.15-rc2.

    Is blockchain a security topic? (Opensource.com)

    Post Syndicated from jake original https://lwn.net/Articles/740929/rss

    At Opensource.com, Mike Bursell looks at blockchain security from the angle of trust. Unlike cryptocurrencies, which are typically pseudonymous, other kinds of blockchains will require mapping users to real-life identities; that raises the trust issue.

    “What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of über-trusted entities. Combinations of these schemes may also work.

    If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL [in real life] trust relationships that you have with the entities that are being represented in your system.

    And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed.”

    Implementing Canary Deployments of AWS Lambda Functions with Alias Traffic Shifting

    Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/

    This post courtesy of Ryan Green, Software Development Engineer, AWS Serverless

    The concepts of blue/green and canary deployments have been around for a while now and have been well-established as best-practices for reducing the risk of software deployments.

    In a traditional, horizontally scaled application, copies of the application code are deployed to multiple nodes (instances, containers, on-premises servers, etc.), typically behind a load balancer. In these applications, deploying new versions of software to too many nodes at the same time can impact application availability as there may not be enough healthy nodes to service requests during the deployment. This aggressive approach to deployments also drastically increases the blast radius of software bugs introduced in the new version and does not typically give adequate time to safely assess the quality of the new version against production traffic.

    In such applications, one commonly accepted solution to these problems is to slowly and incrementally roll out application software across the nodes in the fleet while simultaneously verifying application health (canary deployments). Another solution is to stand up an entirely different fleet and weight (or flip) traffic over to the new fleet after verification, ideally with some production traffic (blue/green). Some teams deploy to a single host (“one box environment”), where the new release can bake for some time before promotion to the rest of the fleet. Techniques like this enable the maintainers of complex systems to safely test in production while minimizing customer impact.

    Enter Serverless

    There is somewhat of an impedance mismatch when mapping these concepts to a serverless world. You can’t incrementally deploy your software across a fleet of servers when there are no servers!* In fact, even the term “deployment” takes on a different meaning with functions as a service (FaaS). In AWS Lambda, a “deployment” can be roughly modeled as a call to CreateFunction, UpdateFunctionCode, or UpdateAlias (I won’t get into the semantics of whether updating configuration counts as a deployment), all of which may affect the version of code that is invoked by clients.

    The abstractions provided by Lambda remove the need for developers to be concerned about servers and Availability Zones, and this provides a powerful opportunity to greatly simplify the process of deploying software.
    *Of course there are servers, but they are abstracted away from the developer.

    Traffic shifting with Lambda aliases

    Before the release of traffic shifting for Lambda aliases, deployments of a Lambda function could only be performed in a single “flip” by updating function code for version $LATEST, or by updating an alias to target a different function version. After the update propagates, typically within a few seconds, 100% of function invocations execute the new version. Implementing canary deployments with this model required the development of an additional routing layer, further adding development time, complexity, and invocation latency.
    While rolling back a bad deployment of a Lambda function is a trivial operation and takes effect near instantaneously, deployments of new versions for critical functions can still be a potentially nerve-racking experience.

    With the introduction of alias traffic shifting, it is now possible to trivially implement canary deployments of Lambda functions. By updating additional version weights on an alias, invocation traffic is routed to the new function versions based on the weight specified. Detailed CloudWatch metrics for the alias and version can be analyzed during the deployment, or other health checks performed, to ensure that the new version is healthy before proceeding.

    Note: Sometimes the term “canary deployments” refers to the release of software to a subset of users. In the case of alias traffic shifting, the new version is released to some percentage of all users. It’s not possible to shard based on identity without adding an additional routing layer.

    Examples

    The simplest possible use of a canary deployment looks like the following:

    # Update $LATEST version of function
    aws lambda update-function-code --function-name myfunction ….
    
    # Publish new version of function
    aws lambda publish-version --function-name myfunction
    
    # Point alias to new version, weighted at 5% (original version at 95% of traffic)
    aws lambda update-alias --function-name myfunction --name myalias --routing-config '{"AdditionalVersionWeights" : {"2" : 0.05} }'
    
    # Verify that the new version is healthy
    …
    # Set the primary version on the alias to the new version and reset the additional versions (100% weighted)
    aws lambda update-alias --function-name myfunction --name myalias --function-version 2 --routing-config '{}'
    

    This is begging to be automated! Here are a few options.

    Simple deployment automation

    This simple Python script runs as a Lambda function and deploys another function (how meta!) by incrementally increasing the weight of the new function version over a prescribed number of steps, while checking the health of the new version. If the health check fails, the alias is rolled back to its initial version. The health check is implemented as a simple check against the existence of Errors metrics in CloudWatch for the alias and new version.
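
The full script lives in the repo linked below; as a rough sketch of the idea in Python with boto3 (the names, step logic, and health-check window here are illustrative assumptions, not the repo's actual code), it looks something like this:

# Sketch of incremental alias traffic shifting with a health check and rollback.
# Function/alias names and the health-check window are illustrative assumptions.
import time
from datetime import datetime, timedelta

import boto3

lambda_client = boto3.client('lambda')
cloudwatch = boto3.client('cloudwatch')

def alias_has_errors(function_name, alias_name, window_minutes=5):
    # Treat any datapoints on the alias's Errors metric as a failed health check.
    now = datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace='AWS/Lambda',
        MetricName='Errors',
        Dimensions=[
            {'Name': 'FunctionName', 'Value': function_name},
            {'Name': 'Resource', 'Value': function_name + ':' + alias_name},
        ],
        StartTime=now - timedelta(minutes=window_minutes),
        EndTime=now,
        Period=60,
        Statistics=['Sum'],
    )
    return any(dp['Sum'] > 0 for dp in stats['Datapoints'])

def deploy(function_name, alias_name, new_version, steps=10, interval=120):
    for step in range(1, steps):
        # Shift a growing fraction of invocations to the new version.
        lambda_client.update_alias(
            FunctionName=function_name, Name=alias_name,
            RoutingConfig={'AdditionalVersionWeights':
                           {new_version: step / float(steps)}})
        time.sleep(interval)
        if alias_has_errors(function_name, alias_name):
            # Roll back by clearing the additional version weights.
            lambda_client.update_alias(
                FunctionName=function_name, Name=alias_name, RoutingConfig={})
            raise RuntimeError('Health check failed; rolled back')
    # Finalize: point the alias at the new version and clear the extra weights.
    lambda_client.update_alias(
        FunctionName=function_name, Name=alias_name,
        FunctionVersion=new_version, RoutingConfig={})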

    GitHub aws-lambda-deploy repo

    Install:

    git clone https://github.com/awslabs/aws-lambda-deploy
    cd aws-lambda-deploy
    export BUCKET_NAME=[YOUR_S3_BUCKET_NAME_FOR_BUILD_ARTIFACTS]
    ./install.sh
    

    Run:

# Roll out version 2 incrementally over 10 steps, with 120s between each step
    aws lambda invoke --function-name SimpleDeployFunction --log-type Tail --payload \
      '{"function-name": "MyFunction",
      "alias-name": "MyAlias",
      "new-version": "2",
      "steps": 10,
      "interval" : 120,
      "type": "linear"
      }' output
    

    Description of input parameters

    • function-name: The name of the Lambda function to deploy
    • alias-name: The name of the alias used to invoke the Lambda function
    • new-version: The version identifier for the new version to deploy
    • steps: The number of times the new version weight is increased
    • interval: The amount of time (in seconds) to wait between weight updates
    • type: The function to use to generate the weights. Supported values: “linear”

    Because this runs as a Lambda function, it is subject to the maximum timeout of 5 minutes. This may be acceptable for many use cases, but to achieve a slower rollout of the new version, a different solution is required.

    Step Functions workflow

    This state machine performs essentially the same task as the simple deployment function, but it runs as an asynchronous workflow in AWS Step Functions. A nice property of Step Functions is that the maximum deployment timeout has now increased from 5 minutes to 1 year!

    The step function incrementally updates the new version weight based on the steps parameter, waiting for some time based on the interval parameter, and performing health checks between updates. If the health check fails, the alias is rolled back to the original version and the workflow fails.

    For example, to execute the workflow:

    export STATE_MACHINE_ARN=`aws cloudformation describe-stack-resources --stack-name aws-lambda-deploy-stack --logical-resource-id DeployStateMachine --output text | cut  -d$'\t' -f3`
    
    aws stepfunctions start-execution --state-machine-arn $STATE_MACHINE_ARN --input '{
      "function-name": "MyFunction",
      "alias-name": "MyAlias",
      "new-version": "2",
      "steps": 10,
      "interval": 120,
      "type": "linear"}'
    

    Getting feedback on the deployment

    Because the state machine runs asynchronously, retrieving feedback on the deployment requires polling for the execution status using DescribeExecution or implementing an asynchronous notification (using SNS or email, for example) from the Rollback or Finalize functions. A CloudWatch alarm could also be created to alarm based on the “ExecutionsFailed” metric for the state machine.
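
For example, a small polling loop might look like the following sketch (how the execution ARN is obtained, and what to do with each terminal status, are left as assumptions):

# Sketch: poll a Step Functions execution until it reaches a terminal state.
import time

import boto3

sfn = boto3.client('stepfunctions')

def wait_for_execution(execution_arn, poll_seconds=30):
    while True:
        status = sfn.describe_execution(executionArn=execution_arn)['status']
        if status != 'RUNNING':
            return status  # SUCCEEDED, FAILED, TIMED_OUT, or ABORTED
        time.sleep(poll_seconds)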

    A note on health checks and observability

    Weighted rollouts like this are considerably more successful if the code is being exercised and monitored continuously. In this example, it would help to have some automation continuously invoking the alias and reporting metrics on these invocations, such as client-side success rates and latencies.

The health checks in these examples rely on the absence of Lambda Errors metrics, which can be misleading if the function is not being invoked at all. It’s also recommended to instrument your Lambda functions with custom metrics, in addition to Lambda’s built-in metrics, that can be used to monitor health during deployments.
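
As a minimal sketch, a function could publish a client-observed success metric alongside the built-in ones; the namespace and metric name below are illustrative assumptions:

# Sketch: publish a custom health metric from inside a Lambda function.
# 'MyApp/Deployments' and 'InvocationSuccess' are illustrative names.
import boto3

cloudwatch = boto3.client('cloudwatch')

def record_result(function_name, alias_name, succeeded):
    cloudwatch.put_metric_data(
        Namespace='MyApp/Deployments',
        MetricData=[{
            'MetricName': 'InvocationSuccess',
            'Dimensions': [
                {'Name': 'FunctionName', 'Value': function_name},
                {'Name': 'Alias', 'Value': alias_name},
            ],
            'Value': 1.0 if succeeded else 0.0,
            'Unit': 'Count',
        }])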

    Extensibility

    These examples could be easily extended in various ways to support different use cases. For example:

    • Health check implementations: CloudWatch alarms, automatic invocations with payload assertions, querying external systems, etc.
• Weight increase functions: Exponential, geometric progression, single canary step, etc. (see the sketch after this list)
    • Custom success/failure notifications: SNS, email, CI/CD systems, service discovery systems, etc.
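
As a sketch of the weight-function idea, alternative schedules are just different functions from step number to weight (the function names and parameters here are illustrative, not part of the example repo):

# Sketch: alternative weight schedules, each mapping a step number to a weight.
def linear(step, steps):
    return step / float(steps)

def exponential(step, steps, base=2):
    # Start tiny and multiply each step, reaching 1.0 on the final step.
    return min(1.0, float(base ** step) / base ** steps)

def single_canary(step, steps, canary_weight=0.05):
    # Hold a small canary weight until the final step, then promote to 100%.
    return canary_weight if step < steps else 1.0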

    Traffic shifting with SAM and CodeDeploy

    Using the Lambda UpdateAlias operation with additional version weights provides a powerful primitive for you to implement custom traffic shifting solutions for Lambda functions.

    For those not interested in building custom deployment solutions, AWS CodeDeploy provides an intuitive turn-key implementation of this functionality integrated directly into the Serverless Application Model. Traffic-shifted deployments can be declared in a SAM template, and CodeDeploy manages the function rollout as part of the CloudFormation stack update. CloudWatch alarms can also be configured to trigger a stack rollback if something goes wrong.

For example:

    MyFunction:
      Type: AWS::Serverless::Function
      Properties:
        FunctionName: MyFunction
        AutoPublishAlias: MyFunctionInvokeAlias
        DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Role:
            Fn::GetAtt: [ DeploymentRole, Arn ]
          Alarms:
           - { Ref: MyFunctionErrorsAlarm }
    ...
    

    For more information about using CodeDeploy with SAM, see Automating Updates to Serverless Apps.

    Conclusion

    It is often the simple features that provide the most value. As I demonstrated in this post, serverless architectures allow the complex deployment orchestration used in traditional applications to be replaced with a simple Lambda function or Step Functions workflow. By allowing invocation traffic to be easily weighted to multiple function versions, Lambda alias traffic shifting provides a simple but powerful feature that I hope empowers you to easily implement safe deployment workflows for your Lambda functions.

    In the Works – AWS IoT Device Defender – Secure Your IoT Fleet

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-sepio-secure-your-iot-fleet/

    Scale takes on a whole new meaning when it comes to IoT. Last year I was lucky enough to tour a gigantic factory that had, on average, one environment sensor per square meter. The sensors measured temperature, humidity, and air purity several times per second, and served as an early warning system for contaminants. I’ve heard customers express interest in deploying IoT-enabled consumer devices in the millions or tens of millions.

    With powerful, long-lived devices deployed in a geographically distributed fashion, managing security challenges is crucial. However, the limited amount of local compute power and memory can sometimes limit the ability to use encryption and other forms of data protection.

    To address these challenges and to allow our customers to confidently deploy IoT devices at scale, we are working on IoT Device Defender. While the details might change before release, AWS IoT Device Defender is designed to offer these benefits:

Continuous Auditing – AWS IoT Device Defender monitors the policies related to your devices to ensure that the desired security settings are in place. It looks for drifts away from best practices and supports custom audit rules so that you can check for conditions that are specific to your deployment. For example, you could check to see if a compromised device has subscribed to sensor data from another device. You can run audits on a schedule or on an as-needed basis.

Real-Time Detection and Alerting – AWS IoT Device Defender looks for and quickly alerts you to unusual behavior that could be coming from a compromised device. It does this by monitoring the behavior of similar devices over time, looking for unauthorized access attempts, changes in connection patterns, and changes in traffic patterns (either inbound or outbound).

    Fast Investigation and Mitigation – In the event that you get an alert that something unusual is happening, AWS IoT Device Defender gives you the tools, including contextual information, to help you to investigate and mitigate the problem. Device information, device statistics, diagnostic logs, and previous alerts are all at your fingertips. You have the option to reboot the device, revoke its permissions, reset it to factory defaults, or push a security fix.

    Stay Tuned
    I’ll have more info (and a hands-on post) as soon as possible, so stay tuned!

    Jeff;

New – AWS IoT Device Management

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-device-management/

    AWS IoT and AWS Greengrass give you a solid foundation and programming environment for your IoT devices and applications.

    The nature of IoT means that an at-scale device deployment often encompasses millions or even tens of millions of devices deployed at hundreds or thousands of locations. At that scale, treating each device individually is impossible. You need to be able to set up, monitor, update, and eventually retire devices in bulk, collective fashion while also retaining the flexibility to accommodate varying deployment configurations, device models, and so forth.

    New AWS IoT Device Management
    Today we are launching AWS IoT Device Management to help address this challenge. It will help you through each phase of the device lifecycle, from manufacturing to retirement. Here’s what you get:

    Onboarding – Starting with devices in their as-manufactured state, you can control the provisioning workflow. You can use IoT Device Management templates to quickly onboard entire fleets of devices with a few clicks. The templates can include information about device certificates and access policies.

Organization – In order to deal with massive numbers of devices, AWS IoT Device Management extends the existing IoT Device Registry and allows you to create a hierarchical model of your fleet and to set policies on a hierarchical basis. You can drill down through the hierarchy in order to locate individual devices. You can also query your fleet on attributes such as device type or firmware version.

    Monitoring – Telemetry from the devices is used to gather real-time connection, authentication, and status metrics, which are published to Amazon CloudWatch. You can examine the metrics and locate outliers for further investigation. IoT Device Management lets you configure the log level for each device group, and you can also publish change events for the Registry and Jobs for monitoring purposes.

Remote Management – AWS IoT Device Management lets you remotely manage your devices. You can push new software and firmware to them, reset to factory defaults, reboot, and set up bulk updates at the desired velocity.
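
The console flows shown below all have API equivalents. Here is a rough sketch of the Organization and Remote Management features with boto3 (all names, the account ID, and the job document are made-up assumptions, and fleet queries require fleet indexing to be enabled for the account):

# Sketch: hierarchical thing groups plus a bulk update job (names are made up).
import json

import boto3

iot = boto3.client('iot')

# Build a two-level hierarchy: a parent group with a child group per state.
iot.create_thing_group(thingGroupName='pressure-gauges')
iot.create_thing_group(thingGroupName='pressure-gauges-colorado',
                       parentGroupName='pressure-gauges')
iot.add_thing_to_thing_group(thingGroupName='pressure-gauges-colorado',
                             thingName='gauge-0042')

# Query the fleet by attribute (requires fleet indexing to be enabled).
for thing in iot.search_index(queryString='attributes.firmwareVersion:1.1')['things']:
    print(thing['thingName'])

# Roll a firmware update across the group at a controlled velocity.
iot.create_job(
    jobId='firmware-1-2-rollout',
    targets=['arn:aws:iot:us-east-1:123456789012:thinggroup/pressure-gauges'],
    document=json.dumps({'operation': 'update-firmware', 'version': '1.2'}),
    targetSelection='CONTINUOUS',  # also covers devices added to the group later
    jobExecutionsRolloutConfig={'maximumPerMinute': 50},
)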

    Exploring AWS IoT Device Management
    The AWS IoT Device Management Console took me on a tour and pointed out how to access each of the features of the service:

    I already have a large set of devices (pressure gauges):

    These gauges were created using the new template-driven bulk registration feature. Here’s how I create a template:

    The gauges are organized into groups (by US state in this case):

    Here are the gauges in Colorado:

    AWS IoT group policies allow you to control access to specific IoT resources and actions for all members of a group. The policies are structured very much like IAM policies, and can be created in the console:

    Jobs are used to selectively update devices. Here’s how I create one:

    As indicated by the Job type above, jobs can run either once or continuously. Here’s how I choose the devices to be updated:

    I can create custom authorizers that make use of a Lambda function:

    I’ve shown you a medium-sized subset of AWS IoT Device Management in this post. Check it out for yourself to learn more!

    Jeff;

     

    [$] 4.15 Merge window part 2

    Post Syndicated from corbet original https://lwn.net/Articles/740064/rss

Despite the warnings that the 4.15 merge window could be either longer or shorter than usual, the 4.15-rc1 prepatch came out right on schedule on November 26. Anybody who was expecting a quiet development cycle this time around is in for a surprise, though; 12,599 non-merge changesets were pulled into the mainline during the 4.15 merge window, 1,000 more than were seen in the 4.14 merge window. The first 8,800 of those changes were covered in this summary; what follows is a look at what came after.

    Weekly roundup: VK Ultra

    Post Syndicated from Eevee original https://eev.ee/dev/2017/11/27/weekly-roundup-vk-ultra/

    • fox flux: Cleaned up and committed the “heart get” overlay and worked on some more art for it. Diagnosed a very obscure physics problem, but didn’t come up with a good solution yet; physics is hard! Drew a very good tree trunk to use as a spawn point; also worked on some background foliage, though less successfully. Played with colors a bit. Tried to work out a tileset for underground areas.

    • music: I wrote like half of a little chiptune song that I actually like so far! I’m now seriously toying with the idea of doing my own music for fox flux. Played a bit with more sound effects, too.

    • blog: I wrote up the Eevee mugshot set for Doom I made, as an inaugural post for the release category.

    • veekun: Finished up Ultra Sun and Ultra Moon! Pokémon sprites, box sprites, item sprites, and the same data as Sun/Moon. I say “finished” but of course plenty of stuff is still missing, alas.

    • cc: I’m trying to make glip some building blocks so that they can actually start building the game, so I made some breakable blocks. Also wrote a little shader for implementing their parallax background, which involves a bunch of layer modes.

    • misc: I got a new keyboard. Also I installed umatrix because noscript’s web extension version is half-broken and driving me up the wall. Sorry, noscript.

    Huh, that’s not a bad haul, despite a few nights of incredibly bad sleep. Cool.

    AWS Media Services – Process, Store, and Monetize Cloud-Based Video

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-media-services-process-store-and-monetize-cloud-based-video/

    Do you remember what web video was like in the early days? Standalone players, video no larger than a postage stamp, slow & cantankerous connections, overloaded servers, and the ever-present buffering messages were the norm less than two decades ago.

    Today, thanks to technological progress and a broad array of standards, things are a lot better. Video consumers are now in control. They use devices of all shapes, sizes, and vintages to enjoy live and recorded content that is broadcast, streamed, or sent over-the-top (OTT, as they say), and expect immediate access to content that captures and then holds their attention. Meeting these expectations presents a challenge for content creators and distributors. Instead of generating video in a one-size-fits-all format, they (or their media servers) must be prepared to produce video that spans a broad range of sizes, formats, and bit rates, taking care to be ready to deal with planned or unplanned surges in demand. In the face of all of this complexity, they must backstop their content with a monetization model that supports the content and the infrastructure to deliver it.

    New AWS Media Services
    Today we are launching an array of broadcast-quality media services, each designed to address one or more aspects of the challenge that I outlined above. You can use them together to build a complete end-to-end video solution or you can use one or more in building-block style. In true AWS fashion, you can spend more time innovating and less time setting up and running infrastructure, leaving you ready to focus on creating, delivering, and monetizing your content. The services are all elastic, allowing you to ramp up processing power, connections, and storage and giving you the ability to handle million-user (and beyond) spikes with ease.

    Here are the services (all accessible from a set of interactive consoles as well as through a comprehensive set of APIs):

    AWS Elemental MediaConvert – File-based transcoding for OTT, broadcast, or archiving, with support for a long list of formats and codecs. Features include multi-channel audio, graphic overlays, closed captioning, and several DRM options.

    AWS Elemental MediaLive – Live encoding to deliver video streams in real time to both televisions and multiscreen devices. Allows you to deploy highly reliable live channels in minutes, with full control over encoding parameters. It supports ad insertion, multi-channel audio, graphic overlays, and closed captioning.

    AWS Elemental MediaPackage – Video origination and just-in-time packaging. Starting from a single input, produces output for multiple devices representing a long list of current and legacy formats. Supports multiple monetization models, time-shifted live streaming, ad insertion, DRM, and blackout management.

    AWS Elemental MediaStore – Media-optimized storage that enables high performance and low latency applications such as live streaming, while taking advantage of the scale and durability of Amazon Simple Storage Service (S3).

    AWS Elemental MediaTailor – Monetization service that supports ad serving and server-side ad insertion, a broad range of devices, transcoding, and accurate reporting of server-side and client-side ad insertion.

    Instead of listing out all of the features in the sections below, I’ve simply included as many screen shots as possible with the expectation that this will give you a better sense of the rich set of features, parameters, and settings that you get with this set of services.

    AWS Elemental MediaConvert
    MediaConvert allows you to transcode content that is stored in files. You can process individual files or entire media libraries, or anything in-between. You simply create a conversion job that specifies the content and the desired outputs, and submit it to MediaConvert. There’s no software to install or patch and the service scales to meet your needs without affecting turnaround time or performance.
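
The console walkthrough below has a straightforward API equivalent. As a rough sketch with boto3 (the role ARN and the contents of job-settings.json are placeholder assumptions; a real settings document specifies the inputs and output groups):

# Sketch: discover the account-specific endpoint and submit a MediaConvert job.
import json

import boto3

# MediaConvert uses per-account endpoints, discovered via the default endpoint.
endpoint = boto3.client('mediaconvert').describe_endpoints()['Endpoints'][0]['Url']
mediaconvert = boto3.client('mediaconvert', endpoint_url=endpoint)

with open('job-settings.json') as f:      # placeholder settings document
    settings = json.load(f)

job = mediaconvert.create_job(
    Role='arn:aws:iam::123456789012:role/MediaConvertRole',  # placeholder ARN
    Settings=settings,
)
print(job['Job']['Id'])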

    The MediaConvert Console lets you manage Output presets, Job templates, Queues, and Jobs:

    You can use a built-in system preset or you can make one of your own. You have full control over the settings when you make your own:

Job templates are named, and produce one or more output groups. You can add a new group to a template with a click:

    When everything is ready to go, you create a job and make some final selections, then click on Create:

    Each account starts with a default queue for jobs, where incoming work is processed in parallel using all processing resources available to the account. Adding queues does not add processing resources, but does cause them to be apportioned across queues. You can temporarily pause one queue in order to devote more resources to the others. You can submit jobs to paused queues and you can also cancel any that have yet to start.

    Pricing for this service is based on the amount of video that you process and the features that you use.

    AWS Elemental MediaLive
    This service is for live encoding, and can be run 24×7. MediaLive channels are deployed on redundant resources distributed in two physically separated Availability Zones in order to provide the reliability expected by our customers in the broadcast industry. You can specify your inputs and define your channels in the MediaLive Console:

    After you create an Input, you create a Channel and attach it to the Input:

    You have full control over the settings for each channel:

     

    AWS Elemental MediaPackage
    This service lets you deliver video to many devices from a single source. It focuses on protection and just-in-time packaging, giving you the ability to provide your users with the desired content on the device of their choice. You simply create a channel to get started:

    Then you add one or more endpoints. Once again, plenty of options and full control, including a startover window and a time delay:

    You find the input URL, user name, and password for your channel and route your live video stream to it for packaging:

    AWS Elemental MediaStore
MediaStore offers the performance, consistency, and latency required for live and on-demand media delivery. Objects are written to and read from a new “temporal” tier of object storage for a limited amount of time, then move silently into S3 for long-lived durability. You simply create a storage container to group your media content:

    The container is available within a minute or so:

    Like S3 buckets, MediaStore containers have access policies and no limits on the number of objects or storage capacity.

    MediaStore helps you to take full advantage of S3 by managing the object key names so as to maximize storage and retrieval throughput, in accord with the Request Rate and Performance Considerations.
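
As a rough sketch of the flow with boto3 (the container and object names are made up; note that a new container starts in a CREATING state, so its endpoint may take a minute to become usable):

# Sketch: create a MediaStore container and write a media segment into it.
import boto3

mediastore = boto3.client('mediastore')
mediastore.create_container(ContainerName='live-origin')

# Data-plane calls go to the container's own endpoint.
endpoint = mediastore.describe_container(
    ContainerName='live-origin')['Container']['Endpoint']
data = boto3.client('mediastore-data', endpoint_url=endpoint)

with open('segment0001.ts', 'rb') as f:   # placeholder segment file
    data.put_object(Path='/live/segment0001.ts', Body=f)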

    AWS Elemental MediaTailor
    This service takes care of server-side ad insertion while providing a broadcast-quality viewer experience by transcoding ad assets on the fly. Your customer’s video player asks MediaTailor for a playlist. MediaTailor, in turn, calls your Ad Decision Server and returns a playlist that references the origin server for your original video and the ads recommended by the Ad Decision Server. The video player makes all of its requests to a single endpoint in order to ensure that client-side ad-blocking is ineffective. You simply create a MediaTailor Configuration:

    Context information is passed to the Ad Decision Server in the URL:

    Despite the length of this post I have barely scratched the surface of the AWS Media Services. Once AWS re:Invent is in the rear view mirror I hope to do a deep dive and show you how to use each of these services.

    Available Now
    The entire set of AWS Media Services is available now and you can start using them today! Pricing varies by service, but is built around a pay-as-you-go model.

    Jeff;

    UI Testing at Scale with AWS Lambda

    Post Syndicated from Stas Neyman original https://aws.amazon.com/blogs/devops/ui-testing-at-scale-with-aws-lambda/

    This is a guest blog post by Wes Couch and Kurt Waechter from the Blackboard Internal Product Development team about their experience using AWS Lambda.

    One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Here’s how we did it.

    The backstory:

    Blackboard is a global leader in delivering robust and innovative education software and services to clients in higher education, government, K12, and corporate training. We have a large product development team working across the globe in at least 10 different time zones, with an internal tools team providing support for quality and workflows. We have been using Selenium Webdriver to perform automated cross-browser UI testing since 2007. Because we are now practicing continuous delivery, the automated UI testing challenge has grown due to the faster release schedule. On top of that, every commit made to each branch triggers an execution of our automated UI test suite. If you have ever implemented an automated UI testing infrastructure, you know that it can be very challenging to scale and maintain. Although there are services that are useful for testing different browser/OS combinations, they don’t meet our scale needs.

    It used to take three hours to synchronously run our functional UI suite, which revealed the obvious need for parallel execution. Previously, we used Mesos to orchestrate a Selenium Grid Docker container for each test run. This way, we were able to run eight concurrent threads for test execution, which took an average of 16 minutes. Although this setup is fine for a single workflow, the cracks started to show when we reached the scale required for Blackboard’s mature product lines. Going beyond eight concurrent sessions on a single container introduced performance problems that impact the reliability of tests (for example, issues in Webdriver or the browser popping up frequently). We tried Mesos and considered Kubernetes for Selenium Grid orchestration, but the answer to scaling a Selenium Grid was to think smaller, not larger. This led to our breakthrough with AWS Lambda.

    The solution:

    We started using AWS Lambda for UI testing because it doesn’t require costly infrastructure or countless man hours to maintain. The steps we outline in this blog post took one work day, from inception to implementation. By simply packaging the UI test suite into a Lambda function, we can execute these tests in parallel on a massive scale. We use a custom JUnit test runner that invokes the Lambda function with a request to run each test from the suite. The runner then aggregates the results returned from each Lambda test execution.

Selenium is the industry standard for testing UI at scale. Although there are other options to achieve the same thing in Lambda, we chose this mature suite of tools. Selenium is backed by Google, Mozilla, and others to help the industry drive their browsers with code. This makes Lambda and Selenium a compelling stack for achieving UI testing at scale.

    Making Chrome Run in Lambda

Currently, Chrome for Linux will not run in Lambda due to an absent mount point. By rebuilding Chrome with a slight modification, as Marco Lüthy originally demonstrated, you can run it inside Lambda anyway! It took about two hours to build the current master branch of Chromium on a c4.4xlarge. Unfortunately, the current version of ChromeDriver, 2.33, does not support any version of Chrome above 62, so we’ll be using Marco’s modified build of version 60 for the near future.

    Required System Libraries

    The Lambda runtime environment comes with a subset of common shared libraries. This means we need to include some extra libraries to get Chrome and ChromeDriver to work. Anything that exists in the java resources folder during compile time is included in the base directory of the compiled jar file. When this jar file is deployed to Lambda, it is placed in the /var/task/ directory. This allows us to simply place the libraries in the java resources folder under a folder named lib/ so they are right where they need to be when the Lambda function is invoked.

    To get these libraries, create an EC2 instance and choose the Amazon Linux AMI.

    Next, use ssh to connect to the server. After you connect to the new instance, search for the libraries to find their locations.

    sudo find / -name libgconf-2.so.4
    sudo find / -name libORBit-2.so.0
    

    Now that you have the locations of the libraries, copy these files from the EC2 instance and place them in the java resources folder under lib/.

    Packaging the Tests

    To deploy the test suite to Lambda, we used a simple Gradle tool called ShadowJar, which is similar to the Maven Shade Plugin. It packages the libraries and dependencies inside the jar that is built. Usually test dependencies and sources aren’t included in a jar, but for this instance we want to include them. To include the test dependencies, add this section to the build.gradle file.

shadowJar {
   // Bundle test classes and their runtime dependencies into the fat jar.
   from sourceSets.test.output
   configurations = [project.configurations.testRuntime]
}
    

    Deploying the Test Suite

Now that our tests are packaged with the dependencies in a jar, we need to get them into a running Lambda function. We use simple SAM templates to upload the packaged jar into S3, and then deploy it to Lambda with our settings.

    {
       "AWSTemplateFormatVersion": "2010-09-09",
       "Transform": "AWS::Serverless-2016-10-31",
       "Resources": {
           "LambdaTestHandler": {
               "Type": "AWS::Serverless::Function",
               "Properties": {
                   "CodeUri": "./build/libs/your-test-jar-all.jar",
                   "Runtime": "java8",
                   "Handler": "com.example.LambdaTestHandler::handleRequest",
                   "Role": "<YourLambdaRoleArn>",
                   "Timeout": 300,
                   "MemorySize": 1536
               }
           }
       }
    }

    We use the maximum timeout available to ensure our tests have plenty of time to run. We also use the maximum memory size because this ensures our Lambda function can support Chrome and other resources required to run a UI test.

    Specifying the handler is important because this class executes the desired test. The test handler should be able to receive a test class and method. With this information it will then execute the test and respond with the results.

public LambdaTestResult handleRequest(TestRequest testRequest, Context context) {
   // Route logging through the Lambda logger for this invocation.
   LoggerContainer.LOGGER = new Logger(context.getLogger());

   // Build a JUnit runner scoped to the single requested test method.
   BlockJUnit4ClassRunner runner = getRunnerForSingleTest(testRequest);

   Result result = new JUnitCore().run(runner);

   // Return the outcome to the aggregating test runner.
   return new LambdaTestResult(result);
}
    

    Creating a Lambda-Compatible ChromeDriver

We provide developers with an easily accessible ChromeDriver for local test writing and debugging. When tests run on AWS, we configure ChromeDriver to run them in Lambda instead.

    To configure ChromeDriver, we first need to tell ChromeDriver where to find the Chrome binary. Because we know that ChromeDriver is going to be unzipped into the root task directory, we should point the ChromeDriver configuration at that location.

    The settings for getting ChromeDriver running are mostly related to Chrome, which must have its working directories pointed at the tmp/ folder.

    Start with the default DesiredCapabilities for ChromeDriver, and then add the following settings to enable your ChromeDriver to start in Lambda.

    public ChromeDriver createLambdaChromeDriver() {
       ChromeOptions options = new ChromeOptions();
    
       // Set the location of the chrome binary from the resources folder
       options.setBinary("/var/task/chrome");
    
       // Include these settings to allow Chrome to run in Lambda
       options.addArguments("--disable-gpu");
       options.addArguments("--headless");
       options.addArguments("--window-size=1366,768");
       options.addArguments("--single-process");
       options.addArguments("--no-sandbox");
       options.addArguments("--user-data-dir=/tmp/user-data");
       options.addArguments("--data-path=/tmp/data-path");
       options.addArguments("--homedir=/tmp");
       options.addArguments("--disk-cache-dir=/tmp/cache-dir");
      
       DesiredCapabilities desiredCapabilities = DesiredCapabilities.chrome();
       desiredCapabilities.setCapability(ChromeOptions.CAPABILITY, options);
      
       return new ChromeDriver(desiredCapabilities);
    }
    

    Executing Tests in Parallel

    You can approach parallel test execution in Lambda in many different ways. Your approach depends on the structure and design of your test suite. For our solution, we implemented a custom test runner that uses reflection and JUnit libraries to create a list of test cases we want run. When we have the list, we create a TestRequest object to pass into the Lambda function that we have deployed. In this TestRequest, we place the class name, test method, and the test run identifier. When the Lambda function receives this TestRequest, our LambdaTestHandler generates and runs the JUnit test. After the test is complete, the test result is sent to the test runner. The test runner compiles a result after all of the tests are complete. By executing the same Lambda function multiple times with different test requests, we can effectively run the entire test suite in parallel.
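
Our runner itself is Java/JUnit, but the fan-out pattern is simple enough to sketch in a few lines of Python with boto3 (the function name and the request/response fields here are illustrative assumptions, not our actual schema):

# Sketch of the fan-out pattern: one Lambda invocation per test, results aggregated.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client('lambda')

def run_test(test_class, test_method, run_id):
    response = lambda_client.invoke(
        FunctionName='LambdaTestHandler',   # illustrative function name
        Payload=json.dumps({'testClass': test_class,
                            'testMethod': test_method,
                            'testRunId': run_id}),
    )
    return json.loads(response['Payload'].read())

def run_suite(tests, run_id):
    # Lambda's own concurrency does the scaling; the thread pool just keeps
    # enough requests in flight.
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(
            lambda t: run_test(t[0], t[1], run_id), tests))
    # Collect the failures for the aggregated report.
    return [r for r in results if not r.get('passed')]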

    To get screenshots and other test data, we pipe those files during test execution to an S3 bucket under the test run identifier prefix. When the tests are complete, we link the files to each test execution in the report generated from the test run. This lets us easily investigate test executions.

    Pro Tip: Dynamically Loading Binaries

AWS Lambda has a limit of 250 MB of uncompressed space for packaged Lambda functions. Because we have libraries and other dependencies to our test suite, we hit this limit when we tried to upload a function that contained Chrome and ChromeDriver (~140 MB). This test suite was not originally intended to be used with Lambda. Otherwise, we would have scrutinized some of the included libraries. To get around this limit, we used the Lambda function’s temporary directory, which allows up to 500 MB of space at runtime. Downloading these binaries at runtime moves some of that space requirement into the temporary directory. This allows more room for libraries and dependencies. You can do this by grabbing Chrome and ChromeDriver from an S3 bucket and marking them as executable using built-in Java libraries. If you take this route, be sure to point to the new location for these executables in order to create a ChromeDriver.

private static void downloadS3ObjectToExecutableFile(String key) throws IOException {
   // /tmp is the only writable path in the Lambda environment.
   File file = new File("/tmp/" + key);

   // Assumes an initialized AmazonS3 client named s3client.
   GetObjectRequest request = new GetObjectRequest("s3-bucket-name", key);

   FileUtils.copyInputStreamToFile(s3client.getObject(request).getObjectContent(), file);
   file.setExecutable(true);
}
    

    Lambda-Selenium Project Source

    We have compiled an open source example that you can grab from the Blackboard Github repository. Grab the code and try it out!

    https://blackboard.github.io/lambda-selenium/

    Conclusion

    One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Thanks to AWS Lambda, we can reduce our build times and perform automated UI testing at scale!

    A Thanksgiving Carol: How Those Smart Engineers at Twitter Screwed Me

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/a-thanksgiving-carol-how-those-smart.html

    Thanksgiving Holiday is a time for family and cheer. Well, a time for family. It’s the holiday where we ask our doctor relatives to look at that weird skin growth, and for our geek relatives to fix our computers. This tale is of such computer support, and how the “smart” engineers at Twitter have ruined this for life.

    My mom is smart, but not a good computer user. I get my enthusiasm for science and math from my mother, and she has no problem understanding the science of computers. She keeps up when I explain Bitcoin. But she has difficulty using computers. She has this emotional, irrational belief that computers are out to get her.

    This makes helping her difficult. Every problem is described in terms of what the computer did to her, not what she did to her computer. It’s the computer that needs to be fixed, instead of the user. When I showed her the “haveibeenpwned.com” website (part of my tips for securing computers), it showed her Tumblr password had been hacked. She swore she never created a Tumblr account — that somebody or something must have done it for her. Except, I was there five years ago and watched her create it.

    Another example is how GMail is deleting her emails for no reason, corrupting them, and changing the spelling of her words. She emails the way an impatient teenager texts — all of us in the family know the misspellings are not GMail’s fault. But I can’t help her with this because she keeps her GMail inbox clean, deleting all her messages, leaving no evidence behind. She has only a vague description of the problem that I can’t make sense of.

This last March, I tried something to resolve this. I configured her GMail to send a copy of all incoming messages to a new, duplicate account on my own email server. With evidence in hand, I would then be able to solve what’s going on with her GMail. I’d be able to show her which steps she took, which buttons she clicked on, and what caused the weirdness she’s seeing.

    Today, while the family was in a state of turkey-induced torpor, my mom brought up a problem with Twitter. She doesn’t use Twitter, she doesn’t have an account, but they keep sending tweets to her phone, about topics like Denzel Washington. And she said something about “peaches” I didn’t understand.

    This is how the problem descriptions always start, chaotic, with mutually exclusive possibilities. If you don’t use Twitter, you don’t have the Twitter app installed, so how are you getting Tweets? Over much gnashing of teeth, it comes out that she’s getting emails from Twitter, not tweets, about Denzel Washington — to someone named “Peaches Graham”. Naturally, she can only describe these emails, because she’s already deleted them.

    “Ah ha!”, I think. I’ve got the evidence! I’ll just log onto my duplicate email server, and grab the copies to prove to her it was something she did.

    I find she is indeed receiving such emails, called “Moments”, about topics trending on Twitter. They are signed with “DKIM”, proving they are legitimate rather than from a hacker or spammer. The only way that can happen is if my mother signed up for Twitter, despite her protestations that she didn’t.

    I look further back and find that there were also confirmation messages involved. Back in August, she got a typical Twitter account signup message. I am now seeing a little bit more of the story unfold with this “Peaches Graham” name on the account. It wasn’t my mother who initially signed up for Twitter, but Peaches, who misspelled the email address. It’s one of the reasons why the confirmation process exists, to make sure you spelled your email address correctly.

    It’s now obvious my mom accidentally clicked on the [Confirm] button. I don’t have any proof she did, but it’s the only reasonable explanation. Otherwise, she wouldn’t have gotten the “Moments” messages. My mom disputed this, emphatically insisting she never clicked on the emails.

    It’s at this point that I made a great mistake, saying:

    “This sort of thing just doesn’t happen. Twitter has very smart engineers. What’s the chance they made the mistake here, or…”.

I recognized the condescension in my words as they came out of my mouth, but dug myself deeper with:

    “…or that the user made the error?”

    This was wrong to say even if I were right. I have no excuse. I mean, maybe I could argue that it’s really her fault, for not raising me right, but no, this is only on me.

    Regardless of what caused the Twitter emails, the problem needs to be fixed. The solution is to take control of the Twitter account by using the password reset feature. I went to the Twitter login page, clicked on “Lost Password”, got the password reset message, and reset the password. I then reconfigured the account to never send anything to my mom again.

    But when I logged in I got an error saying the account had not yet been confirmed. I paused. The family dog eyed me in wise silence. My mom hadn’t clicked on the [Confirm] button — the proof was right there. Moreover, it hadn’t been confirmed for a long time, since the account was created in 2011.

    I interrogated my mother some more. It appears that this has been going on for years. She’s just been deleting the emails without opening them, both the “Confirmations” and the “Moments”. She made it clear she does it this way because her son (that would be me) instructs her to never open emails she knows are bad. That’s how she could be so certain she never clicked on the [Confirm] button — she never even opens the emails to see the contents.

    My mom is a prolific email user. In the last eight months, I’ve received over 10,000 emails in the duplicate mailbox on my server. That’s a lot. She’s technically retired, but she volunteers for several charities, goes to community college classes, and is joining an anti-Trump protest group. She has a daily routine for triaging and processing all the emails that flow through her inbox.

    So here’s the thing, and there’s no getting around it: my mom was right, on all particulars. She had done nothing, the computer had done it to her. It’s Twitter who is at fault, having continued to resend that confirmation email every couple months for six years. When Twitter added their controversial “Moments” feature a couple years back, somehow they turned on Notifications for accounts that technically didn’t fully exist yet.

Being right this time means she might be right the next time the computer does something to her without her touching anything. My attempts at making computers seem rational have failed. That they are driven by untrustworthy spirits is now a reasonable alternative.

    Those “smart” engineers at Twitter screwed me. Continuing to send confirmation emails for six years is stupid. Sending Notifications to unconfirmed accounts is stupid. Yes, I know at the bottom of the message it gives a “Not my account” selection that she could have clicked on, but it’s small and easily missed. In any case, my mom never saw that option, because she’s been deleting the messages without opening them — for six years.

    Twitter can fix their problem, but it’s not going to help mine. Forever more, I’ll be unable to convince my mom that the majority of her problems are because of user error, and not because the computer people are out to get her.