Tag Archives: PIR

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and Greengrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native Python constructs and tools.
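
To make the “Define-by-Run” idea concrete, here is a minimal Chainer sketch (the layer sizes are arbitrary, chosen only for illustration): the computation graph is built as the forward pass executes, so plain Python control flow shapes the topology.

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # input sizes are inferred on the first forward pass (None = infer)
            self.l1 = L.Linear(None, n_units)
            self.l2 = L.Linear(None, n_out)

    def __call__(self, x):
        # the graph is defined by running this code, so ifs and loops
        # written in plain Python can change the topology per input
        h = F.relu(self.l1(x))
        return self.l2(h)

model = MLP(n_units=100, n_out=10)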

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer your scripts can be a little bit more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will get passed to the script as command-line arguments, and the environment variables above will be autopopulated. When we call fit, the input channels we pass will be exposed to the script through the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)
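
Once the endpoint is up, the returned predictor accepts NumPy arrays. The input shape below is only a placeholder for whatever the model in example.py actually expects, and deleting the endpoint afterwards stops the hosting charges.

import numpy as np

# placeholder input; the real shape and dtype depend on the model in example.py
sample = np.random.rand(1, 784).astype(np.float32)
print(predictor.predict(sample))

# tear the endpoint down when you're done experimenting
predictor.delete_endpoint()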

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on GitHub. One of my favorite features of the new container is built-in ChainerMN for easy multi-node distribution of your Chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS Greengrass ML with Chainer

AWS Greengrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So Greengrass ML now provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any Greengrass-enabled device using Greengrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

Project Floofball and more: Pi pet stuff

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/project-floofball-pi-pet-stuff/

It’s a public holiday here today (yes, again). So, while we indulge in the traditional pastime of barbecuing stuff (ourselves, mainly), here’s a little trove of Pi projects that cater for our various furry friends.

Project Floofball

Nicole Horward created Project Floofball for her hamster, Harold. It’s an IoT hamster wheel that uses a Raspberry Pi and a magnetic door sensor to log how far Harold runs.

Project Floofball: an IoT hamster wheel

An IoT Hamsterwheel using a Raspberry Pi and a magnetic door sensor, to see how far my hamster runs.

You can follow Harold’s runs in real time on his ThingSpeak channel, and you’ll find photos of the build on imgur. Nicole’s Python code, as well as her template for the laser-cut enclosure that houses the wiring and LCD display, are available on the hamster wheel’s GitHub repo.

A live-streaming pet feeder

JaganK3 used to work long hours that meant he couldn’t be there to feed his dog on time. He found that he couldn’t buy an automated feeder in India without paying a lot to import one, so he made one himself. It uses a Raspberry Pi to control a motor that turns a dispensing valve in a hopper full of dry food, giving his dog a portion of food at set times.

A transparent cylindrical hopper of dry dog food, with a motor that can turn a dispensing valve at the lower end. The motor is connected to a Raspberry Pi in a plastic case. Hopper, motor, Pi, and wiring are all mounted on a board on the wall.

He also added a web cam for live video streaming, because he could. Find out more in JaganK3’s Instructable for his pet feeder.

Shark laser cat toy

Sam Storino, meanwhile, is using a Raspberry Pi to control a laser-pointer cat toy with a goshdarned SHARK (which is kind of what I’d expect from the guy who made the steampunk-looking cat feeder a few weeks ago). The idea is to keep his cats interested and active within the confines of a compact city apartment.

Raspberry Pi Automatic Cat Laser Pointer Toy

If I were a cat, I would definitely be entirely happy with this. Find out more on Sam’s website.

And there’s more

Michel Parreno has written a series of articles to help you monitor and feed your pet with Raspberry Pi.

All of these makers are generous in acknowledging the tutorials and build logs that helped them with their projects. It’s lovely to see the Raspberry Pi and maker community working like this, and I bet their projects will inspire others too.

Now, if you’ll excuse me. I’m late for a barbecue.


Enchanting images with Inky Lines, a Pi‑powered polargraph

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/enchanting-images-inky-lines-pi-powered-polargraph/

A hanging plotter, also known as a polar plotter or polargraph, is a machine for drawing images on a vertical surface. It does so by using motors to control the length of two cords that form a V shape, supporting a pen where they meet. We’ve featured one on this blog before: Norbert “HomoFaciens” Heinz’s video is a wonderfully clear introduction to how a polargraph works and what you have to consider when you’re putting one together.

Today, we look at Inky Lines, by John Proudlock. With it, John is creating a series of captivating and beautiful pieces, and with his most recent work, each rendering of an image is unique.

The Inky Lines plotter draws a flock of seagulls in blue ink on white paper. The print head is suspended near the bottom left corner of the image, as the pen inks the wing of a gull

An evolving project

The project isn’t new – John has been working on it for at least a couple of years – but it is constantly evolving. When we first spotted it, John had just implemented code to allow the plotter to produce mesmeric, spiralling patterns.

A blue spiral pattern featuring overlapping "bubbles"
A dense pink spiral pattern, featuring concentric circles and reminiscent of a mandala
A blue spirograph-type pattern formed of large overlapping squares, each offset from its neighbour by a few degrees, producing a four-spiral-armed "galaxy" shape where lines overlap. The plotter's print head is visible in a corner of the image

But we’re skipping ahead. Let’s go back to the beginning.

From pixels to motor movements

John starts by providing an image, usually no more than 100 pixels wide, to a Raspberry Pi. Custom software that he wrote evaluates the darkness of each pixel and selects a pattern of a suitable density to represent it.

The two cords supporting the plotter’s pen are wound around the shafts of two stepper motors, such that the movement of the motors controls the length of the cords: the program next calculates how much each motor must move in order to produce the pattern. The Raspberry Pi passes corresponding instructions to two motor circuits, which transform the signals to a higher voltage and pass them to the stepper motors. These turn by very precise amounts, winding or unwinding the cords and, very slowly, dragging the pen across the paper.
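
The geometry behind that calculation is simple: with the two motor shafts a known distance apart, the required cord lengths for a target pen position follow from Pythagoras, and the change in length maps to stepper steps via the amount of cord wound per step. A rough sketch, with made-up dimensions rather than John's actual ones:

import math

MOTOR_SPACING_MM = 800.0   # distance between the two motor shafts (assumed)
STEPS_PER_MM = 5.0         # stepper steps to wind/unwind 1 mm of cord (assumed)

def cord_lengths(x, y):
    """Cord lengths from the left and right motors to the pen at (x, y),
    with the origin at the left motor shaft and y increasing downwards."""
    left = math.hypot(x, y)
    right = math.hypot(MOTOR_SPACING_MM - x, y)
    return left, right

def steps_to_move(current_xy, target_xy):
    """Convert a small pen movement into step counts for each motor."""
    l0, r0 = cord_lengths(*current_xy)
    l1, r1 = cord_lengths(*target_xy)
    return round((l1 - l0) * STEPS_PER_MM), round((r1 - r0) * STEPS_PER_MM)

# example: nudge the pen 10 mm to the right from the middle of the drawing area
print(steps_to_move((400.0, 500.0), (410.0, 500.0)))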

A Raspberry Pi in a case, with a wide flex connected to a GPIO header
The Inky Lines plotter's print head, featuring cardboard and tape, draws an apparently random squiggle
A large area of apparently random pattern drawn by the plotter

John explains,

Suspended in-between the two motors is a print head, made out of a new 3-d modelling material I’ve been prototyping called cardboard. An old coat hanger and some velcro were also used.

(He’s our kind of maker.)

Unique images

The earlier drawings that John made used a repeatable method to render image files as lines on paper. That is, if the machine drew the same image a number of times, each copy would be identical. More recently, though, he has been using a method that yields random movements of the pen:

The pen point is guided around the image, but moves to each new point entirely at random. Up close this looks like a chaotic squiggle, but from a distance of a couple of meters, the human eye (and brain) make order from the chaos and view an infinite number of shades and a smoother, less mechanical image.

An apparently chaotic squiggle

This method means that no matter how many times the polargraph repeats the same image, each copy will be unique.

A gallery of work

Inky Lines’ website and its Instagram feed offer a collection of wonderful pieces John has drawn with his polargraph, and he discusses the different techniques and types of image that he is exploring.

A 3 x 3 grid of varied and colourful images from inkylinespolargraph's Instagram feed

They range from holiday photographs, processed to extract particular features and rendered in silhouette, to portraits, made with a single continuous line that can be several hundred metres long, to generative spirograph images like those pictured above, created by an algorithm rather than rendered from a source image.


[$] Easier container security with entitlements

Post Syndicated from corbet original https://lwn.net/Articles/755238/rss

During KubeCon + CloudNativeCon Europe 2018, Justin Cormack and Nassim Eddequiouaq presented a proposal to simplify the setting of security parameters for containerized applications. Containers depend on a large set of intricate security primitives that can have weird interactions. Because they are so hard to use, people often just turn the whole thing off. The goal of the proposal is to make those controls easier to understand and use; it is partly inspired by mobile apps on iOS and Android platforms, an idea that trickled back into Microsoft and Apple desktops. The time seems ripe to improve the field of container security, which is in desperate need of simpler controls.

[$] Modifying the Python object model

Post Syndicated from jake original https://lwn.net/Articles/754163/rss

At the 2018 Python Language Summit, Carl Shapiro described some of the experiments that he and others at Instagram did to look at ways to improve the performance of the CPython interpreter. The talk was somewhat academic in tone and built on what has been learned in other dynamic languages over the years. By modifying the Python object model fairly substantially, they were able to roughly double the performance of the “classic” Richards benchmark.

Brutus 2: the gaming PC case of your dreams

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/brutus-2-gaming-pc-case/

Attention, case modders: take a look at the Brutus 2, an extremely snazzy computer case with a partly transparent, animated side panel that’s powered by a Pi. Daniel Otto and Carsten Lehman have a current crowdfunder for the case; their video is in German, but the looks of the build speak for themselves. There are some truly gorgeous effects here.

der BRUTUS 2 by 3nb Gaming

Pre-orders are open now at https://www.startnext.com/brutus2. More information about us at: https://3nb.de https://www.facebook.com/3nb.de https://www.instagram.com/3nb.de About 3nb: a GbR based in Leipzig, founded in 2017; our backgrounds are in electronics and computer science; our first product was the Brutus One, a gaming PC with a transparent display in its side. Brutus 2 in brief: a branded computer case for the gaming and case-modding scene; its special feature is an animated side window driven by a Raspberry Pi. Advantages of our case: the case is available on its own, not only as a complete PC; no graphics-card performance cost thanks to the integrated Raspberry Pi; better rendering of text and graphics thanks to the blurred background.

What’s case modding?

Case modding just means modifying your computer or gaming console’s case, and it’s very popular in the gaming community. Some mods are functional, while others improve the way the case looks. Lots of dedicated gamers don’t just want a powerful computer; they also want it to look amazing — at home, or at LAN parties and games tournaments.

The Brutus 2 case

The Brutus 2 case is made by Daniel and Carsten’s startup, 3nb electronics, and it’s a product that is officially Powered by Raspberry Pi. Its standout feature is the semi-transparent TFT screen, which lets you play any video clip you choose while keeping your gaming hardware on display. It looks incredibly cool. All the graphics for the case’s screen are handled by a Raspberry Pi, so it doesn’t use any of your main PC’s GPU power and your gaming won’t suffer.

Brutus 2 PC case powered by Raspberry Pi

The software

To use Brutus 2, you just need to run a small desktop application on your PC to choose what you want to display on the case. A number of neat animations are included, and you can upload your own if you want.

So far, the app only runs on Windows, but 3nb electronics are planning to make the code open-source, so you can modify it for other operating systems, or to display other file types. This is true to the spirit of the case modding and Raspberry Pi communities, who love adapting, retrofitting, and overhauling projects and code to fit their needs.

Brutus 2 PC case powered by Raspberry Pi

Daniel and Carsten say that one of their campaign’s stretch goals is to implement more functionality in the Brutus 2 app. So in the future, the case could also show things like CPU temperature, gaming stats, and in-game messages. Of course, there’s nothing stopping you from integrating features like that yourself.

If you have any questions about the case, you can post them directly to Daniel and Carsten here.

The crowdfunding campaign

The Brutus 2 campaign on Startnext is currently halfway to its first funding goal of €10,000, with over three weeks to go until it closes. If you’re quick, you may still be able to snatch one of the early-bird offers. And if your whole guild NEEDS this, that’s OK — there are discounts for bulk orders.


Solving Complex Ordering Challenges with Amazon SQS FIFO Queues

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/solving-complex-ordering-challenges-with-amazon-sqs-fifo-queues/

Contributed by Shea Lutton, AWS Cloud Infrastructure Architect

Amazon Simple Queue Service (Amazon SQS) is a fully managed queuing service that helps decouple applications, distributed systems, and microservices to increase fault tolerance. SQS queues come in two distinct types:

  • Standard SQS queues are able to scale to enormous throughput with at-least-once delivery.
  • FIFO queues are designed to guarantee that messages are processed exactly once in the exact order that they are received and have a default rate of 300 transactions per second.

As customers explore SQS FIFO queues, they often have questions about how the behavior works when messages arrive and are consumed. This post walks through some common situations to identify the exact behavior that you can expect. It also covers the behavior of message groups in depth and explains why message groups are key to understanding how FIFO queues work.

The simple case

Suppose that you run a major auction platform where people buy and sell a wide range of products. Your platform requires that transactions from buyers and sellers get processed in exactly the order received. Here’s how a FIFO queue helps you keep all your transactions in one straight flow.

A seller currently is holding an auction for a laptop, and three different bids are received for the same price. Ties are awarded to the first bidder at that price so it is important to track which arrived first. Your auction platform receives the three bids and sends them to a FIFO queue before they are processed.
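
Sending those bids with the AWS SDK for Python (boto3) might look like the sketch below. The queue URL and bid data are made up for illustration; note that every message sent to a FIFO queue needs a MessageGroupId, and a MessageDeduplicationId unless content-based deduplication is enabled on the queue.

import json
import boto3

sqs = boto3.client("sqs")
# hypothetical queue; FIFO queue names must end in ".fifo"
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/auction-bids.fifo"

bids = [
    {"bid_id": "A1", "auction": "A", "amount": 750},
    {"bid_id": "A2", "auction": "A", "amount": 750},
    {"bid_id": "A3", "auction": "A", "amount": 750},
]

for bid in bids:
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(bid),
        MessageGroupId=bid["auction"],         # all bids for this auction share a group
        MessageDeduplicationId=bid["bid_id"],  # required unless content-based dedup is on
    )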

Now observe how messages leave the queue. When your consumer asks for a batch of up to 10 messages, SQS starts filling the batch with the oldest message (bid A1). It keeps filling until either the batch is full or the queue is empty. In this case, the batch contains the three messages and the queue is now empty. After a batch has left the queue, SQS considers that batch of messages to be “in-flight” until the consumer either deletes them or the batch’s visibility timer expires.

 

When you have a single consumer, this is easy to envision. The consumer gets a batch of messages (now in-flight), does its processing, and deletes the messages. That consumer is then ready to ask for the next batch of messages.
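
A single-consumer loop along those lines, again with boto3 and a hypothetical queue URL, could look like this:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/auction-bids.fifo"

while True:
    # ask for a batch of up to 10 messages, long-polling for up to 10 seconds
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=10,
    )
    messages = response.get("Messages", [])
    if not messages:
        break

    for message in messages:
        # placeholder for your own processing logic
        print("processing bid:", message["Body"])

    # delete the whole batch so SQS will release the next one
    sqs.delete_message_batch(
        QueueUrl=queue_url,
        Entries=[
            {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
            for m in messages
        ],
    )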

The critical thing to keep in mind is that SQS won’t release the next batch of messages until the first batch has been deleted. By adding more messages to the queue, you can see more interesting behaviors. Imagine that a burst of 11 bids is sent to your FIFO queue, with two bids for Auction A arriving last.

The FIFO queue now has at least two batches of messages in it. When your single consumer requests the first batch of 10 messages, it receives a batch starting with B1 and ending with A1. Later, after the first batch has been deleted, the consumer can get the second batch of messages containing the final A2 message from the queue.

Adding complexity with multiple message groups

A new challenge arises. Your auction platform is getting busier and your dev team added a number of new features. The combination of increased messages and extra processing time for the new features means that a single consumer is too slow. The solution is to scale to have more consumers and process messages in parallel.

To work in parallel, your team realized that only the messages related to a single auction must be kept in order. All transactions for Auction A need to be kept in order and so do all transactions for Auction B. But the two auctions are independent, and it does not matter which auction’s transactions are processed first.

FIFO can handle that case with a feature called message groups. Each transaction related to Auction A is placed by your producer into message group A, and so on. In the diagram below, Auction A and Auction B each received three bid transactions, with bid B1 arriving first. The FIFO queue always keeps transactions within a message group in the order in which they arrived.
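
On the producer side, the only change from the earlier sketch is the MessageGroupId each message is sent with (client, queue URL, and IDs are again hypothetical):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/auction-bids.fifo"

# bids for different auctions go into different message groups: ordering is
# preserved within each group, but the two auctions are independent of each other
sqs.send_message(QueueUrl=queue_url, MessageBody='{"bid_id": "B1", "amount": 120}',
                 MessageGroupId="auction-B", MessageDeduplicationId="B1")
sqs.send_message(QueueUrl=queue_url, MessageBody='{"bid_id": "A1", "amount": 750}',
                 MessageGroupId="auction-A", MessageDeduplicationId="A1")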

How is this any different from the earlier examples? The consumer now gets the messages ordered by message group: all the B group messages followed by all the A group messages. Multiple message groups create the possibility of using multiple consumers, which I explain in a moment. If FIFO can’t fill up a batch of messages with a single message group, FIFO can place more than one message group in a batch of messages. But whenever possible, the queue gives you a full batch of messages from the same group.

The order of messages leaving a FIFO queue is governed by three rules:

  1. Return the oldest message where no other message in the same message group is currently in-flight.
  2. Return as many messages from the same message group as possible.
  3. If a message batch is still not full, go back to rule 1.

To see this behavior, add a second consumer and insert many more messages into the queue. For simplicity, the delete message action has been omitted in these diagrams but it is assumed that all messages in a batch are processed successfully by the consumer and the batch is properly deleted immediately after.

In this example, there are 11 Group A and 11 Group B transactions arriving in interleaved order and a second consumer has been added. Consumer 1 asks for a group of 10 messages and receives 10 Group A messages. Consumer 2 then asks for 10 messages but SQS knows that Group A is in flight, so it releases 10 Group B messages. The two consumers are now processing two batches of messages in parallel, speeding up throughput and then deleting their batches. When Consumer 1 requests the next batch of messages, it receives the remaining two messages, one from Group A and one from Group B.

Consider this nuanced detail from the example above. What would happen if Consumer 1 was on a faster server and processed its first batch of messages before Consumer 2 could mark its messages for deletion? See if you can predict the behavior before looking at the answer.

If Consumer 2 has not deleted its Group B messages yet when Consumer 1 asks for the next batch, then the FIFO queue considers Group B to still be in flight. It does not release any more Group B messages. Consumer 1 gets only the remaining Group A message. Later, after Consumer 2 has deleted its first batch, the remaining Group B message is released.

Conclusion

I hope this post answered your questions about how Amazon SQS FIFO queues work and why message groups are helpful. If you’re interested in exploring SQS FIFO queues further, here are a few ideas to get you started:

Mayank Sinha’s home security project

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/home-security/

Yesterday, I received an email from someone called Mayank Sinha, showing us the Raspberry Pi home security project he’s been working on. He got in touch particularly because, he writes, the Raspberry Pi community has given him “immense support” with his build, and he wanted to dedicate it to the community as thanks.

Mayank’s project is named Asfaleia, a Greek word that means safety, certainty, or security against threats. It’s part of an honourable tradition dating all the way back to 2012: it’s a prototype housed in a polystyrene box, using breadboards and jumper leads and sticky tape. And it’s working! Take a look.

Asfaleia DIY Home Security System

An IOT based home security system. The link to the code: https://github.com/mayanksinha11/Asfaleia

Home security with Asfaleia

Asfaleia has a PIR (passive infrared) motion sensor, an IR break beam sensor, and a gas sensor. All are connected to a Raspberry Pi 3 Model B, the latter two via a NodeMCU board. Mayank currently has them set up in a box that’s divided into compartments to model different rooms in a house.

A shallow box divided into four labelled "rooms", all containing electronic components

All the best prototypes have sticky tape or rubber bands

If the IR sensors detect motion or a broken beam, the webcam takes a photo and emails it to the build’s owner, and the build also calls their phone (I like your ringtone, Mayank). If the gas sensor detects a leak, the system activates an exhaust fan via a small relay board, and again the owner receives a phone call. The build can also authenticate users via face and fingerprint recognition. The software that runs it all is written in Python, and you can see Mayank’s code on GitHub.

Of prototypes and works-in-progress

Reading Mayank’s email made me very happy yesterday. We know that thousands of people in our community give a great deal of time and effort to help others learn and make things, and it is always wonderful to see an example of how that support is helping someone turn their ideas into reality. It’s great, too, to see people sharing works-in-progress, as well as polished projects! After all, the average build is more likely to feature rubber bands and Tupperware boxes than meticulously designed laser-cut parts or expert joinery. Mayank’s YouTube channel shows earlier work on this and another Pi project, and I hope he’ll continue to document his builds.

So here’s to Raspberry Pi projects big, small, beginner, professional, endlessly prototyped, unashamedly bodged, unfinished or fully working, shonky or shiny. Please keep sharing them all!


Securing Your Cryptocurrency

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-up-your-cryptocurrency/

Securing Your Cryptocurrency

In our blog post on Tuesday, Cryptocurrency Security Challenges, we wrote about the two primary challenges faced by anyone interested in safely and profitably participating in the cryptocurrency economy: 1) make sure you’re dealing with reputable and ethical companies and services, and, 2) keep your cryptocurrency holdings safe and secure.

In this post, we’re going to focus on how to make sure you don’t lose any of your cryptocurrency holdings through accident, theft, or carelessness. You do that by backing up the keys needed to sell or trade your currencies.

$34 Billion in Lost Value

Of the 16.4 million bitcoins said to be in circulation in the middle of 2017, close to 3.8 million may have been lost because their owners no longer are able to claim their holdings. Based on today’s valuation, that could total as much as $34 billion in lost value. And that’s just bitcoins. There are now over 1,500 different cryptocurrencies, and we don’t know how many of those have been misplaced or lost.



Now that some cryptocurrencies have reached (at least for now) staggering heights in value, it’s likely that owners will be more careful in keeping track of the keys needed to use their cryptocurrencies. For the ones already lost, however, the owners have been separated from their currencies just as surely as if they had thrown Benjamin Franklins and Grover Clevelands over the railing of a ship.

The Basics of Securing Your Cryptocurrencies

In our previous post, we reviewed how cryptocurrency keys work, and the common ways owners can keep track of them. A cryptocurrency owner needs two keys to use their currencies: a public key that can be shared with others is used to receive currency, and a private key that must be kept secure is used to spend or trade currency.

Many wallets and applications allow the user to require extra security to access them, such as a password, or iris, face, or thumb print scan. If one of these options is available in your wallets, take advantage of it. Beyond that, it’s essential to back up your wallet, either using the backup feature built into some applications and wallets, or manually backing up the data used by the wallet. When backing up, it’s a good idea to back up the entire wallet, as some wallets require additional private data to operate that might not be apparent.

No matter which backup method you use, it is important to back up often and have multiple backups, preferably in different locations. As with any valuable data, a 3-2-1 backup strategy is good to follow, which ensures that you’ll have a good backup copy if anything goes wrong with one or more copies of your data.

One more caveat: don’t reuse passwords. This applies to all of your accounts, but is especially important for something as critical as your finances. Don’t ever use the same password for more than one account. If security is breached on one of your accounts, someone could connect your name or ID with other accounts, and will attempt to use the password there, as well. Consider using a password manager such as LastPass or 1Password, which make creating and using complex and unique passwords easy no matter where you’re trying to sign in.

Approaches to Backing Up Your Cryptocurrency Keys

There are numerous ways to be sure your keys are backed up. Let’s take them one by one.

1. Automatic backups using a backup program

If you’re using a wallet program on your computer, for example, Bitcoin Core, it will store your keys, along with other information, in a file. For Bitcoin Core, that file is wallet.dat. Other currencies will use the same or a different file name and some give you the option to select a name for the wallet file.

To back up the wallet.dat or other wallet file, you might need to tell your backup program to explicitly back up that file. Users of Backblaze Backup don’t have to worry about configuring this, since by default, Backblaze Backup will back up all data files. You should determine where your particular cryptocurrency, wallet, or application stores your keys, and make sure the necessary file(s) are backed up if your backup program requires you to select which files are included in the backup.

Backblaze B2 is an option for those interested in low-cost and high security cloud storage of their cryptocurrency keys. Backblaze B2 supports 2-factor verification for account access, works with a number of apps that support automatic backups with encryption, error-recovery, and versioning, and offers an API and command-line interface (CLI), as well. The first 10GB of storage is free, which could be all one needs to store encrypted cryptocurrency keys.

2. Backing up by exporting keys to a file

Apps and wallets will let you export your keys to a file. Once exported, your keys can be stored on a local drive, USB thumb drive, DAS, NAS, or in the cloud with any cloud storage or sync service you wish. Encrypting the file is strongly encouraged — more on that later. If you use 1Password or LastPass, or another secure notes program, you also could store your keys there.

3. Backing up by saving a mnemonic recovery seed

A mnemonic phrase, mnemonic recovery phrase, or mnemonic seed is a list of words that stores all the information needed to recover a cryptocurrency wallet. Many wallets will have the option to generate a mnemonic backup phrase, which can be written down on paper. If the user’s computer no longer works or their hard drive becomes corrupted, they can download the same wallet software again and use the mnemonic recovery phrase to restore their keys.

The phrase can be used by anyone to recover the keys, so it must be kept safe. Mnemonic phrases are an excellent way of backing up and storing cryptocurrency and so they are used by almost all wallets.

A mnemonic recovery seed is represented by a group of easy to remember words. For example:

eye female unfair moon genius pipe nuclear width dizzy forum cricket know expire purse laptop scale identify cube pause crucial day cigar noise receive

The above words represent the following seed:

0a5b25e1dab6039d22cd57469744499863962daba9d2844243fec9c0313c1448d1a0b2cd9e230a78775556f9b514a8be45802c2808efd449a20234e9262dfa69

These words have certain properties:

  • The first four letters are enough to unambiguously identify the word.
  • Similar words are avoided (such as: build and built).

Bitcoin and most other cryptocurrencies such as Litecoin, Ethereum, and others use mnemonic seeds that are 12 to 24 words long. Other currencies might use different length seeds.
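
If you would like to see what generating such a phrase looks like in code, the third-party python-mnemonic package (the reference BIP-39 implementation) can do it in a few lines. Treat this purely as an illustration; don't generate seeds for real funds on an internet-connected machine.

# pip install mnemonic
from mnemonic import Mnemonic

mnemo = Mnemonic("english")

# generate a 12-word recovery phrase (128 bits of entropy)
phrase = mnemo.generate(strength=128)
print(phrase)

# derive the binary seed that wallets feed into their key derivation
seed = mnemo.to_seed(phrase, passphrase="")
print(seed.hex())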

4. Physical backups — Paper, Metal

Some cryptocurrency holders believe that their backup, or even all their cryptocurrency account information, should be stored entirely separately from the internet to avoid any risk of their information being compromised through hacks, exploits, or leaks. This type of storage is called “cold storage.” One method of cold storage involves printing out the keys to a piece of paper and then erasing any record of the keys from all computer systems. The keys can be entered into a program from the paper when needed, or scanned from a QR code printed on the paper.

Printed public and private keys

Printed public and private keys

Some who go to extremes suggest separating the mnemonic needed to access an account into individual pieces of paper and storing those pieces in different locations in the home or office, or even different geographical locations. Some say this is a bad idea since it could be possible to reconstruct the mnemonic from one or more pieces. How diligent you wish to be in protecting these codes is up to you.

Mnemonic recovery phrase booklet

Mnemonic recovery phrase booklet

There’s another option that could make you the envy of your friends. That’s the CryptoSteel wallet, which is a stainless steel case that comes with more than 250 stainless steel letter tiles engraved on each side. Codes and passwords are assembled manually from the supplied part-randomized set of tiles. Users are able to store up to 96 characters worth of confidential information. Cryptosteel claims to be fireproof, waterproof, and shock-proof.

image of a Cryptosteel cold storage device

Cryptosteel cold wallet

Of course, if you leave your Cryptosteel wallet in the pocket of a pair of ripped jeans that gets thrown out by the housekeeper, as happened to the character Russ Hanneman on the TV show Silicon Valley in last Sunday’s episode, then you’re out of luck. That fictional billionaire investor lost a USB drive with $300 million in cryptocoins. Let’s hope that doesn’t happen to you.

Encryption & Security

Whether you store your keys on your computer, an external disk, a USB drive, DAS, NAS, or in the cloud, you want to make sure that no one else can use those keys. The best way to handle that is to encrypt the backup.
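
As an illustration of what encrypting a backup can look like, the sketch below uses the Python cryptography library's Fernet recipe to encrypt an exported key file before it is copied anywhere. The file names are placeholders, and the generated encryption key must itself be kept somewhere safe and separate from the backup.

# pip install cryptography
from cryptography.fernet import Fernet

# generate an encryption key and save it somewhere separate and safe
key = Fernet.generate_key()
with open("backup.key", "wb") as key_file:
    key_file.write(key)

fernet = Fernet(key)

# encrypt an exported wallet/key file (placeholder file names)
with open("wallet-export.txt", "rb") as plain_file:
    ciphertext = fernet.encrypt(plain_file.read())

with open("wallet-export.txt.enc", "wb") as enc_file:
    enc_file.write(ciphertext)

# later, decrypt it again with the same key
with open("backup.key", "rb") as key_file:
    restored = Fernet(key_file.read()).decrypt(ciphertext)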

With Backblaze Backup for Windows and Macintosh, your backups are encrypted in transmission to the cloud and on the backup server. Users have the option to add an additional level of security by adding a Personal Encryption Key (PEK), which secures their private key. Your cryptocurrency backup files are secure in the cloud. Using our web or mobile interface, previous versions of files can be accessed, as well.

Our object storage cloud offering, Backblaze B2, can be used with a variety of applications for Windows, Macintosh, and Linux. With B2, cryptocurrency users can choose whichever method of encryption they wish to use on their local computers and then upload their encrypted currency keys to the cloud. Depending on the client used, versioning and life-cycle rules can be applied to the stored files.

Other backup programs and systems provide some or all of these capabilities, as well. If you are backing up to a local drive, it is a good idea to encrypt the local backup, which is an option in some backup programs.

Address Security

Some experts recommend using a different address for each cryptocurrency transaction. Since the address is not the same as your wallet, this means that you are not creating a new wallet, but simply using a new identifier for people sending you cryptocurrency. Creating a new address is usually as easy as clicking a button in the wallet.

One of the chief advantages of using a different address for each transaction is anonymity. Each time you use an address, you put more information into the public ledger (blockchain) about where the currency came from or where it went. That means that over time, using the same address repeatedly could mean that someone could map your relationships, transactions, and incoming funds. The more you use that address, the more information someone can learn about you. For more on this topic, refer to Address reuse.

Note that a downside of using a paper wallet with a single key pair (a type-0 non-deterministic wallet) is that it has the vulnerabilities listed above. Each transaction using that paper wallet will add to the public record of transactions associated with that address. Newer wallets, i.e. “deterministic” wallets or those using mnemonic code words, support multiple addresses and are now recommended.

There are other approaches to keeping your cryptocurrency transaction secure. Here are a couple of them.

Multi-signature

Multi-signature refers to requiring more than one key to authorize a transaction, much like requiring more than one key to open a safe. It is generally used to divide up responsibility for possession of cryptocurrency. Standard transactions could be called “single-signature transactions” because transfers require only one signature — from the owner of the private key associated with the currency address (public key). Some wallets and apps can be configured to require more than one signature, which means that a group of people, businesses, or other entities all must agree to trade in the cryptocurrencies.

Deep Cold Storage

Deep cold storage ensures the entire transaction process happens in an offline environment. There are typically three elements to deep cold storage.

First, the wallet and private key are generated offline, and the signing of transactions happens on a system not connected to the internet in any manner. This ensures it’s never exposed to a potentially compromised system or connection.

Second, details are secured with encryption to ensure that even if the wallet file ends up in the wrong hands, the information is protected.

Third, storage of the encrypted wallet file or paper wallet is generally at a location or facility that has restricted access, such as a safety deposit box at a bank.

Deep cold storage is used to safeguard a large individual cryptocurrency portfolio held for the long term, or for trustees holding cryptocurrency on behalf of others, and is possibly the safest method to ensure a crypto investment remains secure.

Keep Your Software Up to Date

You should always make sure that you are using the latest version of your app or wallet software, which includes important stability and security fixes. Installing updates for all other software on your computer or mobile device is also important to keep your wallet environment safer.

One Last Thing: Think About Your Testament

Your cryptocurrency funds can be lost forever if you don’t have a backup plan for your peers and family. If the location of your wallets or your passwords is not known by anyone when you are gone, there is no hope that your funds will ever be recovered. Taking a bit of time on these matters can make a huge difference.

To the Moon*

Are you comfortable with how you’re managing and backing up your cryptocurrency wallets and keys? Do you have a suggestion for keeping your cryptocurrencies safe that we missed above? Please let us know in the comments.


*To the Moon — Crypto slang for a currency that reaches an optimistic price projection.


Hello World Issue 5: Engineering

Post Syndicated from Russell Barnes original https://www.raspberrypi.org/blog/hello-world-issue-5/

Join us as we celebrate the Year of Engineering in the newest issue of Hello World, our magazine for computing and digital making educators.

 

Inspiring future engineers

We’ve brought together a wide range of experts to share their ideas and advice on how to bring engineering to your classroom — read issue 5 to find out the best ways to inspire the next generation.



Plus we’ve got plenty on GP and Scratch, we answer your latest questions, and we bring you our usual collection of useful features, guides, and lesson plans.

Highlights of issue 5 include:

  • The bluffers’ guide to putting together a tech-themed school trip
  • Inclusion, and coding for the visually impaired
  • Getting students interested in databases
  • Why copying may not always be a bad thing

How to get Hello World #5

Hello World is available as a free download under a Creative Commons license for everyone in the world who is interested in computer science and digital making education. Get the latest issue as a PDF file straight from the Hello World website.

We’re currently offering free print copies of the magazine to serving educators in the UK. This offer is open to teachers, Code Club and CoderDojo volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making. Subscribe to have your free print magazine posted directly to your home, or subscribe digitally — 20,000 educators have already signed up to receive theirs!

Get in touch!

You could write for us about your experiences as an educator, and share your advice with the community. Wherever you are in the world, get in touch by emailing our editorial team about your article idea — we would love to hear from you!

Hello World magazine is a collaboration between the Raspberry Pi Foundation and Computing At School, which is part of the British Computer Society.


[$] Who controls glibc?

Post Syndicated from corbet original https://lwn.net/Articles/753646/rss

The removal of an old joke from the GNU C Library manual might not seem like the sort of topic that would inspire a heated debate. At times, though, a small action can serve as an inadvertent proxy for a more significant question, one which is relevant to both the developers and the users of the project. In this case, that question would be: how is the project governed and who makes decisions about which patches are applied?

The intersection of Customer Engagement and Data Science

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/the-intersection-of-customer-engagement-and-data-science/

On the Messaging and Targeting team, we’re constantly inspired by the new and novel ways that customers use our services. For example, last year we took an in-depth look at a customer who built a fully featured email marketing platform based on Amazon SES and other AWS Services.

This week, our friends on the AWS Machine Learning team published a blog post that brings together the worlds of data science and customer engagement. Their solution uses Amazon SageMaker (a platform for building and deploying machine learning models) to create a system that makes purchasing predictions based on customers’ past behaviors. It then uses Amazon Pinpoint to send campaigns to customers based on these predictions.

The blog post is an interesting read that includes a primer on the process of creating a useful Machine Learning solution. It then goes in-depth, discussing the real-world considerations that are involved in implementing the solution.

Take a look at their post, Amazon Pinpoint campaigns driven by machine learning on Amazon SageMaker, on the AWS Machine Learning Blog.

Security Vulnerabilities in VingCard Electronic Locks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/security_vulner_14.html

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world:

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel’s possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free reign to roam any room in the building. The whole process takes about a minute.

[…]

The two researchers say that their attack works only on Vingcard’s previous-generation Vision locks, not the company’s newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard’s Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million.

Patching is a nightmare. It requires updating the firmware on every lock individually.

And the researchers speculate whether or not others knew of this hack:

The F-Secure researchers admit they don’t know if their Vinguard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard’s products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target’s room, albeit one that required re-programming the lock. “Most probably Mossad has a capability to do something like this,” Tuominen says.

Slashdot post.

Congratulations to Oracle on MySQL 8.0

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2018/04/congratulations-to-oracle-on-mysql-80.html

Last week, Oracle announced the general availability of MySQL 8.0. This is good news for database users, as it means Oracle is still developing MySQL.

I decided to celebrate the event by doing a quick test of MySQL 8.0. Here follows a step-by-step description of my first experience with MySQL 8.0.
Note that I did the following without reading the release notes, as I have done with every MySQL / MariaDB release to date; in this case it was not the right thing to do.

I pulled MySQL 8.0 from git@github.com:mysql/mysql-server.git
I was pleasantly surprised that 'cmake . ; make' worked without any compiler warnings! I even checked the used compiler options and noticed that MySQL was compiled with -Wall + several other warning flags. Good job MySQL team!

I did have a little trouble finding the mysqld binary, as Oracle had moved it to 'runtime_output_directory'; unexpected, but no big thing.

Now it was time to install MySQL 8.0.

I did know that MySQL 8.0 has removed mysql_install_db, so I had to use the mysqld binary directly to install the default databases:
(I have specified datadir=/my/data3 in the /tmp/my.cnf file)

> cd runtime_output_directory
> mkdir /my/data3
> ./mysqld --defaults-file=/tmp/my.cnf --install

2018-04-22T12:38:18.332967Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2018-04-22T12:38:18.333109Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2018-04-22T12:38:18.333135Z 0 [ERROR] [MY-010119] [Server] Aborting

A quick look at the mysqld --help --verbose output showed that the right command option is --initialize. My bad, let's try again:

> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:39:31.910509Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2018-04-22T12:39:31.910578Z 0 [ERROR] [MY-010119] [Server] Aborting

Now I used the right options, but it still didn't work.
I took a quick look around:

> ls /my/data3/
binlog.index

So even though mysqld noticed that the data3 directory was wrong, it still wrote things into it. This happened even though I didn't have --log-binlog enabled in the my.cnf file. Strange, but easy to fix:

> rm /my/data3/binlog.index
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:40:45.633637Z 0 [ERROR] [MY-011071] [Server] unknown variable 'max-tmp-tables=100'
2018-04-22T12:40:45.633657Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you’re executing mysql_upgrade to correct the issue.
2018-04-22T12:40:45.633663Z 0 [ERROR] [MY-010119] [Server] Aborting

The warning about the privilege system confused me a bit, but I ignored it for the time being and removed from my configuration files the variables that MySQL 8.0 doesn't support anymore. I couldn't find a list of the removed variables anywhere, so this was done by trial and error.

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:42:56.626583Z 0 [ERROR] [MY-010735] [Server] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2018-04-22T12:42:56.827685Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-04-22T12:42:56.838501Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2018-04-22T12:42:56.848375Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2018-04-22T12:42:56.848863Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist
2018-04-22T12:42:56.848916Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition.
....
2018-04-22T12:42:56.854141Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: '8.0.11' socket: '/tmp/mysql.sock' port: 3306 Source distribution.

I figured out that if there is a single wrong variable in the configuration file, running mysqld --initialize will leave the database in an inconsistent state. NOT GOOD! I am happy I didn't try this in a production system!

Time to start over from the beginning:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:44:45.548960Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: px)NaaSp?6um
2018-04-22T12:44:51.221751Z 0 [System] [MY-013170] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld (mysqld 8.0.11) initializing of server has completed

Success!

I wonder why the temporary password is so complex; it could have been something one could easily remember without decreasing security, as it's temporary after all. No big deal, one can always paste it from the logs. (Side note: MariaDB uses socket authentication on many systems and thus doesn't need temporary installation passwords.)

Now let's start the MySQL server for real to do some testing:

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:45:43.683484Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: '8.0.11' socket: '/tmp/mysql.sock' port: 3306 Source distribution.

And then let's start the client:

> ./client/mysql --socket=/tmp/mysql.sock --user=root --password="px)NaaSp?6um"
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Apparently MySQL 8.0 doesn’t work with old MySQL / MariaDB clients by default 🙁

I was testing this on a system with MariaDB installed, like all modern Linux systems today, and didn't want to use the MySQL clients or libraries.

I decided to try to fix this by changing the authentication to the native (original) MySQL authentication method.

> mysqld --skip-grant-tables

> ./client/mysql --socket=/tmp/mysql.sock --user=root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

Apparently --skip-grant-tables is not good enough anymore. Let's try again with:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --socket=/tmp/mysql.sock --user=root mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 8.0.11 Source distribution

Great, we are getting somewhere. Now let's fix "root" to work with the old authentication:

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string=password("test") where user="root";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '("test") where user="root"' at line 1

A quick look in the MySQL 8.0 release notes told me that the PASSWORD() function is removed in 8.0. Why???? I don't know how one is supposed to generate, in MySQL 8.0, passwords compatible with old installations of MySQL. One could of course start an old MySQL or MariaDB version, execute the password() function, and copy the result.
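
Another workaround is to compute the hash outside the server: the mysql_native_password format is just '*' followed by the uppercase hex of SHA1(SHA1(password)), so a few lines of Python can produce a string compatible with old installations:

import hashlib

def native_password_hash(password):
    # mysql_native_password hash: '*' + HEX(SHA1(SHA1(password)))
    stage1 = hashlib.sha1(password.encode("utf-8")).digest()
    return "*" + hashlib.sha1(stage1).hexdigest().upper()

print(native_password_hash("test"))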

I decided to fix this the easy way and use an empty password:

(Update: I later discovered that the right way would have been to use: FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'test'; I however dislike this syntax as it has the password in clear text, which is easy to grab, and the command can't be used to easily update the mysql.user table. One must also disable the --skip-grant mode to use this.)

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string="" where user="root";
Query OK, 1 row affected (0.077 sec)
Rows matched: 1 Changed: 1 Warnings: 0
 
I restarted mysqld:
> mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.

Ouch, forgot that. Let's try again:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
MySQL [mysql]> update mysql.user set password_expired="N" where user="root";

Now the restart and test worked:

> ./mysqld --default_authentication_plugin=mysql_native_password

>./client/mysql --user=root --password="" mysql

Finally I had a working account that I can use to create other users!

When looking at mysqld --help --verbose again, I noticed the option:

--initialize-insecure
Create the default database and exit. Create a super user with empty password.

I decided to check if this would have made things easier:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure

2018-04-22T13:18:06.629548Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Hm. I don't understand the warning, as --initialize-insecure is not an option that one would use more than once, and thus nothing one would 'switch off'.

> ./mysqld --defaults-file=/tmp/my.cnf

> ./client/mysql --user=root --password="" mysql
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Back to the beginning 🙁

To get things to work with old clients, one has to initialize the database with:
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure --default_authentication_plugin=mysql_native_password

Now I finally had MySQL 8.0 up and running and thought I would take it for a spin by running the "standard" MySQL/MariaDB sql-bench test suite. This was removed in MySQL 5.7, but as I happened to have MariaDB 10.3 installed, I decided to run it from there.

sql-bench is a single-threaded benchmark that measures the "raw" speed of some common operations. It gives you the 'maximum' performance for a single query. It's different from other benchmarks that measure the maximum throughput when you have a lot of users, but sql-bench still tells you a lot about what kind of performance to expect from the database.

I first tried to be clever and create the "test" database, which I needed for sql-bench, with
> mkdir /my/data3/test

but when I tried to run the benchmark, MySQL 8.0 complained that the test database didn’t exist.

MySQL 8.0 has gone away from the original concept of MySQL where the user can easily create directories and copy databases into the database directory. This may have serious implications for anyone doing backups of databases and/or trying to restore a backup with normal OS commands.

I created the ‘test’ database with mysqladmin and then tried to run sql-bench:

> ./run-all-tests --user=root

The first run failed in test-ATIS:

Can't execute command 'create table class_of_service (class_code char(2) NOT NULL,rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_code))'
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_' at line 1

This happened because ‘rank‘ is now a reserved word in MySQL 8.0. This is also reserved in ANSI SQL, but I don’t know of any other database that has failed to run test-ATIS before. I have in the past run it against Oracle, PostgreSQL, Mimer, MSSQL etc without any problems.

MariaDB also has ‘rank’ as a keyword in 10.2 and 10.3 but one can still use it as an identifier.

I fixed test-ATIS and then managed to run all tests on MySQL 8.0.

I ran the test with both MySQL 8.0 and MariaDB 10.3, with the InnoDB storage engine and with identical values for all InnoDB variables, table-definition-cache, and table-open-cache. I turned off the performance schema for both databases. All tests were run with a user with an empty password (to keep things comparable and because it was too complex to generate a password in MySQL 8.0).

The results are as follows.
Results per test in seconds:

Operation         |MariaDB|MySQL-8|
-----------------------------------
ATIS              | 153.00| 228.00|
alter-table       |  92.00| 792.00|
big-tables        | 990.00|2079.00|
connect           | 186.00| 227.00|
create            | 575.00|4465.00|
insert            |4552.00|8458.00|
select            | 333.00| 412.00|
table-elimination |1900.00|3916.00|
wisconsin         | 272.00| 590.00|
-----------------------------------

This is of course just a first view of the performance of MySQL 8.0 in a single user environment. Some reflections about the results:

  • The alter-table test is slower (as expected) in 8.0, as some of the alter tests benefit from instant ADD COLUMN in MariaDB 10.3.
  • The connect test is also better for MariaDB, as we put a lot of effort into speeding this up in MariaDB 10.2.
  • table-elimination shows an optimization in MariaDB for the Anchor table model, which MySQL doesn't have.
  • CREATE and DROP TABLE are almost 8 times slower in MySQL 8.0 than in MariaDB 10.3. I assume this is the cost of 'atomic DDL'. This may also cause performance problems for any thread using the data dictionary while another thread is creating/dropping tables.
  • Looking at the individual test results, MySQL 8.0 was slower in almost every test, and in many cases significantly slower.
  • The only test where MySQL was faster was "update_with_key_prefix". I checked this and noticed that there was a bug in the test: the column was updated to its original value (which should be instant with any storage engine). This is an old bug that MySQL has found and fixed and that we had not been aware of in the test or in MariaDB.
  • While writing this, I noticed that MySQL 8.0 now uses utf8mb4 as the default character set instead of latin1. This may affect some of the benchmarks slightly (not much, as most tests work with numbers, and Oracle claims that utf8mb4 is only 20% slower than latin1), but it needs to be verified.
  • Oracle claims that MySQL 8.0 is much faster on multi-user benchmarks. The above test indicates that they may have achieved this by sacrificing single-user performance.
  • We need to run more, and many different, benchmarks to better understand exactly what is going on. Stay tuned!

Short summary of my first run with MySQL 8.0:

  • Using the new caching_sha2_password authentication as the default for new installations is likely to cause a lot of problems for users. No old application will be able to use MySQL 8.0, installed with default options, without moving to MySQL's client libraries. While working on this blog I saw MySQL users complain on IRC that not even MySQL Workbench can authenticate with MySQL 8.0. This is the first time in MySQL's history that such an incompatible change has been made!
  • Atomic DDL is a good thing (we plan to have this in MariaDB 10.4), but it should not have such a drastic impact on performance. I am also a bit skeptical of MySQL 8.0 having just one copy of the data dictionary: if it gets corrupted, you lose all your data (a single point of failure).
  • MySQL 8.0 has several new reserved words and has removed a lot of variables, which makes upgrades hard. Before upgrading to MySQL 8.0 one has to check all one's databases and applications to ensure that there are no conflicts.
  • As my test above shows, if you have a single deprecated variable in your configuration files, the installation of MySQL will abort and can leave the database in an inconsistent state. I did my tests, of course, by installing into an empty data directory, but one can assume that some of these problems may also happen when upgrading an old installation.

Conclusions:
In many ways, MySQL 8.0 has caught up with some earlier versions of MariaDB. For instance, in MariaDB 10.0, we introduced roles (four years ago). In MariaDB 10.1, we introduced encrypted redo/undo logs (three years ago). In MariaDB 10.2, we introduced window functions and CTEs (a year ago). However, some catch-up of MariaDB Server 10.2 features still remains for MySQL (such as check constraints, binlog compression, and log-based rollback).

MySQL 8.0 has a few interesting new features (mostly atomic DDL and JSON TABLE functions), but at the same time MySQL has strayed away from some of the fundamental cornerstone principles of MySQL:

From the start of the first version of MySQL in 1995, all development has been focused around 3 core principles:

  • Ease of use
  • Performance
  • Stability

With MySQL 8.0, Oracle has sacrificed two of these three.

In addition (as part of ease of use), while I was working on MySQL, we did our best to ensure that the following should hold:

  • Upgrades should be trivial
  • Things should be kept compatible, if possible (don’t remove features/options/functions that are used)
  • Minimize reserved words, don’t remove server variables
  • One should be able to use normal OS commands to create and drop databases, and to copy and move tables around within the same system or between different systems. With 8.0 and the data dictionary, taking backups of specific tables will be hard, even if the server is not running.
  • mysqldump should always be usable for backups and for moving to new releases
  • Old clients and applications should be able to use 'any' MySQL server version unchanged. (Some Oracle client libraries, like C++, by default only support the new X protocol and can thus not be used with older MySQL or any MariaDB version)

We plan to add a data dictionary to MariaDB 10.4 or MariaDB 10.5, but in a way that does not sacrifice any of the above principles!

The competition between MySQL and MariaDB is not just about a tactical arms race on features. It’s about design philosophy, or strategic vision, if you will.

This shows in two main ways: our respective view of the Storage Engine structure, and of the top-level direction of the roadmap.

On the Storage Engine side, MySQL is converging on InnoDB, even for clustering and partitioning. In doing so, they are abandoning the advantages of multiple ways of storing data. By contrast, MariaDB sees lots of value in the Storage Engine architecture: MariaDB Server 10.3 will see the general availability of MyRocks (for write-intensive workloads) and Spider (for scalable workloads). On top of that, we have ColumnStore for analytical workloads. One can use the CONNECT engine to join with other databases. The use of different storage engines for different workloads and different hardware is a competitive differentiator, now more than ever.

On the roadmap side, MySQL is carefully steering clear of features that close the gap between MySQL and Oracle. MariaDB has no such constraints. With MariaDB 10.3, we are introducing PL/SQL compatibility (Oracle's stored procedures) and AS OF (built-in system versioned tables with point-in-time querying). For both of those features, MariaDB is the first Open Source database to do so. I don't expect Oracle to provide any of the above features in MySQL!

Also on the roadmap side, MySQL is not working with the ecosystem in extending the functionality. In 2017, MariaDB accepted more code contributions in one year than MySQL has accepted during its entire lifetime, and the rate is increasing!

I am sure that the experience I had with testing MySQL 8.0 would have been significantly better if MySQL had an open development model where the community could easily participate in developing and testing MySQL continuously. Most of the confusing error messages and strange behavior would have been found and fixed long before the GA release.

Before upgrading to MySQL 8.0 please read https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html to see what problems you can run into! Don't expect that old installations or applications will work out of the box without testing, as a lot of features and options have been removed (query cache, partitioning of MyISAM tables, etc.)! You probably also have to revise your backup methods, especially if you ever want to restore just a few tables. (With 8.0, I don't know how this can be easily done.)

According to the MySQL 8.0 release notes, one can't use mysqldump to copy a database to MySQL 8.0. One first has to move to a MySQL 5.7 GA version (with mysqldump, as recommended by Oracle) and then to MySQL 8.0 with an in-place update. I assume this means that all old mysqldump backups are useless for MySQL 8.0?

MySQL 8.0 seems to be a one-way street to an unknown future. Up to MySQL 5.7 it has been trivial to move to MariaDB, and one could always move back to MySQL with mysqldump. All MySQL client libraries have worked with MariaDB and all MariaDB client libraries have worked with MySQL. With MySQL 8.0 this has changed in the wrong direction.

As long as you are using MySQL 5.7 and below you have choices for your future, after MySQL 8.0 you have very little choice. But don’t despair, as MariaDB will always be able to load a mysqldump file and it’s very easy to upgrade your old MySQL installation to MariaDB 🙂

I wish you good luck to try MySQL 8.0 (and also the upcoming MariaDB 10.3)!

Invent new sounds with Google’s NSynth Super

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/google-nsynth-super/

Discover new sounds and explore the role of machine learning in music production and sound research with the NSynth Super, an ongoing project from Google’s Magenta research team that you can build at home.

[Video: Google Open NSynth Super Testing, uploaded by AB Open on 2018-04-17]

What is the NSynth Super?

Part of the ongoing Magenta research project within Google, NSynth Super explores the ways in which machine learning tools help artists and musicians be creative.


“Technology has always played a role in creating new types of sounds that inspire musicians — from the sounds of distortion to the electronic sounds of synths,” explains the team behind the NSynth Super. “Today, advances in machine learning and neural networks have opened up new possibilities for sound generation.”

Using TensorFlow, the Magenta team builds tools and interfaces that let artists and musicians use machine learning in their work. The NSynth Super AI algorithm uses deep neural networks to investigate the character of sounds. It then builds new sounds based on these characteristics instead of simply mixing sounds together.

Using an autoencoder, it extracts 16 defining temporal features from each input. These features are then interpolated linearly to create new embeddings (mathematical representations of each sound). These new embeddings are then decoded into new sounds, which have the acoustic qualities of both inputs.
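To make that interpolation step concrete, here is a minimal Python sketch (not taken from the Magenta codebase) of how two embedding vectors could be blended linearly before a decoder turns the result back into audio. The 16-dimensional shape, the variable names, and the mix parameter are illustrative assumptions.

import numpy as np

def interpolate_embeddings(emb_a: np.ndarray, emb_b: np.ndarray, mix: float) -> np.ndarray:
    """Linearly blend two sound embeddings.

    mix = 0.0 returns emb_a, mix = 1.0 returns emb_b; values in between
    produce a new embedding carrying characteristics of both inputs.
    """
    if emb_a.shape != emb_b.shape:
        raise ValueError("embeddings must have the same shape")
    return (1.0 - mix) * emb_a + mix * emb_b

# Hypothetical 16-dimensional embeddings for two source sounds
flute_embedding = np.random.rand(16)
snare_embedding = np.random.rand(16)

# Halfway between the two sounds; a decoder would synthesize audio from this
blended = interpolate_embeddings(flute_embedding, snare_embedding, mix=0.5)
print(blended)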

The team publishes all hardware designs and software that are part of their ongoing research under open-source licences, allowing you to build your own synth.

Build your own NSynth Super

Using these open-source tools, Andrew Black has produced his own NSynth Super, demoed in the video above. Andrew’s list of build materials includes a Raspberry Pi 3, potentiometers, rotary encoders, and the Adafruit 1.3″ OLED display. Magenta also provides Gerber files for you to fabricate your own PCB.


Once fabricated, the PCB includes a table of contents for adding components.

The build isn’t easy — it requires soldering skills or access to someone who can assemble PCBs. Take a look at Andrew’s blog post and the official NSynth GitHub repo to see whether you’re up to the challenge.


Music and Raspberry Pi

The Raspberry Pi has been widely used for music production and music builds. Be it retrofitting a boombox, distributing music atop Table Mountain, or coding tracks with Sonic Pi, the Pi offers endless opportunities for musicians and music lovers to expand their repertoire of builds and instruments.

If you’d like to try more music-based projects using the Raspberry Pi, you can check out our free resources. And if you’ve used a Raspberry Pi in your own musical project, please share it with us in the comments or via our social network accounts.

The post Invent new sounds with Google’s NSynth Super appeared first on Raspberry Pi.

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.


What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.


In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.


Coolest Projects North America

Not only is Coolest Projects expanding to North America, it's also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large, as well as CoderDojo attendees.


Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

The post Announcing Coolest Projects North America appeared first on Raspberry Pi.

Audit Trail Overview

Post Syndicated from Bozho original https://techblog.bozho.net/audit-trail-overview/

As part of my current project (secure audit trail) I decided to make a survey about the use of audit trail “in the wild”.

I haven’t written in details about this project of mine (unlike with some other projects). Mostly because it’s commercial and I don’t want to use my blog as a direct promotion channel (though I am doing that at the moment, ironically). But the aim of this post is to shed some light on how audit trail is used.

The survey can be found here. The questions are basically: does your current project have audit trail functionality, and if yes, is it protected from tampering. If not – do you think you should have such functionality.

The results are interesting (although with only around 50 respondents).

So more than half of the systems (that respondents are working on) don't have an audit trail. While audit trail is recommended by information security and related standards, it may not find a place in the "busy schedule" of a software project, even though it's fairly easy to provide a trivial implementation (e.g. I've written about how to quickly set one up with Hibernate and Spring).

A trivial implementation might do in many cases but if the audit log is critical (e.g. access to sensitive data, performing financial operations etc.), then relying on a trivial implementation might not be enough. In other words – if the sysadmin can access the database and delete or modify the audit trail, then it doesn’t serve much purpose. Hence the next question – how is the audit trail protected from tampering:

And apparently, of the less than 50% of projects with an audit trail, around 50% don't have technical guarantees that the audit trail can't be tampered with. My guess is that it's more, because people have different understandings of what technical measures are sufficient. E.g. someone may think that digitally signing your log files (or log records) is sufficient, but in fact it isn't, as whole files (or records) can be deleted (or fully replaced) without a way to detect that. Timestamping can help (and a good audit trail solution should have it), but it doesn't guarantee the order of events or prevent a malicious actor from deleting or inserting fake ones. And if timestamping is done at the log file level, then any not-yet-timestamped log file is vulnerable to manipulation.

I’ve written about event logs before and their two flavours – event sourcing and audit trail. An event log can effectively be considered audit trail, but you’d need additional security to avoid the problems mentioned above.

So, let’s see what would various levels of security and usefulness of audit logs look like. There are many papers on the topic (e.g. this and this), and they often go into the intricate details of how logging should be implemented. I’ll try to give an overview of the approaches:

  • Regular logs – rely on regular INFO log statements in the production logs to look for hints of what has happened. This may be okay, but is harder to look for evidence (as there is non-auditable data in those log files as well), and it’s not very secure – usually logs are collected (e.g. with graylog) and whoever has access to the log collector’s database (or search engine in the case of Graylog), can manipulate the data and not be caught
  • Designated audit trail – whether it's stored in the database or in log files. It has the proper business-event level granularity, but again doesn't prevent or detect tampering. With lower-risk systems that may be perfectly okay.
  • Timestamped logs – whether it's log files or (harder to implement) database records. Timestamping is good, but if it's not an external service, a malicious actor can get access to the local timestamping service and issue fake timestamps to re-timestamp tampered files. Even if the timestamping is not compromised, whole entries can be deleted. The fact that they are missing can sometimes be deduced based on other factors (e.g. hour of rotation), but regularly verifying that is extra effort and may not always be feasible.
  • Hash chaining – each entry (or sequence of log files) could be chained (just as blockchain transactions) – the next one having the hash of the previous one (see the sketch after this list). This is a good solution (whether it's local, external or 3rd party), but it has the risk of someone modifying or deleting a record, getting your entire chain and re-hashing it. All the checks will pass, but the data will not be correct.
  • Hash chaining with anchoring – the head of the chain (the hash of the last entry/block) could be “anchored” to an external service that is outside the capabilities of a malicious actor. Ideally, a public blockchain, alternatively – paper, a public service (twitter), email, etc. That way a malicious actor can’t just rehash the whole chain, because any check against the external service would fail.
  • WORM storage (write once, read many). You could send your audit logs almost directly to WORM storage, where it's impossible to replace data. However, that is not ideal, as WORM storage can be slow and expensive. For example, AWS Glacier has rather long retrieval times, which makes searching through recent data impractical. It's actually cheaper than S3, for example, and you can have expiration policies. But having to support your own WORM storage is expensive. It is a good idea to eventually send the logs to WORM storage, but "fresh" audit trail should probably not be "archived" so that it's searchable and some actionable insight can be gained from it.
  • All-in-one – applying all of the above “just in case” may be unnecessary for every project out there, but that’s what I decided to do at LogSentinel. Business-event granularity with timestamping, hash chaining, anchoring, and eventually putting to WORM storage – I think that provides both security guarantees and flexibility.
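To make the hash chaining idea concrete, here is a minimal Python sketch (illustrative only, not LogSentinel code) of appending records to a chain in which each entry stores the hash of the previous one. Any modified or deleted record breaks verification unless the attacker re-hashes the entire tail, which is exactly what anchoring the head externally would then catch. The record fields and the all-zero genesis hash are assumptions for illustration.

import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous entry's hash together with the current record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": entry_hash(prev_hash, record)})

def verify(chain: list) -> bool:
    """Recompute every hash; any tampered or missing entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["hash"] != entry_hash(prev_hash, entry["record"]):
            return False
        prev_hash = entry["hash"]
    return True

audit_log = []
append(audit_log, {"user": "alice", "action": "LOGIN"})
append(audit_log, {"user": "alice", "action": "VIEW_SENSITIVE_DATA"})
print(verify(audit_log))                      # True
audit_log[0]["record"]["action"] = "LOGOUT"   # tamper with an entry
print(verify(audit_log))                      # False

# The hash of the last entry (the "head" of the chain) is what the anchoring
# approach would periodically publish to an external service.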

I hope the overview is useful and the results from the survey shed some light on how this aspect of information security is underestimated.

The post Audit Trail Overview appeared first on Bozho's tech blog.

Cloud Empire: Meet the Rebel Alliance

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/cloud-empire-meet-the-rebel-alliance/

Cloud Empire: Meet the Rebel Alliance

Last week Backblaze made the exciting announcement that through partnerships with Packet and ServerCentral, cloud computing is available to Backblaze B2 Cloud Storage customers.

Those of you familiar with cloud computing will understand the significance of this news. We are now offering the least expensive cloud storage + cloud computing available anywhere. You no longer have to submit to the lock-in tactics and exorbitant prices charged by the other big players in the cloud services biz.

As Robin Harris wrote in ZDNet about last week’s computing partners announcement, Cloud Empire: Meet the Rebel Alliance.

We understand that some of our cloud backup and storage customers might be unfamiliar with cloud computing. Backblaze made its name in cloud backup and object storage, and that’s what our customers know us for. In response to customers’ requests, we’ve directly connected our B2 cloud object storage with cloud compute providers. This adds the ability to use and run programs on data once it’s in the B2 cloud, opening up a world of new uses for B2. Just some of the possibilities include media transcoding and rendering, web hosting, application development and testing, business analytics, disaster recovery, on-demand computing capacity (cloud bursting), AI, and mobile and IoT applications.

The world has been moving to a multi-cloud / hybrid cloud world, and customers are looking for more choices than those offered by the existing cloud players. Our B2 compute partnerships build on our mission to offer cloud storage that’s astonishingly easy and low-cost. They enable our customers to move into a more flexible and affordable cloud services ecosystem that provides a greater variety of choices and costs far less. We believe we are helping to fulfill the promise of the internet by allowing customers to choose the best-of-breed services from the best vendors.

If You’re Not Familiar with Cloud Computing, Here’s a Quick Overview

Cloud computing is another component of cloud services, like object storage, that replicates in the cloud a basic function of a computer system. Think of services that operate in a cloud as an infinitely scalable version of what happens on your desktop computer. In your desktop computer you have computing/processing (CPU), fast storage (like an SSD), data storage (like your disk drive), and memory (RAM). Their counterparts in the cloud are computing (CPU), block storage (fast storage), object storage (data storage), and processing memory (RAM).

Computer building blocks

CPU, RAM, fast internal storage, and a hard drive are the basic building blocks of a computer
They also are the basic building blocks of cloud computing

Some customers require only some of these services, such as cloud storage. B2 as a standalone service has proven to be an outstanding solution for those customers interested in backing up or archiving data. There are many customers that would like additional capabilities, such as performing operations on that data once it’s in the cloud. They need object storage combined with computing.

With the just announced compute partnerships, Backblaze is able to offer computing services to anyone using B2. A direct connection between Backblaze’s and our partners’ data centers means that our customers can process data stored in B2 with high speed, low latency, and zero data transfer costs.

Backblaze, Packet and Server Central cloud compute workflow diagram

Cloud service providers package up CPU, storage, and memory into services that you can rent on an hourly basis
You can scale up and down and add or remove services as you need them

How Does Computing + B2 Work?

Those wanting to use B2 with computing will need to sign up for accounts with Backblaze and either Packet or ServerCentral. Packet customers need only select “SJC1” as their region and then get started. The process is also simple for ServerCentral customers — they just need to register with a ServerCentral account rep.

The direct connection between B2 and our compute partners means customers will experience very low latency (less than 10ms) between services. Even better, all data transfers between B2 and the compute partner are free. When combined with Backblaze B2, customers can obtain cloud computing services for as little as 50% of the cost of Amazon’s Elastic Compute Cloud (EC2).

Opening Up the Cloud “Walled Garden”

Traditionally, cloud vendors charge fees for customers to move data outside the “walled garden” of that particular vendor. These fees reach upwards of $0.12 per gigabyte (GB) for data egress. This large fee for customers accessing their own data restricts users from using a multi-cloud approach and taking advantage of less expensive or better performing options. With free transfers between B2 and Packet or ServerCentral, customers now have a predictable, scalable solution for computing and data storage while avoiding vendor lock-in. Dropbox made waves when they saved $75 million by migrating off of AWS. Adding computing to B2 helps anyone interested in moving some or all of their computing off of AWS and thereby cutting their AWS bill by 50% or more.

What are the Advantages of Cloud Storage + Computing?

Using computing and storage in the cloud provide a number of advantages over using in-house resources.

  1. You don’t have to purchase the actual hardware, software licenses, and provide space and IT resources for the systems.
  2. Cloud computing is available with just a few minutes notice and you only pay for whatever period of time you need. You avoid having additional hardware on your balance sheet.
  3. Resources are in the cloud and can provide online services to customers, mobile users, and partners located anywhere in the world.
  4. You can isolate the work on these systems from your normal production environment, making them ideal for testing and trying out new applications and development projects.
  5. Computing resources scale when you need them to, providing temporary or ongoing extra resources for expected or unexpected demand.
  6. They can provide redundant and failover services when and if your primary systems are unavailable for whatever reason.

Where Can I Learn More?

We encourage B2 customers to explore the options available at our partner sites, Packet and ServerCentral. They are happy to help customers understand what services are available and how to get started.

We are excited to see what you build! And please tell us in the comments what you are doing or have planned with B2 + computing.

P.S. May the force be with all of us!

The post Cloud Empire: Meet the Rebel Alliance appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

User Authentication Best Practices Checklist

Post Syndicated from Bozho original https://techblog.bozho.net/user-authentication-best-practices-checklist/

User authentication is functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.

Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have a vulnerable password reset functionality, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. OWASP Top 10 is always something you should read, every year. But that might not be enough.

So, let’s start. I’ll try to be concise, but I’ll include as many of the related pitfalls as I can cover – e.g. what could go wrong with the user session after they log in:

  • Store passwords with bcrypt/scrypt/PBKDF2. No MD5 or SHA, as they are not good for password storing. Long salt (per user) is mandatory (the aforementioned algorithms have it built in). If you don’t and someone gets hold of your database, they’ll be able to extract the passwords of all your users. And then try these passwords on other websites.
  • Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks). Force HTTPS if user opens a plain-text version.
  • Mark cookies as secure. Makes cookie theft harder.
  • Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in.
  • Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
  • Have a same-origin policy
  • Logout – let your users logout by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy)
  • Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For ajax-heavy website you can have regular ajax-polling that keeps the session alive while the page stays open.
  • Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
  • Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explains it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches: store a password-reset token in the user profile table and then send it as a parameter in the link, or do not store anything in the database, but send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp) (see the sketch after this list). That way you have expiring links (rather than one-time links). The HMAC relies on a secret key, so the links can’t be spoofed. It seems there’s no consensus, as the OWASP guide has a slightly different approach.
  • One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can use that approach instead of (rather than in addition to) passwords.
  • Limit login attempts – brute-force through a web UI should not be possible; therefore you should block login attempts if they become too many. One approach is to just block them based on IP. The other one is to block them based on account attempted. (Spring example here). Which one is better – I don’t know. Both can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience.
  • Don’t leak information through error messages – you shouldn’t allow attackers to figure out if an email is registered or not. If an email is not found, upon login report just “Incorrect credentials”. On passwords reset, it may be something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people don’t often remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
  • Make sure you use JWT only if it’s really necessary and be careful of the pitfalls.
  • Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
  • For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
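As a concrete illustration of the stateless expiring-link idea from the “Forgotten password flow” item above, here is a minimal Python sketch. The secret handling, parameter names, and one-hour expiry window are assumptions for illustration, not a drop-in implementation.

import hashlib
import hmac
import time

SECRET_KEY = b"change-me-and-keep-me-out-of-source-control"  # illustrative only

def make_reset_token(user_id: str, valid_for_seconds: int = 3600) -> str:
    """Build userId:expiresTimestamp:hmac(userId+expiresTimestamp)."""
    expires = str(int(time.time()) + valid_for_seconds)
    signature = hmac.new(SECRET_KEY, (user_id + expires).encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{signature}"

def verify_reset_token(token: str) -> bool:
    """Check the signature and the expiry; reject anything tampered or stale."""
    try:
        user_id, expires, signature = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, (user_id + expires).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    return int(expires) >= int(time.time())

token = make_reset_token("42")
# The token would be embedded in the password-reset link emailed to the user.
print(verify_reset_token(token))        # True while the link is still valid
print(verify_reset_token(token + "x"))  # False: the signature check fails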

I’m sure I’m missing something. And you see it’s complicated. Sadly we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome, that you almost always get at least some of it wrong.

The post User Authentication Best Practices Checklist appeared first on Bozho's tech blog.