Tag Archives: www

HDD vs SSD: What Does the Future for Storage Hold?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ssd-vs-hdd-future-of-storage/

SSD 60 TB drive

This is part one of a series. Use the Join button above to receive notification of future posts on this and other topics.

Customers frequently ask us whether and when we plan to move our cloud backup and data storage to SSDs (Solid-State Drives). That’s not a surprising question considering the many advantages SSDs have over magnetic platter type drives, also known as HDDs (Hard-Disk Drives).

We’re a large user of HDDs in our data centers (currently 100,000 hard drives holding over 500 petabytes of data). We want to provide the best performance, reliability, and economy for our cloud backup and cloud storage services, so we continually evaluate which drives to use for operations and in our data centers. While we use SSDs for some applications, which we’ll describe below, there are reasons why HDDs will continue to be the primary drives of choice for us and other cloud providers for the foreseeable future.

HDDs vs SSDs

HDD vs SSD

The laptop computer I am writing this on has a single 512GB SSD, which has become a common feature in higher-end laptops. An SSD’s advantages in a laptop are easy to understand: it is smaller than an HDD, faster, quieter, longer lasting, and not susceptible to vibration or magnetic fields. It also has much lower latency and access times.

Today’s typical online price for a 2.5” 512GB SSD is $140 to $170. The typical online price for a 3.5” 512GB HDD is $44 to $65. That’s a pretty significant difference in price, but since the SSD helps make the laptop lighter, enables it to be more resistant to the inevitable shocks and jolts it will experience in daily use, and adds the benefits of faster booting, faster waking from sleep, and faster launching of applications and handling of big files, the extra cost for the SSD in this case is worth it.

Some of these SSD advantages, chiefly speed, also will apply to a desktop computer, so desktops are increasingly outfitted with SSDs, particularly to hold the operating system, applications, and data that is accessed frequently. Replacing a boot drive with an SSD has become a popular upgrade option to breathe new life into a computer, especially one that seems to take forever to boot or is used for notoriously slow-loading applications such as Photoshop.

We covered upgrading your computer with an SSD in our blog post SSD 101: How to Upgrade Your Computer With An SSD.

Data centers are an entirely different kettle of fish. The primary concerns for data center storage are reliability, storage density, and cost. While SSDs are strong in the first two areas, it’s the third where they are not yet competitive. At Backblaze we adopt higher density HDDs as they become available — we’re currently using both 10TB and 12TB drives (among other capacities) in our data centers. Higher density drives provide greater storage density per Storage Pod and Vault and reduce our overhead cost through less required maintenance and lower total power requirements. Comparable SSDs in those sizes would cost roughly $1,000 per terabyte, considerably higher than the corresponding HDD. Simply put, SSDs are not yet in the price range to make their use economical for the benefits they provide, which is the reason why we expect to be using HDDs as our primary storage media for the foreseeable future.

What Are HDDs?

HDDs have been around for over 60 years, since IBM introduced them in 1956. The first disk drive was the size of a car, stored a mere 3.75 megabytes, and cost $300,000 in today’s dollars.

IBM 350 Disk Storage System — 3.75MB in 1956

The 350 Disk Storage System was a major component of the IBM 305 RAMAC (Random Access Method of Accounting and Control) system, which was introduced in September 1956. It consisted of 40 platters and a dual read/write head on a single arm that moved up and down the stack of magnetic disk platters.

The basic mechanism of an HDD remains unchanged since then, though it has undergone continual refinement. An HDD uses magnetism to store data on a rotating platter. A read/write head is affixed to an arm that floats above the spinning platter reading and writing data. The faster the platter spins, the faster an HDD can perform. Typical laptop drives today spin at either 5400 RPM (revolutions per minute) or 7200 RPM, though some server drives spin at even higher speeds, such as 10,000 or 15,000 RPM.

Exploded drawing of a hard drive

The platters inside the drives are coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code. If all this sounds like an HDD is vulnerable to shocks and vibration, you’d be right. They also are vulnerable to magnets, which is one way to destroy the data on an HDD if you’re getting rid of it.

The major advantage of an HDD is that it can store lots of data cheaply. One and two terabyte (1,024 and 2,048 gigabytes) hard drives are not unusual for a laptop these days, and 10TB and 12TB drives are now available for desktops and servers. Densities and rotation speeds continue to grow. However, if you compare the cost of common HDDs vs SSDs for sale online, the SSDs are roughly 3-5x the cost per gigabyte. So if you want cheap storage and lots of it, using a standard hard drive is definitely the more economical way to go.
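
To put that cost multiple in concrete terms, here is a quick back-of-the-envelope calculation in Python using the example prices quoted earlier. The dollar figures are illustrative street prices, not a fixed benchmark, so treat the output as a rough sketch rather than a market survey.

    # Rough cost-per-gigabyte comparison using the example prices above.
    # Prices are illustrative midpoints; actual street prices vary by model and vendor.
    ssd_price, ssd_capacity_gb = 155.0, 512   # midpoint of the $140-$170 SSD quote
    hdd_price, hdd_capacity_gb = 55.0, 512    # midpoint of the $44-$65 HDD quote

    ssd_per_gb = ssd_price / ssd_capacity_gb
    hdd_per_gb = hdd_price / hdd_capacity_gb

    print(f"SSD: ${ssd_per_gb:.3f}/GB  HDD: ${hdd_per_gb:.3f}/GB  "
          f"ratio: {ssd_per_gb / hdd_per_gb:.1f}x")
    # Roughly 2.8x for these particular drives; larger-capacity HDDs push the
    # gap toward the 3-5x range cited above.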

What are the best uses for HDDs?

  • Disk arrays (NAS, RAID, etc.) where high capacity is needed
  • Desktops when low cost is priority
  • Media storage (photos, videos, audio not currently being worked on)
  • Drives with an extreme number of reads and writes

What Are SSDs?

SSDs go back almost as far as HDDs, with the first semiconductor storage device compatible with a hard drive interface introduced in 1978, the StorageTek 4305.

Storage Technology 4305 SSD

The StorageTek was an SSD aimed at the IBM mainframe compatible market. The STC 4305 was seven times faster than IBM’s popular 2305 HDD system (and also about half the price). It consisted of a cabinet full of charge-coupled devices and cost $400,000 for 45MB capacity with throughput speeds up to 1.5 MB/sec.

SSDs are based on a type of non-volatile memory called NAND (named for the Boolean operator “NOT AND,” and one of two main types of flash memory). Flash memory stores data in individual memory cells, which are made of floating-gate transistors. Though they are semiconductor-based memory, they retain their information when no power is applied to them — a feature that’s obviously a necessity for permanent data storage.

Samsung SSD 850 Pro

Compared to an HDD, SSDs have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. For most users, it’s the speed of an SSD that primarily attracts them. When discussing the speed of drives, what we are referring to is the speed at which they can read and write data.

For HDDs, the speed at which the platters spin strongly determines the read/write times. When data on an HDD is accessed, the read/write head must physically move to the location where the data was encoded on a magnetic section on the platter. If the file being read was written sequentially to the disk, it will be read quickly. As more data is written to the disk, however, it’s likely that the file will be written across multiple sections, resulting in fragmentation of the data. Fragmented data takes longer to read with an HDD as the read head has to move to different areas of the platter(s) to completely read all the data requested.

Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD, regardless of fragmentation.

Samsung SSD 850 Pro (back)

Due to the way data is written to and read from the drive, however, SSD cells can wear out over time. Setting a cell’s state means pushing electrons through a gate, and this process gradually degrades the cell until the SSD wears out. The effect takes a long time, and SSDs have mechanisms to minimize it, such as the TRIM command. Flash memory writes an entire block of storage no matter how few pages within the block are updated. This requires reading and caching the existing data, erasing the block, and rewriting the block; if an empty block is available, a write operation is much faster. The TRIM command, which must be supported by both the OS and the SSD, enables the OS to inform the drive which blocks are no longer needed, allowing the drive to erase those blocks ahead of time and keep empty blocks available for subsequent writes.
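
For those curious what TRIM looks like in practice, the sketch below checks whether a drive advertises TRIM (discard) support via Linux’s sysfs attributes and then runs a manual trim pass with the fstrim utility. It assumes a Linux system, a drive named sda, and root privileges; adjust the device name and mount point for your own setup.

    # Minimal sketch: check whether /dev/sda advertises TRIM (discard) support,
    # then ask the filesystem to report unused blocks to the SSD with fstrim.
    # Assumes Linux, a drive named "sda", and root privileges for fstrim.
    import subprocess
    from pathlib import Path

    granularity = Path("/sys/block/sda/queue/discard_granularity")
    supports_trim = granularity.exists() and int(granularity.read_text()) > 0
    print("TRIM supported:", supports_trim)

    if supports_trim:
        # fstrim tells the drive which blocks the filesystem no longer needs,
        # letting the SSD erase them ahead of future writes.
        subprocess.run(["fstrim", "--verbose", "/"], check=True)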

The effect of repeated writing and erasing on an SSD is cumulative, and an SSD can slow down and even display errors with age. It’s more likely, however, that the system using the SSD will be discarded as obsolete before the SSD begins to display read/write errors. Hard drives eventually wear out from constant use as well, since they use physical recording methods, so most users won’t base their choice between an HDD and an SSD on expected longevity.

SSD internals

SSD circuit board

Overall, SSDs are considered far more durable than HDDs due to a lack of mechanical parts. The moving mechanisms within an HDD are susceptible to not only wear and tear over time, but to damage due to movement or forceful contact. If one were to drop a laptop with an HDD, there is a high likelihood that all those moving parts will collide, resulting in potential data loss and even destructive physical damage that could kill the HDD outright. SSDs have no moving parts so, while they hold the risk of a potentially shorter life span due to high use, they can survive the rigors we impose upon our portable devices and laptops.

What are the best uses for SSDs?

  • Notebooks and laptops, where performance, light weight, storage density, resistance to shock, and general ruggedness are desirable
  • Boot drives holding operating system and applications, which will speed up booting and application launching
  • Working files (media that is being edited: photos, video, audio, etc.)
  • Swap drives where SSD will speed up disk paging
  • Cache drives
  • Database servers
  • Revitalizing an older computer. If you’ve got a computer that seems slow to start up and slow to load applications and files, updating the boot drive with an SSD could make it seem, if not new, at least as if it just came back refreshed from spending some time on the beach.

Stay Tuned for Part 2 of HDD vs SSD

That’s it for part 1. In our second part we’ll take a deeper look at the differences between HDDs and SSDs, how both HDD and SSD technologies are evolving, and how Backblaze takes advantage of SSDs in our operations and data centers.

Here’s a tip on finding all the posts tagged with SSD on our blog. Just follow https://www.backblaze.com/blog/tag/ssd/.

Don’t miss future posts on HDDs, SSDs, and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post HDD vs SSD: What Does the Future for Storage Hold? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Challenges of Opening a Data Center — Part 2

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/factors-for-choosing-data-center/

Rows of storage pods in a data center

This is part two of a series on the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process.

In Part 1 of this series, we looked at the different types of data centers, the importance of location in planning a data center, data center certification, and the single most expensive factor in running a data center, power.

In Part 2, we continue to look at factors that need to be considered both by those interested in a dedicated data center and those seeking to colocate in an existing center.

Power (continued from Part 1)

In part 1, we began our discussion of the power requirements of data centers.

As we discussed, redundancy and failover is a chief requirement for data center power. A redundantly designed power supply system is also a necessity for maintenance, as it enables repairs to be performed on one network, for example, without having to turn off servers, databases, or electrical equipment.

Power Path

The common critical components of a data center’s power flow are:

  • Utility Supply
  • Generators
  • Transfer Switches
  • Distribution Panels
  • Uninterruptible Power Supplies (UPS)
  • PDUs

Utility Supply is the power that comes from one or more utility grids. While most of us consider the grid to be our primary power supply (hats off to those of you who manage to live off the grid), politics, economics, and distribution make utility supply power susceptible to outages, which is why data centers must have autonomous power available to maintain availability.

Generators are used to supply power when the utility supply is unavailable. They convert mechanical energy, usually from a diesel or gas engine, into electrical energy.

Transfer Switches are used to transfer electric load from one source or electrical device to another, such as from one utility line to another, from a generator to a utility, or between generators. The transfer could be manually activated or automatic to ensure continuous electrical power.

Distribution Panels get the power where it needs to go, taking a power feed and dividing it into separate circuits to supply multiple loads.

A UPS, as we touched on earlier, ensures that continuous power is available even when the main power source isn’t. It often consists of batteries that can come online almost instantaneously when the current power ceases. The power from a UPS does not have to last a long time as it is considered an emergency measure until the main power source can be restored. Another function of the UPS is to filter and stabilize the power from the main power supply.

Data center UPSs

PDU stands for Power Distribution Unit; it is the device that distributes power to the individual pieces of equipment.

Network

After power, the networking connections to the data center are of prime importance. Can the data center obtain and maintain high-speed networking connections to the building? With networking, as with all aspects of a data center, availability is a primary consideration. Data center designers think of all possible ways service can be interrupted or lost, even briefly. Details such as the vulnerabilities in the route the network connections make from the core network (the backhaul) to the center, and where network connections enter and exit a building, must be taken into consideration in network and data center design.

Routers and switches are used to transport traffic between the servers in the data center and the core network. Just as with power, network redundancy is a prime factor in maintaining availability of data center services. Two or more upstream service providers are required to ensure that availability.

How fast a customer can transfer data to a data center is affected by: 1) the speed of the connections the data center has with the outside world, 2) the quality of the connections between the customer and the data center, and 3) the length of the route from the customer to the data center. The longer the route and the greater the number of packets that must be transferred, the more significant a factor latency becomes in the data transfer. Latency is the delay before a transfer of data begins following an instruction for its transfer. Generally latency, not raw speed, will be the most significant factor in transferring data to and from a data center. Packets transferred using the TCP/IP protocol suite (the set of communications protocols used on the internet and similar computer networks) must be acknowledged when received (ACK’d), which requires a communications round trip for each packet. If the data is sent in larger packets, fewer ACKs are required, so latency becomes a smaller factor in the overall network communications speed.
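
As a rough illustration of why latency matters more than raw line speed on long routes, the toy model below estimates transfer time when each chunk of data must wait for an acknowledgement round trip. The numbers are invented for illustration, and the model ignores TCP windowing, congestion control, and other real-world behavior.

    # Toy model: time to move a file when every chunk costs one round trip (RTT).
    # Illustrative only; real TCP keeps many chunks in flight via windowing.
    def transfer_time_seconds(file_mb, chunk_kb, rtt_ms, bandwidth_mbps):
        chunks = (file_mb * 1024) / chunk_kb
        ack_wait = chunks * (rtt_ms / 1000.0)       # time spent waiting on ACKs
        wire_time = (file_mb * 8) / bandwidth_mbps  # raw transmission time
        return wire_time + ack_wait

    for chunk_kb in (64, 1024):
        t = transfer_time_seconds(file_mb=1000, chunk_kb=chunk_kb,
                                  rtt_ms=40, bandwidth_mbps=100)
        print(f"{chunk_kb:>5} KB chunks: ~{t:,.0f} seconds")
    # Larger chunks mean fewer acknowledgement round trips, so the same latency
    # has a much smaller effect on total transfer time.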

Latency generally will be less significant for data storage transfers than for cloud computing. Optimizations such as multi-threading, which is used in Backblaze’s Cloud Backup service, will generally improve overall transfer throughput if sufficient bandwidth is available.
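
The multi-threading idea can be sketched in a few lines: several chunks are uploaded concurrently so that no single connection’s latency stalls the whole transfer. The upload_chunk function below is a hypothetical placeholder standing in for a real upload call, not Backblaze’s actual client code.

    # Sketch: upload chunks concurrently so per-connection latency overlaps
    # instead of adding up. upload_chunk() is a hypothetical placeholder.
    from concurrent.futures import ThreadPoolExecutor

    def upload_chunk(chunk_id: int) -> int:
        # A real client would PUT this chunk to the storage service here.
        return chunk_id

    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(upload_chunk, range(100)))

    print(f"uploaded {len(results)} chunks")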

Those interested in testing the overall speed and latency of their connection to Backblaze’s data centers can use the Check Your Bandwidth tool on our website.

Data center telecommunications equipment

Data center under floor cable runs

Cooling

Computer, networking, and power generation equipment generates heat, and a number of solutions are employed to rid a data center of that heat. The location and climate of the data center are of great importance to the data center designer because the climatic conditions dictate to a large degree which cooling technologies should be deployed, which in turn affects the power used and the cost of that power. The power and cost required to cool a data center in a warm, humid climate will vary greatly from those for one in a cool, dry climate. Innovation is strong in this area, and many new approaches to efficient and cost-effective cooling are used in the latest data centers.

Switch’s uninterruptible, multi-system, HVAC Data Center Cooling Units

There are three primary ways data center cooling can be achieved:

Room Cooling cools the entire operating area of the data center. This method can be suitable for small data centers, but becomes more difficult and inefficient as IT equipment density and center size increase.

Row Cooling concentrates on cooling a data center on a row by row basis. In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.

Rack Cooling tackles cooling on a rack by rack basis. Air-conditioning units are dedicated to specific racks. This approach allows for maximum densities to be deployed per rack. This works best in data centers with fully loaded racks, otherwise there would be too much cooling capacity, and the air-conditioning losses alone could exceed the total IT load.

Security

Data centers are high-security facilities, as they house business, government, and other data that contains personal, financial, and other sensitive information about businesses and individuals.

This list contains the physical-security considerations when opening or co-locating in a data center:

Layered Security Zones. Systems and processes are deployed to allow only authorized personnel in certain areas of the data center. Examples include keycard access, alarm systems, mantraps, secure doors, and staffed checkpoints.

Physical Barriers. Physical barriers, fencing, and reinforced walls are used to protect facilities. In a colocation facility, one customer’s racks and servers are often inaccessible to other customers colocating in the same data center.

Backblaze racks secured in the data center

Monitoring Systems. Advanced surveillance technology monitors and records activity on approaching driveways, building entrances, exits, loading areas, and equipment areas. These systems also can be used to monitor and detect fire and water emergencies, providing early detection and notification before significant damage results.

Top-tier providers evaluate their data center security and facilities on an ongoing basis. Technology becomes outdated quickly, so providers must stay on top of new approaches and technologies in order to protect valuable IT assets.

Entering the high-security areas of a data center requires passing through a security checkpoint where credentials are verified.

Data Center security

The gauntlet of cameras and steel bars one must pass before entering this data center

Facilities and Services

Data center colocation providers often differentiate themselves by offering value-added services. In addition to the required space, power, cooling, connectivity and security capabilities, the best solutions provide several on-site amenities. These accommodations include offices and workstations, conference rooms, and access to phones, copy machines, and office equipment.

Additional features may consist of kitchen facilities, break rooms and relaxation lounges, storage facilities for client equipment, and secure loading docks and freight elevators.

Moving into A Data Center

Moving into a data center is a major job for any organization. We wrote a post last year, Desert To Data in 7 Days — Our New Phoenix Data Center, about what it was like to move into our new data center in Phoenix, Arizona.

Desert To Data in 7 Days — Our New Phoenix Data Center

Visiting a Data Center

Our Director of Product Marketing Andy Klein wrote a popular post last year on what it’s like to visit a data center called A Day in the Life of a Data Center.

A Day in the Life of a Data Center

Would you Like to Know More about The Challenges of Opening and Running a Data Center?

That’s it for part 2 of this series. If readers are interested, we could write a post about some of the new technologies and trends affecting data center design and use. Please let us know in the comments.

Here’s a tip on finding all the posts tagged with data center on our blog. Just follow https://www.backblaze.com/blog/tag/data-center/.

Don’t miss future posts on data centers and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post The Challenges of Opening a Data Center — Part 2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Petoi: a Pi-powered kitty cat

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/petoi-a-pi-powered-kitty-cat/

A robot pet is the dream of many a child, thanks to creatures such as K9, Doctor Who’s trusted companion, and the Tamagotchi, bleeping nightmare of parents worldwide. But both of these pale in comparison (sorry, K9) to Petoi, the walking, meowing, live-streaming cat from maker Rongzhong Li.

Petoi: OpenCat Demo

Mentioned on IEEE Spectrum: https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-boston-dynamics-spotmini-opencat-robot-engineered-arts-mesmer-uncanny-valley More reads on Hackster: https://www.hackster.io/petoi/opencat-845129 Youku: http://v.youku.com/v_show/id_XMzQxMzA1NjM0OA==.html?spm=a2h3j.8428770.3416059.1 We are developing programmable and highly maneuverable quadruped robots for STEM education and AI-enhanced services. Its compact and bionic design makes it the only affordable consumer robot that mimics various mammal gaits and reacts to surroundings.

Petoi

Not only have cats conquered the internet, they also have a paw firmly in the door of many makerspaces and spare rooms — rooms such as the one belonging to Petoi’s owner/maker, Rongzhong Li, who has been working on this feline creation since he bought his first Raspberry Pi.

Petoi Raspberry Pi Robot Cat

Petoi in its current state – apple for scale in lieu of banana

Petoi is just like any other housecat: it walks, it plays, its ribcage doubles as a digital xylophone — but what makes Petoi so special is Li’s use of the project as a platform for study.

I bought my first Raspberry Pi in June 2016 to learn coding hardware. This robot Petoi served as a playground for learning all the components in a regular Raspberry Pi beginner kit. I started with craft sticks, then switched to 3D-printed frames for optimized performance and morphology.

Various iterations of Petoi have housed various bits of tech, 3D-printed parts, and software, so while it’s impossible to list the exact ingredients you’d need to create your own version of Petoi, a few components remain at its core.

Petoi Raspberry Pi Robot Cat — skeleton prototype

An early version of Petoi, housed inside a plastic toy helicopter frame

A Raspberry Pi lives within Petoi and acts as its brain, relaying commands to an Arduino that controls movement. Li explains:

The Pi takes no responsibility for controlling detailed limb movements. It focuses on more serious questions, such as “Who am I? Where do I come from? Where am I going?” It generates mind and sends string commands to the Arduino slave.
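
That division of labor is straightforward to sketch: the Pi opens a serial link and writes short text commands, which the Arduino firmware translates into limb movements. The example below uses the pyserial library; the port name and the command strings are illustrative stand-ins, not Petoi’s actual protocol.

    # Sketch of the Pi-to-Arduino link: the Pi sends short string commands over
    # a serial port and the Arduino turns them into gaits and poses.
    # Port name and command tokens are illustrative, not Petoi's real protocol.
    import time
    import serial  # pyserial

    arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
    time.sleep(2)  # give the Arduino time to reset after the port opens

    for command in ("sit", "walk", "rest"):
        arduino.write((command + "\n").encode("ascii"))
        print("reply:", arduino.readline().decode(errors="ignore").strip())

    arduino.close()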

Li is currently working on two functional prototypes: a mini version for STEM education, and a larger version for use within the field of AI research.

A cat and a robot cat walking upstairs Petoi Raspberry Pi Robot Cat

You can read more about the project, including details on the various interactions of Petoi, on the hackster.io project page.

Not quite ready to commit to a fully grown robot pet for your home? Why not code your own pixel pet with our free learning resource? And while you’re looking through our projects, check out our other pet-themed tutorials such as the Hamster party cam, the Infrared bird box, and the Cat meme generator.

The post Petoi: a Pi-powered kitty cat appeared first on Raspberry Pi.

AskRob: Does Tor let government peek at vuln info?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/askrob-does-tor-let-government-peek-at.html

On Twitter, somebody asked this question:

The question is about a blog post that claims Tor privately tips off the government about vulnerabilities, using as proof a “vulnerability” from October 2007 that wasn’t made public until 2011.
The tl;dr is that it’s bunk. There was no vulnerability, it was a feature request. The details were already public. There was no spy agency involved, but the agency that does Voice of America, and which tries to protect activists under foreign repressive regimes.

Discussion

The issue is that Tor traffic looks like Tor traffic, making it easy to block/censor, or worse, identify users. Over the years, Tor has added features to make it look more and more like normal traffic, like the encrypted traffic used by Facebook, Google, and Apple. Tor improves this bit by bit over time, but short of actually piggybacking on website traffic, it will always leave some telltale signature.
An example showing how we can distinguish Tor traffic is the packet below, from the latest version of the Tor server:
Had this been Google or Facebook, the names would be something like “www.google.com” or “facebook.com”. Or, had this been a normal “self-signed” certificate, the names would still be recognizable. But Tor creates randomized names, with letters and numbers, making it distinctive. It’s hard to automate detection of this, because it’s only probably Tor (other self-signed certificates look like this, too), which means you’ll have occasional “false-positives”. But still, if you compare this to the pattern of traffic, you can reliably detect that Tor is happening on your network.
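
A toy version of that heuristic is sketched below: it flags certificate host names that look like the randomized letter-and-number strings Tor generates rather than ordinary domain names. This is an illustrative filter, not a production detection rule, and as noted above it will produce false positives on other self-signed certificates.

    # Toy heuristic: does a certificate host name look like a randomized
    # Tor-style name (e.g. "www.x7kq3ab2mzp4.net") rather than a real site?
    # Illustrative only; legitimate self-signed certs can look similar.
    import math
    import re
    from collections import Counter

    def looks_randomized(hostname: str) -> bool:
        m = re.fullmatch(r"www\.([a-z0-9]{8,20})\.(?:com|net)", hostname)
        if not m:
            return False
        label = m.group(1)
        # Shannon entropy of the label; random strings score higher than words.
        counts = Counter(label)
        entropy = -sum(c / len(label) * math.log2(c / len(label))
                       for c in counts.values())
        return entropy > 3.0

    print(looks_randomized("www.google.com"))        # False
    print(looks_randomized("www.x7kq3ab2mzp4.net"))  # True
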
This has always been a known issue, since the earliest days. Google the search term “detect tor traffic”, and set your advanced search dates to before 2007, and you’ll see lots of discussion about this, such as this post for writing intrusion-detection signatures for Tor.
Among the things you’ll find is this presentation from 2006 where its creator (Roger Dingledine) talks about how Tor can be identified on the network with its unique network fingerprint. For a “vulnerability” they supposedly kept private until 2011, they were awfully darn public about it.
The above blogpost claims Tor kept this vulnerability secret until 2011 by citing this message. That’s because the post’s author, Levine, doesn’t understand the terminology and is just blindly searching for an exact match for “TLS normalization”. Here’s an earlier proposed change for the long-term goal to “make our connection handshake look closer to a regular HTTPS [TLS] connection”, from February 2007. Here is another proposal from October 2007 on changing TLS certificates, from days after the email discussion (after they shipped the feature, presumably).
What we see here is a known problem from the very beginning of the project, a long-term effort to fix that problem, and a slow dribble of features added over time to preserve backwards compatibility.
Now let’s talk about the original train of emails cited in the blogpost. It’s hard to see the full context here, but it sounds like BBG made a feature request to make Tor look even more like normal TLS, which is hinted with the phrase “make our funders happy”. Of course the people giving Tor money are going to ask for improvements, and of course Tor would in turn discuss those improvements with the donor before implementing them. It’s common in project management: somebody sends you a feature request, you then send the proposal back to them to verify what you are building is what they asked for.
As for the subsequent salacious paragraph about “secrecy”, that too is normal. When fixing a problem, you don’t want to talk about the details until after you have a fix. But note that this is largely more for PR than anything else. The details on how to detect Tor are available to anybody who looks for them — they just aren’t readily accessible to the layman. For example, Tenable Networks announced the previous month exactly this ability to detect Tor’s traffic, because any techie who wanted to could have found out how. Indeed, Tenable’s announcement may have been the impetus for BBG’s request to Tor: “can you fix it so that this new Tenable feature no longer works”.
To be clear, there are zero secret “vulnerability details” here that some secret spy agency could use to detect Tor. They were already known, already in the Tenable product, and within the grasp of any techie who wanted to discover them. A spy agency could just buy Tenable, or copy it, instead of going through this intricate conspiracy.

Conclusion

The issue isn’t a “vulnerability”. Tor traffic is recognizable on the network, and over time, they make it less and less recognizable. Eventually they’ll just piggyback on true HTTPS and convince CloudFlare to host ingress nodes, or something, making it completely undetectable. In the meanwhile, it leaves behind fingerprints, as I showed above.
What we see in the email exchanges is the normal interaction of a donor asking for a feature, not a private “tip off”. It’s likely the donor is the one who tipped off Tor, pointing out Tenable’s product to detect Tor.
Whatever secrets Tor could have tipped off to the “secret spy agency” were no more than what Tenable was already doing in a shipping product.

Update: People are trying to make it look like Voice of America is some sort of intelligence agency. That’s a conspiracy theory. It’s not a member of the American intelligence community. You’d have to come up with a solid reason explaining why the United States is hiding VoA’s membership in the intelligence community, or you’d have to believe that everything in the U.S. government is really just some arm of the C.I.A.

Happy birthday to us!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/happy-birthday-2018/

The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.

Prehistory

Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.

BCM2763 micro DB

Ubuntu on Raspberry Pi, 2011-style

A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.

Raspberry Pi alpha board, top view

Raspberry Pi alpha board

We shot some great demos with this board, including this video of Quake III:

Raspberry Pi – Quake 3 demo

A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.

Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10,000 for the Foundation.

The beginnings of a Bramble

Beta boards on parade

Here’s Dom, demoing both the board and his excellent taste in movie trailers:

Raspberry Pi Beta Board Bring up

See http://www.raspberrypi.org/ for more details, FAQ and forum.

Launch

Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.

Pallet of pis

The first 2000 Raspberry Pis

Unboxing continues

The first Raspberry Pi from the first box from the first pallet

We took over 100,000 orders on the first day: something of a shock for an organisation that had imagined in its wildest dreams that it might see lifetime sales of 10,000 units. Some people who ordered that day had to wait until the summer to finally receive their units.

Evolution

Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.

Revision 1 versus revision 2

If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.

Celebrating 500,000 Welsh units, back when that seemed like a lot

Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.

A UK-built Raspberry Pi Model A

In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.

The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.

New toys

While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.

We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.

Nobody mention the xenon death flash.

And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.

All our recent products are just the same board shot from different angles

Zero to hero

Where the PC industry has historically used Moore’s Law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.

The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making us the first computer magazine to give away a computer as a cover gift.

Cheap thrills

MagPi issue 40 in all its glory

We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.

Up to our old tricks again

Other things

Of course, this isn’t all. There has been a veritable blizzard of point releases; RAM changes; Chinese red units; promotional blue units; Brazilian blue-ish units; not to mention two Camera Modules, in two flavours each; a touchscreen; the Sense HAT (now aboard the ISS); three compute modules; and cases for the Raspberry Pi 3 and the Zero (the former just won a Design Effectiveness Award from the DBA). And on top of that, we publish three magazines (The MagPi, Hello World, and HackSpace magazine) and a whole host of Project Books and Essentials Guides.

Chinese Raspberry Pi 1 Model B

RS Components limited-edition blue Raspberry Pi 1 Model B

Brazilian-market Raspberry Pi 3 Model B

Visible-light Camera Module v2

Learning about injection moulding the hard way

250 pages of content each month, every month

Essential reading

Forward the Foundation

Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.

We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.

The post Happy birthday to us! appeared first on Raspberry Pi.

The Challenges of Opening a Data Center — Part 1

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/choosing-data-center/

Backblaze storage pod in new data center

This is part one of a series. The second part will be posted later this week. Use the Join button above to receive notification of future posts in this series.

Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud we are interacting with a data center.

In this series, The Challenges of Opening a Data Center, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom).

Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series; however, we hope our brief overview proves useful.

What is a Data Center?

A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data.

While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.

Depending on how you define the term, there are anywhere from half a million to many millions of data centers in the world. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.

A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.

Types of Data Centers

There are a number of ways to classify data centers: how they will be used; whether they are owned or used by one or multiple organizations; whether and how they fit into a topology of other data centers; which technologies and management approaches they use for computing, storage, cooling, power, and operations; and, increasingly visible these days, how green they are.

Data centers can be loosely classified into three types according to who owns them and who uses them.

Exclusive Data Centers are facilities wholly built, maintained, operated, and managed by the business for the optimal operation of its IT equipment. Some of these centers belong to well-known companies such as Facebook, Google, or Microsoft, while others belong to less public-facing big telecoms, insurance companies, or other service providers.

Managed Hosting Providers are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of investing in the outright purchase of what it needs.

Colocation Data Centers are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator.

Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.

Availability is Key

When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly failover (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.

Organizations operating data centers that can’t afford any downtime at all will typically operate data centers that have a mirrored site that can take over if something happens to the first site, or they operate a second site in parallel to the first one. These data center topologies are called Active/Passive, and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.

While some data center topologies are spread throughout a single country or continent, others extend around the world. In practice, data transmission speeds put a cap on how far apart centers can be and still operate in parallel with the appearance of simultaneous operation. Linking two data centers located near each other — say no more than 60 miles apart, to limit data latency issues — with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed.

This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.

Active/Passive Data Centers

Active/Active Data Centers

LEED Certification

Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design.

Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more amenable to environmentally-conscious communities.

The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.

ACT Data Center exterior

ACT Data Center interior

Factors to Consider When Selecting a Data Center

There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.

The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers (i.e. one “U” or “RU” rack unit measuring 44.50 millimeters or 1.75 inches), to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.

Location

Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.

With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate.

Websites listing available colocation space, such as upstack.io, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.

Business and Customer Proximity

The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.

Climate

Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.

Geographic Stability and Extreme Weather Events

A major and obvious factor in locating a data center is the stability of the actual site with regard to seismic activity, the likelihood of extreme weather events such as hurricanes, and the risk of fire or flooding.

Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains.

Sacramento Data Center

Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.

Equinix “NAP of the Americas” Data Center in Miami

Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.

Bahnhof Data Center under White Mountain in Stockholm

Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.

Cost vs Risk in selecting a data center

Cost = Construction/lease, power, bandwidth, cooling, labor, taxes
Risk = Environmental (seismic, weather, water, fire), political, economic

Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.

Power

Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size, and any rate concessions offered by the utility. At higher tiers, battery, generator, and redundant power grids are a required part of the picture.

Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation. Parallel redundancy is a safeguard to ensure that an uninterruptible power supply (UPS) system is in place to provide electrical power if necessary. The UPS system can be based on batteries, stored kinetic energy, or some type of generator using diesel or another fuel. The center will operate on one UPS system, with another UPS system acting as a backup; if a power outage occurs, that additional UPS system is available.

Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. Some data centers have intentionally located themselves near national borders so that they can obtain redundant power not just from separate grids, but from separate geopolitical sources.

Higher redundancy levels required by a company will invariably lead to higher prices. If one requires high availability backed by a service-level agreement (SLA), one can expect to pay more than another company with less demanding redundancy requirements.

Stay Tuned for Part 2 of The Challenges of Opening a Data Center

That’s it for part 1 of this post. In subsequent posts, we’ll take a look at some other factors to consider when moving into a data center such as network bandwidth, cooling, and security. We’ll take a look at what is involved in moving into a new data center (including stories from Backblaze’s experiences). We’ll also investigate what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. You can discover all posts on our blog tagged with “Data Center” by following the link https://www.backblaze.com/blog/tag/data-center/.

The second part of this series on The Challenges of Opening a Data Center will be posted later this week. Use the Join button above to receive notification of future posts in this series.

The post The Challenges of Opening a Data Center — Part 1 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hacker House’s Zero W–powered automated gardener

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/hacker-house-automated-gardener/

Are the plants in your home or office looking somewhat neglected? Then build an automated gardener using a Raspberry Pi Zero W, with help from the team at Hacker House.

Make a Raspberry Pi Automated Gardener

See how we built it, including our materials, code, and supplemental instructions, on Hackster.io: https://www.hackster.io/hackerhouse/automated-indoor-gardener-a90907 With how busy our lives are, it’s sometimes easy to forget to pay a little attention to your thirsty indoor plants until it’s too late and you are left with a crusty pile of yellow carcasses.

Building an automated gardener

Tired of their plants looking a little too ‘crispy’, Hacker House have created an automated gardener using a Raspberry Pi Zero W alongside some 3D-printed parts, a 5v USB grow light, and a peristaltic pump.

Hacker House Automated Gardener Raspberry Pi

They designed and 3D printed a PLA casing for the project, allowing enough space within for the Raspberry Pi Zero W, the pump, and the added electronics including soldered wiring and two N-channel power MOSFETs. The MOSFETs serve to switch the light and the pump on and off.

Hacker House Automated Gardener Raspberry Pi

Due to the amount of power the light and pump need, the team replaced the Pi’s standard micro USB power supply with a 12v switching supply.

Coding an automated gardener

All the code for the project — a fairly basic Python script — is on the Hacker House GitHub repository. To fit it to your requirements, you may need to edit a few lines of the code, and Hacker House provides information on how to do this. You can also find more details of the build on the hackster.io project page.
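
To give a sense of how simple the control logic can be, here is a minimal sketch (not Hacker House's actual script) that toggles two GPIO pins wired to the MOSFET gates for the light and the pump. The pin numbers and timings are assumptions for illustration only.

    import time
    import RPi.GPIO as GPIO

    LIGHT_PIN = 20  # hypothetical BCM pin driving the grow light MOSFET gate
    PUMP_PIN = 21   # hypothetical BCM pin driving the pump MOSFET gate

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LIGHT_PIN, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)

    try:
        while True:
            GPIO.output(LIGHT_PIN, GPIO.HIGH)  # light on for the "day"
            GPIO.output(PUMP_PIN, GPIO.HIGH)   # water briefly at the start of each day
            time.sleep(10)
            GPIO.output(PUMP_PIN, GPIO.LOW)
            time.sleep(12 * 60 * 60 - 10)
            GPIO.output(LIGHT_PIN, GPIO.LOW)   # light off for the "night"
            time.sleep(12 * 60 * 60)
    finally:
        GPIO.cleanup()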

Hacker House Automated Gardener Raspberry Pi

While the project runs with preset timings, there’s no reason why you couldn’t upgrade it to be app-based, for example to set a watering schedule when you’re away on holiday.

To see more from the Hacker House team, be sure to follow them on YouTube. You can also check out some of their previous Raspberry Pi projects featured on our blog, such as the smartphone-connected door lock and gesture-controlled holographic visualiser.

Raspberry Pi and your home garden

Raspberry Pis make great babysitters for your favourite plants, both inside and outside your home. Here at Pi Towers, we have Bert, our Slack- and Twitter-connected potted plant who reminds us when he’s thirsty and in need of water.

Bert Plant on Twitter

I’m good. There’s plenty to drink!

And outside of the office, we’ve seen plenty of your vegetation-focused projects using Raspberry Pi for planting, monitoring or, well, commenting on social and political events within the media.

If you use a Raspberry Pi within your home gardening projects, we’d love to see how you’ve done it. So be sure to share a link with us either in the comments below, or via our social media channels.

 

The post Hacker House’s Zero W–powered automated gardener appeared first on Raspberry Pi.

When tiny robot COZMO met our tiny Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/cozmo-raspberry-pi/

Hack your COZMO for ultimate control, using a Raspberry Pi and this tutorial from Instructables user Marcelo ‘mjrovai’ Rovai.

Cozmo – RPi 4

Full integration The complete tutorial can be found here: https://www.instructables.com/id/When-COZMO-the-Robot-Meets-the-Raspberry-Pi/

COZMO

COZMO is a Python-programmable robot from ANKI that boasts a variety of on-board sensors and a camera, and that can be controlled via an app or via code. To get an idea of how COZMO works, check out this rather excitable video from the wonderful Mayim Bialik.

The COZMO SDK

COZMO’s creators, ANKI, provide a Software Development Kit (SDK) so that users can get the most out of their COZMO. This added functionality is a great opportunity for budding coders to dive into hacking their toys, without the risk of warranty voiding/upsetting parents/not being sure how to put a toy back together again.

By the way, I should point out that this is in no way a sponsored blog post. I just think COZMO is ridiculously cute…because tiny robots are adorable, no matter their intentions.

Raspberry Pi Doctor Who Cybermat

Marcelo Rovai + Raspberry Pi + COZMO

For his Instructables tutorial, Marcelo connected an Android device running the COZMO app to his Raspberry Pi 3 via USB. Once USB debugging had been enabled on his device, he installed the Android Debug Bridge (ADB) on the Raspberry Pi. Then his Pi was able to recognise the connected Android device, and from there, Marcelo moved on to installing the SDK, including support for COZMO's camera.

COZMO Raspberry Pi

The SDK comes with pre-installed examples, allowing users to try out the possibilities of the kit, such as controlling what COZMO says by editing a Python script.
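
To give a flavour of what those examples look like, here is a minimal sketch in the style of ANKI's published SDK samples; the spoken phrase is our own, so treat it as illustrative rather than as code from Marcelo's tutorial.

    import cozmo

    def cozmo_program(robot: cozmo.robot.Robot):
        # Ask COZMO to speak, and wait for the animation to finish
        robot.say_text("Hello from the Raspberry Pi!").wait_for_completed()

    cozmo.run_program(cozmo_program)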

Cozmo and RPi

Hello World The complete tutorial can be found here: https://www.instructables.com/id/When-COZMO-the-Robot-Meets-the-Raspberry-Pi/

Do more with COZMO

Marcelo’s tutorial offers more example code for users of the COZMO SDK, along with the code to run the LED button game featured in the video above, and tips on utilising the SDK to take full advantage of COZMO. Check it out here on Instructables, and visit his website for even more projects.

The post When tiny robot COZMO met our tiny Raspberry Pi appeared first on Raspberry Pi.

2018 Picademy dates in the United States

Post Syndicated from Andrew Collins original https://www.raspberrypi.org/blog/new-picademy-2018-dates-in-united-states/

Cue the lights! Cue the music! Picademy is back for another year stateside. We’re excited to bring our free computer science and digital making professional development program for educators to four new cities this summer — you can apply right now.

Picademy USA Denver Raspberry Pi
Picademy USA Seattle Raspberry Pi
Picademy USA Jersey City Raspberry Pi
Raspberry Pi Picademy USA Atlanta

We’re thrilled to kick off our 2018 season! Before we get started, let’s take a look back at our community’s accomplishments in the 2017 Picademy North America season.

Picademy 2017 highlights

Last year, we partnered with four awesome venues to host eight Picademy events in the United States. At every event across the country, we met incredibly talented educators passionate about bringing digital making to their learners. Whether it was at Ann Arbor District Library’s makerspace, UC Irvine’s College of Engineering, or a creative community center in Boise, Idaho, we were truly inspired by all our Picademy attendees and were thrilled to welcome them to the Raspberry Pi Certified Educator community.

JWU Hosts Picademy

JWU Providence’s College of Engineering & Design recently partnered with the Raspberry Pi Foundation to host Picademy, a free training session designed to give educators the tools to teach computer skills with confidence and creativity. | http://www.jwu.edu

The 2017 Picademy cohorts were a diverse bunch with a lot of experience in their field. We welcomed more than 300 educators from 32 U.S. states and 10 countries. They were a mix of high school, middle school, and elementary classroom teachers, librarians, museum staff, university lecturers, and teacher trainers. More than half of our attendees were teaching computer science or technology already, and over 90% were specifically interested in incorporating physical computing into their work.

Picademy has a strong and lasting impact on educators. Over 80% of graduates said they felt confident using Raspberry Pi after attending, and 88% said they were now interested in leading a digital making event in their community. To showcase two wonderful examples of this success: Chantel Mason led a Raspberry Pi workshop for families and educators in her community in St. Louis, Missouri this fall, and Dean Palmer led a digital making station at the Computer Science for Rhode Island Summit in December.

Picademy 2018 dates

This year, we’re partnering with four new venues to host our Picademy season.


We’ll be at mindSpark Learning in Denver the first week in June, at Liberty Science Center in Jersey City later that month, at Georgia Tech University in Atlanta in mid-July, and finally at the Living Computer Museum in Seattle the first week in August.


A big thank you to each of these venues for hosting us and supporting our free educator professional development program!

Ready to join us for Picademy 2018? Learn more and apply now: rpf.io/picademy2018.

The post 2018 Picademy dates in the United States appeared first on Raspberry Pi.

Astro Pi Mission Zero: your code is in space

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-mission-zero-day/

Every school year, we run the European Astro Pi challenge to find the next generation of space scientists who will program two space-hardened Raspberry Pi units, called Astro Pis, living aboard the International Space Station.

Italian ESA Astronaut Paolo Nespoli with the Astro Pi units. Image credit ESA.

Astro Pi Mission Zero

The 2017–2018 challenge included the brand-new non-competitive Mission Zero, which guaranteed that participants could have their code run on the ISS for 30 seconds, provided they followed the rules. They would also get a certificate showing the exact time period during which their code ran in space.

Astro Pi Mission Zero logo

We asked participants to write a simple Python program to display a personalised message and the air temperature on the Astro Pi screen. No special hardware was needed, since all the code could be written in a web browser using the Sense HAT emulator developed in partnership with Trinket.
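
A Mission Zero entry could be as short as a few lines of Sense HAT code. The sketch below is our own illustrative example (the message text and colours are assumptions), not an official template.

    from sense_hat import SenseHat

    sense = SenseHat()
    temp = round(sense.get_temperature(), 1)
    # Scroll a personalised message plus the air temperature across the LED matrix
    sense.show_message("Hello from Earth! Temp: {} C".format(temp),
                       scroll_speed=0.05, text_colour=(255, 255, 0))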

Scott McKenzie on Twitter

Students coding #astropi emulator to scroll a message to astronauts on @Raspberry_Pi in space this summer. Try it here: https://t.co/0KURq11X0L #Rm9Parents #CSforAll #ontariocodes

And now it’s time…

We received over 2500 entries for Mission Zero, and we’re excited to announce that tomorrow all entries with flight status will be run on the ISS…in SPAAACE!

There are 1771 Python programs with flight status, which will run back-to-back on Astro Pi VIS (Ed). The whole process will take about 14 hours. This means that everyone will get a timestamp showing 1 February, so we’re going to call this day Mission Zero Day!

Part of each team’s certificate will be a map, like the one below, showing the exact location of the ISS while the team’s code was running.

The grey line is the ISS orbital path, the red marker shows the ISS’s location when their code was running. Produced using Google Static Maps API.

The programs will be run in the same sequence in which we received them. For operational reasons, we can’t guarantee that they will run while the ISS flies over any particular location. However, if you have submitted an entry to Mission Zero, there is a chance that your code will run while the ISS is right overhead!

Go out and spot the station

Spotting the ISS is a great activity to do by yourself or with your students. The station looks like a very fast-moving star that crosses the sky in just a few minutes. If you know when and where to look, and it’s not cloudy, you literally can’t miss it.

Source Andreas Möller, Wikimedia Commons.

The ISS passes over most ground locations about twice a day. For it to be clearly visible, though, you need darkness on the ground while the ISS, thanks to its altitude, is still lit by the Sun. There are a number of websites which can tell you when these visible passes occur, such as NASA's Spot the Station. Each of the sites requires you to give your location so it can work out when visible passes will occur near you.

Visible ISS pass star chart from Heavens Above, on which familiar constellations such as the Plough (see label Ursa Major) can be seen.

A personal favourite of mine is Heavens Above. It’s slightly more fiddly to use than other sites, but it produces brilliant star charts that show you precisely where to look in the sky. This is how it works:

  1. Go to www.heavens-above.com
  2. To set your location, click on Unspecified in the top right-hand corner
  3. Enter your location (e.g. Cambridge, United Kingdom) into the text box and click Search
  4. The map should change to the correct location — scroll down and click Update
  5. You’ll be taken back to the homepage, but with your location showing at the top right
  6. Click on ISS in the Satellites section
  7. A table of dates will now show, which are the upcoming visible passes for your location
  8. Click on a row to view the star chart for that pass — the line is the path of the ISS, and the arrow shows direction of travel
  9. Be outside in cloudless weather at the start time, look towards the direction where the line begins, and hope the skies stay clear

If you go out and do this, then tweet some pictures to @raspberry_pi, @astro_pi, and @esa. Good luck!

More Astro Pi

Mission Zero certificates will be arriving in participants’ inboxes shortly. We would like to thank everyone who participated in Mission Zero this school year, and we hope that next time you’ll take it one step further and try Mission Space Lab.

Mission Zero and Mission Space Lab are two really exciting programmes that young people of all ages can take part in. If you would like to be notified when the next round of Astro Pi opens for registrations, sign up to our mailing list here.

The post Astro Pi Mission Zero: your code is in space appeared first on Raspberry Pi.

Does "Справка по чл. 73 от ЗДДФЛ", Version 6.0, Contain a Virus?

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2111

Today some clients of mine called to say that their computer would not let them download the new version of a program from НАП. When I got on site, I established the following:

1. The program in question is Справка по чл. 73 от ЗДДФЛ, version 6.0.
2. It "cannot be downloaded" because Windows Defender detects a virus in it – Trojan:Win32/Azden.A!cl – and blocks it.
3. The НАП website they are connecting to is the genuine one. The link is http://www.nap.bg/document?id=4311

Lack of time did not allow me to sit down and analyze the files in the package by hand, or even to scan them with another antivirus. So I do not know whether they really contain a virus or whether it is a false positive from Windows Defender.

Both have happened before. I hope it is a false alarm – at least one other product, Xeoma, is wrongly identified by Windows Defender as this very virus. If it is a real threat, however, it is an unpleasant one. The virus is quite "modern" – it collects and sends its operators very detailed information about the computer and its users, updates itself automatically, downloads and installs additional malicious capabilities from the Internet, and allows remote control of the machine. So in this case it is sensible to err on the side of caution.

I contacted НАП right away and warned them about the situation. The only reaction (stubbornly repeated every time I tried to explain that the situation might be dangerous) was to ask me to send them an email with a screenshot of the message I was getting. Just in case, I sent them a description of the problem – maybe it will even be read by someone who can tell a computer from a vacuum cleaner.

My advice to everyone is to hold off a little on installing this version. Wait until it becomes clear whether it really contains a virus or it is a false alarm. НАП will probably announce the outcome soon either way – doing so is a matter of basic responsibility.

Backblaze Cloud Backup Release 5.2

Post Syndicated from Yev original https://www.backblaze.com/blog/backblaze-cloud-backup-release-5-2/

We’re pleased to start the year off the right way, with an update to Backblaze Cloud Backup, version 5.2! This is a smaller release, but it increases backup speeds, optimizes the backup client, and addresses a few minor bugs that we’re excited to lay to rest.

What’s New

  • Increased transmission speed of files between 30MB and 400MB+.
  • Optimized indexing to decrease system resource usage and lower the performance impact on computers that are backing up to Backblaze.
  • Adjusted external hard drive monitoring and increased the speed of indexing.
  • Changed copyright to 2018.

Release Version Number:

  • Mac — 5.2.0
  • PC — 5.2.0

Clients:
Backblaze Personal Backup
Backblaze Business Backup

Availability:
January 4, 2018

Upgrade Methods:

Cost:
This is a free update for all Backblaze Cloud Backup consumer and business customers and active trial users.

The post Backblaze Cloud Backup Release 5.2 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

12 B2 Power Tips for New Users

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/newbie-cloud-storage-guide/

B2 Tips for Beginners
You probably know that B2 is Backblaze’s fast and economical general purpose cloud storage, but do you know everything that you can do with it?

If you’re a B2 newbie, here are some blazing power tips to help you get the most out of B2 Cloud Storage.

If you’re a B2 expert or a developer, stay tuned. We’ll be publishing power tips for you in the near future. Enter your email address using the Join button at the top of the page and you won’t miss any upcoming blog posts.

1    Drag and Drop Files to B2

Use Backblaze’s drag-and-drop web interface to store, restore, and share B2 files.


2    Share Files You Have in B2

You can designate a B2 bucket as private or public. If the bucket is public and you’d like to share a file with others, you can create and copy a Friendly URL and paste it into an email or message.


3    Use B2 Just Like Any Other Drive

Use B2 just as if it were a drive on your computer — drag and drop files and folders, save files to it — using one of a number of integrations that let you mount B2 as a volume in your Windows or Macintosh file system (Mountain Duck, ExpanDrive, odrive). Pick the files you want to save, drop them in a desktop folder, and they are automatically saved to B2.


4    Drag and Drop To and From B2 from the Desktop, Too

Use Cyberduck, a B2 integration partner, to drag-and-drop files to and from B2 right from the Windows or Macintosh desktop.


5    Determine the Speed of your Connection to B2

You can check the speed and latency of your internet connection between your location and Backblaze’s data centers, and see how much data you could theoretically transfer in a day, at https://www.backblaze.com/speedtest/.


6    No Matter What Type of Data you Have, B2 Can Handle It

You can transfer any type or amount of data to B2 from any device that can connect to the internet, including Windows, Macintosh, Linux, servers, mobile devices, external drives, and NAS.


7    Get Your Files from B2 by Mail

You have a choice of how to receive your data from B2. You can download data directly or request that your data be shipped to you via FedEx.


8    Back Up Your Backups to B2

You can automatically back up your Apple Time Machine backup or Windows backup to a NAS and then back that up to B2 to give you both local and cloud backups for a 3-2-1 backup solution.


9    Protect Your B2 Account with Two-Factor Verification

You can (and should) protect your Backblaze account with two-factor verification (such as using an app on your smartphone), and you can use backup codes and SMS verification in case you lose access to your smartphone.


10    Preview Photos Stored on B2 from the Web

Preview your photos as thumbnails (and optionally download individual photos) in common image formats (including jpg, png, img, tiff, and gif) with the B2 web interface.


11    B2 Has Group Management, Too

Backblaze Groups works for B2, too — just like Backblaze Personal Backup and Business Backup. You can manage billing, group membership, and control access using Group Management in your Backblaze account dashboard.


12    B2 Integrations Make B2 More Powerful and Useful

There are more than 30 software and hardware integrations that make B2 more powerful. You can visit our integrations page to find a solution that works for you.

Want to Learn More About B2?

You can find more information on B2 on our website and in our help pages.

The post 12 B2 Power Tips for New Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Is There a Problem with the Pirin Management Plan Adopted by the Government?

Post Syndicated from Bozho original https://blog.bozho.net/blog/3020

I decided to check what exactly was adopted yesterday regarding Pirin, because people started hurling themselves at the barricades on social media while the details stayed in the background.

The documents have not yet been uploaded to the government's information system (they usually do that with a slight delay), but the draft decision is available on the government's website, here: http://www.government.bg/fce/001/0228/files/T_13.doc . I should note that this is not the adopted decision, so there may have been last-minute corrections (swapping sheets in folders minutes before a session does happen).
The management plan currently in force, which is being amended, is here (the important part starts on page 182): http://pirin.bg/wp-content/uploads/2017/07/Plan-za-uprav.pdf

The changes do essentially one thing: they allow the construction of ski runs and facilities in the so-called "construction zone" and "tourism zone", which make up 0.6% + 2.2% of the park's territory. Construction is allowed only after an environmental assessment (or at least that is what it says; whether such an assessment would be done merely pro forma is another question).

Until now, the "construction zone" has allowed the construction of "buildings, roads and facilities". That sounds broad, but we will see in a moment what it means.

There is, however, an ambiguity in the decision – in the table of permitted activities, construction becomes a permitted activity in the "zone for conservation of forest ecosystems and recreation" as well, which covers 45.2% of the park. In the corresponding point for that zone, though, there is no change that would allow construction there, apart from "water intake" facilities (which seems justified).

Whether this is a clever way to hide something, I do not know. In my view, the table could be made more precise and row 9 could be broken down further.

The more interesting issue is something else – Article 21 of the Protected Areas Act prohibits the construction of almost everything (with some exceptions). Only the repair of "sports facilities" is allowed. Construction of "facilities for the needs of the park's management" is allowed, which I referred to a few paragraphs above. With the amendment of point 1 of the normative part of the plan, the law is effectively being breached – that is, the plan provides for building things that the law does not allow.

Here we should also add a ruling of the Supreme Administrative Court (ВАС) (Decision No. 7214 of 2 October 2001) holding that the ski zone includes "facilities for serving visitors". That ruling is questionable, however, because it assumes the law allows the construction of sports and other facilities, whereas the law provides only for their repair. That reading is in turn confirmed by a ВАС ruling in another case (No. 6883 of 9 June 2008 on administrative case No. 4543/2008).

So, in conclusion:
– Zone IIa cannot be open to the construction of ski runs and lifts, and if that was the intention, it was not realized, because it is missing from the text.
– The amended plan contradicts Article 21 of the Protected Areas Act, because it allows the construction of facilities that the law does not permit.

I am NOT saying there should be no new runs and no new lifts. I do not know how the share of ski zones relative to a mountain's total area compares in other European countries. Most likely it is good for ski tourism to have room to grow.

But for that to happen, it seems to me that either Article 21 of the Protected Areas Act must be amended OR the park's boundaries must be changed under the procedure in Chapter Three of the Act (ЗЗТ).

So the protest is justified, and it is a protest both for Pirin and for the rule of law.

(Note: I am only now getting into the Pirin topic, so please correct any wrong assumptions or conclusions if you spot them.)

Power Tips for Backblaze Backup

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/data-backup-tips/

Backup Power Tips

2017 has been a busy year for Backblaze. We’ve reached a total of over 400 petabytes of data stored for our customers (that’s a lot!), released a major upgrade to our backup product (Backblaze Cloud Backup 5.0), added Groups to our consumer and business backup products, further enhanced account security, and welcomed a whole lot of new customers to Backblaze.

For all of our new users (and maybe some of you more experienced ones, too), we’d like to share some power tips that will help you get the most out of Backblaze Backup for home and business.

Blazing Power Tips for Backblaze Backup

Back Up All of Your Valuable Data


Include Directly-Attached External Drives in Your Backup

Backblaze can back up external drives attached via USB, Thunderbolt, or Firewire.


Back Up Virtual Machines Installed on Your Computer

Virtual machines, such as those created by Parallels, VMware Fusion, VirtualBox, Hyper-V, or other programs, can be backed up with Backblaze.


You Can Back Up Your Mobile Phone to Backblaze

Gain extra peace-of-mind by backing up your iPhone or Android phone to your computer and including that in your computer backup.


Bring on Your Big Files

By default, Backblaze has no restrictions on the size of the files you are backing up, even that large high school reunion video you want to be sure to keep.


Rescan Your Hard Drive to Check for Changes

Backblaze works quietly and continuously in the background to keep you backed up, but you can ask Backblaze to immediately check whether anything needs backing up by holding down the Alt key and clicking on the Restore Options button in the Backblaze client.

Manage and Restore Your Backed Up Files


You Can Share Files You’ve Backed Up

You can share files with anyone directly from your Backblaze account.


Select and Restore Individual Files

You can restore a single file without zipping it using the Backblaze web interface.


Receive Your Restores from Backblaze by Mail

You have a choice of how to receive your data from Backblaze. You can download individual files, download a ZIP of the files you choose, or request that your data be shipped to you anywhere in the world via FedEx.


Put Your Account on Hold for Six Months

As long as your account is current, all the data you’ve backed up is maintained for up to six months if you’re traveling or not using your computer and don’t connect to our servers. (For active accounts, data is maintained up to 30 days.)


Groups Make Managing Business or Family Members Easy

For businesses, families, or organizations, our Groups feature makes it easy to manage billing, group membership, and individual user access to files and accounts — all at no incremental charge.


You Can Browse and Restore Previous Versions of a File

Visit the View/Restore Files page to go back in time to earlier or deleted versions of your files.


Mass Deploy Backblaze Remotely to Many Computers

Companies, organizations, schools, non-profits, and others can deploy Backblaze computer backup remotely across all their computers without any end-user interaction.


Move Your Account and Preserve Backups on a New or Restored Computer

You can move your Backblaze account to a new or restored computer with the same data — and preserve the backups you have already completed — using the Inherit Backup State feature.


Reinstall Backblaze under a Different Account

Backblaze remembers the account information when it is uninstalled and reinstalled. To install Backblaze under a different account, hold down the ALT key and click the Install Now button.

Keep Your Data Secure


Protect Your Account with Two-Factor Verification

You can (and should) protect your Backblaze account with two-factor verification. You can use backup codes and SMS verification in case you lose access to your smartphone and the authentication app. Sign in to your account to set that up.


Add Additional Security to Your Data

All transmissions of your data between your system and our servers are encrypted. For extra account security, you can add an optional private encryption key (PEK) to the data on our servers. Just be sure to remember your encryption key, because it’s required to restore your data.

Get the Best Data Transfer Speeds


How Fast is your Connection to Backblaze?

You can check the speed and latency of your internet connection between your location and Backblaze’s data centers at https://www.backblaze.com/speedtest/.


Fine-Tune Your Upload Speed with Multiple Threads

Our auto-threading feature adjusts Backblaze’s CPU usage to give you the best upload speeds, but for those of you who like to tinker, the Backblaze client on Windows and Macintosh lets you fine-tune the number of threads our client is using to upload your files to our data centers.


Use the Backblaze Downloader To Get Your Restores Faster

If you are downloading a large ZIP restore, we recommend that you use the Backblaze Downloader application for Macintosh or Windows for maximum speed.

Want to Learn More About Backblaze Backup?

You can find more information on Backblaze Backup (including a free trial) on our website, and more tips about backing up in our help pages and in our Backup Guide.

Do you have a friend who should be backing up, but doesn’t? Why not give the gift of Backblaze?

The post Power Tips for Backblaze Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The deep learning Santa/Not Santa detector

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/deep-learning-santa-detector/

Did you see Mommy kissing Santa Claus? Or was it simply an imposter? The Not Santa detector is here to help solve the mystery once and for all.

Building a “Not Santa” detector on the Raspberry Pi using deep learning, Keras, and Python

The video is a demo of my “Not Santa” detector that I deployed to the Raspberry Pi. I trained the detector using deep learning, Keras, and Python. You can find the full source code and tutorial here: https://www.pyimagesearch.com/2017/12/18/keras-deep-learning-raspberry-pi/

Ho-ho-how does it work?

Note: Adrian Rosebrock is not Santa. But he does a good enough impression of the jolly old fellow that his disguise can fool a Raspberry Pi into thinking otherwise.

Raspberry Pi 'Not Santa' detector

We jest, but has anyone seen Adrian and Santa in the same room together?
Image c/o Adrian Rosebrock

But how is the Raspberry Pi able to detect the Santa-ness or Not-Santa-ness of people who walk into the frame?

Two words: deep learning

If you’re not sure what deep learning is, you’re not alone. It’s a hefty topic, and one that Adrian has written a book about, so I grilled him for a bluffers’ guide. In his words, deep learning is:

…a subfield of machine learning, which is, in turn a subfield of artificial intelligence (AI). While AI embodies a large, diverse set of techniques and algorithms related to automatic reasoning (inference, planning, heuristics, etc), the machine learning subfields are specifically interested in pattern recognition and learning from data.

Artificial Neural Networks (ANNs) are a class of machine learning algorithms that can learn from data. We have been using ANNs successfully for over 60 years, but something special happened in the past 5 years — (1) we’ve been able to accumulate massive datasets, orders of magnitude larger than previous datasets, and (2) we have access to specialized hardware to train networks faster (i.e., GPUs).

Given these large datasets and specialized hardware, deeper neural networks can be trained, leading to the term “deep learning”.

Now that we have a bird’s-eye view of deep learning, how does the detector detect?

Cameras and twinkly lights

Adrian used a model he had trained on two datasets to detect whether or not an image contains Santa. He deployed the Not Santa detector code to a Raspberry Pi, then attached a camera, speakers, and The Pi Hut’s 3D Xmas Tree.

Raspberry Pi 'Not Santa' detector

Components for Santa detection
Image c/o Adrian Rosebrock

The camera captures footage of Santa in the wild, while the Christmas tree add-on provides a twinkly notification, accompanied by a resonant ho, ho, ho from the speakers.
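
In rough terms, the detection loop looks something like the sketch below. It is our own simplified illustration, assuming a saved Keras model named santa_not_santa.h5 and a 28x28 input size; Adrian's actual network, preprocessing, and GPIO handling differ in the details.

    import cv2
    import numpy as np
    from keras.models import load_model
    from keras.preprocessing.image import img_to_array

    model = load_model("santa_not_santa.h5")

    frame = cv2.imread("frame.jpg")  # in practice, a frame grabbed from the Pi camera
    image = cv2.resize(frame, (28, 28)).astype("float") / 255.0
    image = np.expand_dims(img_to_array(image), axis=0)

    (not_santa, santa) = model.predict(image)[0]
    if santa > not_santa:
        print("Santa detected: light the tree and play 'ho, ho, ho!'")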

A deeper deep dive into deep learning

A full breakdown of the project and the workings of the Not Santa detector can be found on Adrian’s blog, PyImageSearch, which includes links to other deep learning and image classification tutorials using TensorFlow and Keras. It’s an excellent place to start if you’d like to understand more about deep learning.

Build your own Santa detector

Santa might catch on to Adrian’s clever detector and start avoiding the camera, and for that eventuality, we have our own Santa detector. It uses motion detection to notify you of his presence (and your presents!).

Raspberry Pi Santa detector

Check out our Santa Detector resource here and use a passive infrared sensor, Raspberry Pi, and Scratch to catch the big man in action.

The post The deep learning Santa/Not Santa detector appeared first on Raspberry Pi.

Rosie the Countdown champion

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rosie-the-countdown-champion/

Beating the contestants at Countdown: is it cheating if you happen to know every word in the English dictionary?

Rosie plays Countdown

Allow your robots to join in the fun this Christmas with a round of Channel 4’s Countdown. https://www.rosietheredrobot.com/2017/12/tea-minus-30.html

Rosie the Red Robot

First, a little bit of backstory. Challenged by his eldest daughter to build a robot, technology-loving Alan got to work building Rosie.

I became (unusually) determined. I wanted to show her what can be done… and the how can be learnt later. After all, there is nothing more exciting and encouraging than seeing technology come alive. Move. Groove. Quite literally.

Originally, Rosie had a Raspberry Pi 3 brain controlling ultrasonic sensors and motors via Python. From there, she has evolved into something much grander, and Alan has documented her upgrades on the Rosie the Red Robot blog. Using GPS trackers and a Raspberry Pi camera module, she became Rosie Patrol, a rolling, walking, interactive bot; then, with further upgrades, the Tea Minus 30 project came to be. Which brings us back to Countdown.

T(ea) minus 30

In case it hasn’t been a big part of your life up until now, Countdown is one of the longest-running television shows in history, and occupies a special place in British culture. Contestants take turns to fill a board with nine randomly selected vowels and consonants, before battling the Countdown clock to find the longest word they can in the space of 30 seconds.

The Countdown Clock

I’ve had quite a few requests to show just the Countdown clock for use in school activities/own games etc., so here it is! Enjoy! It’s a brand new version too, using the 2010 Office package.

There’s a numbers round involving arithmetic, too – but for now, we’re going to focus on letters and words, because that’s where Rosie’s skills shine.

Using an online resource, Alan created a dataset of the ten thousand most common English words.

Rosie the Red Robot Raspberry Pi

Many words, listed in order of common-ness. Alan wrote a Python script to order them alphabetically and by length

Next, Alan wrote a Python script to select nine letters at random, then search the word list to find all the words that could be spelled using only these letters. He used the randint function to select letters from a pre-loaded alphabet, and introduced a requirement to include at least two vowels among the nine letters.
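
The core of that logic fits in a short script. The sketch below is our own reconstruction, assuming a words.txt file with one word per line; Alan's script differs in the details (it uses randint rather than random.choice, for instance).

    import random

    VOWELS = "aeiou"
    CONSONANTS = "bcdfghjklmnpqrstvwxyz"

    def pick_letters(total=9, min_vowels=2):
        # Guarantee at least two vowels, then fill the rest from the full alphabet
        letters = [random.choice(VOWELS) for _ in range(min_vowels)]
        letters += [random.choice(VOWELS + CONSONANTS) for _ in range(total - min_vowels)]
        random.shuffle(letters)
        return letters

    def can_spell(word, letters):
        pool = list(letters)
        for ch in word:
            if ch not in pool:
                return False
            pool.remove(ch)
        return True

    with open("words.txt") as f:
        words = [w.strip().lower() for w in f if w.strip()]

    letters = pick_letters()
    matches = sorted((w for w in words if can_spell(w, letters)), key=len, reverse=True)
    print("Letters:", " ".join(letters).upper())
    print("Best word:", matches[0] if matches else "(none found)")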

Rosie the Red Robot Raspberry Pi

Words that match the available letters are displayed on the screen.

Rosie the Red Robot Raspberry Pi

Putting it all together

With the basic game-play working, it was time to bring the project to life. For this, Alan used Rosie’s camera module, along with optical character recognition (OCR) and text-to-speech capabilities.

Rosie the Red Robot Raspberry Pi

Alan writes, “Here’s a very amateurish drawing to brainstorm our idea. Let’s call it a design as it makes it sound like we know what we’re doing.”

Alan’s script has Rosie take a photo of the TV screen during the Countdown letters round, then perform OCR using the Google Cloud Vision API to detect the nine letters contestants have to work with. Next, Rosie runs Alan’s code to check the letters against the ten-thousand-word dataset, converts text to speech with Python gTTS, and finally speaks her highest-scoring word via omxplayer.
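
The speech step at the end of that pipeline only takes a few lines with gTTS and omxplayer. This is a hedged sketch of the idea (the best_word value and file name are placeholders), not Alan's exact code.

    import subprocess
    from gtts import gTTS

    best_word = "example"  # the highest-scoring word found by the matching step
    tts = gTTS(text=best_word, lang="en")
    tts.save("answer.mp3")
    subprocess.call(["omxplayer", "answer.mp3"])  # play the word through Rosie's speaker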

You can follow the adventures of Rosie the Red Robot on her blog, or follow her on Twitter. And if you’d like to build your own Rosie, Alan has provided code and tutorials for his projects too. Thanks, Alan!

The post Rosie the Countdown champion appeared first on Raspberry Pi.

All the lights, all of the twinkly lights

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/all-of-the-lights/

Twinkly lights are to Christmas what pumpkins are to Halloween. And when you add a Raspberry Pi to your light show, the result instantly goes from “Meh, yeah.” to “OMG, wow!”

Here are some cool light-based Christmas projects to inspire you this weekend.

Raspberry Pi Christmas Lights

App-based light control

Christmas Tree Lights Demo

Project Code – https://github.com/eidolonFIRE/Christmas-Lights Raspberry Pi A+ ws2812b – https://smile.amazon.com/gp/product/B01H04YAIQ/ref=od_aui_detailpages00?ie=UTF8&psc=1 200w 5V supply – https://smile.amazon.com/gp/product/B01LZRIWZD/ref=od_aui_detailpages01?ie=UTF8&psc=1

In his Christmas lights project, Caleb Johnson uses an app as a control panel to switch between predefined displays. The full code is available on his GitHub, and it connects a Raspberry Pi A+ to a strip of programmable LEDs that change their pattern at the touch of a phone screen.

What’s great about this project, aside from the simplicity of its design, is the scope for extending it. Why not share the app with friends and family, allowing them to control your lights remotely? Or link the lights to social media so they are triggered by a specific hashtag, like in Alex Ellis’ #cheerlights project below.
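
If you want to experiment with a similar strip before wiring up an app, a few lines of the rpi_ws281x library will light it up. The pin and pixel count below are assumptions; Caleb's GitHub repository is the place to look for the actual project code.

    from rpi_ws281x import PixelStrip, Color

    LED_COUNT = 30  # assumed number of pixels on the strip
    LED_PIN = 18    # PWM-capable GPIO pin connected to the strip's data line

    strip = PixelStrip(LED_COUNT, LED_PIN)
    strip.begin()

    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(255, 0, 0))  # set every pixel to red
    strip.show()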

Worldwide holiday #cheerlights

Holiday lights hack – 1$ Snowman + Raspberry Pi

Here we have a smart holiday light which will only run when it detects your presence in the room through a passive infrared PIR sensor. I’ve used hot glue for the fixings and an 8-LED NeoPixel strip connected to port 18.

Cheerlights, an online service created by Hans Scharler, allows makers to incorporate hashtag-controlled lighting into their projects. By tweeting the hashtag #cheerlights, followed by a colour, you can control a network of lights so that they are all displaying the same colour.

For his holiday light hack using Cheerlights, Alex incorporated the Pimoroni Blinkt! and a collection of cheap Christmas decorations to create cute light-up ornaments for the festive season.

To make your own, check out Alex’s blog post, and head to your local £1/$1 store for hackable decor. You could even link your Christmas tree and the trees of your family, syncing them all in one glorious, Santa-pleasing spectacular.
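
A minimal version of the idea, polling the public CheerLights feed and pushing the latest colour to a Blinkt!, might look like the sketch below; the ThingSpeak URL and polling interval are assumptions rather than details from Alex's build.

    import time
    import urllib.request

    import blinkt

    FEED = "https://api.thingspeak.com/channels/1417/field/2/last.txt"  # latest colour as hex

    while True:
        colour = urllib.request.urlopen(FEED).read().decode().strip().lstrip("#")
        r, g, b = (int(colour[i:i + 2], 16) for i in (0, 2, 4))
        blinkt.set_all(r, g, b)
        blinkt.show()
        time.sleep(15)  # poll the feed every 15 seconds or so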

Outdoor decorations

DIY musical Xmas lights for beginners with raspberry pi

With just a few bucks of extra material, I walk you through converting your regular Christmas lights into a whole-house light show. The goal here is to go from scratch. Although this guide is intended for people who don’t know how to use linux at all and those who do alike, the focus is for people for whom linux and the raspberry pi are a complete mystery.

Looking to outdo your neighbours with your Christmas light show this year? YouTuber Makin’Things has created a beginners guide to setting up a Raspberry Pi–based musical light show for your facade, complete with information on soldering, wiring, and coding.

Once you’ve wrapped your house in metres and metres of lights and boosted your speakers so they can be heard for miles around, why not incorporate #cheerlights to make your outdoor decor interactive?

Still not enough? How about controlling your lights using a drum kit? Christian Kratky’s MIDI-Based Christmas Lights Animation system (or as I like to call it, House Rock) does exactly that.

Eye Of The Tiger (MIDI based christmas lights animation system prototype)

Project documentation and source code: https://www.hackster.io/cyborg-titanium-14/light-pi-1c88b0 The song is taken from: https://www.youtube.com/watch?v=G6r1dAire0Y

Any more?

We know these projects are just the tip of the iceberg when it comes to the Raspberry Pi–powered Christmas projects out there, and as always, we’d love you to share yours with us. So post a link in the comments below, or tag us on social media when posting your build photos, videos, and/or blog links. ‘Tis the season for sharing after all.

The post All the lights, all of the twinkly lights appeared first on Raspberry Pi.

How to Enhance the Security of Sensitive Customer Data by Using Amazon CloudFront Field-Level Encryption

Post Syndicated from Alex Tomic original https://aws.amazon.com/blogs/security/how-to-enhance-the-security-of-sensitive-customer-data-by-using-amazon-cloudfront-field-level-encryption/

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.

Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.

Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post’s solution.

How field-level encryption works

Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number and less sensitive data such as your food preferences. This data is transmitted to web servers and also might travel among a number of services to perform tasks. However, this also means that your sensitive information may need to be accessed by only a small subset of these services (most other services do not need to access your data).

User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.

Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.

It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing your configuration data such as plaintext data (such as database strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.

To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length:60

Name=Jane+Doe&Phone=404-555-0150

Instead of taking this typical approach, field-level encryption converts this data similar to the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 1713

Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...

To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.

Sample application architecture and data flow

Diagram of the solution's architecture and data flow

Here is how the sample solution works:

  1. An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
  2. Field-level encryption intercepts the form POST and encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
  3. The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
  4. Lambda stores data in a DynamoDB table, leaving sensitive data to remain safely encrypted at rest.
  5. An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
  6. During the session, the administrator retrieves ciphertext from the DynamoDB table.
  7. The administrator decrypts sensitive data by using private key material stored in the EC2 Systems Manager Parameter Store (see the retrieval sketch after this list).
  8. Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.
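
As a rough illustration of steps 6 and 7, the snippet below fetches the RSA private key from the Parameter Store and loads it with the cryptography library. It is a sketch only: the sample application's Lambda function performs the actual decryption of the field-level encryption ciphertext, and the parameter name shown matches the default used later in this post.

    import boto3
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization

    ssm = boto3.client("ssm")
    param = ssm.get_parameter(
        Name="/cloudfront/field-encryption-sample/private-key",
        WithDecryption=True,  # AWS KMS decrypts the SecureString value for us
    )
    private_key = serialization.load_pem_private_key(
        param["Parameter"]["Value"].encode("utf-8"),
        password=None,
        backend=default_backend(),
    )
    # private_key is now ready to be used by the sample's decryption routine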

Deployment walkthrough

The high-level steps to deploy this solution are as follows:

  1. Stage the required artifacts
    When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
  2. Generate an RSA key pair
    Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
  3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
    After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
  4. Launch the CloudFormation stack
    Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
  5. Add the field-level encryption configuration to the CloudFront distribution
    After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
  6. Store the RSA private key in the Parameter Store
    Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.

Deploy the solution

1. Stage the required artifacts

(If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)

Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Object into Amazon S3.

2. Generate an RSA key pair

In this section, you will generate an RSA key pair by using OpenSSL:

  a. Confirm access to OpenSSL.
    $ openssl version

    You should see version information similar to the following.

    OpenSSL <version> <date>

  b. Create a private key using the following command.
    $ openssl genrsa -out private_key.pem 2048

    The command results should look similar to the following.

    Generating RSA private key, 2048 bit long modulus
    ................................................................................+++
    ..........................+++
    e is 65537 (0x10001)
  c. Extract the public key from the private key by running the following command.
    $ openssl rsa -pubout -in private_key.pem -out public_key.pem

    You should see output similar to the following.

    writing RSA key
  d. Restrict access to the private key.
    $ chmod 600 private_key.pem

    Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.

3. Upload the public key to CloudFront and associate it with the field-level encryption configuration

Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.

Note: Do not include spaces or special characters when providing the configuration values in this section.

  a. From the AWS Management Console, choose Services > CloudFront.
  b. In the navigation pane, choose Public Key and choose Add Public Key.
    Screenshot of adding a public key

Complete the Add Public Key configuration boxes:

  • Key Name: Type a name such as DemoPublicKey.
  • Encoded Key: Paste the contents of the public_key.pem file you created in Step 2c. Copy and paste the encoded key value for your public key, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
  • Comment: Optionally add a comment.
  c. Choose Create.
  d. After adding at least one public key to CloudFront, the next step is to create a profile to tell CloudFront which fields of input you want to be encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
  e. Under Profiles, choose Create profile.
    Screenshot of creating a profile

Complete the Create profile configuration boxes:

  • Name: Type a name such as FLEDemo.
  • Comment: Optionally add a comment.
  • Public key: Select the public key you configured in Step 3b.
  • Provider name: Type a provider name such as FLEDemo.
    This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
  • Pattern to match: Type phone. This configures field-level encryption to match form fields based on the pattern phone.
  f. Choose Save profile.
  g. Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can't encrypt the data. Under Encryption Configurations, choose Create configuration.
    Screenshot of creating a configuration

Complete the Create configuration boxes:

  • Comment: Optionally add a comment.
  • Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
  • Default profile ID: Select the profile you added in Step 3e.
  h. Choose Save configuration.

4. Launch the CloudFormation stack

Launch the sample application by using a CloudFormation template that automates the provisioning process.

Input parameters:

  • ProviderID: Enter the Provider name you assigned in Step 3e. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PublicKeyName: Enter the Key Name you assigned in Step 3b. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PrivateKeySSMPath: Leave as the default: /cloudfront/field-encryption-sample/private-key
  • ArtifactsBucket: The S3 bucket with artifact files (staged zip file with app code). Leave as default if deploying in us-east-1.
  • ArtifactsPrefix: The path in the S3 bucket containing artifact files. Leave as default if deploying in us-east-1.

To finish creating the CloudFormation stack:

  1. Choose Next on the Select Template page, enter the input parameters and choose Next.
    Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
  2. On the Options page, accept the defaults and choose Next.
  3. On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)

5. Add the field-level encryption configuration to the CloudFront distribution

While still on the CloudFront console, choose Distributions in the navigation pane, and then:

    1. In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
    2. Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
    3. For Field-level Encryption Config, choose the configuration you created in Step 3g.
      Screenshot of editing the default cache behavior
    4. Choose Yes, Edit.
    5. While still in the CloudFront distribution configuration, choose the General tab, then choose Edit, scroll down to Distribution State, and change it to Enabled.
    6. Choose Yes, Edit.

6. Store the RSA private key in the Parameter Store

In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.

  1. Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> in the KMSKeyID entry of the CloudFormation stack Outputs. Substitute it for the placeholder in the following command.
    $ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
    
    ------------------
    |  PutParameter  |
    +----------+-----+
    |  Version |  1  |
    +----------+-----+

  2. Verify the parameter. Your private key material should be returned in the Value field by the ssm get-parameter command that follows. The key material has been truncated in the output shown.
    $ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
    
    -----…
    
    ||  Value  |  -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z…….

    Notice we use the --with-decryption argument in this command. This returns the private key as cleartext.

    This completes the sample application deployment. Next, we show you how to see field-level encryption in action.

  3. Delete the private key from local storage. On Linux, for example, use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material within an AWS CloudHSM or other protected location suitable for your security requirements. For production implementations, you also should implement key rotation policies.
    $ shred -zvu -n  100 private*.pem
    
    shred: private_encrypted_key.pem: pass 1/101 (random)...
    shred: private_encrypted_key.pem: pass 2/101 (dddddd)...
    shred: private_encrypted_key.pem: pass 3/101 (555555)...
    ….

Test the sample application

Use the following steps to test the sample application with field-level encryption:

  1. Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach the Deployed Status from the previous step, during which time you may not be able to access the sample application.
  2. Fill out and submit the HTML form on the page:
    1. Complete the three form fields: Full Name, Email Address, and Phone Number.
    2. Choose Submit.
      Screenshot of completing the sample application form
      Notice that the application response includes the form values. The phone number returns the following ciphertext encryption using your public key. This ciphertext has been stored in DynamoDB.
      Screenshot of the phone number as ciphertext
  3. Execute the Lambda decryption function to download ciphertext from DynamoDB and decrypt the phone number using the private key:
    1. In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
    2. Configure a test event using the “Hello World” template.
    3. Choose the Test button.
  4. View the encrypted and decrypted phone number data.
    Screenshot of the encrypted and decrypted phone number data

Summary

In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.

– Alex and Cameron