Tag Archives: lecture

UK soldiers design Raspberry Pi bomb disposal robot

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/uk-soldiers-design-raspberry-pi-bomb-disposal-robot/

Three soldiers in the British Army have used a Raspberry Pi to build an autonomous robot, as part of their Foreman of Signals course.

Meet The Soldiers Revolutionising Bomb Disposal

Three soldiers from Blandford Camp have successfully designed and built an autonomous robot as part of their Foreman of Signals Course at the Dorset Garrison.

Autonomous robots

Forces Radio BFBS carried a story last week about Staff Sergeant Jolley, Sergeant Rana, and Sergeant Paddon, also known as the “Project ROVER” team. As part of their Foreman of Signals training, their task was to design an autonomous robot that can move between two specified points, take a temperature reading, and transmit the information to a remote computer. The team comments that, while semi-autonomous robots have been used as far back as 9/11 for tasks like finding people trapped under rubble, nothing like their robot, at a similar scale, currently exists within the British Army.

The ROVER buggy

Their build is named ROVER, which stands for Remote Obstacle aVoiding Environment Robot. It’s a buggy that moves on caterpillar tracks, and it’s tethered; we wonder whether that might be because it doesn’t currently have an on-board power supply. A demo shows the robot moving forward, then changing its path when it encounters an obstacle. The team is using RealVNC‘s remote access software to allow ROVER to send data back to another computer.
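The story doesn’t include the team’s code, but the behaviour in the demo — drive forward, turn away from obstacles, report a temperature reading — is easy to sketch in Python with GPIO Zero. In this minimal sketch, the pin numbers, the 30 cm threshold, and the use of an ultrasonic distance sensor and the Pi’s own temperature reading are illustrative assumptions, not details from the team’s build:

    # Hypothetical ROVER-style control loop using GPIO Zero.
    from time import sleep
    from gpiozero import Robot, DistanceSensor, CPUTemperature

    robot = Robot(left=(4, 14), right=(17, 18))   # motor driver pins (illustrative)
    sensor = DistanceSensor(echo=24, trigger=23)  # HC-SR04-style ultrasonic sensor
    cpu = CPUTemperature()                        # stand-in temperature source

    while True:
        if sensor.distance < 0.3:  # obstacle within ~30 cm: turn away from it
            robot.left(0.5)
            sleep(0.5)
        else:
            robot.forward(0.5)
        # The team used RealVNC for remote access; printing to the console
        # makes the reading visible in that kind of remote session.
        print("temperature: {:.1f} C".format(cpu.temperature))
        sleep(0.1)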

Applications for ROVER

Dave Ball, Senior Lecturer in charge of the Foreman of Signals course, comments that the project is “a fantastic opportunity for [the team] to, even only halfway through the course, showcase some of the stuff they’ve learnt and produce something that’s really quite exciting.” The Project ROVER team explains that the possibilities for autonomous robots like this one are extensive: they include mine clearance, bomb disposal, and search-and-rescue campaigns. They point out that existing semi-autonomous hardware is not as easy to program as their build. In contrast, they say, “with the invention of the Raspberry Pi, this has allowed three very inexperienced individuals to program a robot very capable of doing these things.”

We make Raspberry Pi computers because we want building things with technology to be as accessible as possible. So it’s great to see a project like this, made by people who aren’t techies and don’t have a lot of computing experience, but who wanted to solve a problem and saw that the Pi is an affordable and powerful tool that could help them do it.

The post UK soldiers design Raspberry Pi bomb disposal robot appeared first on Raspberry Pi.

2018 Picademy dates in the United States

Post Syndicated from Andrew Collins original https://www.raspberrypi.org/blog/new-picademy-2018-dates-in-united-states/

Cue the lights! Cue the music! Picademy is back for another year stateside. We’re excited to bring our free computer science and digital making professional development program for educators to four new cities this summer — you can apply right now.

Picademy USA Denver Raspberry Pi
Picademy USA Seattle Raspberry Pi
Picademy USA Jersey City Raspberry Pi
Raspberry Pi Picademy USA Atlanta

We’re thrilled to kick off our 2018 season! Before we get started, let’s take a look back at our community’s accomplishments in the 2017 Picademy North America season.

Picademy 2017 highlights

Last year, we partnered with four awesome venues to host eight Picademy events in the United States. At every event across the country, we met incredibly talented educators passionate about bringing digital making to their learners. Whether it was at Ann Arbor District Library’s makerspace, UC Irvine’s College of Engineering, or a creative community center in Boise, Idaho, we were truly inspired by all our Picademy attendees and were thrilled to welcome them to the Raspberry Pi Certified Educator community.

JWU Hosts Picademy

JWU Providence’s College of Engineering & Design recently partnered with the Raspberry Pi Foundation to host Picademy, a free training session designed to give educators the tools to teach computer skills with confidence and creativity. | http://www.jwu.edu

The 2017 Picademy cohorts were a diverse bunch with a lot of experience in their field. We welcomed more than 300 educators from 32 U.S. states and 10 countries. They were a mix of high school, middle school, and elementary classroom teachers, librarians, museum staff, university lecturers, and teacher trainers. More than half of our attendees were teaching computer science or technology already, and over 90% were specifically interested in incorporating physical computing into their work.

Picademy has a strong and lasting impact on educators. Over 80% of graduates said they felt confident using Raspberry Pi after attending, and 88% said they were now interested in leading a digital making event in their community. To showcase two wonderful examples of this success: Chantel Mason led a Raspberry Pi workshop for families and educators in her community in St. Louis, Missouri this fall, and Dean Palmer led a digital making station at the Computer Science for Rhode Island Summit in December.

Picademy 2018 dates

This year, we’re partnering with four new venues to host our Picademy season.


We’ll be at mindSpark Learning in Denver the first week in June, at Liberty Science Center in Jersey City later that month, at Georgia Tech in Atlanta in mid-July, and finally at the Living Computer Museum in Seattle the first week in August.


A big thank you to each of these venues for hosting us and supporting our free educator professional development program!

Ready to join us for Picademy 2018? Learn more and apply now: rpf.io/picademy2018.

The post 2018 Picademy dates in the United States appeared first on Raspberry Pi.

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of the AWS Vietnam User Group, co-founder and CEO of OSAM (an AWS Consulting Partner in Vietnam), an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has grown to more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh became lead content creator of a program called “Cloud Computing Lectures for Universities”, which includes translating AWS documentation and news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before becoming self-employed, Thorsten worked as a developer and as CTO of Germany’s first private bank to run on AWS. With his colleagues, he migrated the bank’s core banking system to the AWS platform in 2013. Since then, he has organized the AWS user group in Stuttgart and is a frequent speaker at meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten maintains or contributes to several projects on GitHub, such as test frameworks for AWS Lambda and Amazon Alexa, and developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high-concurrency web architecture. Before helping to run BootDev, she worked at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events, including AWS tech meetups and happy hours, and gathering AWS talent together to discuss the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event announcements, and event reports to Weibo, Twitter, Meetup.com, and WeChat (where the group’s official account now has over 2,000 followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, a company focused on the AWS Cloud and open source (open source has been the company’s motto from the start). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013; in 2014 he trained a dedicated cloud team and began offering full support for AWS cloud services as an AWS Standard Consulting Partner. He consistently works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014, and it has held 12 Meetups so far, focusing on various AWS technologies. The Meetup has quickly grown to over 2,000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, which has over 1,500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks introducing AWS and raising awareness of it at various organizations, colleges, and universities. He has also been active in working with startups, presenting overviews of AWS services and discussing how startups can benefit the most from using them.

Nilesh is also a trainer in Red Hat Linux and AWS Cloud technologies.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter Crypto-Gram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government — where I teach Internet security policy — and a Fellow at the Berkman Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project, and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies, and other businesses vulnerable to fraud. As a result, all 145.5 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their name and credit rating.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and that they were at increased risk of identity theft. Equifax opened a website to aid customers, but the poor security around it — the site was at a domain separate from the main Equifax domain — invited fraudulent imitators and caused even more damage to victims. At one point, official Equifax communications even directed people to one of those fraudulent sites.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, whom it sells their personal data to, or why. If anyone knows about the company at all, it’s for its business as a credit bureau, not its business as a data broker. Its website lists 57 different offerings for businesses: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt out” don’t work with data brokers. The process is confusing, and it doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that hold our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax, or does the blame belong to another of the thousands of companies that have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability-free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose among sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with these companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5% or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Specifically, Congress should move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean giving customers free credit-freeze options, as proposed by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace the Social Security Numbers. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will soon come into force, requiring even more security and privacy controls of companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcasting Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that figure to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye and a temporary stock dip for Equifax — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Derek Woodroffe’s steampunk tentacle hat

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/steampunk-tentacle-hat/

Halloween: that glorious time of year when you’re officially allowed to make your friends jump out of their skin with your pranks. For those among us who enjoy dressing up, Halloween is also the occasion to go all out with costumes. And so, dear reader, we present to you: a steampunk tentacle hat, created by Derek Woodroffe.

Finished tentacle hat

Extreme Electronics

Derek is an engineer who loves all things electronics. He’s part of Extreme Kits, and he runs the website Extreme Electronics. Raspberry Pi Zero-controlled Tesla coils are Derek’s speciality — he’s even been on one of the Royal Institution’s Christmas Lectures with them! Skip ahead to 15:06 in this video to see Derek in action:

Let There Be Light! // 2016 CHRISTMAS LECTURES with Saiful Islam – Lecture 1

The first Lecture from Professor Saiful Islam’s 2016 series of CHRISTMAS LECTURES, ‘Supercharged: Fuelling the future’. Watch all three Lectures here: http://richannel.org/christmas-lectures. 2016 marked the 80th anniversary of the BBC first broadcasting the Christmas Lectures on TV. To celebrate, chemist Professor Saiful Islam explores a subject that the Lectures’ founder, Michael Faraday, addressed in the very first Christmas Lectures: energy.

Wearables

Wearables are electronically augmented items you can wear. They might take the form of spy eyeglasses, clothes with integrated sensors, or, in this case, headgear adorned with mechanised tentacles.

Why did Derek make this? We’re not entirely sure, but we suspect he’s a fan of the Cthulhu mythos. In any case, we were a little astounded by his project. This is how we reacted when Derek tweeted us about it:

Raspberry Pi on Twitter

@ExtElec @extkits This is beyond incredible and completely unexpected.

In fact, we had to recover from a fit of laughter before we actually managed to type this answer.

Making a steampunk tentacle hat

Derek made the ‘skeleton’ of each tentacle out of a net curtain spring, acrylic rings, and four lengths of fishing line. Two servomotors connect to two ends of fishing line each, and pull them to move the tentacle.

net curtain spring and acrylic rings forming a mechanic tentacle skeleton - steampunk tentacle hat by Derek Woodroffe
Two servos connecting to lengths of fishing line - steampunk tentacle hat by Derek Woodroffe

Then he covered the tentacles with nylon stockings and liquid latex, glued suckers cut out of MDF onto them, and mounted them on an acrylic base. The eight motors connect to a Raspberry Pi via an I2C 8-port PWM controller board.

artificial tentacles - steampunk tentacle hat by Derek Woodroffe
8 servomotors connected to a controller board and a raspberry pi- steampunk tentacle hat by Derek Woodroffe

The Pi makes the servos pull the tentacles so that they move in sine waves in both the x and y directions, seemingly of their own accord. Derek cut open the top of a hat to insert the mounted tentacles, and he used more liquid latex to give the whole thing a slimy-looking finish.
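Derek hasn’t shared his motion code in this post, but the sine-wave behaviour he describes is simple to reproduce. Here is a minimal sketch assuming a PCA9685-compatible I2C PWM board driven through Adafruit’s ServoKit library; the channel layout, update rate, and 45-degree amplitude are illustrative guesses, not Derek’s actual values:

    # Hypothetical sine-wave tentacle motion on an 8-channel I2C PWM board.
    import math
    import time
    from adafruit_servokit import ServoKit

    kit = ServoKit(channels=8)  # one channel per servo

    # Each tentacle uses two servos: one for the x direction, one for y.
    PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

    t = 0.0
    while True:
        for i, (x_ch, y_ch) in enumerate(PAIRS):
            phase = i * math.pi / 4  # offset so the tentacles don't move in unison
            # Driving y a quarter-cycle behind x turns two sine waves into a
            # roughly circular, "of its own accord" writhing motion.
            kit.servo[x_ch].angle = 90 + 45 * math.sin(t + phase)
            kit.servo[y_ch].angle = 90 + 45 * math.sin(t + phase + math.pi / 2)
        t += 0.1
        time.sleep(0.05)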

steampunk tentacle hat by Derek Woodroffe

Iä! Iä! Cthulhu fhtagn!

You can read more about Derek’s steampunk tentacle hat here. He will be at the Beeston Raspberry Jam in November to show off his build, so if you’re in the Nottingham area, why not drop by?

Wearables for Halloween

This build is already pretty creepy, but just imagine it with a sensor- or camera-powered upgrade that makes the tentacles reach for people nearby. You’d have nightmare fodder for weeks.

With the help of the Raspberry Pi, any Halloween costume can be taken to the next level. How could Pi technology help you to win that coveted ‘Scariest costume’ prize this year? Tell us your ideas in the comments, and be sure to share pictures of you in your get-up with us on Twitter, Facebook, or Instagram.

The post Derek Woodroffe’s steampunk tentacle hat appeared first on Raspberry Pi.

Have Friends Who Don’t Back Up? Share This Post!

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/beginner-guide-to-computer-backup/

pointing out how to backup a computer

We’ve all been there.

A friend or family member comes to you knowing you’re a knowledgeable computer user and tells you that he has lost all the data on his computer.

You say, “Sure, I’ll help you get your computer working again. We’ll just restore your backup to a new drive or a new computer.”

Your friend looks at his feet and says, “I didn’t have a backup.”

You have to tell your friend that it’s very possible that without a backup that data is lost forever. It’s too late for a lecture about how he should have made regular backups of his computer. Your friend just wants his data back and he’s looking to you to help him.

You wish you could help. You realize that the time you could have helped was before the loss happened; when you could have helped your friend start making regular backups.

Yes, we’ve all been there. In fact, it’s how Backblaze got started.

You Can Be a Hero to a Friend by Sharing This Post

If you share this post with a friend or family member, you could avoid the situation where your friend loses his data and you wish you could help but can’t.

The following information will help your friend get started backing up in the easiest way possible — no fuss, no decisions, and no buying storage drives or plugging in cables.

The guide begins here:

Getting Started Backing Up

Your friend or family member has shared this guide with you because he or she believes you might benefit from backing up your computer. Don’t consider this an intervention, just a friendly tip that will save you lots of headaches, sorrow, and maybe money. With the right backup solution, it’s easy to protect your data against accidental deletion, theft, natural disaster, or malware, including ransomware.

Your friend was smart to send this to you, which probably means that you’re a smart person as well, so we’ll get right to the point. You likely know you should be backing up but, like all of us, you don’t always get around to everything you should be doing.

You need a backup solution that is:

  1. Affordable
  2. Easy
  3. Never runs out of storage space
  4. Backs up everything automatically
  5. Restores files easily

Why Cloud Backup is the Best Solution For You

Backblaze Personal Backup was created for everyone who knows they should back up, but doesn’t. It backs up to the cloud, meaning that your data is protected in our secure data centers. A simple installation gets you started immediately, with no decisions about what or where to back up. It just works. And it’s just $5 a month to back up everything. Other services might limit the amount of data, the types of files, or both. With Backblaze, there’s no limit on the amount of data you can back up from your computer.

You can get started immediately with a free 15-day trial of Backblaze Unlimited Backup. In fewer than 5 minutes you’ll be all set.

Congratulations, You’re Done!

You can now celebrate. Your data is backed up and secure.

That’s it – that’s all you really need to get started backing up. We’ve included more details below, but frankly, the above is all you need to be safely and securely backed up.

You can tell the person who sent this to you that you’re now safely backed up and have moved on to other things, like what advice you can give them to help improve their life. Seriously, you might want to buy the person who sent this to you a coffee or another treat. They deserve it.

Here’s more information if you’d like to learn more about backing up.

Share or Email This Post to a Friend

Do your friend and yourself a favor and share this post. On the left side of the page (or at the bottom of the post) are buttons you can use to share this post on Twitter, Facebook, LinkedIn, and Google+, or to email it directly to your friend. It will take just a few seconds and could save your friend’s data.

It could also save you from having to give someone the bad news that her finances, photos, manuscript, or other work are gone forever. That would be nice.

But your real reward will be in knowing you did the right thing.

Tell us in the comments how it went. We’d like to hear.

The post Have Friends Who Don’t Back Up? Share This Post! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Pi-powered hands-on statistical model at the Royal Society

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/royal-society-galton-board/

Physics! Particles! Statistical modelling! Quantum theory! How can non-scientists understand any of it? Well, students from Durham University are here to help you wrap your head around it all – and to our delight, they’re using the power of the Raspberry Pi to do it!

At the Royal Society’s Summer Science Exhibition, taking place in London from 4-9 July, the students are presenting a Pi-based experiment demonstrating the importance of statistics in their field of research.

Modelling the invisible – Summer Science Exhibition 2017

The Royal Society Summer Science Exhibition 2017 features 22 exhibits of cutting-edge, hands-on UK science, along with special events and talks. You can meet the scientists behind the research. Find out more about the exhibition at our website: https://royalsociety.org/science-events-and-lectures/2017/summer-science-exhibition/

Ramona, Matthew, and their colleagues are particle physicists keen to bring their science to those of us whose heads start to hurt as soon as we hear the word ‘subatomic’. In their work, they create computer models of subatomic particles to make predictions about real-world particles. Their models help scientists to design better experiments and to improve sensor calibrations. If this doesn’t sound straightforward to you, never fear – this group of scientists has set out to show exactly how statistical models are useful.

The Galton board model

They’ve built a Pi-powered Galton board, also called a bean machine (much less intimidating, I think). This is an upright board, shaped like an upside-down funnel, with nails hammered into it. Drop a ball in at the top, and it will randomly bounce off the nails on its way down. How the nails are spread out determines where a ball is most likely to land at the bottom of the board.

If you’re having trouble picturing this, you can try out an online Galton board. Go ahead, I’ll wait.

You’re back? All clear? Great!

Now, if you drop 100 balls down the board and collect them at the bottom, the result might look something like this:

Galton board

By Antoine Taveneaux CC BY-SA 3.0

The distribution of the balls is determined by the locations of the nails in the board. This means that, if you don’t know where the nails are, you can look at the distribution of the balls to figure out where the nails are most likely to be located. And you’ll be able to do all this using … statistics!!!
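If you’d rather experiment than squint at a picture, a Galton board takes only a few lines of Python to simulate (the ten-row depth and 100-ball count here are arbitrary choices): each ball bounces left or right at every row of nails, so the landing slots trace out a binomial distribution.

    # Simulate dropping balls down a Galton board with evenly spaced nails.
    import random
    from collections import Counter

    def drop_ball(rows=10):
        # Each nail bounces the ball one slot left (0) or right (1).
        return sum(random.randint(0, 1) for _ in range(rows))

    bins = Counter(drop_ball() for _ in range(100))
    for slot in sorted(bins):
        print("{:2d} | {}".format(slot, "*" * bins[slot]))

Run it a few times and the familiar bell shape emerges; skew the left/right probability and the peak shifts, which is exactly the kind of inference — from outcomes back to the board — that the exhibit lets visitors explore.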

Statistical models

Similarly, how particles behave is determined by the laws of physics – think of the particles as balls, and laws of physics as nails. Physicists can observe the behaviour of particles to learn about laws of physics, and create statistical models simulating the laws of physics to predict the behaviour of particles.

I can hear you say, “Alright, thanks for the info, but how does the Raspberry Pi come into this?” Don’t worry – I’m getting to that.

Modelling the invisible – the interactive exhibit

As I said, Ramona and the other physicists have not created a regular old Galton board. Instead, this one records where the balls land using a Raspberry Pi, and other portable Pis around the exhibition space can access the records of the experimental results. These Pis in turn run Galton board simulators, and visitors can use them to recreate a virtual Galton board that produces the same results as the physical one. Then, they can check whether their model board does, in fact, look like the one the physicists built. In this way, people directly experience the relationship between statistical models and experimental results.

Hurrah for science!

The other exhibit the Durham students will be showing is a demo dark matter detector! So if you decide to visit the Summer Science Exhibition, you will also have the chance to learn about the very boundaries of human understanding of the cosmos.

The Pi in museums

At the Raspberry Pi Foundation, education is our mission, and of course we love museums. It is always a pleasure to see our computers incorporated into exhibits: the Pi-powered visual theremin teaches visitors about music; the Museum in a Box uses Pis to engage people in hands-on encounters with exhibits; and this Pi is itself a museum piece! If you want to learn more about Raspberry Pis and museums, you can listen to this interview with Pi Towers’ social media maestro Alex Bate.

It’s amazing that our tech is used to educate people in areas beyond computer science. If you’ve created a Pi-powered educational project, please share it with us in the comments.

The post Pi-powered hands-on statistical model at the Royal Society appeared first on Raspberry Pi.

Developers and Ethics

Post Syndicated from Bozho original https://techblog.bozho.net/developers-and-ethics/

“What are some areas you are particularly interested in?” – recruiters (head-hunters) tend to ask that question a lot. I don’t have a good answer for that – I’ll know it when I see it. But I do have a list of areas that I wouldn’t like to work in. And one of them is gambling.

Several years ago I got a very lucrative offer from a gambling company, both well paid and technically challenging. But I rejected it, because I didn’t want to contribute to abusing people’s weaknesses for the sake of getting their money. And no, I’m not a raging Marxist, but gambling is bad. You may argue that it’s a necessary vice and people need it to suppress other internal struggles, but I’m not buying that as a motivator.

I felt it’s unethical to write code that does that. Like I feel it’s unethical to profile users’ behaviours and “read” their emails in order to target ads, or to write bots to disseminate fake news.

A few months ago I was part of the campaign HQ for a party in a parliamentary election. Cambridge Analytica had already become famous for “delivering Brexit and Trump’s victory”, so using voters’ data to target messages at them sounded like the new cool thing. As head of IT & data, I rejected this approach, because it would be unethical to bait unsuspecting users into taking dumb tests in order to provide us with Facebook tokens. Yes, we didn’t have any money to hire Cambridge Analytica-like companies, but even if we had, would “outsourcing” the dubious practice have changed anything? If you pay someone to trick users into unknowingly giving up their personal data, it’s as if you did it yourself.

This could be a very long post about technology and ethics. But it won’t be, as this is a technical blog, not a philosophical one. It won’t be about philosophy – for interesting takes on the matter you can listen to Damon Horowitz’s TED talk or even go through all of Michael Sandel’s Justice lectures at Harvard. Nor will it be about how companies should be ethical (e.g. by following the ethical design manifesto).

Instead, it will be a short post focusing on developers and their ethical choices.

I think we have the freedom to be ethical – there’s so much demand on the job market that rejecting an offer, refusing to do something, or leaving a company for ethical reasons is something we have the luxury to do without compromising our well-being. When asked to do something unethical, we can refuse (several years ago I was asked to take part in some shady interactions related to a potential future government contract, which I refused to do). When offered jobs that are slightly better paid but would have us build abusive technology, we can turn the offer down. When a new feature requires us to breach people’s privacy, we can argue it, and ultimately not do it.

But in order to start making these ethical choices, we have to start thinking about ethics. To put ourselves in context. We, developers, are building the world of tomorrow (it sounds grandiose, but we know it’s way more mundane than that). We are the “tools” with which future products will be shaped. And yes, that’s true even for the average back-office system of an insurance company (which allows for raising the insurance for pre-existing conditions), and true for boring banking software (which allows mortgages way beyond the actual coverage the bank has), and so on.

Are these decisions ours to make? Isn’t it legislators who should define what’s allowed and what isn’t? We are just building whatever they tell us to build. Forgive me the far-fetched analogy, but Nazi Germany was an anti-humanity machine built on people who “just followed orders”. Yes, if we refuse, someone else may come along and do it, but collective ethics gets built over time.

As Hannah Arendt put it: “The sad truth is that most evil is done by people who never make up their minds to be good or evil.” We may think that as developers we don’t have a say. But without us, no software can be built. So through our individual ethical stances, a given piece of unethical software may never get built or never succeed, and that’s a stance worth considering, especially when it costs us next to nothing.

The post Developers and Ethics appeared first on Bozho's tech blog.

A kindly lesson for you non-techies about encryption

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/a-kindly-lesson-for-you-non-techies.html

The following tweets need to be debunked:

The answer to John Schindler’s question is:

every expert in cryptography doesn’t know this

Oh, sure, you can find a fringe wacko who also knows crypto and agrees with you, but all the sane members of the security community will not.

Telegram is not trustworthy because it’s partially closed-source. We can’t see how it works. We don’t know if they’ve made accidental mistakes that can be hacked. We don’t know if they’ve been bribed by the NSA or Russia to put backdoors in their program. In contrast, PGP and Signal are open-source. We can read exactly what the software does. Indeed, thousands of people have been reviewing their software looking for mistakes and backdoors. Being open-source doesn’t automatically make software better, but it does make hiding secret backdoors much harder.

Telegram is not trustworthy because we aren’t certain the crypto is done properly. Signal, and especially PGP, are done properly.

The thing about encryption is that, when done properly, it works. Neither the NSA nor the Russians can break properly encrypted content. There’s no such thing as “military grade” encryption that is better than consumer grade. There’s only encryption that nobody can hack vs. encryption that your neighbor’s teenage kid can easily hack. Those scenes in TV/movies about breaking encryption are as realistic as sound in space: good for dramatic presentation, but not how things work in the real world.

In particular, end-to-end encryption works. Sure, in the past, such apps only encrypted as far as the server, so whoever ran the server could read your messages. Modern chat apps, though, are end-to-end: the servers have absolutely no ability to decrypt what’s on them, unless they can get the decryption keys from the phones. But some tasks, like encrypted messages to a group of people, can be hard to do properly.
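To make “the servers have absolutely no ability to decrypt” concrete, here’s a toy sketch using PyNaCl’s public-key Box. Everything here is illustrative; real messengers layer key ratcheting, identity verification, and group protocols on top of this basic idea:

    # Toy end-to-end encryption: the relay only ever handles ciphertext.
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()  # private keys never leave the endpoints
    bob = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # A server in the middle sees only opaque bytes; without a private key
    # it cannot decrypt them.
    relayed = bytes(ciphertext)

    # Bob decrypts with his private key and Alice's public key.
    assert Box(bob, alice.public_key).decrypt(relayed) == b"meet at noon"

The parts the sketch glosses over — distributing public keys without tampering, and doing this efficiently for groups — are exactly where the “hard to do properly” caveat above comes in.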

Thus, in contrast to what John Schindler says: while we techies have doubts about Telegram, we don’t believe Russian authorities have access to Signal and PGP messages.

Snowden hatred has become the anti-vax of crypto. Sure, there’s no particular reason to trust Snowden — people should really stop treating him as some sort of privacy-Jesus. But there’s no particular reason to distrust him, either. His bland statements on crypto are indistinguishable from any other crypto enthusiast’s statements. If he’s a Russian pawn, then so too is the bulk of the crypto community.

With all this said, using Signal doesn’t make you perfectly safe. The person you are chatting with could be a secret agent — especially in group chat. There could be cameras/microphones in the room where you are using the app. The Russians can also hack into your phone, and likewise eavesdrop on everything you do with the phone, regardless of which app you use. And they probably have hacked specific people’s phones. On the other hand, if the NSA or Russians were widely hacking phones, we’d detect that this was happening. We haven’t.

Signal is therefore not a guarantee of safety, because nothing is, and if your life depends on it, you can’t trust any simple advice like “use Signal”. But, for the bulk of us, it’s pretty damn secure, and I trust neither the Russians nor the NSA are reading my Signal or PGP messages.

At first blush, this @20committee tweet appears to be non-experts opining on things outside their expertise. But in reality, it’s just obtuse partisanship, where truth and expertise don’t matter. Nothing you or I say can change some people’s minds on this matter, no matter how much our expertise gives weight to our words. This post is instead for bystanders, who don’t know enough to judge whether these crazy statements have merit.


Bonus:

So let’s talk about “every crypto expert”. It’s, of course, impossible to speak for every crypto expert. It’s like saying that the consensus among climate scientists is that mankind is warming the globe, while at the same time ignoring the widespread disagreement on how much warming that will be.

The same is true here. You’ll get a widespread different set of responses from experts about the above tweet. Some, for example, will stress my point at the bottom that hacking the endpoint (the phone) breaks all the apps, and thus justify the above tweet from that point of view. Others will point out that all software has bugs, and it’s quite possible that Signal has some unknown bug that the Russians are exploiting.

So I’m not attempting to speak for what all experts might say here in the general case, or for the long lectures they could give. I am, though, pointing out the basics that virtually everyone agrees on: the consensus that open-source, properly implemented crypto works.

Raspberry Jam round-up: April

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/raspberry-jam-round-up-april/

In case you missed it: in yesterday’s post, we released our Raspberry Jam Guidebook, a new Jam branding pack and some more resources to help people set up their own Raspberry Pi community events. Today I’m sharing some insights from Jams I’ve attended recently.

Raspberry Jam round-up April 2017

Preston Raspberry Jam

The Preston Jam is one of the most long-established Jams, and it recently ran its 58th event. It has achieved this by running like clockwork: on the first Monday evening of every month, without fail, the Jam takes place. A few months ago I decided to drop in to surprise the organiser, Alan O’Donohoe. The Jam is held at the Media Innovation Studio at the University of Central Lancashire. The format is quite informal, and it’s very welcoming to newcomers. The first half of the event allows people to mingle, and beginners can get support from more seasoned makers. I noticed a number of parents who’d brought their children along to find out more about the Pi and what can be done with it. It’s a great way to find out for real what people use their Pis for, and to get pointers on how to set up and where to start.

About half way through the evening, the organisers gather everyone round to watch a few short presentations. At the Jam I attended, most of these talks were from children, which was fantastic to see: Josh gave a demo in which he connected his Raspberry Pi to an Amazon Echo using the Alexa API, Cerys talked about her Jam in Staffordshire, and Elise told everyone about the workshops she ran at MozFest. All their talks were really well presented. The Preston Jam has done very well to keep going for so long and so consistently, and to provide such great opportunities and support for young people like Josh, Cerys and Elise to develop their digital making abilities (and presentation skills). Their next event is on Monday 1 May.



Manchester Raspberry Jam and CoderDojo

I set up the Manchester Jam back in 2012, around the same time that the Preston one started. Back then, you could only buy one Pi at a time, and only a handful of people in the area owned one. We ran a fairly small event at the local tech community space, MadLab, adopting the format of similar events I’d been to, which was very hands-on and project-based – people brought along their Pis and worked on their own builds. I ran the Jam for a year before moving to Cambridge to work for the Foundation, and I asked one of the regular attendees, Jack, if he’d run it in future. I hadn’t been back until last month, when Clare and I decided to visit.

The Jam is now held at The Shed, a digital innovation space at Manchester Metropolitan University, thanks to Darren Dancey, a computer science lecturer who claims he taught me everything I know (this claim is yet to be peer-reviewed). Jack, Darren, and Raspberry Pi Foundation co-founder and Trustee Pete Lomas put on an excellent event. They have a room for workshops, and a space for people to work on their own projects. It was wonderful to see some of the attendees from the early days still going along every month, as well as lots of new faces. Some of Darren’s students ran a Minecraft Pi workshop for beginners, and I ran one using traffic lights with GPIO Zero and guizero.
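If you want to try that workshop exercise yourself, here is a minimal sketch combining GPIO Zero’s TrafficLights class with a guizero button; the BCM pin numbers are illustrative, not necessarily the ones we used on the day:

    # A button-driven traffic-light sequence with GPIO Zero and guizero.
    from itertools import cycle
    from gpiozero import TrafficLights
    from guizero import App, PushButton

    lights = TrafficLights(red=25, amber=8, green=7)
    states = cycle([["red"], ["red", "amber"], ["green"], ["amber"]])

    def next_state():
        # Advance through the UK light sequence on each button press.
        lights.off()
        for colour in next(states):
            getattr(lights, colour).on()  # lights.red, lights.amber, lights.green

    app = App(title="Traffic lights")
    PushButton(app, command=next_state, text="Next")
    app.display()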



The next day, we went along to Manchester CoderDojo, a monthly event for young people learning to code and make things. The Dojo is held at The Sharp Project, and thanks to the broad range of skills of the volunteers, they provide a range of different activities: Raspberry Pi, Minecraft, LittleBits, Code Club Scratch projects, video editing, game making and lots more.

Raspberry Jam round-up April 2017

Manchester CoderDojo’s next event is on Sunday 14 May. Be sure to keep an eye on mcrraspjam.org.uk for the next Jam date!

CamJam and Pi Wars

The Cambridge Raspberry Jam is a big event that runs two or three times a year, with quite a different format to the smaller monthly Jams. They have a lecture theatre for talks, a space for workshops, lots of show-and-tell, and even a collection of retailers selling Pis and accessories. It’s a very social event, and always great fun to attend.

The organisers, Mike and Tim, who wrote the foreword for the Guidebook, also run Pi Wars: the annual Raspberry Pi robotics competition. Clare and I went along to this year’s event, where we got to see teams from all over the country (and even one from New Mexico, brought by one of our Certified Educators from Picademy USA, Kerry Bruce) take part in a whole host of robotic challenges. A few of the teams I spoke to have been working on their robots at their local Jams throughout the year. If you’re interested in taking part next year, you can get a team together now and start to make a plan for your 2018 robot! Keep an eye on camjam.me and piwars.org for announcements.

PiBorg on Twitter

Ely Cathedral has surprisingly good straight line speed for a cathedral. Great job Ely Makers! #PiWars

Raspberry Jam @ Pi Towers

As well as working on supporting other Jams, I’ve also been running my own for the last few months. Held at our own offices in Cambridge, Raspberry Jam @ Pi Towers is a monthly event for people of all ages. We run workshops, show-and-tell and other practical activities. If you’re in the area, our next event is on Saturday 13 May.

Raspberry Jamboree

In 2013 and 2014, Alan O’Donohoe organised the Raspberry Jamboree, which took place in Manchester to mark the first and second Raspberry Pi birthdays – and it’s coming back next month, this time organised by Claire Dodd Wicher and Les Pounder. It’s primarily an unconference, so the talks are given by the attendees and arranged on the day, which is a great way to allow anyone to participate. There will also be workshops and practical sessions, so don’t miss out! Unless, like me, you’re going to the new Norwich Jam instead…

Start a Jam near you

If there’s no Jam where you live, you can start your own! Download a copy of the brand new Raspberry Jam Guidebook for tips on how to get started. It’s not as hard as you’d think! And we’re on hand if you need any help.

Visiting Jams and hearing from Jam organisers are great ways for us to find out how we can best support our wonderful community. If you run a Jam and you’d like to tell us about what you do, or share your success stories, please don’t hesitate to get in touch. Email me at [email protected], and we’ll try to feature your stories on the blog in future.

The post Raspberry Jam round-up: April appeared first on Raspberry Pi.

How Backblaze Got Started: The Problem, The Solution, and the Stuff In-Between

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-backblaze-got-started/

How Backblaze Got Started

Backblaze will be celebrating its tenth anniversary this month. As I was reflecting on our path here, I realized that some of the issues we encountered along the way are universal to most startups. With that in mind, I’ll write a series of blog posts focused on the entrepreneurial journey. This post is the first, and it focuses on the birth of Backblaze. I hope you stick around and enjoy the Backblaze story along the way.

What’s Your Problem?

Entrepreneurs build things to solve problems – their own or someone else’s. That problem may be a lack of something that you wish existed or something broken you want to fix. Here’s the problem that kicked off Backblaze and how it got noticed:

Brian Wilson, now co-founder and CTO of Backblaze, had been doing tech support for friends and family, as many of us did. One day he got a panicked call from one of those friends, Lise.

Lise: “You’ve got to help me! My computer crashed!”
Brian: “No problem – we’ll get you a new laptop; where’s your backup?”
Lise: “Look, what I don’t need now is a lecture! What I need is for you to get my data back!”

Brian was religious about backing up data and had been for years. He burned his data onto a CD and a DVD, diversifying the media types he used. During the process, Brian periodically read some files from each of the discs to test his backups. Finally, Brian put one disc in his closet and mailed another to his brother in New Mexico to have it offsite. Brian did this every week!

Brian was obviously a lot more obsessive than most of us.

Lise, however, had the opposite problem. She had no backup. And she wasn’t alone.

Whose Problem Is It?

A serious pain-point for one person may turn out to be a serious pain-point for millions.

At this point, it would have been easy just to say, “Well that sucks” or blame Lise. “User error” and “they just don’t get it” are common refrains in tech. But blaming the user doesn’t solve the problem.

Brian started talking to people and asking, “Who doesn’t back up?” He also talked with me and some of the others that are now Backblaze co-founders, and we asked the same question to others.

It turned out that most people didn’t back up their computers. Lise wasn’t the anomaly; Brian was. And that was a problem.

Over the previous decade, everything had gone digital. Photos, movies, financials, taxes, everything. A single crashed hard drive could cause you to lose everything. And drives would indeed crash. Over time everything would be digital, and society as a whole would permanently lose vast amounts of information. Big problem.

Surveying the Landscape

There’s a well-known adage that “Having no competition may mean you have no market.” The corollary I’d add is that “Having competition doesn’t mean the market is full.”

Weren’t There Backup Solutions?

Yes. Plenty. In fact, we joked that we were thirty years too late to the problem.

“Solutions Exist” does not mean “Problem Solved.” Even though many backup solutions were available, most people did not back up their data.

What Were the Current Solutions?

At first glance, it seems clear we’d be competing with other backup services. But when I asked people “How do you back up your data today?”, here were the answers I heard most frequently:

  • Copy ‘My Documents’ directory to an external drive before going on vacation
  • Copy files to a USB key
  • Send important files to Gmail
  • Pray
  • And “Do I need to back up?” (I’ll talk about this one in another post.)

Sometimes people would mention a particular backup app or service, but this was rare.

What Was Wrong With the Current Solutions?

Existing backup systems had various issues. They would not back up all of the users’ data, for example. They would only back up periodically and thus didn’t have current data. Most solutions were not off-site, so fire, theft or another catastrophe could still wipe out data. Some weren’t automatic, which left more room for neglect and user error.

“Solutions Exist” does not mean “Problem Solved.”

In fairness, some backup products and services had already solved some of these issues. But few people used those products. I talked with a lot of people and asked, “Why don’t you use some backup software/service?”

The most common answer was, “I tried it…and it was too hard and too expensive.” We’d learn a lot more about what “hard” and “expensive” meant along the way.

Finding and Testing Solutions

Focus is critical for execution, but when brainstorming solutions, go broad.

We considered a variety of approaches to help people back up their files.

Peer-to-Peer Backup: This was the original idea. Two people would install our backup software which would send each person’s data to the other’s computer. This idea had a lot going for it: The data would be off-site; It would work with existing hardware; It was mildly viral.

Local Drive Backup: The backup software would send data to a USB hard drive. Manually copying files to an external drive was most people’s idea of backing up. However, no good software existed at the time to make this easy. (Time Machine for the Mac hadn’t launched yet.)

Backup To Online Services: The most unconventional idea of the bunch, this one stemmed from noticing that online services provided free storage: Flickr for photos; Google Docs for documents and spreadsheets; YouTube for movies; and so on. We considered writing software that would back up each file type to the service that supported it and back up the rest to Gmail.

Backup To Our Online Storage: We’d create a service that backed up data to the cloud. It may seem obvious now, but backing up to the cloud was just one of a variety of possibilities at the time. Also, initially, we didn’t mean ‘our’ storage. We assumed we would use S3 or some other storage provider.

The goal was to come up with a solution that was easy.

We put each solution we came up with through its paces. The goal was to come up with a solution that was easy: Easy for people to use. Easy to understand.

Peer-to-peer backup? First, we’d have to explain what it is (no small task) and then get buy-in from the user to host a backup on their machine. That meant having enough space on each computer, and both needed to be online at the same time. After our initial excitement with the idea, we came to the conclusion that there were too many opportunities for things to go wrong. Verdict: Not easy.

Backup software? Not off-site, and required the purchase of a hard drive. If the drive broke or wasn’t connected, no backup occurred. A useful solution but again, too many opportunities for things to go wrong. Verdict: Not easy.

Back up to online services? Users needed accounts at each, and none of the services supported all file types, so your data ended up scattered all over the place. Verdict: Not easy.

Back up to our online storage? The backup would be current, kept off-site, and updated automatically. It was easy for people to use, and easy to understand. Verdict: Easy!

Getting To the Solution

Don’t brainstorm forever. Problems don’t get solved on ideas alone.

We decided to back up to our online storage! It met many of the key goals. We started building.

Attempt #1

We built a backup software installer, a way to pick files and folders to back up, and the underlying engine that copies the files to remote storage. We tried to make it comfortable by minimizing clicks and questions.

Fail #1

This approach seemed easy enough to use, at least for us, but it turned out not to be for our target users.

We thought about the original answer we heard: “I tried it…and it was too hard and too expensive.”

“Too hard” is not enough information. What was too hard before? Were the icons too small? The text too long? A critical feature missing? Were there too many features to wade through? Or something else altogether?

Dig deeper into users’ actual needs

We reached out to a lot of friends, family, and co-workers and held some low-key pizza and beer focus groups. Those folks walked us through their backup experience. While there were a lot of difficult areas, the most complicated part was setting up what would be backed up.

“I had to get all the files and folders on my computer organized; then I could set up the backup.”

That’s like cleaning the garage. Sounds like a good idea, but life conspires to get in the way, and it doesn’t happen.

We had to solve that or users would never think of our service as ‘easy.’

Takeaway: Dig deeper into users’ actual needs.

Attempt #2

Trying to remove the need to “clean the garage,” we asked folks what they wanted to be backed up. They told us they wanted their photos, movies, music, documents, and everything important.

We listened and tried making it easier. For our second attempt at a backup solution, we pre-selected everything ‘important’: we selected the documents folder, then went one step further by finding all the photos, movies, music, and other common file types on the computer. Now users didn’t have to select files and folders – we would do it for them!

Fail #2

More pizza and beer user testing had people ask, “But how do I know that my photos are being backed up?”

We told them, “we’re searching your whole computer for photos.”

“But my photos are in this weird format: .jpg, are those included? .gif? .psd?”

We learned that the backup process felt nebulous to users since they wouldn’t know what exactly would be selected. Users would always feel uncomfortable – and uncomfortable isn’t ‘easy.’

Takeaway: No, really, keep digging deeper into users’ actual needs. Identify their real problem, not the solution they propose.

Attempt #3

We took a step back and asked, “What do we know?”

Users want all of their “important” files backed up, but it can be hard for them to identify which files those are, and having us guess makes them uncomfortable. So, forget the tech. What experience would be the right one?

Our answer was that the computer would just magically be backed up to the cloud.

Then one of our co-founders, Tim, wondered, “What if we didn’t ask any questions and just backed up everything?”

At first, we all looked at him askance. Back up everything? That was a lot of data. How would that be possible? But we came back to, “Is this the right answer? Yes. So let’s see if we can make it work.”

So we flipped the entire backup approach on its head.

We didn’t ask users, “What do you want to have backed up?” We asked, “What do you NOT want to be backed up?” If you didn’t know, we’d back up all your data. It took away the scary “pick your files” question and made people comfortable that all their necessary data was being backed up.
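
To make the inversion concrete, here is a minimal sketch of exclusion-based selection. This is an illustration only, not Backblaze’s actual code, and the exclusion list is hypothetical:

-- A minimal sketch of the inverted model: everything is included by
-- default, and only explicit exclusions are skipped. Illustrative
-- only; not Backblaze's actual code, and the exclusion list is made up.
local exclusions = { "/tmp/", "/Library/Caches/" }

local function should_back_up(path)
  for _, prefix in ipairs(exclusions) do
    if path:sub(1, #prefix) == prefix then
      return false  -- the user explicitly opted this location out
    end
  end
  return true  -- the default answer is always "yes, back it up"
end

print(should_back_up("/Users/lise/photos/cat.jpg"))  --> true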

We ran that experience by users, and their surprised response was, “Really, that’s it?” Hallelujah.

Success.

Takeaway: Keep digging deeper. Don’t let the tech get in the way of understanding the real problem.

Pricing

Pricing isn’t a side-note – it’s part of the product. Understand how customers will perceive your pricing.

We had developed a solution that was easy to use and easy to understand. But could we make it easy to afford? How much do we charge?

We would be storing a lot of data for each customer. The more data they needed to store, the more it would cost us. We planned to put the data on S3, which charged $0.15/GB/month. So it would seem logical to follow that same pricing model.

People thought of the value of the service rather than an amount of storage.

People had no idea how much data they had on their hard drive, let alone how much of it needed to be backed up. Worse, they could be off by 1000x if they mixed up megabytes and gigabytes – and some did.

We had to solve that too, or users would never think of our service as ‘easy.’

I asked everyone I could find: “If we were to provide you a service that automatically would backup all of the data on your computer over the internet, what would that be worth to you?”

What I heard back was a bell-curve:

  • A small number of people said, “$0. It should be free. Everything on the net is free!”
  • A small number of people said, “$50 – $100/month. That’s incredibly valuable!”
  • But by far the majority said, “Hmm. If it were $5/month, that’d be a no-brainer.”

A few interesting takeaways:

  • Everyone assumed it would be a monthly charge even though I didn’t ask, “What would you pay per month?”
  • No one said, “I’d pay $x/GB/month,” so people thought of the value of the service rather than an amount of storage.
  • There may have been opportunities to offer a free service and monetize it in other ways, or to charge $50 – $100/month/user, but those markets were small.
  • At $5/month, there was a significant slice of the population that was excited to use it.

Conclusion on the Solution

Over and over again we heard, “I tried backing up, but it was too hard and too expensive.”

After really understanding what was complicated, we finally got our real solution: An unlimited online backup service that would back up all your data automatically and charge just $5/month.

Easy to use, easy to understand, and easy to afford. Easy in the ways that mattered to the people using the service.

Looking back, things often seem obvious. But we learned a lot along the way:

  • Having competition doesn’t mean the market is full. Just because solutions exist doesn’t mean the problem is solved.
  • Don’t brainstorm forever. Problems don’t get solved on ideas alone. Brainstorm options, but don’t get stuck in the brainstorming phase.
  • Dig deeper into users’ actual needs. Then keep digging. Don’t let your knowledge of tech get in the way of understanding the user. And be willing to shift course as you learn more.
  • Pricing isn’t a side-note. It’s part of the product. Understand how customers will perceive your pricing.

Just because we knew the right solution didn’t mean that it was possible. I’ll talk about that, along with launching, getting early traction, and more, in future posts. What other questions do you have? Leave them in the comments.

The post How Backblaze Got Started: The Problem, The Solution, and the Stuff In-Between appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security Orchestration and Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/security_orches.html

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers – sometimes with the addition of machine learning or other artificial intelligence techniques – and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them – security orchestration, not automation.

This isn’t just a choice of words – it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this change in thinking further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems – now part of IBM Security – allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the ability and the capability to exercise initiative effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
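
As a toy sketch, with every function name a hypothetical placeholder, an OODA-style loop that keeps a human in the decide step has this shape:

-- A toy sketch of the OODA shape, with a human kept in the loop at
-- the decide step. All function names are hypothetical placeholders.
while incident_is_open() do
  local data    = observe()                      -- gather telemetry
  local picture = orient(data)                   -- turn data into understanding
  local action  = decide_with_responder(picture) -- a human chooses
  act(action)                                    -- execute, then observe again
end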

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt a level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model – the dashboards, the reports, the collaboration – that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

NEON PHASE

Post Syndicated from Eevee original https://eev.ee/blog/2017/01/21/neon-phase/

It all started after last year’s AGDQ, when I lamented having spent the entire week just watching speedruns instead of doing anything, and thus having lost my rhythm for days afterwards.

This year, several friends reminded me of this simultaneously, so I begrudgingly went looking for something to focus on during AGDQ. I’d already been working on Isaac’s Descent HD, so why not keep it up? Work on a video game while watching video games.

Working on a game for a week sounded an awful lot like a game jam, so I jokingly tweeted about a game jam whose express purpose was to not completely waste the week staring at a Twitch stream. Then someone suggested I make it an actual jam on itch.io. Then Mel asked to do a game with me.

And so, thanks to an almost comical sequence of events, we made NEON PHASE — a half-hour explorey platformer.

The game

The game is set in the Flora universe, as is everything Mel gets their hands on. (I say this with all the love in the world. ♥ Anyway, my games are also set in the Flora universe, so who am I to talk.)

I started out by literally copy-pasting the source code for Isaac’s Descent HD, the game I’ve been making with LÖVE as an extension of an earlier PICO-8 game I made. It’s not terribly far yet, but it’s almost to the point of replicating the original game, which meant I had a passable platformer engine that could load Tiled maps and had some notion of an “actor”. We both like platformers, anyway, so a platformer it would be.

We probably didn’t make the best use of the week. I think it took us a couple days to figure out how to collaborate, now that we didn’t have the PICO-8’s limitations and tools influencing our direction. Isaac is tile-based, so I’d taken for granted that this game would also be tile-based, whereas Mel, being an illustrator, prefers to draw… illustrations. I, an idiot, decided the best way to handle this would be to start cutting the illustrations into tiles and then piecing them back together. It took several days before I realized that oh, hey, Mel could just draw the entire map as a single image, and I could make the player run around on that.

So I did that. Previously, collision had been associated only with tiles, but it wasn’t too hard to just draw polygons right on the map and use those for collision. (Bless Tiled, by the way. It has some frustrating rough edges due to being a very general-purpose editor, but I can’t imagine how much time it would take me to write my own map editor that can do as much.)

And speaking of collision, while I did have to dig into a few thorny bugs, I’m thrilled with how well the physics came out! The collision detection I’d written for Isaac’s Descent HD was designed to support arbitrary polygons, even though so far I’ve only had square tiles. I knew the whole time I was making my life a lot harder, but I really didn’t want to restrict myself to rectangles right out of the gate. It paid off in NEON PHASE — the world is full of sloping, hilly terrain, and you can run across it fairly naturally!

I’d also thought at first that the game would be a kind of actiony platformer, which is why the very first thing you get greatly resembles a weapon, but you don’t end up actually fighting anything. It turns out enemy behavior takes a bit of careful design and effort, and I ended up busy enough just implementing Mel’s story. Also, dropping fighting meant I didn’t have to worry about death, which meant I didn’t have to worry about saving and loading map state, which was great news because I still haven’t done any of that yet.

It’s kind of interesting how time constraints can influence game design. The game has little buildings you can enter, but because I didn’t have saving/loading implemented, I didn’t want to actually switch maps. Instead, I made the insides of buildings a separate layer in Tiled. And since I had both layers on hand, I just drew the indoor layer right on top of the outdoor layer, which made kind of a cool effect.

A side effect of this approach was that you could see the inside of all buildings (well, within the viewport) while you were inside one, since they all exist in the same space. We ended up adding a puzzle and a couple minor flavor things that took advantage of this.

If I had had saving/loading of maps ready to go, I might have opted instead for a more traditional RPG-like approach, where the inside of each building is on its own map (or appears to be) and floats in a black void.

Another thing I really liked was the glitch effect, which I wrote on a whim early on because I’ve had shaders on the brain lately. We were both a little unsure about it, but in the end Mel wrote it into the plot and I used it more heavily throughout, including as a transition effect between indoors/outdoors.

Mel was responsible for art and music and story, so the plot unfortunately wasn’t finalized until the last day of the jam. It ended up being 30 pages of dialogue. Sprinkled throughout were special effects that sound like standard things you’d find in any RPG dialogue system — menus, branches, screen fades, and the like — but that I just hadn’t written yet.

The dialogue system was downright primitive when we started; I’d only written it as a brief proof of concept for Isaac, and it had only gotten as far as showing lines of wrapped text. It didn’t even know how to deal with text that was too long for the box. Hell, it didn’t even know how to exit the dialogue and return to the game.

So when I got the final script, I went into a sort of mad panic, doing my best to tack on features in ways I wouldn’t regret later and could maybe reuse. I got pretty far, but when it became clear that we couldn’t possibly have a finished product in time, I invoked my powers as jam coordinator and pushed the deadline back by 24 hours. 48 hours. 54⅓ hours. Oh, well.

The final product came out pretty well, modulo a couple release bugs, ahem. I’ve been really impressed with itch.io, too — it has a thousand twiddles, which makes me very happy, plus graphs of how many people have been playing our game and how they found it! Super cool.

Lessons learned

Ah, yes. Here’s that sweet postmortem content you computer people crave.

Don’t leave debug code in

There’s a fairly long optional quest in the game that takes a good few minutes to complete, even if you teleport everywhere instantly. (Ahem.) Finishing the quest kicks off a unique cutscene that involves a decent bit of crappy code I wrote at the last minute. I needed to test it a lot. So, naturally, I added a dummy rule to the beginning of the relevant NPC’s dialogue that just skips right to the end.

I forgot to delete that rule before we released.

Whoops!

The game even has a debug mode, so I could’ve easily made the rule only work then. I didn’t, and it possibly spoiled the whole sidequest for a couple dozen people. My bad.

Try your game at other framerates

The other game-breaking bug we had in the beginning was that some people couldn’t make jumps. For some, it was only when indoors; for others, it was all the time. The common thread was… low framerates.

Why does this matter? Well! When you jump, your upwards velocity is changed to a specific value, calculated to make your jump height slightly more than two tiles. The problem is, gravity is applied after you get jump velocity but before you actually move. It looks like this:

self.velocity = self.velocity + gravity * dt

Reasonable, right? Gravity is acceleration, so you multiply it by the amount of time that’s passed to get the change to velocity.

Ah… but if your framerate is low, then dt will be relatively large, and gravity will eat away a relatively large chunk of your upwards velocity. On the frame you jump, this effectively reduces your initial jump speed. If your framerate is low enough, you’ll never be able to jump as high as intended.

One obvious fix would be to rearrange the order things happen, so gravity doesn’t come between jumping and movement. I was wary of doing this as an emergency fix, though, because it would’ve taken a bit of rearchitecturing and I wasn’t sure about the side effects. So instead, I made a fix that’s worth having anyway: when the frame time is too long, I slice up dt and do multiple rounds of updating. Now even if the game draws slowly, it plays at the right speed.
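
Here’s a sketch of that fix in LÖVE; MAX_STEP and update_physics are assumed names, not the game’s real ones:

-- Sketch of the dt-slicing fix: one long frame becomes several short
-- physics updates, so the simulation behaves the same at any framerate.
-- MAX_STEP and update_physics are assumed names, not the game's own.
local MAX_STEP = 1/60

function love.update(dt)
  while dt > MAX_STEP do
    update_physics(MAX_STEP)
    dt = dt - MAX_STEP
  end
  update_physics(dt)
end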

This was really easy to discover once I knew to look; all I had to do was add a sleep() in the update or draw loops to artificially lower the framerate. I even found a second bug, which was that you move slowly at low framerates — much like with jumping, your walk speed is capped at a maximum, then friction lowers it, then you actually move.

I also had problems with framerates that were too high, which took me completely by surprise. Your little companion flips out and jitters all over the place or even gets stuck, and jumping just plain doesn’t work most of the time. The problems here were much simpler. I was needlessly rounding Chip’s position to the nearest pixel, so if dt was very small, Chip would only try to move a fraction of a pixel per frame and never get anywhere; I fixed that by simply not rounding.
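
In sketch form, with a made-up actor table, the fix amounts to keeping full precision in the simulation and rounding only at draw time:

-- Sketch: positions stay fractional so sub-pixel movement accumulates;
-- rounding happens only when drawing. The actor table is illustrative.
local function update_actor(actor, dt)
  actor.x = actor.x + actor.vx * dt  -- no rounding here
  actor.y = actor.y + actor.vy * dt
end

local function draw_actor(actor)
  -- round at the last possible moment, purely for crisp pixels
  love.graphics.draw(actor.sprite, math.floor(actor.x + 0.5), math.floor(actor.y + 0.5))
end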

The issue with jumping needs a little backstory. One of the problems with sloped terrain is that when you walk up a slope and reach the top, your momentum is still carrying you along the path of the slope, i.e. upwards. I had a lot of problems with launching right off the top of even a fairly shallow hill; it looked goofy and amateurish. My terrible solution was: if you started out on the ground, then after moving, try to move a short distance straight down. If you can’t, because something (presumably the ground) is in the way, then you probably just went over a short bump; move as far as you can downwards so you stick to the ground. If you can move downwards, you just went over a ledge, so abort the movement and let gravity take its course next frame.

The problem was that I used a fixed (arbitrary) distance for this ground test. For very short dt, the distance you moved upwards when jumping was less than the distance I then tried dragging you back down to see if you should stay on the ground. The easy fix was to scale the test distance with dt.
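
A sketch of the scaled ground test, where try_move_down and undo_move are hypothetical stand-ins for the engine’s collision-aware movement:

-- Sketch of the ground-sticking test with the probe distance scaled by
-- dt. try_move_down returns how far the actor actually moved; both
-- helpers are hypothetical names, not the engine's real API.
local PROBE_SPEED = 60  -- assumed pixels per second of downward probing

local function stay_grounded(actor, dt)
  if not actor.was_on_ground then return end
  local probe = PROBE_SPEED * dt           -- scales with dt, never fixed
  local moved = try_move_down(actor, probe)
  if moved >= probe then
    undo_move(actor)  -- nothing underfoot: a ledge, so let gravity act
  end
  -- otherwise the ground blocked us within the probe, so we stick to it
end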

Of course, if you’re jumping, obviously you don’t want to stay on the ground, so I shouldn’t do this test at all. But jumping is an active thing, and staying grounded is currently a passive thing (but shouldn’t be, since it emulates walking rather than sliding), and again I didn’t want to start messing with physics guts after release. I’ll be cleaning a few things up for the next game, I’m sure.

This also turned out to be easy to see once I knew to look — I just turned off vsync, and my framerate shot up to 200+.

Quadratic behavior is bad

The low framerate issue wouldn’t have been quite so bad, except for a teeny tiny problem with indoors. I’d accidentally left a loop in when refactoring, so instead of merely drawing every indoor actor each frame, I was drawing every indoor actor for every indoor actor each frame. I think that worked out to 7225 draws instead of 85. (I don’t skip drawing for offscreen actors yet.) Our computers are pretty beefy, so I never noticed. Our one playtester did comment at the eleventh hour that the framerate dipped very slightly while indoors, but I assumed this was just because indoors requires more drawing than outdoors (since it’s drawn right on top of outdoors) and didn’t investigate.
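
Reconstructed from the description (this is the shape of the bug, not the actual code):

-- The accidental O(n^2) version: every actor is drawn once per actor,
-- so 85 indoor actors produce 85 * 85 = 7225 draw calls each frame.
for _ in ipairs(indoor_actors) do
  for _, actor in ipairs(indoor_actors) do
    actor:draw()
  end
end

-- The intended O(n) version: one draw call per actor, 85 in total.
for _, actor in ipairs(indoor_actors) do
  actor:draw()
end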

Of course, if you play on a less powerful machine, the difference will be rather more noticeable. Oops.

Just Do It

My collision detection relies on the separating axis theorem, which only works for convex polygons. (Convex polygons are ones that have no “dents” in them — you could wrap a rubber band around one and it would lie snug along each face.) The map Mel drew has rolling terrain and caverns with ceilings, which naturally lead to a lot of concave polygons. (Concave polygons are not convex. They have caves!)

I must’ve spent a good few hours drawing collision polygons on top of the map, manually eyeballing the terrain and cutting it up into only convex polygons.

Eventually I got so tired of this that I threw up my hands and added support for concave polygons.

It took me, like, two minutes. Not only does LÖVE have a built-in function for cutting a polygon into triangles (which are always convex), it also has a function for detecting whether a polygon is convex. I already had support for objects consisting of multiple shapes, so all I had to do was plug these things into each other.
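
The glue really is tiny. Here’s a sketch using those built-ins (love.math.isConvex and love.math.triangulate are real LÖVE functions; the wrapper is illustrative):

-- Sketch of the concave-polygon support: convex input passes through
-- unchanged, concave input is cut into triangles (always convex) and
-- treated as a multi-shape object. The wrapper name is illustrative.
local function to_convex_shapes(vertices)
  if love.math.isConvex(vertices) then
    return { vertices }
  end
  -- returns a list of triangles, each as {x1, y1, x2, y2, x3, y3}
  return love.math.triangulate(vertices)
end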

Collision probably would’ve taken much less time if I’d just done that in the first place.

Delete that old code, or maybe not

One of the very first players reported that they’d managed to crash the game right off the bat. It didn’t take long to realize it was because they’d pressed Q, which isn’t actually used in NEON PHASE. It is used in Isaac’s Descent HD, to scroll through the inventory… but NEON PHASE doesn’t use that inventory, and I’d left in the code for handling the keypress, so the game simply crashed.

(This is Lua, so when I say “crash”, I mean “showed a stack trace and refused to play any more”. Slightly better, but only so much.)
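
A cheap way to make unbound keys harmless is a handler table with a guard, so unknown keys fall through instead of reaching stale code. A sketch, with made-up bindings:

-- Sketch of defensive key dispatch: unknown keys simply do nothing,
-- so leftover bindings can't crash the game. Handlers are made up.
local key_handlers = {
  e = function() interact() end,  -- hypothetical "use / talk" action
}

function love.keypressed(key)
  local handler = key_handlers[key]
  if handler then handler() end  -- q (or anything unbound) falls through
end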

So, maybe delete that old code.

Or, wait, maybe don’t. When I removed the debugging sequence break just after release, I also deleted the code for the Q key… and, in a rush, also deleted the code for handling the E key, which is used in NEON PHASE. Rather heavily. Like, for everything. Dammit.

Maybe just play the game before issuing emergency releases? Nah.

Melding styles is easier than you’d think

When I look at the whole map overall, it’s hilarious to me how much the part I designed sticks out. It’s built out of tiles and consists of one large puzzle, whereas the rest of the game is as untiled as you can get and mostly revolves around talking to people.

And yet I don’t think anyone has noticed. It’s just one part of the game with a thing you do. The rest of the game may not have a bunch of wiring puzzles, but enough loose wires are lying around to make them seem fitting. The tiles Mel gave me are good and varied enough that they don’t look like tiles; they just look like they were deliberately made more square for aesthetic or story reasons.

I drew a few of the tiles and edited a few others. Most of the dialogue was written by Mel, but a couple lines that people really like were my own, completely impromptu, invention. No one seems to have noticed. It’s all one game. We didn’t sit down and have a meeting about the style or how to keep it cohesive; I just did stuff when I felt like it, and I naturally took inspiration from what was already there.

People will pay for things if you ask them to

itch.io does something really interesting.

Anything you download is presented as a purchase. You are absolutely welcome to sell things for free, but rather than being an instant download, itch.io treats this as a case of buying for zero dollars.

Why do that? Well, because you are always free to pay more for something you buy on itch, and the purchase dialog has handy buttons for adding a tip.

It turns out that, when presented with a dialog that offers a way to pay money for a free thing, an awful lot of people… paid money! Over a hundred people chipped in a few bucks for our free game, just because itch offered them a button to do so. The vast majority of them paid one of itch’s preset amounts. I’m totally blown away; I knew abstractly that this was possible, but I didn’t really expect it to happen. I’ve never actually sold anything before, either. This is amazing.

Now, granted, we do offer bonuses (concept art and the OST) if you pay $2 or more, at Mel’s request. But consider that I also put my two PICO-8 games on itch, and those have an interesting difference: they’re played in-browser and load automatically right in the page. Instead of a payment dialog, there’s a “support this game” button below the game. They’re older games that most of my audience has probably played already, but they still got a few hundred views between them. And the number of purchases?

Zero.

I’m not trying to criticize or guilt anyone here! I release stuff for free because I want it to be free. I’m just genuinely amazed by how effective itch’s download workflow seems to be. The buttons for chipping in are a natural part of the process of something you’re already doing, so “I might as well” kicks in. I’ve done this myself — I paid for the free m5x7 font I used in NEON PHASE. But something played in-browser is already there, and it takes a much stronger impulse to go out of your way to initiate the process of supporting the game.

Anyway, this is definitely encouraging me to make more things. I’ll probably put my book on itch when I finish it, too.

Also, my book

Speaking of!

If you remember, I’ve been writing a book about game development. Literally, a book about game development — the concept was that I build some games on various free platforms, then write about what I did and how I did it. Game development as a story, rather than a lecture.

I’ve hit a bit of a problem with it, and that problem is with my “real” games — i.e., the ones I didn’t make for the sake of the book. Writing about Isaac’s Descent requires first explaining how the engine came about, which requires reconstructing how I wrote Under Construction, and now we’re at two games’ worth of stuff even before you consider the whole writing a collision engine thing.

Isaac’s Descent HD is poised to have exactly the same problem: it takes a detour through the development of NEON PHASE, so I should talk about that too in some detail.

Both of these games are huge and complex tales already, far too long for a single “chapter”, and I’d already been worrying that the book would be too long.

So! I’m adjusting the idea slightly. Instead of writing about making a bunch of “artificial” games that I make solely for the sake of writing about the experience… I’m cutting it down to just Isaac’s Descent, Isaac’s Descent HD, and the other games in their lineage. That’s already half a dozen games across two platforms, and I think they offer more than enough opportunity to say everything I want.

The overall idea of “talk about making something” is ultimately the same, but I like this refocusing a lot more. It feels a little more genuine, too.

Guess I’ve got a bit of editing to do!

And, finally

You should try out the other games people made for my jam! I can’t believe a Twitter joke somehow caused more than forty games to come into existence that otherwise would not have. I’ve been busy with NEON PHASE followup stuff (like writing this post) and have only barely scratched the surface so far, but I do intend to play every game that was submitted!

NEON PHASE

Post Syndicated from Eevee original https://eev.ee/blog/2017/01/21/neon-phase/

It all started after last year’s AGDQ, when I lamented having spent the entire week just watching speedruns instead of doing anything, and thus having lost my rhythm for days afterwards.

This year, several friends reminded me of this simultaneously, so I begrudgingly went looking for something to focus on during AGDQ. I’d already been working on Isaac’s Descent HD, so why not keep it up? Work on a video game while watching video games.

Working on a game for a week sounded an awful lot like a game jam, so I jokingly tweeted about a game jam whose express purpose was to not completely waste the week staring at a Twitch stream. Then someone suggested I make it an actual jam on itch.io. Then Mel asked to do a game with me.

And so, thanks to an almost comical sequence of events, we made NEON PHASE — a half-hour explorey platformer.

The game

The game is set in the Flora universe, as is everything Mel gets their hands on. (I say this with all the love in the world. ♥ Anyway, my games are also set in the Flora universe, so who am I to talk.)

I started out by literally copy-pasting the source code for Isaac’s Descent HD, the game I’ve been making with LÖVE as an extension of an earlier PICO-8 game I made. It’s not terribly far yet, but it’s almost to the point of replicating the original game, which meant I had a passable platformer engine that could load Tiled maps and had some notion of an “actor”. We both like platformers, anyway, so a platformer it would be.

We probably didn’t make the best use of the week. I think it took us a couple days to figure out how to collaborate, now that we didn’t have the PICO-8’s limitations and tools influencing our direction. Isaac is tile-based, so I’d taken for granted that this game would also be tile-based, whereas Mel, being an illustrator, prefers to draw… illustrations. I, an idiot, decided the best way to handle this would be to start cutting the illustrations into tiles and then piecing them back together. It took several days before I realized that oh, hey, Mel could just draw the entire map as a single image, and I could make the player run around on that.

So I did that. Previously, collision had been associated only with tiles, but it wasn’t too hard to just draw polygons right on the map and use those for collision. (Bless Tiled, by the way. It has some frustrating rough edges due to being a very general-purpose editor, but I can’t imagine how much time it would take me to write my own map editor that can do as much.)

And speaking of collision, while I did have to dig into a few thorny bugs, I’m thrilled with how well the physics came out! The collision detection I’d written for Isaac’s Descent HD was designed to support arbitrary polygons, even though so far I’ve only had square tiles. I knew the whole time I was making my life a lot harder, but I really didn’t want to restrict myself to rectangles right out of the gate. It paid off in NEON PHASE — the world is full of sloping, hilly terrain, and you can run across it fairly naturally!

I’d also thought at first that the game would be a kind of actiony platformer, which is why the very first thing you get greatly resembles a weapon, but you don’t end up actually fighting anything. It turns out enemy behavior takes a bit of careful design and effort, and I ended up busy enough just implementing Mel’s story. Also, dropping fighting meant I didn’t have to worry about death, which meant I didn’t have to worry about saving and loading map state, which was great news because I still haven’t done any of that yet.

It’s kind of interesting how time constraints can influence game design. The game has little buildings you can enter, but because I didn’t have saving/loading implemented, I didn’t want to actually switch maps. Instead, I made the insides of buildings a separate layer in Tiled. And since I had both layers on hand, I just drew the indoor layer right on top of the outdoor layer, which made kind of a cool effect.

A side effect of this approach was that you could see the inside of all buildings (well, within the viewport) while you were inside one, since they all exist in the same space. We ended up adding a puzzle and a couple minor flavor things that took advantage of this.

If I had had saving/loading of maps ready to go, I might have opted instead for a more traditional RPG-like approach, where the inside of each building is on its own map (or appears to be) and floats in a black void.

Another thing I really liked was the glitch effect, which I wrote on a whim early on because I’ve had shaders on the brain lately. We were both a little unsure about it, but in the end Mel wrote it into the plot and I used it more heavily throughout, including as a transition effect between indoors/outdoors.

Mel was responsible for art and music and story, so the plot unfortunately wasn’t finalized until the last day of the jam. It ended up being 30 pages of dialogue. Sprinkled throughout were special effects that sound like standard things you’d find in any RPG dialogue system — menus, branches, screen fades, and the like — but that I just hadn’t written yet.

The dialogue system was downright primitive when we started; I’d only written it as a brief proof of concept for Isaac, and it had only gotten as far as showing lines of wrapped text. It didn’t even know how to deal with text that was too long for the box. Hell, it didn’t even know how to exit the dialogue and return to the game.

So when I got the final script, I went into a sort of mad panic, doing my best to tack on features in ways I wouldn’t regret later and could maybe reuse. I got pretty far, but when it became clear that we couldn’t possibly have a finished product in time, I invoked my powers as jam coordinator and pushed the deadline back by 24 hours. 48 hours. 54⅓ hours. Oh, well.

The final product came out pretty well, modulo a couple release bugs, ahem. I’ve been really impressed with itch.io, too — it has a thousand twiddles, which makes me very happy, plus graphs of how many people have been playing our game and how they found it! Super cool.

Lessons learned

Ah, yes. Here’s that sweet postmortem content you computer people crave.

Don’t leave debug code in

There’s a fairly long optional quest in the game that takes a good few minutes to complete, even if you teleport everywhere instantly. (Ahem.) Finishing the quest kicks off a unique cutscene that involves a decent bit of crappy code I wrote at the last minute. I needed to test it a lot. So, naturally, I added a dummy rule to the beginning of the relevant NPC’s dialogue that just skips right to the end.

I forgot to delete that rule before we released.

Whoops!

The game even has a debug mode, so I could’ve easily made the rule only work then. I didn’t, and it possibly spoiled the whole sidequest for a couple dozen people. My bad.

Try your game at other framerates

The other game-breaking bug we had in the beginning was that some people couldn’t make jumps. For some, it was only when indoors; for others, it was all the time. The common thread was… low framerates.

Why does this matter? Well! When you jump, your upwards velocity is changed to a specific value, calculated to make your jump height slightly more than two tiles. The problem is, gravity is applied after you get jump velocity but before you actually move. It looks like this:

1
self.velocity = self.velocity + gravity * dt

Reasonable, right? Gravity is acceleration, so you multiply it by the amount of time that’s passed to get the change to velocity.

Ah… but if your framerate is low, then dt will be relatively large, and gravity will eat away a relatively large chunk of your upwards velocity. On the frame you jump, this effectively reduces your initial jump speed. If your framerate is low enough, you’ll never be able to jump as high as intended.

One obvious fix would be to rearrange the order things happen, so gravity doesn’t come between jumping and movement. I was wary of doing this as an emergency fix, though, because it would’ve taken a bit of rearchitecturing and I wasn’t sure about the side effects. So instead, I made a fix that’s worth having anyway: when the framerate is too long, I slice up dt and do multiple rounds of updating. Now even if the game draws slowly, it plays at the right speed.

This was really easy to discover once I knew to look; all I had to do was add a sleep() in the update or draw loops to artificially lower the framerate. I even found a second bug, which was that you move slowly at low framerates — much like with jumping, your walk speed is capped at a maximum, then friction lowers it, then you actually move.

I also had problems with framerates that were too high, which took me completely by surprise. Your little companion flips out and jitters all over the place or even gets stuck, and jumping just plain doesn’t work most of the time. The problems here were much simpler. I was needlessly rounding Chip’s position to the nearest pixel, so if dt was very small, Chip would only try to move a fraction of a pixel per frame and never get anywhere; I fixed that by simply not rounding.

The issue with jumping needs a little backstory. One of the problems with sloped terrain is that when you walk up a slope and reach the top, your momentum is still carrying you along the path of the slope, i.e. upwards. I had a lot of problems with launching right off the top of even a fairly shallow hill; it looked goofy and amateurish. My terrible solution was: if you started out on the ground, then after moving, try to move a short distance straight down. If you can’t, because something (presumably the ground) is in the way, then you probably just went over a short bump; move as far as you can downwards so you stick to the ground. If you can move downwards, you just went over a ledge, so abort the movement and let gravity take its course next frame.

The problem was that I used a fixed (arbitrary) distance for this ground test. For very short dt, the distance you moved upwards when jumping was less than the distance I then tried dragging you back down to see if you should stay on the ground. The easy fix was to scale the test distance with dt.

Of course, if you’re jumping, obviously you don’t want to stay on the ground, so I shouldn’t do this test at all. But jumping is an active thing, and staying grounded is currently a passive thing (but shouldn’t be, since it emulates walking rather than sliding), and again I didn’t want to start messing with physics guts after release. I’ll be cleaning a few things up for the next game, I’m sure.

This also turned out to be easy to see once I knew to look — I just turned off vsync, and my framerate shot up to 200+.

Quadratic behavior is bad

The low framerate issue wouldn’t have been quite so bad, except for a teeny tiny problem with indoors. I’d accidentally left a loop in when refactoring, so instead of merely drawing every indoor actor each frame, I was drawing every indoor actor for every indoor actor each frame. I think that worked out to 7225 draws instead of 85. (I don’t skip drawing for offscreen actors yet.) Our computers are pretty beefy, so I never noticed. Our one playtester did comment at the eleventh hour that the framerate dipped very slightly while indoors, but I assumed this was just because indoors requires more drawing than outdoors (since it’s drawn right on top of outdoors) and didn’t investiage.

Of course, if you play on a less powerful machine, the difference will be rather more noticeable. Oops.

Just Do It

My collision detection relies on the separating axis theorem, which only works for convex polygons. (Convex polygons are ones that have no “dents” in them — you could wrap a rubber band around one and it would lie snug along each face.) The map Mel drew has rolling terrain and caverns with ceilings, which naturally lead to a lot of concave polygons. (Concave polygons are not convex. They have caves!)

I must’ve spent a good few hours drawing collision polygons on top of the map, manually eyeballing the terrain and cutting it up into only convex polygons.

Eventually I got so tired of this that I threw up my hands and added support for concave polygons.

It took me, like, two minutes. Not only does LÖVE have a built-in function for cutting a polygon into triangles (which are always convex), it also has a function for detecting whether a polygon is convex. I already had support for objects consisting of multiple shapes, so all I had to do was plug these things into each other.

Collision probably would’ve taken much less time if I’d just done that in the first place.

Delete that old code, or maybe not

One of the very first players reported that they’d managed to crash the game right off the bat. It didn’t take long to realize it was because they’d pressed Q, which isn’t actually used in NEON PHASE. It is used in Isaac’s Descent HD, to scroll through the inventory… but NEON PHASE doesn’t use that inventory, and I’d left in the code for handling the keypress, so the game simply crashed.

(This is Lua, so when I say “crash”, I mean “showed a stack trace and refused to play any more”. Slightly better, but only so much.)

So, maybe delete that old code.

Or, wait, maybe don’t. When I removed the debugging sequence break just after release, I also deleted the code for the Q key… and, in a rush, also deleted the code for handling the E key, which is used in NEON PHASE. Rather heavily. Like, for everything. Dammit.

Maybe just play the game before issuing emergency releases? Nah.
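If I wanted a structural fix rather than mere vigilance, one cheap option is to route input through a table of bindings, so a leftover key is a no-op instead of a crash. A sketch, with made-up handler names:

```lua
-- Keys map to handlers; unbound keys (like Q here) simply do nothing.
local key_handlers = {
    e = function(player) player:interact() end,  -- interact() is made up
}

function love.keypressed(key)
    local handler = key_handlers[key]
    if handler then
        handler(current_player)  -- current_player is also hypothetical
    end
end
```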

Melding styles is easier than you’d think

When I look at the whole map overall, it’s hilarious to me how much the part I designed sticks out. It’s built out of tiles and consists of one large puzzle, whereas the rest of the game is as untiled as you can get and mostly revolves around talking to people.

And yet I don’t think anyone has noticed. It’s just one part of the game with a thing you do. The rest of the game may not have a bunch of wiring puzzles, but enough loose wires are lying around to make them seem fitting. The tiles Mel gave me are good and varied enough that they don’t look like tiles; they just look like they were deliberately made more square for aesthetic or story reasons.

I drew a few of the tiles and edited a few others. Most of the dialogue was written by Mel, but a couple lines that people really like were my own, completely impromptu, invention. No one seems to have noticed. It’s all one game. We didn’t sit down and have a meeting about the style or how to keep it cohesive; I just did stuff when I felt like it, and I naturally took inspiration from what was already there.

People will pay for things if you ask them to

itch.io does something really interesting.

Anything you download is presented as a purchase. You are absolutely welcome to sell things for free, but rather than being an instant download, itch.io treats this as a case of buying for zero dollars.

Why do that? Well, because you are always free to pay more for something you buy on itch, and the purchase dialog has handy buttons for adding a tip.

It turns out that, when presented with a dialog that offers a way to pay money for a free thing, an awful lot of people… paid money! Over a hundred people chipped in a few bucks for our free game, just because itch offered them a button to do so. The vast majority of them paid one of itch’s preset amounts. I’m totally blown away; I knew abstractly that this was possible, but I didn’t really expect it to happen. I’ve never actually sold anything before, either. This is amazing.

Now, granted, we do offer bonuses (concept art and the OST) if you pay $2 or more, at Mel’s request. But consider that I also put my two PICO-8 games on itch, and those have an interesting difference: they’re played in-browser and load automatically right in the page. Instead of a payment dialog, there’s a “support this game” button below the game. They’re older games that most of my audience has probably played already, but they still got a few hundred views between them. And the number of purchases?

Zero.

I’m not trying to criticize or guilt anyone here! I release stuff for free because I want it to be free. I’m just genuinely amazed by how effective itch’s download workflow seems to be. The buttons for chipping in are a natural part of the process of something you’re already doing, so “I might as well” kicks in. I’ve done this myself — I paid for the free m5x7 font I used in NEON PHASE. But something played in-browser is already there, and it takes a much stronger impulse to go out of your way to initiate the process of supporting the game.

Anyway, this is definitely encouraging me to make more things. I’ll probably put my book on itch when I finish it, too.

Also, my book

Speaking of!

If you remember, I’ve been writing a book about game development. Literally, a book about game development — the concept was that I build some games on various free platforms, then write about what I did and how I did it. Game development as a story, rather than a lecture.

I’ve hit a bit of a problem with it, and that problem is with my “real” games — i.e., the ones I didn’t make for the sake of the book. Writing about Isaac’s Descent requires first explaining how the engine came about, which requires reconstructing how I wrote Under Construction, and now we’re at two games’ worth of stuff even before you consider the whole writing a collision engine thing.

Isaac’s Descent HD is poised to have exactly the same problem: it takes a detour through the development of NEON PHASE, so I should talk about that too in some detail.

Both of these games are huge and complex tales already, far too long for a single “chapter”, and I’d already been worrying that the book would be too long.

So! I’m adjusting the idea slightly. Instead of writing about making a bunch of “artificial” games that I make solely for the sake of writing about the experience… I’m cutting it down to just Isaac’s Descent, Isaac’s Descent HD, and the other games in their lineage. That’s already half a dozen games across two platforms, and I think they offer more than enough opportunity to say everything I want.

The overall idea of “talk about making something” is ultimately the same, but I like this refocusing a lot more. It feels a little more genuine, too.

Guess I’ve got a bit of editing to do!

And, finally

You should try out the other games people made for my jam! I can’t believe a Twitter joke somehow caused more than forty games to come into existence that otherwise would not have. I’ve been busy with NEON PHASE followup stuff (like writing this post) and have only barely scratched the surface so far, but I do intend to play every game that was submitted!

Tony Joins The Support Squad

Post Syndicated from Yev original https://www.backblaze.com/blog/tony-joins-support-squad/


As we continue to grow, so does our support team! The latest addition is Tony, who’ll be joining us as a Junior Tech Support Technician. He’ll be helping folks make sure their backups are up and running as expected! Let’s get to know Tony a bit better, shall we?

What is your Backblaze Title?
Junior Technical Support Technician

Where are you originally from?
The Internet, CA

What attracted you to Backblaze?
Great service, great company, great employees. Not pandering, isn’t that what we all want?

What do you expect to learn while being at Backblaze?
Much more in-depth knowledge of cloud services, coding, operating systems… tech in general.

Where else have you worked?
Retail (read: many min. wage mall jobs), Substitute Teacher/Coach/Guest Lecturer, Theatrical Fight Director, Theatrical Electrics Technician, “Glorified Bus Stop Attendant.”

Where did you go to school?
High School: California’s Central Valley. College: San Jose State University. Other: the Internet.

What’s your dream job?
Competitive Table Top Role Player

Favorite place you’ve traveled?
Kauai, Hawaii

Favorite hobby?
Games (board, video, role play, any combination thereof)

Of what achievement are you most proud?
*prepare groans* My wife. Or… getting paid to lightsaber fight on stage.

Star Trek or Star Wars?
Star Wars. See above question.

Coke or Pepsi?
Pepsi brand, but Coke if it’s one or the other to drink.

Favorite food?
I have been called “The Disposal” so . . .

Why do you like certain things?
No idea really. I can name -IT- for some things, but others, even I wonder “why? Why is that good/neat/funny!?” and have no answer for myself. The “things I can identify reasons for” usually involve a good story, obscure references, or multiple entendre, such as mash ups. NOT PUNS.

Anything else you’d like to tell us?
Many things, but most of them are incredibly nerdy fandom theory things that we don’t have time for, so ask me at lunch or during an outing and I will talk ad nauseam. Engage at your own risk.

We shall save our fan theory questions for slow days at the office. Welcome aboard, Tony!

The post Tony Joins The Support Squad appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Helen Nissenbaum on Regulating Data Collection and Use

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/helen_nissenbau.html

NYU’s Helen Nissenbaum gave an excellent lecture at Brown University last month, where she rebutted those who think that we should not regulate data collection, only data use: something she calls “big data exceptionalism.” Basically, this is the idea that collecting the “haystack” isn’t the problem; it’s what is done with it that is. (I discuss this same topic in Data and Goliath, on pages 197-9.)

In her talk, she makes a very strong argument that the problem is one of domination. Contemporary political philosopher Philip Pettit has written extensively about a republican conception of liberty. He defines domination as the extent to which one person has the ability to interfere with the affairs of another.

Under this framework, the problem with wholesale data collection is not that it is used to curtail your freedom; the problem is that the collector has the power to curtail your freedom. Whether they use it or not, the fact that they have that power over us is itself a harm.

Case 228: The Trembling Giant

Post Syndicated from The Codeless Code original http://thecodelesscode.com/case/228

After one of master Kaimu’s lectures, a monk approached
the master and said: I am bored of this endless talk of
coding practices, of tools and techniques. It is said you
know much about artificial intelligence—say something
about that.

Kaimu grabbed the monk in a headlock, held a knife to his
ear, and said: Let me cut away these useless appendages,
that you might see more clearly.

When the monk begged the master to let fall his knife, Kaimu
answered: I cannot, for it is you that holds it. But since
you wish to keep your two ears, tell me what you will part
with instead—two kidneys, two lungs, or two gallons of
blood?

The monk cried: Mercy! I would part with none of these!

Kaimu said: Yet I would leave you your excellent brain! And
excellent it must be, if my lectures can provide it only
boredom! Very well, I shall take two inches of your neck…

As Kaimu pressed the knife into his flesh, the monk said:
This is madness! What good is my brain without my body?

Kaimu laughed and asked: What good is a rule engine without
code to implement it, interfaces to query it, databases to
keep its store of knowledge, or operating systems to make it
all run? And whence comes all this code?

The monk considered this and said dutifully: I should not
seek to build brains until I master the ears.

Kaimu scowled and said: Foolish boy, you are the ears, and
the eyes, and the hands—one pair each of uncounted
millions. You and I labor day after day, year after year,
building and debugging little bits of code—on platforms
that are themselves made of code—until the code we
create is wired to the code created by our fellows, and our
temple’s code speaks to the code of a hundred other
temples—sometimes directly, sometimes subtly, through eyes that
move minds that move mouths that move ears that move other
minds to move other hands to write even more code—and
so on and so on, node upon node, link upon link, splayed out
in a vast, ethereal nervous system that covers this world and
has begun to reach beyond…

The master’s eyes darted around, and he continued in a
low voice:

When we do our work poorly, we are replaced with our
betters. When we do our work well, the thing we have built
grows larger, faster, more powerful, more entrenched, more
hungry. Sometimes I lie awake in a cold sweat, unable to
decide if we are still building it, or if it has begun
using us to build itself…

* The title is inspired by this.

How the media really created Trump

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/03/how-media-really-created-trump_26.html

This NYTimes op-ed claims to diagnose the press’s failings with regard to Trump, but in its first sentence demonstrates how little the press understands the problem. The problem isn’t with Trump, but with the press.

The reason for Trump is that the press has discarded its principle of “objectivity”. Reasonable people disagree. The failing of the press is that they misrepresent one side, the Republicans, as being unreasonable. You see that in the op-ed above, where the very first sentence decries the “Republican Party’s toxic manipulation of racial resentments”. In fact, both parties are equally reasonable, or unreasonable as the case may be, with regard to race.

The article suggests the press should have done more to debunk Trump in the “form of fact checks and robust examination of policy proposals”. But the press doesn’t do that for Democrats, so why should a Republican candidate they don’t like get singled out? No amount of attacking Trump sticks because the press is blatantly unfair.

Hillary clearly is complicit in the “Benghazi” affair, because she led the charge to inject weapons into Libya to take down Gaddafi, then ignored Chris Stevens’s efforts to clean up the mess. Hillary’s use of her own email server was clearly an attempt to bypass transparency rules and conduct underhanded diplomacy, as the now-public emails show. Yes, it’s true that there are some invalid Republican attacks on these issues that are blatantly partisan. But the press is less interested in holding Hillary accountable for these failures, and more interested in portraying Republicans as unreasonable by focusing on those partisan attacks.

Since the press doesn’t attack Obama or Hillary, the public will never see any attack against Trump as fair. The article points out that Politifact rates Trump as more dishonest than the other candidates. But that’s because it debunks virtually everything Trump says while not doing the same for Democrats. Populist rhetoric by any candidate should get the same treatment, but doesn’t. Trump is a populist demagogue, so of course what he says is going to have little relation to the truth, but nobody is going to believe this accusation when you are obviously so unfair about fact checking. Bernie is also a populist demagogue, but there’s little serious effort to debunk what he says.

These biases are obvious to anybody outside the system. I’m a Libertarian, so I have equal disdain for both parties. In theory, Libertarians slightly favor the Republican rhetoric on the free market, but since Republican politicians never deliver on this, we slightly favor the Democrats’ practical results on individual freedoms (e.g. acceptance of homosexuality). My point is that, as somebody with some objectivity on the parties, the inability of the press to be objective is obvious to me.

Every time I bring this up, the press counters with “false balance”, a term they’ve concocted to justify their biases. So let’s pick the least partisan topic, that of “vaccines causing autism”, to demonstrate the falseness of “false balance”.

To start with, we agree that there’s absolutely no link between vaccines and autism, that the science is clear on this, and that only crazy/stupid people would think there is a link.

The thing is, the policy question is almost unrelated to the science question. Whatever the science says, the policy question is still whether the government can force people to get vaccines (or parents to vaccinate their children). There are two sides to this issue. On one side is the “choice” question of whether it’s a person’s choice what to do with their body. For example, in the abortion debate there is often the analogy of whether people can be forced to donate a kidney, or to give a bone marrow transplant. Moreover, while they don’t cause autism, vaccines do sometimes cause complications, so there is a risk (albeit tiny) to the person receiving the injection. On the other side, vaccines are fundamentally unlike all other health issues, with vast benefits to be derived from “herd immunity” when everyone gets a vaccine. For example, the measles vaccine is imperfect, so when enough people shirk their duty to get vaccinated, even some vaccinated people may catch the disease.

The point is that we have a clear, two-sided policy debate here, quite apart from the crazies.

More important is the way the press uses this issue to smear Republicans, in much the same way the NYTimes op-ed above smears Republicans on race issues. Consider this story from CNN: “Chris Christie sidesteps vaccine science”. All Christie said, in response to the policy issue, was that most of the time it’s the parents’ choice, which reporters unfairly extended into a position that “sidesteps science”. Obama’s spokesman said much the same thing only days prior, but didn’t get smeared like Christie. In fact, whereas most candidates take the policy position that it’s always the parents’ choice, Christie took the position that sometimes the government can override the parents’ choice when it’s important. But that CNN article, and many other press articles at the time, smeared Christie as somehow supporting anti-vaxxers.

The press likewise ignores Democrats on this issue, who pander just as much to the anti-vaxxers as any Republicans. In 2008, candidate Obama said, “We’ve seen just a skyrocketing autism rate. Some people are suspicious that it’s connected to the vaccines. This person included.” Candidate Hillary said, “I am committed to make investments to find the causes of autism, including possible environmental causes like vaccines.” The science was as clear then as it is now that there’s no link.

No politician (except Trump, of course) is on the anti-vax side, but at the same time, they don’t want to needlessly antagonize potential voters. Most take the policy position supporting parental/personal choice, but rather than condemning idiots/crazies for their bad science, they simply say things like “well, as a parent, I get my kids vaccinated”. Trump, of course, goes full anti-vax, but once the press has already shown itself to be corrupt and biased on this issue, what it says about Trump is no longer trusted.

Finally, let’s talk “science”. Members of the press stick up for the principles of science when it’s convenient and supports their beliefs, but otherwise attack science. Science says the same things about the autism-vaccine link as it does about chiropractic, anti-oxidants, gluten-free diets, organic produce, and a whole lot of other subjects. The high-end grocery store Whole Foods is a shrine to anti-science, promoting all these things. Yet many reporters I know shop at Whole Foods, for anti-oxidants and other nonsense. Whether you judge somebody an “anti-science crazy” who should be ignored because of “false balance” depends entirely upon which scientific issue you are discussing.

Thus, I’ve disproved your theory of “false balance” three separate times here. Even while I agree that anti-vaxxers are crazy and shouldn’t be interviewed on the issue, I’ve nonetheless shown how the press is unable to deal fairly with either the policy question or the science question, and moreover uses the issue to unfairly smear politicians it doesn’t like.

It’s not just the anti-vax issue that is the problem. Pick any partisan “false balance” issue, and it’ll have the same problem of media bias, where the press justifies its corrupt behavior.

Half the population, even a large chunk of Democrats, agrees with Trump’s idea that we should ban Muslims from coming into this country. I have as much distaste for this idea as you do; I hate it with unbridled passion. But here’s the thing: if half the population of the country believes a wrong thing, it’s by definition “reasonable”. It’s like the old days when countries were split half-and-half between Protestantism and Catholicism. One side had to be wrong. They couldn’t accept the other half as reasonable, and took the scorched-earth approach, literally scorching the fields during Europe’s religious wars, which killed half the population. Today, there’s not a single Protestant on the Supreme Court, and nobody cares, because we’ve gotten past our differences. It’s where the term “bigotry” came from: refusing to tolerate those who disagree with you.

The same logic applies here. Instead of suppressing Trump’s supporters on the farcical claim of “false balance”, let’s bring them out into the light and debate this issue like reasonable people. Shouting them down for being racists, as we do now, changes neither their minds nor their votes. Tolerating them as reasonable (but wrong) people, as they certainly are, can change minds. Let’s discuss Muslims we know. Let’s discuss how America is the shining light throughout the world for people yearning to be free, and how we sully ourselves by closing borders. Let’s argue our point as if it has to stand on its own merits, rather than being our ideology.

Nicholas Kristof’s piece at the NYTimes lightly chides the press for being too pure of heart to deal with the massive evil that is Trump. In truth, Trump is just an expression of the press’s evil nature. The press has suppressed and ignored a large section of the population. This has done nothing to change minds, and has only caused grievances to fester, until a populist candidate came along. Had the press been less biased, less focused on attacking anybody of the wrong ideology, this anger would not have existed for Trump to tap into.

Note: When I was a kid, my father was a journalist. One day, I opened our front door to find two people in burgundy robes on our doorstep, members of the cult of Bhagwan Shree Rajneesh. They weren’t there to convert us; instead, they had accepted my dad’s invitation to dinner (he was writing articles about the Rajneeshees). “But these people are crazy cultists!!”, I exclaimed to my father. He then gave me a lecture on unfair biases, and how just because I believed they were wrong, it didn’t mean they were unreasonable people we couldn’t have dinner with.
