Tag Archives: math

Hacking Slot Machines by Reverse-Engineering the Random Number Generators

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/hacking_slot_ma.html

Interesting story:

The venture is built on Alex’s talent for reverse engineering the algorithms — known as pseudorandom number generators, or PRNGs — that govern how slot machine games behave. Armed with this knowledge, he can predict when certain games are likeliest to spit out money — insight that he shares with a legion of field agents who do the organization’s grunt work.

These agents roam casinos from Poland to Macau to Peru in search of slots whose PRNGs have been deciphered by Alex. They use phones to record video of a vulnerable machine in action, then transmit the footage to an office in St. Petersburg. There, Alex and his assistants analyze the video to determine when the games’ odds will briefly tilt against the house. They then send timing data to a custom app on an agent’s phone; this data causes the phones to vibrate a split second before the agent should press the “Spin” button. By using these cues to beat slots in multiple casinos, a four-person team can earn more than $250,000 a week.

It’s an interesting article; I have no idea how much of it is true.

The sad part is that the slot-machine vulnerability is so easy to fix. Although the article says that “writing such algorithms requires tremendous mathematical skill,” it’s really only true that designing the algorithms requires that skill. Using any secure encryption algorithm or hash function as a PRNG is trivially easy. And there’s no reason why the system can’t be designed with a real RNG. There is some randomness in the system somewhere, and it can be added into the mix as well. The programmers can use a well-designed algorithm, like my own Fortuna, but even something less well-thought-out is likely to foil this attack.
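
To make the fix concrete, here is a minimal sketch of the hash-as-PRNG idea in Python (purely illustrative; the article doesn’t describe any machine’s actual design): hash a secret seed together with a counter, so anyone observing outputs still can’t predict future ones without the seed. A real machine would also mix in hardware randomness, as noted above.

import hashlib
import struct

class HashPRNG:
    """Toy counter-mode PRNG: each output block is SHA-256(seed || counter).

    Predicting future outputs without the secret seed requires breaking
    SHA-256. This illustrates the idea; it is not a production design.
    """

    def __init__(self, seed: bytes):
        self._seed = seed
        self._counter = 0

    def next_block(self) -> bytes:
        data = self._seed + struct.pack(">Q", self._counter)
        self._counter += 1
        return hashlib.sha256(data).digest()

prng = HashPRNG(seed=b"replace-with-32-bytes-of-real-entropy")
print(prng.next_block().hex())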

Top 10 Most Obvious Hacks of All Time (v0.9)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/top-10-most-obvious-hacks-of-all-time.html

For teaching hacking/cybersecurity, I thought I’d create a list of the most obvious hacks of all time. Not the best hacks, the most sophisticated hacks, or the hacks with the biggest impact, but the most obvious hacks — ones that even the least knowledgeable among us should be able to understand. Below I propose some hacks that fit this bill, though in no particular order.

The reason I’m writing this is that my niece wants me to teach her some hacking. I thought I’d start with the obvious stuff first.

Shared Passwords

If you use the same password for every website, and one of those websites gets hacked, then the hacker has your password for all your websites. The reason your Facebook account got hacked wasn’t because of anything Facebook did, but because you used the same email-address and password when creating an account on “beagleforums.com”, which got hacked last year.

I’ve heard people say “I’m secure, because I choose a complex password and use it everywhere”. No, this is the very worst thing you can do. Sure, you can use the same password on all the sites you don’t care much about, but for Facebook, your email account, and your bank, you should have a unique password, so that when other sites get hacked, your important accounts stay secure.

And yes, it’s okay to write down your passwords on paper.

Tools: HaveIBeenPwned.com
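
For the curious, HaveIBeenPwned also exposes a k-anonymity API for checking passwords: you send only the first five hex characters of the password’s SHA-1 hash, so the password itself never leaves your machine. A minimal Python sketch (the endpoint and response format are as publicly documented, but verify against the current API docs before relying on this):

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character prefix is sent; the API returns all matching suffixes.
    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "pwned-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a large number, hopefully not your password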

PIN encrypted PDFs

My accountant emails PDF statements encrypted with the last 4 digits of my Social Security Number. This is not encryption — a 4 digit number has only 10,000 combinations, and a hacker can guess all of them in seconds.
PINs for ATM cards work because ATMs are online: the machine can reject your card after four guesses. PINs don’t work for documents, because documents are offline — the hacker has a copy of the document on their own machine, disconnected from the Internet, and can keep making bad guesses with no restrictions.
Passwords protecting documents must be long enough that even trillions upon trillions of guesses are insufficient to find them.
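
To see just how weak a 4-digit “password” is, here is a sketch of the offline guessing loop in Python, using the third-party pikepdf library (an assumption for illustration; any PDF library that takes a password works the same way):

import pikepdf  # third-party: pip install pikepdf

def crack_pin(path):
    """Try all 10,000 four-digit PINs against an encrypted PDF."""
    for pin in range(10000):
        guess = "%04d" % pin
        try:
            with pikepdf.open(path, password=guess):
                return guess  # decryption succeeded
        except pikepdf.PasswordError:
            continue  # wrong guess; nothing limits how fast we retry offline
    return None

print(crack_pin("statement.pdf"))  # hypothetical file name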

Tools: Hashcat, John the Ripper

SQL and other injection

The lazy way of combining websites with databases is to combine user input with an SQL statement. This combines code with data, so the obvious consequence is that hackers can craft data to mess with the code.
No, this isn’t obvious to the general public, but it should be obvious to programmers. The moment you write code that adds unfiltered user-input to an SQL statement, the consequence should be obvious. Yet, “SQL injection” has remained one of the most effective hacks for the last 15 years because somehow programmers don’t understand the consequence.
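
Here is the classic mistake and its fix, sketched in Python with the standard sqlite3 module (the principle is identical in any language or database):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # attacker-supplied "name"

# Vulnerable: user input is spliced directly into the SQL code, so the
# attacker's quote characters change the meaning of the query.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns every row

# Safe: a parameterized query keeps data separate from code.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # returns nothing
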
CGI shell injection is a similar issue. Back in the early days, when “CGI scripts” were a thing, it was really important, but these days, not so much, so I’ve just included it with SQL. The consequence of executing shell code should’ve been obvious, but weirdly, it wasn’t. The IT guy at the company I worked for back in the late 1990s came to me and asked “this guy says we have a vulnerability, is he full of shit?”, and I had to answer “no, he’s right — obviously so”.

XSS (“Cross Site Scripting”) [*] is another injection issue, but this time at somebody’s web browser rather than a server. It works because websites will echo back what is sent to them. For example, if you search for Cross Site Scripting with the URL https://www.google.com/search?q=cross+site+scripting, then you’ll get a page back from the server that contains that string. If the string is JavaScript code rather than text, then some servers (though not Google) send back the code in the page in a way that it’ll be executed. This is most often used to hack somebody’s account: you send them an email or tweet a link, and when they click on it, the JavaScript gives control of the account to the hacker.
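
A sketch of the vulnerable echo pattern, written here with Python’s Flask purely for illustration (the same bug appears in any web framework if output isn’t escaped):

from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    q = request.args.get("q", "")
    # Vulnerable: raw input is echoed into the HTML, so a link like
    # /search?q=<script>steal(document.cookie)</script> runs in the victim's browser.
    return "<p>Results for: " + q + "</p>"

@app.route("/safe-search")
def safe_search():
    q = request.args.get("q", "")
    # Fixed: escaping turns < and > into &lt; and &gt;, so the payload is inert text.
    return "<p>Results for: " + str(escape(q)) + "</p>"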

Cross site injection issues like this should probably be their own category, but I’m including it here for now.

More: Wikipedia on SQL injection, Wikipedia on cross site scripting.
Tools: Burpsuite, SQLmap

Buffer overflows

In the C programming language, programmers first create a buffer, then read input into it. If the input is longer than the buffer, it overflows. The extra bytes overwrite other parts of the program, letting the hacker run code.
Again, it’s not a thing the general public is expected to know about, but is instead something C programmers should be expected to understand. They should know that it’s up to them to check the length and stop reading input before it overflows the buffer, that there’s no language feature that takes care of this for them.
We are three decades after the first major buffer overflow exploits, so there is no excuse for C programmers not to understand this issue.
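
You can even demonstrate the C bug from Python by calling libc through ctypes. This is a sketch for POSIX systems, and it may well crash the interpreter; the crash (or silent memory corruption) is exactly the point:

import ctypes

libc = ctypes.CDLL(None)  # load the C library (POSIX systems)

buf = ctypes.create_string_buffer(16)  # the C pattern: char buf[16];
payload = b"A" * 64                    # input longer than the buffer

# strcpy does no bounds checking: the extra 48 bytes are written past
# the end of buf, overwriting whatever memory happens to come next.
libc.strcpy(buf, payload)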

What makes buffer overflows particularly obvious is the way they are wrapped in exploits, like in Metasploit. While the bug itself is obviously a bug, actually exploiting it can take some very non-obvious skill. However, once that exploit is written, any trained monkey can press a button and run it. That’s where we get the insult “script kiddie” from — referring to wannabe-hackers who never learn enough to write their own exploits, but who spend a lot of time running the exploit scripts written by better hackers than they.

More: Wikipedia on buffer overflow, Wikipedia on script kiddie,  “Smashing The Stack For Fun And Profit” — Phrack (1996)
Tools: bash, Metasploit

SendMail DEBUG command (historical)

The first popular email server in the 1980s was called “SendMail”. It had a feature whereby if you sent a “DEBUG” command to it, it would execute any code following the command. The consequence of this was obvious — hackers could (and did) upload code to take control of the server. This was used in the Morris Worm of 1988. Most Internet machines of the day ran SendMail, so the worm spread fast, infecting most machines.
This bug was mostly ignored at the time. It was thought of as a theoretical problem that might only rarely be used to hack a system. Part of the motivation of the Morris Worm was to demonstrate the consequences of such problems — consequences that should’ve been obvious but were somehow dismissed by everyone.

More: Wikipedia on Morris Worm

Email Attachments/Links

I’m conflicted about whether I should add this or not, because here’s the deal: you are supposed to click on attachments and links within emails. That’s what they are there for. The difference between good and bad attachments/links is not obvious. Indeed, easy-to-use email systems make detecting the difference harder.
On the other hand, the consequences of bad attachments/links are obvious. Worms like ILOVEYOU spread so easily because people trusted attachments coming from their friends, and ran them.
We have no solution to the problem of bad email attachments and links. Viruses and phishing are pervasive problems. Yet, we know why they exist.

Default and backdoor passwords

The Mirai botnet was caused by surveillance-cameras having default and backdoor passwords, and being exposed to the Internet without a firewall. The consequence should be obvious: people will discover the passwords and use them to take control of the bots.
Surveillance-cameras have the problem that they are usually mounted out in public where they can’t be reached without a ladder — often a really tall ladder. Therefore, you don’t want a button consumers can press to reset to factory defaults; you want a remote way to reset them. Therefore, manufacturers put in backdoor passwords to do the reset. Such passwords are easy for hackers to reverse-engineer, and hence, take control of millions of cameras across the Internet.
The same reasoning applies to “default” passwords. Many users will not change the defaults, leaving a ton of devices hackers can hack.

Masscan and background radiation of the Internet

I’ve written a tool that can easily scan the entire Internet in a short period of time. It surprises people that this is possible, but it’s obvious from the numbers. Internet addresses are only 32 bits long, or roughly 4 billion combinations. A fast Internet link can easily handle 1 million packets-per-second, so the entire Internet can be scanned in about 4000 seconds, little more than an hour. It’s basic math.
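
The arithmetic, for the skeptical:

addresses = 2 ** 32      # the entire IPv4 address space
rate = 1_000_000         # packets per second on a fast link
print(addresses / rate)  # ~4295 seconds, a little over an hour
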
Because it’s so easy, many people do it. If you monitor your Internet link, you’ll see a steady trickle of packets coming in from all over the Internet, especially Russia and China, from hackers scanning the Internet for things they can hack.
People’s reaction to this scanning is weirdly emotional, taking it personally, such as:
  1. Why are they hacking me? What did I do to them?
  2. Great! They are hacking me! That must mean I’m important!
  3. Grrr! How dare they?! How can I hack them back for some retribution!?

I find this odd, because obviously such scanning isn’t personal; the hackers have no idea who you are.

Tools: masscan, firewalls

Packet-sniffing, sidejacking

If you connect to the Starbucks WiFi, a hacker nearby can easily eavesdrop on your network traffic, because it’s not encrypted. Windows even warns you about this, in case you weren’t sure.

At DefCon, they have a “Wall of Sheep”, where they show passwords from people who logged onto stuff using the insecure “DefCon-Open” network. They’re called “sheep” for not grasping the basic fact that unencrypted traffic is unencrypted.

To be fair, it’s actually non-obvious to many people. Even if the WiFi itself is not encrypted, SSL traffic is. They expect their services to be encrypted, without them having to worry about it. And in fact, most are, especially Google, Facebook, Twitter, Apple, and other major services that won’t allow you to log in anymore without encryption.

But many services (especially older ones) may not be encrypted. Unless users carefully check and verify, such services will happily expose passwords.

What’s interesting is that 10 years ago, most services used SSL only to encrypt the password exchange, then switched to unencrypted connections identified by “cookies”. This allowed the cookies to be sniffed and stolen, allowing other people to hijack the login session. I used this on stage at BlackHat to connect to somebody’s GMail session. Google, and other major websites, fixed this soon after. But it should never have been a problem — because the sidejacking of cookies should have been obvious.

Tools: Wireshark, dsniff

Stuxnet LNK vulnerability

Again, this issue isn’t obvious to the public, but it should’ve been obvious to anybody who knew how Windows works.
When Windows loads a .dll, it first calls the function DllMain(). A Windows link file (.lnk) can load icons/graphics from the resources in a .dll file. It does this by loading the .dll file, which calls DllMain(). Thus, a hacker could put a .lnk file pointing to a hostile .dll file on a USB drive and cause arbitrary code execution as soon as a user inserted the drive.
I say this is obvious because I did this, created .lnks that pointed to .dlls, but without hostile DllMain code. The consequence should’ve been obvious to me, but I totally missed the connection. We all missed the connection, for decades.

Social Engineering and Tech Support [* * *]

After posting this, many people have pointed out “social engineering”, especially of “tech support”. This probably should be up near #1 in terms of obviousness.

The classic example of social engineering is when you call tech support and tell them you’ve lost your password, and they reset it for you with a minimum of questions proving who you are. For example, you set the volume on your computer really loud and play the sound of a crying baby in the background, and appear to be a bit frazzled and incoherent, which explains why you aren’t answering the questions they are asking. They, understanding your predicament as a new parent, will go the extra mile in helping you, resetting “your” password.

One of the interesting consequences is how it affects domain names (DNS). It’s quite easy in many cases to call up the registrar and convince them to transfer a domain name. This has been used in lots of hacks. It’s really hard to defend against. If a registrar charges only $9/year for a domain name, then it really can’t afford to provide very good tech support — or very secure tech support — to prevent this sort of hack.

Social engineering is such a huge problem, and obvious problem, that it’s outside the scope of this document. Just google it to find example after example.

A related issue that perhaps deserves its own section is OSINT [*], or “open-source intelligence”, where you gather public information about a target. For example, on the day the bank manager is out on vacation (which you learned from their Facebook post) you show up and claim to be a bank auditor, and are shown into their office where you grab their backup tapes. (We’ve actually done this.)

More: Wikipedia on Social Engineering, Wikipedia on OSINT, “How I Won the Defcon Social Engineering CTF” — blogpost (2011), “Questioning 42: Where’s the Engineering in Social Engineering of Namespace Compromises” — BSidesLV talk (2016)

Blue-boxes (historical) [*]

Telephones historically used what we call “in-band signaling”. That’s why when you dial on an old phone, it makes sounds — those sounds are sent no differently than the way your voice is sent. Thus, it was possible to make tone generators to do things other than simply dial calls. Early hackers (in the 1970s) would make tone-generators called “blue-boxes” and “black-boxes” to make free long distance calls, for example.

These days, “signaling” and “voice” are digitized, then sent as separate channels or “bands”. This is called “out-of-band signaling”. You can’t trick the phone system by generating tones. When your iPhone makes sounds when you dial, it’s entirely for your benefit and has nothing to do with how it signals the cell tower to make a call.

Early hackers, like the founders of Apple, are famous for having started their careers making such “boxes” to trick the phone system. The problem was obvious back in the day, which is why, as the phone system moved from analog to digital, the problem was fixed.

More: Wikipedia on blue box, Wikipedia article on Steve Wozniak.

Thumb drives in parking lots [*]

A simple trick is to put a virus on a USB flash drive, and drop it in a parking lot. Somebody is bound to notice it, stick it in their computer, and open the file.

This can be extended with tricks. For example, you can put a file labeled “third-quarter-salaries.xlsx” on the drive that requires macros to be enabled in order to open. It’s irresistible to other employees who want to know what their peers are being paid, so they’ll bypass any warning prompts in order to see the data.

Another example is to go online and get custom USB sticks made printed with the logo of the target company, making them seem more trustworthy.

We also did a trick of taking the Adobe Flash game “Punch the Monkey” and replacing the monkey with the logo of a competitor of our target. They not only played the game (infecting themselves with our virus), but gave it to others inside the company to play, infecting others, including the CEO.

Thumb drives like this have been used in many incidents, such as Russians hacking military headquarters in Afghanistan. It’s really hard to defend against.

More: “Computer Virus Hits U.S. Military Base in Afghanistan” — USNews (2008), “The Return of the Worm That Ate The Pentagon” — Wired (2011), DoD Bans Flash Drives — Stripes (2008)

Googling [*]

Search engines like Google will index your website — your entire website. Frequently companies put things on their website without much protection because they are nearly impossible for users to find. But Google finds them, then indexes them, causing them to pop up with innocent searches.
There are books written on “Google hacking” explaining what search terms to look for, like “not for public release”, in order to find such documents.

More: Wikipedia entry on Google Hacking, “Google Hacking” book.

URL editing [*]

At the top of every browser is what’s called the “URL”. You can change it. Thus, if you see a URL that looks like this:

http://www.example.com/documents?id=138493

Then you can edit it to see the next document on the server:

http://www.example.com/documents?id=138494

The owner of the website may think they are secure, because nothing points to this document, so the Google search won’t find it. But that doesn’t stop a user from manually editing the URL.
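
The whole “attack” is a loop, which is why techies hesitate to call it hacking. A Python sketch against the hypothetical endpoint above:

import urllib.error
import urllib.request

# The hypothetical endpoint from the example above.
base = "http://www.example.com/documents?id="

for doc_id in range(138493, 138503):
    try:
        with urllib.request.urlopen(base + str(doc_id)) as resp:
            print(doc_id, "->", resp.status)  # document exists
    except urllib.error.HTTPError as err:
        print(doc_id, "->", err.code)         # e.g. 404; keep going
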
An example of this is a big Fortune 500 company that posts the quarterly results to the website an hour before the official announcement. Simply editing the URL from previous financial announcements allows hackers to find the document, then buy/sell the stock as appropriate in order to make a lot of money.
Another example is the classic case of Andrew “Weev” Auernheimer who did this trick in order to download the account email addresses of early owners of the iPad, including movie stars and members of the Obama administration. It’s an interesting legal case because on one hand, techies consider this so obvious as to not be “hacking”. On the other hand, non-techies, especially judges and prosecutors, believe this to be obviously “hacking”.

DDoS, spoofing, and amplification [*]

For decades now, online gamers have figured out an easy way to win: just flood the opponent with Internet traffic, slowing their network connection. This is called a DoS, which stands for “Denial of Service”. DoSing game competitors is often a teenager’s first foray into hacking.
A variant of this is when you hack a bunch of other machines on the Internet, then command them to flood your target. (The hacked machines are often called a “botnet”, a network of robot computers.) This is called DDoS, or “Distributed DoS”. At this point, it gets quite serious, as hackers can take down entire businesses, not just competing gamers. Extortion scams, in which hackers DDoS a website and then demand payment to stop, are a common way hackers earn money.
Another form of DDoS is “amplification”. Sometimes when you send a packet to a machine on the Internet it’ll respond with a much larger response, either a very large packet or many packets. The hacker can then send a packet to many of these sites, “spoofing” or forging the IP address of the victim. This causes all those sites to then flood the victim with traffic. Thus, with a small amount of outbound traffic, the hacker can flood the inbound traffic of the victim.
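
The leverage is easy to quantify. With illustrative numbers (a small spoofed DNS query eliciting a large response):

query_bytes = 60       # one small spoofed request (illustrative)
response_bytes = 3000  # one large reply sent to the victim (illustrative)
uplink = 1_000_000     # attacker's outbound budget, bytes per second

amplification = response_bytes / query_bytes
print(amplification)                 # 50x
print(uplink * amplification / 1e6)  # 1 MB/s out becomes ~50 MB/s hitting the victim
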
This is one of those things that has worked for 20 years, because it’s so obvious teenagers can do it, yet there is no obvious solution. President Trump’s executive order on cybersecurity specifically demanded that his government come up with a report on how to address this, but it’s unlikely that they’ll come up with any useful strategy.

More: Wikipedia on DDoS, Wikipedia on Spoofing

Conclusion

Tweet me (@ErrataRob) your obvious hacks, so I can add them to the list.

Teaching with Raspberry Pis and PiNet

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/teaching-pinet/

Education is our mission at the Raspberry Pi Foundation, so of course we love tools that help teachers and other educators use Raspberry Pis in a classroom setting. PiNet, which allows teachers to centrally manage a whole classroom’s worth of Pis, makes administering a fleet of Pis easier. Set up individual student accounts, install updates and software, share files – PiNet helps you do all of this!

Caleb VinCross on Twitter

The new PiNet lab up and running. 30 raspberry pi 3’s running as fat clients for 600 + students. Much thanks to the PiNet team! @PiNetDev.

PiNet developer Andrew

PiNet was built and is maintained by Andrew Mulholland, who started work on this project when he was 15, and who is also one of the organisers of the Northern Ireland Raspberry Jam. Check out what he says about PiNet’s capabilities in his guest post here.

PiNet in class

PiNet running in a classroom

PiNet, teacher’s pet

PiNet has been available for about two years now, and the teachers using it are over the moon. Here’s what a few of them say about their experience:

We wanted a permanently set up classroom with 30+ Raspberry Pis to teach programming. Students wanted their work to be secure and backed up and we needed a way to keep the Pis up to date. PiNet has made both possible and the classroom now requires little or no maintenance. PiNet was set up in a single day and was so successful we set up a second Pi room. We now have 60 Raspberry Pis which are used by our students every day. – Rob Jones, Secondary School Teacher, United Kingdom

AKS Computing on Twitter

21xRaspPi+dedicated network+PiNet server+3 geeks = success! Ready to test with a full class.

I teach Computer Science at middle school, so I have 4 classes per day in my lab, sharing 20 Raspberry Pis. PiNet gives each student separate storage space. Any changes to the Raspbian image can be done from my dashboard. We use Scratch, Minecraft Pi, Sonic Pi, and do physical computing. And when I have had issues, or have wanted to try something a little crazy, the support has been fabulous. – Bob Irving, Middle School Teacher, USA

Wolf Math on Twitter

We’re starting our music unit with @deejaydoc. My CS students are going through the @Sonic_Pi tutorial on @PiNetDev.

I teach computer classes for about 600 students between the ages of 5 and 13. PiNet has really made it possible to expand our technology curriculum beyond the simple web-based applications that our Chromebooks were limited to. I’m now able to use Arduino boards to do basic physical computing with LEDs and sensors. None of this could have happened without PiNet making it easy to have an affordable, stable, and maintainable way of managing 30 Linux computers in our lab. – Caleb VinCross, Primary School Teacher, USA

More for educators

If you’re involved in teaching computing, be that as a professional or as a volunteer, check out the new free magazine Hello World, brought to you by Computing At School, BCS Academy of Computing, and Raspberry Pi working in partnership. It is written by educators for educators, and available in print and as a PDF download. And if you’d like to keep up to date with what we are offering to educators and learners, sign up for our education newsletter here.

Are you a teacher who uses Raspberry Pis in the classroom, or another kind of educator who has used them in a group setting? Tell us about your experience in the comments below.

The post Teaching with Raspberry Pis and PiNet appeared first on Raspberry Pi.

Australia Considering New Law Weakening Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/australia_consi.html

News from Australia:

Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.

“We’ve got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption,” Turnbull told reporters.

“Where we can compel it, we will, but we will need the cooperation from the tech companies,” he added.

Never mind that the law 1) would not achieve the desired results because all the smart “terrorists and drug traffickers and pedophile rings” will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that’s all ground I’ve covered before.

I found this bit amusing:

Asked whether the laws of mathematics behind encryption would trump any new legislation, Mr Turnbull said: “The laws of Australia prevail in Australia, I can assure you of that.

“The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia.”

Next Turnbull is going to try to legislate that pi = 3.2.

Another article. BoingBoing post.

EDITED TO ADD: More commentary.

Journey into Deep Learning with AWS

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/journey-into-deep-learning-with-aws/

If you are anything like me, Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning are completely fascinating and exciting topics. As AI, ML, and Deep Learning become more widely used, for me it means that the science fiction written by Dr. Isaac Asimov, the robotics and medical advancements in Star Wars, and the technologies that enabled Captain Kirk and his Star Trek crew “to boldly go where no man has gone before” can become achievable realities.

 

Most people interested in the aforementioned topics are familiar with the AI and ML solutions enabled by Deep Learning, such as Convolutional Neural Networks for Image and Video Classification, Speech Recognition, Natural Language interfaces, and Recommendation Engines. However, it is not always an easy task setting up the infrastructure, environment, and tools to enable data scientists, machine learning practitioners, research scientists, and deep learning hobbyists/advocates to dive into these technologies. Most developers desire to go quickly from getting started with deep learning to training models and developing solutions using deep learning technologies.

For these reasons, I would like to share some resources that will help to quickly build deep learning solutions whether you are an experienced data scientist or a curious developer wanting to get started.

Deep Learning Resources

Apache MXNet is Amazon’s deep learning framework of choice. With the power of the Apache MXNet framework and NVIDIA GPU computing, you can launch your scalable deep learning projects and solutions easily on the AWS Cloud. As you get started on your MXNet deep learning quest, there are a variety of self-service tutorials and datasets available to you:

  • Launch an AWS Deep Learning AMI: This guide walks you through the steps to launch the AWS Deep Learning AMI with Ubuntu
  • MXNet – Create a computer vision application: This hands-on tutorial uses a pre-built notebook to walk you through using neural networks to build a computer vision application to identify handwritten digits
  • AWS Machine Learning Datasets: AWS hosts datasets for Machine Learning on the AWS Marketplace that you can access for free. These large datasets are available for anyone to analyze without having to download or store the data themselves.
  • Predict and Extract – Learn to use pre-trained models for predictions: This hands-on tutorial will walk you through how to use a pre-trained model for prediction and feature extraction using the full ImageNet dataset.

 

AWS Deep Learning AMIs

AWS offers Amazon Machine Images (AMIs) for use on Amazon EC2 for quick deployment of the infrastructure needed to start your deep learning journey. The AWS Deep Learning AMIs come pre-configured with popular deep learning frameworks, are available for Amazon Linux and Ubuntu, and can be launched on Amazon EC2 instances for AI-targeted solutions and models. The deep learning frameworks supported and pre-configured on the Deep Learning AMI are:

  • Apache MXNet
  • TensorFlow
  • Microsoft Cognitive Toolkit (CNTK)
  • Caffe
  • Caffe2
  • Theano
  • Torch
  • Keras

Additionally, the AWS Deep Learning AMIs come with preconfigured Jupyter notebooks for Python 2.7/3.4, the AWS SDK for Python, and other data-science-related Python packages and dependencies. The AMIs also come with the NVIDIA CUDA and NVIDIA CUDA Deep Neural Network (cuDNN) libraries preinstalled for all the supported deep learning frameworks, and the Intel Math Kernel Library is installed for the Apache MXNet framework. You can launch any of the Deep Learning AMIs by visiting the AWS Marketplace using the Try the Deep Learning AMIs link.

Summary

It is a great time to dive into Deep Learning. You can accelerate your work in deep learning by using the AWS Deep Learning AMIs running on the AWS cloud to get your deep learning environment up and running quickly, or get started learning more about Deep Learning on AWS with MXNet using the AWS self-service resources. Of course, you can learn even more about Deep Learning, Machine Learning, and Artificial Intelligence on AWS by reviewing the AWS Deep Learning page, the Amazon AI product page, and the AWS AI Blog.

May the Deep Learning Force be with you all.

Tara

Introducing Our NEW AWS Community Heroes (Summer 2017 Edition)

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-our-new-aws-community-heroes-summer-2017-edition/

The AWS Community Heroes program seeks to recognize and honor the most engaged Amazon Web Services developers who have had a positive impact in the global community.  If you are interested in learning more about the AWS Community Heroes program or curious about ways to get involved with your local AWS community, please click the graphic below to see the AWS Heroes talk directly about the program.

Now that you know more about the AWS Community Hero program, I am elated to introduce to you all the latest AWS Heroes to join the fold:

These guys and gals share their passion for AWS and cloud technologies with the technical community by contributing their time and knowledge across social media and via in-person events.

Ben Kehoe

Ben Kehoe works in the field of Cloud Robotics—using the internet to enable robots to do more and better things—an area of IoT involving computation in the cloud and at the edge, Big Data, and machine learning. Approaching cloud computing from this angle, Ben focuses on developing business value rapidly through serverless (and service full) applications.

At iRobot, Ben guided the transition to a serverless architecture on AWS based on AWS Lambda and AWS IoT to support iRobot’s connected robot fleet. This architecture enables iRobot to focus on its core mission of building amazing robots with a minimum of development and operations effort.

Ben seeks to amplify voices from dev, operations, and security to help the community shape the evolution of serverless and event-driven designs for IoT and cloud computing more broadly.

 

 

Marcia Villalba

Marcia is a Senior Full-stack Developer at Rovio, the creators of Angry Birds. She is originally from Uruguay but has been living in Finland for almost a decade.

She has been designing and developing software professionally for over 10 years. For more than four years she has been working with AWS, including the past year which she’s worked mostly with serverless technologies.

Marcia runs her own YouTube channel, in which she publishes at least one new video every week. In her channel, she focuses on teaching how to use AWS serverless technologies and managed services. In addition to her professional work, she is the Tech Lead in “Girls in Tech” Helsinki, helping to inspire more women to enter into technology and programming.

 

 

Joshua Levy

Joshua Levy is an entrepreneur, engineer, writer, and serial startup technologist and advisor in cloud, AI, search, and startup scaling.

He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web. The collaborative project welcomes new contributors or editors, and anyone who wishes to ask or answer questions.

Josh has years of experience in hands-on software engineering and leadership at fast-growing consumer and enterprise startups, including Viv Labs (acquired by Samsung) and BloomReach (where he led engineering and AWS infrastructure), and a background in AI and systems research at SRI and mathematics at Berkeley. He has a passion for improving how we share knowledge on complex engineering, product, or business topics. If you share any of these interests, reach out on Twitter or find his contact details on GitHub.

 

Michael Ezzell

Michael Ezzell is a frequent contributor of detailed, in-depth solutions to questions spanning a wide variety of AWS services on Stack Overflow and other sites on the Stack Exchange Network.

Michael is the resident DBA and systems administrator for Online Rewards, a leading provider of web-based employee recognition, channel incentive, and customer loyalty programs, where he was a key player in the company’s full transition to the AWS platform.

Based in Cincinnati, and known to coworkers and associates as “sqlbot,” he also provides design, development, and support services to freelance consulting clients for AWS services and MySQL, as well as broadcast and cable television and telecommunications technologies.

 

 

 

Thanos Baskous

Thanos Baskous is a San Francisco-based software engineer and entrepreneur who is passionate about designing and building scalable and robust systems.

He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web.

At Twitter, he built infrastructure that allows engineers to seamlessly deploy and run their applications across private data centers and public cloud environments. He previously led a team at TellApart (acquired by Twitter) that built an internal platform-as-a-service (Docker, Apache Aurora, Mesos on AWS) in support of a migration from a monolithic application architecture to a microservice-based architecture. Before TellApart, he co-founded AWS-hosted AdStack (acquired by TellApart) in order to automatically personalize and improve the quality of content in marketing emails and email newsletters.

 

 

Rob Gruhl

Rob is a senior engineering manager located in Seattle, WA. He supports a team of talented engineers at Nordstrom Technology exploring and deploying a variety of serverless systems to production.

From the beginning of the serverless era, Rob has been exclusively using serverless architectures to allow a small team of engineers to deliver incredible solutions that scale effortlessly and rarely wake them in the middle of the night. In addition to a number of production services, Rob and his team have created and released two major open source projects and accompanying open source workshops using a 100% serverless approach. He’d love to talk with you about serverless, event-sourcing, and/or occasionally-connected distributed data layers.

 

Feel free to follow these great AWS Heroes on Twitter and check out their blogs. It is exciting to have them all join the AWS Community Heroes program.

–  Tara

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporally decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order will be persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your database transaction further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 \
  --comparison-operator LessThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes, so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a message log of all messages attributes processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing the message adds additional computational overhead to your processing. This can be minimized through low latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.

Handling exceptions

Because of the distributed nature of SQS queues, SQS does not automatically delete a message when you receive it. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rate at which orders are processed from each queue can then be tuned by modifying the poll rates and scalability settings that I have already discussed.
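
The dispatcher itself is only a few lines. The examples in this post are C#, but the routing idea is easiest to see in a short Python sketch with boto3; the queue URLs below are placeholders:

import json

import boto3

sqs = boto3.client("sqs")

# Placeholder URLs for the two queues described above.
PRIORITY_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/priority-orders"
STANDARD_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/standard-orders"

def dispatch_order(order):
    """Route an order to the priority queue when it exceeds $5000."""
    queue_url = PRIORITY_QUEUE_URL if order["total"] > 5000 else STANDARD_QUEUE_URL
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(order))

dispatch_order({"orderId": "12345", "total": 7500.00})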

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols
  • Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
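
Publishing to the topic replaces the send-to-queue call; the subscriptions (SQS queues, Lambda functions) are configured on the topic rather than in the application. A Python/boto3 sketch with a placeholder topic ARN:

import json

import boto3

sns = boto3.client("sns")

# Placeholder ARN for the order events topic.
ORDER_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:customer-orders"

order = {"orderId": "12345", "total": 7500.00}

# One publish call; SNS replicates the message to every subscriber,
# so new consumers can be added without touching this code.
sns.publish(TopicArn=ORDER_TOPIC_ARN, Message=json.dumps(order))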

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, visit the AWS console or pick up the SDK of your choice.

Happy messaging!

AIMS Desktop 2017.1 released

Post Syndicated from corbet original https://lwn.net/Articles/725712/rss

The AIMS desktop is a
Debian-derived distribution aimed at mathematical and scientific use. This
project’s first public release, based on Debian 9, is now available.
It is a GNOME-based distribution with a bunch of add-on software.
It is maintained by AIMS (The African Institute for Mathematical
Sciences), a pan-African network of centres of excellence enabling Africa’s
talented students to become innovators driving the continent’s scientific,
educational and economic self-sufficiency.

Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.

Phaser

I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.
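To make the failure mode concrete, here is a toy Python analogy — not Phaser’s actual code — of how a subclass inherits a method together with its docstring, so generated reference docs repeat the parent’s wording in a context where it no longer helps:

class Group:
    def pre_update(self):
        """The core preUpdate - as called by World."""

class World(Group):
    pass  # inherits pre_update, unaltered docstring and all

# Prints "The core preUpdate - as called by World." -- unhelpful
# when you're reading the docs for World itself.
print(World.pre_update.__doc__)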


The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.

Tutorials

Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!


With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous 250-line block of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.


Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but unable to find a description of the semantics anywhere, but I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of the words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.
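For contrast, here is roughly what the task perllol spends nine variations on looks like in Python — the filename is a placeholder, of course:

# Read a file into a nested list: one inner list of words per line.
with open('input.txt') as f:
    lines_of_words = [line.split() for line in f]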

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.
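If you haven’t read the post, the heart of the technique is small: pick pseudorandom gradients at lattice points and blend their influence with a smoothing curve. As a rough sketch of the idea, rather than a faithful excerpt from the post, a stripped-down 1D version in Python looks like this:

import random

random.seed(0)
gradients = [random.uniform(-1, 1) for _ in range(256)]

def fade(t):
    # Perlin's quintic smoothstep: zero 1st and 2nd derivatives at 0 and 1
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

def perlin1d(x):
    i = int(x)
    dx = x - i                      # position within the unit cell, [0, 1)
    g0 = gradients[i % 256]         # gradient at the left lattice point
    g1 = gradients[(i + 1) % 256]   # gradient at the right lattice point
    # each gradient's influence is its slope times the offset from it
    return lerp(g0 * dx, g1 * (dx - 1.0), fade(dx))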

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.


And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.

No, Netflix Hasn’t Won The War on Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/no-netflix-hasnt-won-the-war-on-piracy-170604/

Recently a hacker group, or hacker, going by the name TheDarkOverlord (TDO) published the premiere episode of the fifth season of Netflix’s Orange is The New Black, followed by nine more episodes a few hours later.

TDO obtained the videos from Larson Studios, which didn’t pay the 50 bitcoin ransom TDO had requested. The hackers then briefly turned their attention to Netflix, before releasing the shows online.

In the aftermath, a flurry of articles claimed that Netflix’s refusal to pay means that it is winning the war on piracy. Torrents are irrelevant or no longer a real threat and piracy is pointless, they concluded.

One of the main reasons cited is a decline in torrent traffic over the years, as reported by the network equipment company Sandvine.

“Last year, BitTorrent traffic reached 1.73 percent of peak period downstream traffic in North America. That’s down from the 60 percent share peer-to-peer file sharing had in 2003. Netflix was responsible for 35.15 percent of downstream traffic,” one reporter wrote.

Piracy pointless?

Even Wired, a reputable technology news site, jumped on the bandwagon.

“It’s not that torrenting is so onerous. But compared to legitimate streaming, the process of downloading a torrenting client, finding a legit file, waiting for it to download, and watching it on a laptop (or mirroring it to a television) hardly seems worth it,” the article states.

These and many similar articles suggest that Netflix’s ease of use is superior to piracy. Netflix, they claimed, is winning the war on piracy, which has been reduced to little more than a fringe activity carried out by old-school data hoarders.

But is that really the case?

I wholeheartedly agree that Netflix is a great alternative to piracy, and admit that torrents are not as dominant as they once were. But everybody who thinks that piracy is limited to torrents needs to educate themselves properly.

Piracy has evolved quite a bit over the past several years, and streaming is now the main way people satisfy their ‘illegal’ viewing demands.

Whether it’s through pirate streaming sites, mobile apps, or dedicated media players hooked up to TVs, it’s not hard to argue that piracy is easier and more convenient than it has ever been. And arguably, more popular too.

The statistics are dazzling. According to piracy monitoring outfit MUSO there are half a billion visits to video pirate sites every day. Roughly 60% of these are to streaming sites.

While there has been a small decline in streaming visits over the past year, MUSO’s data doesn’t cover the explosion of media player piracy, which means that there is likely a significant increase in piracy overall.

TorrentFreak contacted the aforementioned network equipment company Sandvine, which said that we’re “on to something.”

Unfortunately, they currently have no data to quantify the amount of pirate streaming activity. This is, in part, because many of these streams are hosted by legitimate companies such as Google.

Torrents may not be dominant anymore, but with hundreds of millions of visits to streaming pirate sites per day, and many more via media players and other apps, piracy is still very much alive. Just ask the Motion Picture Association.

I would even argue that piracy is more of a threat to Netflix than it has ever been before.

To illustrate, here is a screenshot from one of the most visited streaming piracy sites online. The site in question receives millions of views per day and featured two Netflix shows, “13 Reasons Why” and the leaked “Orange is The New Black,” in its daily “most viewed” section recently.

Netflix shows among the “most viewed” pirate streams

If you look at a random streaming site, you’ll see that they offer an overview of thousands of popular movies and TV-shows, far more than Netflix. Pirate streaming sites have more content than Netflix, often in high quality, and it doesn’t cost a penny.

Throw in the explosive growth of piracy-capable media players that can bring this content directly to the TV-screen, and you’ll start to realize the magnitude of this threat.

In a way, the boost in streaming piracy is a bigger threat to Netflix than the traditional Hollywood studios. Hollywood still has its exclusive release windows and a superior viewing experience at the box office. All Netflix content is instantly pirated, or already available long before they add it to their catalog.

Sure, pirate sites might not appeal to the average middle-class news columnist who’s been subscribed to Netflix for years, but for tens of millions of less fortunate people, who can do without another monthly charge on their household bill, it’s an easy choice.

Not the right choice, legally speaking, but that doesn’t seem to bother them much.

That’s illustrated by the tens of thousands of people from all over the world who comment, with their public Facebook accounts, on movies and TV-shows that were obviously pirated.

Pirate comments on a streaming site

Of course, if piracy disappeared overnight then only a fraction of these pirates would pay for a Netflix subscription, but saying that piracy is irrelevant for the streaming giant may be a bit much.

Netflix itself is all too aware of this it seems. The company has launched its own “Global Copyright Protection Group,” an anti-piracy division that’s on par with those of many major Hollywood studios.

Netflix isn’t winning the war on piracy; it just got started…


New “Out of Control” Denuvo Piracy Protection Cracked

Post Syndicated from Andy original https://torrentfreak.com/new-control-denuvo-piracy-protection-cracked-170602/

Like many games in recent times, indie title RiME uses Denuvo anti-piracy technology to keep the swashbucklers away. It won’t stay that way for long.

Earlier this week, RiME developer Tequila Works grabbed a few headlines after stating it would remove the Denuvo protection from its game, should it fall to crackers.

“I have seen some conversations about our use of Denuvo anti-tamper, and I wanted to take a moment to address it,” RiME community manager Dariuas wrote on Steam forums.

“RiME is a very personal experience told through both sight and sound. When a game is cracked, it runs the risk of creating issues with both of those items, and we want to do everything we can to preserve this quality in RiME.”

Dariuas concluded that a Denuvo-free version of RiME would be released if the game was cracked. Within days of the announcement and right on cue, pirates struck.

In a fanfare of celebrations, rising cracking star Baldman announced that he had defeated the latest v4+ iteration of Denuvo and dumped a cracked copy of RiME online. While encouraging people to buy what he describes as a “super nice” game, Baldman was less complimentary about Denuvo.

Labeling the anti-tamper technology a “huge abomination,” the cracker said that Denuvo’s creators had really upped their efforts this time out. People like Baldman who work on cracking Denuvo describe the protection as calling on code ‘triggers.’ For RiME, things were reportedly amped up to 11.

“In Rime that ugly creature went out of control – how do you like three fucking hundreds of THOUSANDS calls to ‘triggers’ during initial game launch and savegame loading? Did you wonder why game loading times are so long – here is the answer,” Baldman explained.

“In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1000 ‘triggers’ called, so we have x300 here.”

But according to the cracker, the 300,000 calls to triggers was a mere “warmup” for Denuvo. After just 30 minutes of gameplay, the count rose to two million, a figure he delivered with shocked expletives.

One of the main points of criticism for protections like Denuvo is that they take a toll on both game performance and gaming hardware. Baldman, who speaks English as a second language, reports that in RiME things have gotten massively out of hand, which negatively affects the game.

“Protection now calls about 10-30 triggers every second during actual gameplay, slowing game down. In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1-2 ‘triggers’ called every several minutes during gameplay, so do the math.”

Only making matters worse, the cracker says, is the fact the triggers are heavily obfuscated under a virtual machine, which further affects performance. However, thanks to RiME’s developers making good on their word, any protection-related problems will soon be a thing of the past.

“Today, we got word that there was a crack which would bypass Denuvo,” Dariuas wrote last night.

“Upon receiving this news, we worked to test this and verify that it was, in fact, the case. We have now confirmed that it is. As such, we at [publisher] Team Grey Box are following through on our promise from earlier this week that we will be replacing the current build of RiME with one that does not contain Denuvo.”

So while gamers wait for Denuvo to get stripped from RiME and pirates celebrate, the company behind the anti-piracy technology will be considering its options. If what Baldman claims is true, it sounds like more than just a little desperation is in the air.

Worryingly for Denuvo, not even throwing the kitchen sink at the problem has had much effect.


Introspection

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/

This month, IndustrialRobot has generously donated in order to ask:

How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?

Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.


The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.

I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.

Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!

The problem, I think, was that there was no point.

This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.

But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.

Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.

I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.

Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.

A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.

Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.

I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.

It’s been incredibly rewarding — far moreso than any “pure” tooling problem I’ve ever approached. Moreso than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.

I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.

I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I were making and didn’t have the clear scope of “do what I need for this game”.


It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.

That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.

How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.

It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.

I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.

A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.

I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?

The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.

It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.


I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.

If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.

Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.

And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.


I’m getting distracted again!

To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.

Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.

I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.

This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.

Julia language for Raspberry Pi

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/julia-language-raspberry-pi/

Julia is a free and open-source general-purpose programming language made specifically for scientific computing. It combines the ease of writing in high-level languages like Python and Ruby with the technical power of MATLAB and Mathematica and the speed of C. Julia is ideal for university-level scientific programming and it’s used in research.

Julia language logo

Some time ago Viral Shah, one of the language’s co-creators, got in touch with us at the Raspberry Pi Foundation to say his team was working on a port of Julia to the ARM platform, specifically for the Raspberry Pi. Since then, they’ve done sterling work to add support for ARM. We’re happy to announce that we’ve now added Julia to the Raspbian repository, and that all Raspberry Pi models are supported!

Not only did the Julia team port the language itself to the Pi, but they also added support for GPIO, the Sense HAT and Minecraft. What I find really interesting is that when they came to visit and show us a demo, they took a completely different approach to the Sense HAT than I’d seen before: Simon, one of the Julia developers, started by loading the Julia logo into a matrix within the Jupyter notebook and then displayed it on the Sense HAT LED matrix. He then did some matrix transformations and the Sense HAT showed the effect of these manipulations.

Viral says:

The combination of Julia’s performance and Pi’s hardware unlocks new possibilities. Julia on the Pi will attract new communities and drive applications in universities, research labs and compute modules. Instead of shipping the data elsewhere for advanced analytics, it can simply be processed on the Pi itself in Julia.

Our port to ARM took a while, since we started at a time when LLVM on ARM was not fully mature. We had a bunch of people contributing to it – chipping away for a long time. Yichao did a bunch of the hard work, since he was using it for his experiments. The folks at the Berkeley Race car project also put Julia and JUMP on their self-driving cars, giving a pretty compelling application. We think we will see many more applications.

I organised an Intro to Julia session for the Cambridge Python user group earlier this week, and rather than everyone having to install Julia, Jupyter and all the additional modules on their own laptops, we just set up a room full of Raspberry Pis and prepared an SD card image. This was much easier and also meant we could use the Sense HAT to display output.

Intro to Julia language session at Raspberry Pi Foundation
Getting started with Julia language on Raspbian
Julia language logo on the Sense HAT LED array

Simon kindly led the session, and before long we were using Julia to generate the Mandelbrot fractal and display it on the Sense HAT:

Ben Nuttall on Twitter

@richwareham’s Sense HAT Mandelbrot fractal with @JuliaLanguage at @campython https://t.co/8FK7Vrpwwf

Naturally, one of the attendees, Rich Wareham, progressed to the Julia set – find his code here: gist.github.com/bennuttall/…

Last year at JuliaCon, there were two talks about Julia on the Pi. You can watch them on YouTube:

Install Julia on your Raspberry Pi with:

sudo apt update
sudo apt install julia

You can install the Jupyter notebook for Julia with:

sudo apt install julia libzmq3-dev python3-zmq
sudo pip3 install jupyter
julia -e 'Pkg.add("IJulia");'

And you can easily install extra packages from the Julia console:

Pkg.add("SenseHat")

The Julia team have also created a resources website for getting started with Julia on the Pi: juliaberry.github.io

Julia team visiting Pi Towers

There never was a story of more joy / Than this of Julia and her Raspberry Pi

Many thanks to Viral Shah, Yichao Yu, Tim Besard, Valentin Churavy, Jameson Nash, Tony Kelman, Avik Sengupta and Simon Byrne for their work on the port. We’re all really excited to see what people do with Julia on Raspberry Pi, and we look forward to welcoming Julia programmers to the Raspberry Pi community.

The post Julia language for Raspberry Pi appeared first on Raspberry Pi.

A day with AIY Voice Projects Kit – The MagPi 57 aftermath

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/aiy-voice-projects-kit-magpi-57-aftermath/

Hi folks, Rob here. It’s been a crazy day or so here over at The MagPi and Raspberry Pi as we try to answer all your questions and look at all the cool stuff you’re doing with the new AIY Voice Projects Kit that we bundled with issue 57. While it has been busy, it’s also been a lot of fun.

Got a question?

We know lots of you have got your hands on issue 57, but a lot more of you will have questions to ask. Here’s a quick FAQ before we go over the fun stuff you’ve been doing:

Which stores stock The MagPi in [insert country]?

The original edition of The MagPi is only currently stocked in bricks-and-mortar stores in the UK, Ireland, and the US:

  • In the UK, you can find copies at WHSmith, Asda, Tesco, and Sainsbury’s
  • In the US, you can find them at Barnes and Noble and at Micro Center
  • In Ireland, we’re in Tesco and Easons

Unfortunately, this means you will find very little (if any) stock of issue 57 in stores in other countries. Even Canada (we’ve been asked this a lot!)…

The map below shows the locations to which stock has been shipped (please note, though, that this doesn’t indicate live stock):

My Barnes and Noble still only has issue 55!

Issue 57 should have been in Barnes & Noble stores yesterday, but stock sometimes takes a few days to spread and get onto shelves. Keep trying over the next few days. We’re skipping issue 56 in the US so you can get 57 at the same time (you’ll be getting the issues at the same time from now on).

If I start a new subscription, will I get issue 57?

Yes. We have limited copies for new subscribers. It’s available on all new print subscriptions. You need to specify that you want issue 57 when you subscribe.

Will you be restocking online?

We’re looking into it. If we manage to, keep an eye on our social media channels and the blog for more details.

Is there any way to get the AIY Voice Projects Kit on its own?

Not yet, but you can sign up to Google’s mailing list to be notified when they become available.

Rob asked us to do no evil with our Raspberry Pi: how legally binding is that?

Highest galactic law. Here is a picture of me pointing at you to remind you of this.

Image of Rob with the free AIY kit

Please do not do evil with your Raspberry Pi

OK, with that out of the way, here’s the cool stuff!

AIY Voice Projects Kit builds

A lot of you built the kit very quickly, including Raspberry Pi Certified Educator Lorraine Underwood, who managed it before lunch.

Lorraine Underwood on Twitter

Ha, cool. I made it! Top notch instructions and pics @TheMagP1 Not going to finish the whole thing before youngest is out of nursery. Gah!!

We love Andy Grimley’s shot as the HAT seems to be floating. We had no idea it could levitate!

Andy Grimley on Twitter

This is awesome @TheMagP1 #AIYProjects

A few people reached out to tell us they were building it with children for their weekend project. These messages really are one of the best parts of our job.

Screenshot of Facebook comment on AIY kit

Screenshot of tweet about AIY kit

Screenshot of tweet about AIY kit

What have people been making with it? Domhnall O’Hanlon made the basic assistant setup, and photographed it in the stunning surroundings of the National Botanic Gardens of Ireland:

Domhnall O Hanlon on Twitter

Took my @Raspberry_Pi #AIYProjects on a field trip to the National Botanic Gardens. Thanks @TheMagP1! #edchatie #edtech https://t.co/f5dR9JBDEx

Friend of The MagPi David Pride has a cool idea:

David Pride on Twitter

@Raspberry_Pi @TheMagP1 Can feel a weekend mashup happening with the new #AIYProjects kit & my latest car boot find (the bird, not the cat!)

Check out Bastiaan Slee’s hack of an old IoT device:

Bastiaan Slee on Twitter

@TheMagP1 I’ve given my Nabaztag a second life with #AIYProjects https://t.co/udtWaAMz2x

Bastiaan Slee on Twitter

Hacking time with the Nabaztag and #AIYProjects ! https://t.co/udtWaAMz2x

Finally, Sandy Macdonald is doing a giveaway of the issue. Go and enter: a simple retweet could win you a great prize!

Sandy Macdonald on Twitter

I’m giving away this copy of @TheMagP1 with the @Raspberry_Pi #AIYProjects free, inc. p&p worldwide. RT to enter. Closes 9am BST tomorrow.

If you have got your hands on the AIY Voice Projects Kit, do show us what you’ve made with it! Remember to use the #AIYProjects hashtag on Twitter to show off your project as well.

There’s also a dedicated forum for discussing the AIY Voice Projects Kit which you can find on the main Raspberry Pi forum. Check it out if you have something to share or if you’re having any problems.

Yesterday I promised a double-dose of Picard gifs. So, what’s twice as good as a Picard gif? A Sisko gif, of course! See you next time…


The post A day with AIY Voice Projects Kit – The MagPi 57 aftermath appeared first on Raspberry Pi.

#CharityTuesday: What do kids say about Code Club?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/code-club-kids/

We’ve recently released a series of new Code Club videos on our YouTube channel. These range from advice on setting up your own Code Club to testimonials from kids and volunteers. To offer a little more information on the themes of each video, we’ll be releasing #CharityTuesday blog posts for each, starting with the reason for it all: the kids.

What do kids say about Code Club?

The team visited Liverpool Central Library to find out what the children at their Code Club think about the club and its activities, and what they’re taking away from attending.

“It makes me all excited inside…”

Code Clubs are weekly after-school coding clubs for 9- to 11-year-olds. Children learn to create games, animations and websites using our specially created resources, with the support of awesome volunteers. We visited Liverpool Central Library to find out what the children at their Code Club think about their coding club.

We love to hear the wonderful stories of exploration and growth from the kids that attend Code Clubs. The changes our volunteers see in many of their club members are both heart-warming and extraordinary.

Code Club kids two girls at Code Club

“It makes me all excited inside.”

“There’s a lad who comes to my club who was really not confident about coding. He said he was rubbish at maths and such when he first started,” explains Dan Powell, Code Club Regional Coordinator for the South East. “After he’d done a couple of terms he told us, ‘Tuesday is my favourite day now because I get to come to Code Club: it makes my brain feel sparkly.’ He’s now writing his own adventure game in Scratch!”

Code Club kids

“I think the best part of it is being able to interact with other people and to share ideas on projects.”

“I love the sentiments at the end of this advert a couple of my newbies made in Scratch,” continues Lorna Gibson, Regional Coordinator for Scotland. “The bit where they say that nobody is left behind and everyone has fun made me teary.”

Here is their wonderful Scratch advert:

Get involved in Code Club!

Code Club is a nationwide network of volunteer-led after-school coding clubs for children. It offers a great place for children of all abilities to learn and build upon their skills amongst like-minded peers.

There are currently over 10,000 active Code Clubs across the world and official Code Club communities in ten countries. If you want to find out more, visit the Code Club UK website, or Code Club International if you are outside of the UK.

The post #CharityTuesday: What do kids say about Code Club? appeared first on Raspberry Pi.

Tor exit node operator arrested in Russia (TorServers.net blog)

Post Syndicated from ris original https://lwn.net/Articles/720231/rss

On April 12 Dmitry Bogatov, a mathematician and Debian maintainer, was arrested in Russia for “incitation to terrorism” because of some messages that went through his Tor exit node. “Though, the very nature of Bogatov case is a controversial one, as it mixes technical and legal arguments, and makes necessary both strong legal and technical expertise involved. Indeed, as a Tor exit node operator, Dmitry does not have control and responsibility on the content and traffic that passes through his node: it would be the same as accusing someone who has a knife stolen from her house for the murder committed with this knife by a stranger.” The Debian Project made a brief statement.

Sense HAT Emulator Upgrade

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/sense-hat-emulator-upgrade/

Last year, we partnered with Trinket to develop a web-based emulator for the Sense HAT, the multipurpose add-on board for the Raspberry Pi. Today, we are proud to announce an exciting new upgrade to the emulator. We hope this will make it even easier for you to design amazing experiments with the Sense HAT!

What’s new?

The original release of the emulator didn’t fully support all of the Sense HAT features. Specifically, the movement sensors were not emulated. Thanks to funding from the UK Space Agency, we are delighted to announce that a new round of development has just been completed. From today, the movement sensors are fully supported. The emulator also comes with a shiny new 3D interface, Astro Pi skin mode, and Pygame event handling. Click the ▶︎ button below to see what’s new!

Upgraded sensors

On a physical Sense HAT, real sensors react to changes in environmental conditions like fluctuations in temperature or humidity. The emulator has sliders which are designed to simulate this. However, emulating the movement sensor is a bit more complicated. The upgrade introduces a 3D slider, which is essentially a model of the Sense HAT that you can move with your mouse. Moving the model affects the readings provided by the accelerometer, gyroscope, and magnetometer sensors.

Code written in this emulator is directly portable to a physical Raspberry Pi and Sense HAT without modification. This means you can now develop and test programs using the movement sensors from any internet-connected computer, anywhere in the world.
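For instance, a snippet like the one below — written against the standard sense_hat Python API, which the emulator mimics — reads the new movement sensors and behaves the same in the browser as on real hardware:

from sense_hat import SenseHat

sense = SenseHat()

o = sense.get_orientation()        # {'pitch': ..., 'roll': ..., 'yaw': ...} in degrees
a = sense.get_accelerometer_raw()  # {'x': ..., 'y': ..., 'z': ...} in Gs

# Tilt the 3D model in the emulator and the readings change accordingly.
sense.show_message('P%.0f R%.0f' % (o['pitch'], o['roll']))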

Astro Pi mode

Astro Pi is our series of competitions offering students the chance to have their code run in space! The code is run on two space-hardened Raspberry Pi units, with attached Sense HATs, on the International Space Station.

Image of Astro Pi unit Sense HAT emulator upgrade

Astro Pi skin mode

There are a number of practical things that can catch you out when you are porting your Sense HAT code to an Astro Pi unit, though, such as the orientation of the screen and joystick. Just as having a 3D-printed Astro Pi case enables you to discover and overcome these, so does the Astro Pi skin mode in this emulator. In the bottom right-hand panel, there is an Astro Pi button which enables the mode: click it again to go back to the Sense HAT.

The joystick and push buttons are operated by pressing your keyboard keys: use the cursor keys and Enter for the joystick, and U, D, L, R, A, and B for the buttons.

Sense Hat resources for Code Clubs

Image of gallery of Code Club Sense HAT projects Sense HAT emulator upgrade

Click the image to visit the Code Club projects page

We also have a new range of Code Club resources which are based on the emulator. Of these, three use the environmental sensors and two use the movement sensors. The resources are an ideal way for any Code Club to get into physical computing.

The technology

The 3D models in the emulator are represented entirely with HTML and CSS. “This project pushed the Trinket team, and the 3D web, to its limit,” says Elliott Hauser, CEO of Trinket. “Our first step was to test whether pure 3D HTML/CSS was feasible, using Julian Garnier’s Tridiv.”

Sense HAT 3D image mockup Sense HAT emulator upgrade

The Trinket team’s preliminary 3D model of the Sense HAT

“We added JavaScript rotation logic and the proof of concept worked!” Elliott continues. “Countless iterations, SVG textures, and pixel-pushing tweaks later, the finished emulator is far more than the sum of its parts.”

Sense HAT emulator 3d image final version Sense HAT emulator upgrade

The finished Sense HAT model: doesn’t it look amazing?

Check out this blog post from Trinket for more on the technology and mathematics behind the models.

One of the compromises we’ve had to make is browser support. Unfortunately, browsers like Firefox and Microsoft Edge don’t fully support this technology yet. Instead, we recommend that you use Chrome, Safari, or Opera to access the emulator.

Where do I start?

If you’re new to the Sense HAT, you can simply copy and paste many of the code examples from our educational resources, like this one. Alternatively, you can check out our Sense HAT Essentials e-book. For a complete list of all the functions you can use, have a look at the Sense HAT API reference here.

The post Sense HAT Emulator Upgrade appeared first on Raspberry Pi.

Commenting Policy for This Blog

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/commenting_poli.html

Over the past few months, I have been watching my blog comments decline in civility. I blame it in part on the contentious US election and its aftermath. It’s also a consequence of not requiring visitors to register in order to post comments, and of our tolerance for impassioned conversation. Whatever the causes, I’m tired of it. Partisan nastiness is driving away visitors who might otherwise have valuable insights to offer.

I have been engaging in more active comment moderation. What that means is that I have been quicker to delete posts that are rude, insulting, or off-topic. This is my blog. I consider the comments section as analogous to a gathering at my home. It’s not a town square. Everyone is expected to be polite and respectful, and if you’re an unpleasant guest, I’m going to ask you to leave. Your freedom of speech does not compel me to publish your words.

I like people who disagree with me. I like debate. I even like arguments. But I expect everyone to behave as if they’ve been invited into my home.

I realize that I sometimes express opinions on political matters; I find they are relevant to security at all levels. On those posts, I welcome on-topic comments regarding those opinions. I don’t welcome people pissing and moaning about the fact that I’ve expressed my opinion on something other than security technology. As I said, it’s my blog.

So, please… Assume good faith. Be polite. Minimize profanity. Argue facts, not personalities. Stay on topic. If you want a model to emulate, look at Clive Robinson’s posts.

Schneier on Security is not a professional operation. There’s no advertising, so no revenue to hire staff. My part-time moderator — paid out of my own pocket — and I do what we can when we can. If you see a comment that’s spam, or off-topic, or an ad hominem attack, flag it and be patient. Don’t reply or engage; we’ll get to it. And we won’t always post an explanation when we delete something.

My own stance on privacy and anonymity means that I’m not going to require commenters to register a name or e-mail address, so that isn’t an option. And I really don’t want to disable comments.

I dislike having to deal with this problem. I’ve been proud and happy to see how interesting and useful the comments section has been all these years. I’ve watched many blogs and discussion groups descend into toxicity as a result of trolls and drive-by ideologues derailing the conversations of regular posters. I’m not going to let that happen here.