
Mira, tiny robot of joyful delight

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mira-robot-alonso-martinez/

The staff of Pi Towers are currently melting into puddles while making ‘Aaaawwwwwww’ noises as Mira, the adorable little Pi-controlled robot made by Pixar 3D artist Alonso Martinez, steals their hearts.

Mira the robot playing peek-a-boo

If you want to get updates on Mira’s progress, sign up for the mailing list! http://eepurl.com/bteigD Mira is a desk companion that makes your life better one smile at a time. This project explores human robot interactivity and emotional intelligence. Currently Mira uses face tracking to interact with the users and loves playing the game “peek-a-boo”.

Introducing Mira

Honestly, I can’t type words – I am but a puddle! If I could type at all, I would only produce a stream of affectionate fragments. Imagine walking into a room full of kittens. What you would sound like is what I’d type.

No! I can do this. I’m a professional. I write for a living! I can…

SHE BLINKS OHMYAAAARGH!!!

Mira Alonso Martinez Raspberry Pi

Weebl & Bob meets South Park’s Ike Broflovski in an adorable 3D-printed bundle of ‘Aaawwwww’

Introducing Mira (I promise I can do this)

Right. I’ve had a nap and a drink. I’ve composed myself. I am up for this challenge. As long as I don’t look directly at her, I’ll be fine!

Here I go.

As one of the many über-talented 3D artists at Pixar, Alonso Martinez knows a thing or two about bringing adorable-looking characters to life on screen. However, his work left him wondering:

In movies you see really amazing things happening but you actually can’t interact with them – what would it be like if you could interact with characters?

So with the help of his friends Aaron Nathan and Vijay Sundaram, Alonso set out to bring the concept of animation to the physical world by building a “character” that reacts to her environment. His experiments with robotics started with Gertie, a ball-like robot reminiscent of his time spent animating bouncing balls when he was learning his trade. From there, he moved on to Mira.

Mira Alonso Martinez

Many, many of the views of this Tested YouTube video have come from me. So many.

Mira swivels to follow a person’s face, plays games such as peekaboo, shows surprise when you finger-shoot her, and giggles when you give her a kiss.

Mira’s inner workings

To get Mira to turn her head in three dimensions, Alonso took inspiration from the Microsoft Sidewinder Pro joystick he had as a kid. He purchased one on eBay, took it apart to understand how it works, and replicated its mechanism for Mira’s Raspberry Pi-powered innards.

Mira Alonso Martinez

Alonso used the smallest components he could find so that they would fit inside Mira’s tiny body.

Mira’s 3D-printed parts move on their axes via tiny Power HD DSM44 servos, while a camera and OpenCV handle face tracking, and a single NeoPixel provides a range of colours to indicate her emotions. As for the blinking eyes? Two OLED screens topped with acrylic domes fit within the few millimetres left between all the other moving parts.
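Alonso hasn’t published Mira’s code, but the face-tracking side of a build like this is only a few lines of OpenCV. Here’s a minimal sketch (not Alonso’s actual code) that finds the largest face in each camera frame and works out how far it sits from the centre of the image – the sort of offset a servo controller could use to turn the head:

import cv2

# Assumes a Haar cascade file is available locally and the camera appears as /dev/video0.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        # Track the largest detected face and report its offset from the frame centre.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        dx = (x + w / 2) - frame.shape[1] / 2
        dy = (y + h / 2) - frame.shape[0] / 2
        print(f"face offset: dx={dx:.0f}px, dy={dy:.0f}px")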

More on Mira, including her history and how she works, can be found in this wonderful video released by Tested this week.

Pixar Artist’s 3D-Printed Animated Robots!

We’re gushing with grins and delight at the sight of these adorable animated robots created by artist Alonso Martinez. Sean chats with Alonso to learn how he designed and engineered his family of robots, using processes like 3D printing, mold-making, and silicone casting. They’re amazing!

You can also sign up for Alonso’s newsletter here to stay up-to-date about this little robot. Hopefully one of these newsletters will explain how to buy or build your own Mira, as I for one am desperate to see her adorable little face on my desk every day for the rest of my life.

The post Mira, tiny robot of joyful delight appeared first on Raspberry Pi.

Estefannie’s GPS-Controlled GoPro Photo Taker

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/estefannie-gopro-selfie/

Are you tired of having to take selfies physically? Do you only use your GoPro for the occasional beach vacation? Are you maybe even wondering what to do with the load of velcro you bought on a whim? Then we have good news for you: Estefannie‘s back to help you out with her Personal Automated GPS-Controlled Portable Photo Taker…PAGCPPT for short…or pagsssspt, if you like.

RASPBERRY PI + GPS CONTROLLED PHOTO TAKER

Hey World! Do you like vacation pictures but don’t like taking them? Make your own Personal Automated GPS Controlled Portable Photo Taker! The code, components, and instructions are in my Hackster.io account: https://www.hackster.io/estefanniegg/automated-gps-controlled-photo-taker-3fc84c For this build, I decided to put together a backpack to take pictures of me when I am close to places that I like.

The Personal Automated GPS-Controlled Portable Photo Taker

Try saying that five times in a row.

Go on. I’ll wait.

Using a Raspberry Pi 3, a GPS module, a power pack, and a GoPro plus GoPro Stick, Estefannie created the PAGCPPT as a means of automatically taking selfies at pre-specified tourist attractions across London.

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

There’s pie in my backpack too…but it’s a bit messy

With velcro and hot glue, she secured the tech in place on (and inside) a backpack. Then it was simply a case of programming her setup to take pictures while she walked around the city.

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

Making the GoPro…go

Estefannie made use of a GoPro API library to connect her GoPro to the Raspberry Pi via WiFi. With the help of this library, she wrote a Python script that made the GoPro take a photograph whenever her GPS module placed her within a ten-metre radius of a pre-selected landmark such as Tower Bridge, Abbey Road, or Platform 9 3/4.
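Estefannie’s full script is linked below, but the core trigger logic is easy to sketch: compute the great-circle distance between the current GPS fix and each landmark, and fire the shutter when it drops below ten metres. In this sketch the GPS read and the GoPro call are stubbed out as placeholders (they are not her actual function names), so the haversine maths is the only real work:

import math
import time

LANDMARKS = {  # illustrative coordinates only
    "Tower Bridge": (51.5055, -0.0754),
    "Abbey Road": (51.5320, -0.1779),
}
TRIGGER_RADIUS_M = 10

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in metres.
    r = 6371000
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def read_gps_fix():
    # Placeholder: the real build parses NMEA sentences from the GPS module.
    return 51.5055, -0.0754

def take_gopro_photo():
    # Placeholder: the real build calls the GoPro API library over WiFi.
    print("click!")

while True:
    lat, lon = read_gps_fix()
    for name, (plat, plon) in LANDMARKS.items():
        if haversine_m(lat, lon, plat, plon) <= TRIGGER_RADIUS_M:
            take_gopro_photo()
            print(f"Photo taken near {name}")
    time.sleep(5)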

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

“Accio selfie.”

The full script, as well as details regarding the components she used for the project, can be found on her hackster.io page here.

Estefannie Explains it All

You’ll have noticed that we’ve covered Estefannie once or twice before on the Raspberry Pi blog. We love project videos that convey a sense of ‘Oh hey, I can totally build one of those!’, and hers always tick that box. They are imaginative, interesting, quirky, and to be totally honest with you, I’ve been waiting for this particular video since she hinted at it on her visit to Pi Towers in May. I got the inside scoop, yo!

What’s better than taking pictures? Not taking pictures. But STILL having pictures. I made a personal automated GPS controlled Portable Photo Taker ⚡ NEW VIDEO ALERT⚡ Link in bio.

1,351 Likes, 70 Comments – Estefannie Explains It All (@estefanniegg) on Instagram: “What’s better than taking pictures? Not taking pictures. But STILL having pictures. I made a…”

Make sure to follow her on YouTube and Instagram for more maker content and random shenanigans. And if you have your own maker social media channel, YouTube account, blog, etc, this is your chance to share it for the world to see in the comments below!

The post Estefannie’s GPS-Controlled GoPro Photo Taker appeared first on Raspberry Pi.

A Podcast for Bulgarian Science Fiction

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2062

Today my humble blog hosts Valentin D. Ivanov. He is known around the world as an astronomer and the discoverer of the class of celestial bodies called “planemos” – the lone planets that have no star. One of the great astronomical discoveries of the past 100 years.

At home he is “world-unknown” – most Bulgarians think that “astronomy” is a mistake and that the correct word is “astrology”. Fortunately, in science fiction fan circles Valyo is very well known as a writer, translator, populariser, advocate for fandom, and much more besides. And perhaps most of all, a fan with soul, one who doesn’t just make noise but does useful things.

You can read about one of those things below, shared on the kind advice of Alexander Karapanchev.

—-

Welcome to the specialised podcast “Bulgarian Science Fiction” –

You could just as well call it an audio magazine. Our goal is to produce audio versions of Bulgarian science fiction works.

A little over a year ago, as part of my duties at the European Southern Observatory, I had to make short educational videos about our tools for processing observational data (https://www.youtube.com/channel/UCCq4rxr30ydNyV94OWmLrMA). Audio science fiction is also close to my heart, since I quite often fill the time while travelling by listening to science fiction podcasts. There are many in English (http://escapepod.org/; www.starshipsofa.com/) and in Russian (https://fantlab.ru/work203487). Quite a few magazines now also publish audio versions of their stories on their sites (http://www.newyorker.com/series/fiction-podcast, http://www.lightspeedmagazine.com/; http://strangehorizons.com/podcasts/).

It wasn’t a big leap to try the same with Bulgarian science fiction, my own in particular, and on 7 June 2016 this appeared – https://www.youtube.com/watch?v=7Rfpa3NvR34.

Obviously I am not a professional actor, and the result was exactly as bad as I expected. For a while I set the venture aside, but a few months ago I had to return to the video tutorials, and I gathered the courage to try again. The Ivan Vazov story could have come out better, but I like the last two, immodest as that may be. Health permitting, I hope the next ones will turn out better still.

I will try to prepare a new story once a month, or at most once every two months. I hasten to say that I cannot guarantee this schedule; it will depend on circumstances. The publication is the work of a group of people, including the society of Bulgarian science fiction writers “Tera Fantasia” and the “Human Library” Foundation. I will introduce each of them in due course.

I invite anyone who wishes to send me stories and poems of up to 2,500 words at [email protected]

We intend to alternate fiction with non-fiction, but for the latter please contact us first to check whether it would interest us. The same goes for illustrations – every story needs one. We don’t know in advance what we will publish, but sufficiently generic science fiction scenes are suitable. I expect that over time we will build up a reserve of illustrations to use in the future.

There is no set theme. There are no requirements either, beyond the usual ones – stories must not incite hatred and must not include gratuitous violence or sexual descriptions. I will make audio versions of whatever my colleagues and I like. It is all subjective; don’t be hurt if we don’t pick your story, or if we can’t prepare an audio version of it for some other reason. Think of how books were distributed in the Middle Ages – someone had to like your text so much that they would spend several months making a copy by hand, or pay a specialist calligrapher to produce one.

Let me stress that we hold no monopoly. You can always make your own audio version of your own work. Our authors receive no fees, but they also pay nothing for publication. Before you reproach us for anything, please remember that we give our own time to this venture voluntarily and without pay.

Besides authors, I also invite anyone who would like to read stories aloud to get in touch. I suspect we will need such people far more than we need authors.

At the beginning I said “the first audio magazine”, but there are some predecessors that deserve a mention. For example, Bogdan Dukov (https://www.youtube.com/channel/UCzD5Irz7MHwGA0_yiANA5wQ) has for quite a while been publishing wonderful audio versions of Bulgarian classics, including Svetoslav Minkov (https://www.youtube.com/watch?v=jK3jQ7TRQGQ). One of the podcasts of “Pravilniyat Med” (https://www.youtube.com/channel/UCuP9AG8V1M_LbNgL-Ku3uZw), from 2014, is a conversation about science fiction (https://www.youtube.com/watch?v=SpMKNQo1Ias). And, of course, Yancho Cholakov, who in 2012 read an excerpt from his book “The Story of the Lonely Private” (https://www.youtube.com/watch?v=yGt0ToQM_Sw). There may be others – if I learn of them, I will gladly add them.

Wish us luck!

Amazon Rekognition Update – Celebrity Recognition

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rekognition-update-celebrity-recognition/

We launched Amazon Rekognition at re:Invent (Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning) and added Image Moderation earlier this year.

Today we are adding celebrity recognition!

Rekognition has been trained to identify hundreds of thousands of people who are famous, noteworthy, or prominent in fields that include politics, sports, entertainment, business, and media. The list is global, and is updated frequently.

To access this feature, simply call the new RecognizeCelebrities function. In addition to the bounding box and facial landmark features returned by the existing DetectFaces function, the new function returns information about any celebrities that it recognizes:

{
  "Id": "3Ir0du6",
  "MatchConfidence": 97,
  "Name": "Jeff Bezos",
  "Urls": [ "www.imdb.com/name/nm1757263" ]
}

The Urls provide additional information about the celebrity. The API currently returns links to IMDb content; we may add other sources in the future.

You can use the Celebrity Recognition Demo in the AWS Management Console to experiment with this feature:

If you have an image archive you can now index it by celebrity. You could also use a combination of celebrity recognition and object detection to build all kinds of search tools. If your images are already stored in S3, you can process them in-place.
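As a rough illustration, indexing an S3-hosted image with the Python SDK might look like this (a sketch only – the bucket and key are made up, but the response fields mirror the snippet above):

import boto3

rekognition = boto3.client("rekognition")

# Ask Rekognition to recognise celebrities in an image that already lives in S3.
response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "my-photo-archive", "Name": "photos/awards-night.jpg"}}
)

for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], celeb["MatchConfidence"], celeb.get("Urls", []))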

I’m sure that you will come up with all sorts of interesting uses for this new feature. Leave me a comment and let me know what you build!

Jeff;

 

Dish Network Sues ‘ZemTV’ and ‘TV Addons’ For Copyright Infringement

Post Syndicated from Ernesto original https://torrentfreak.com/dish-network-sues-zemtv-and-tv-addons-for-copyright-infringement-170605/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, third-party add-ons can turn it into the ultimate pirate machine, providing access to movies, TV-shows and IPTV channels.

These add-ons are direct competition for traditional broadcast providers, such as Dish Network in the United States, which filed a lawsuit in a Texas federal court late last week.

The complaint lists the add-on ZemTV as the prime target. The service in question allows users to watch a variety of Dish channels, without permission.

“The ZemTV service is retransmitting these channels over the Internet to end-users that download the ZemTV add-on for the Kodi media player, which is available for download at the websites www.tvaddons.ag and www.tvaddons.org,” Dish’s lawyers write.

The TVAddons platform, which hosts hundreds of unofficial Kodi add-ons including ZemTV, is also listed as a defendant. According to Dish, TVAddons plays an important role in the distribution of the infringing add-on.

The ZemTV operator, who is only known as “Shani” and “Shani_08,” used the TVAddons platform to share and promote its service while asking for donations, the complaint alleges.

“Website Operators have actual or constructive knowledge of this infringing activity and materially contribute to that activity by providing the forum where the ZemTV add-on can be downloaded and soliciting and accepting donations from ZemTV users,” Dish writes.

“But for the availability of the ZemTV add-on at www.tvaddons.ag or www.tvaddons.org, most if not all of Developer’s distribution and/or public performance would not occur,” the complaint adds.

Dish claims that it sent numerous takedown requests to Internet service providers associated with the ZemTV service, but the developer has continued to offer the add-on, circumventing any countermeasures.

With the lawsuit, the broadcast provider holds ZemTV accountable for direct copyright infringement, demanding $150,000 per infringement in damages. TVAddons is accused of contributory and vicarious copyright infringement and also faces statutory damages.

TorrentFreak spoke to a representative from TVAddons, who wasn’t aware of the lawsuit. Dish has not contacted them directly with any takedown requests, he says.

“This is the first we’ve heard of this lawsuit. No one ever sent us any type of takedown or DMCA notice or even tried to contact us prior, they could have easily done so through our contact page or site emails,” TVAddons informs us.

TVAddons says that the ZemTV add-on was already removed prior to the lawsuit due to a technical issue, and it won’t return.

“The Zem addon was actually removed from our addon library and community tools weeks ago due to a completely unrelated technical issue. I have already spoken to the developer, and he has since deleted the Zem addon entirely,” the TVAddons representative says.

Also, shortly after we started to inquire about the lawsuit, the ZemTV add-on appears to have shut down completely. According to Kodi Tips, developer “Shani” said it became too popular to maintain, but the legal threat likely played a role as well.

The lawsuit against ZemTV and TVAddons is the first of its kind in the United States. As such, it will be closely watched by other rightsholders, add-on developers, and platforms similar to TVAddons that distribute software.

The full complaint Dish Network filed is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething

12 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…”

A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch and build a payload with an onboard Raspberry Pi and Camera Module, giving teachers the skills they need to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi

332 Likes, 5 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. . We have our landing point and are now rushing to…”

I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude

13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…”

You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?

597 Likes, 3 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard…”

And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.

13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “There’s no need to smile when you’re #DoctorWho.”

We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube

1,944 Likes, 30 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore…”

  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi

19 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the…”

The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

How to track that annoying pop-up

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/how-to-track-that-annoying-pop-up.html

In a recent update to their Office suite on Windows, Microsoft made a mistake whereby every hour, for a fraction of a second, a black window pops up on the screen. This leads many to fear their system has been infected by a virus. I thought I’d document how to track this down.

The short answer is to use Mark Russinovich’s “sysinternals.com” tools. He’s the Windows internals guru at Microsoft and has been maintaining a suite of tools that are critical for Windows system maintenance and security. Copy all the tools from “https://live.sysinternals.com”. You can also copy them using Microsoft Windows Networking (SMB).

Of these tools, what we want is something that looks at “processes”. There are several tools that do this, but most focus on processes that are currently running. What we want is something that monitors process creation.

The tool for that is “sysmon.exe”. It can monitor not only process creation, but also a large number of other system events that a techy can use to see what the system has been doing, and whether you are infected with a virus.

Sysmon has a fairly complicated configuration file, and if you enabled everything, you’d soon be overwhelmed with events. @SwiftOnSecurity has published a configuration file they use in real-world environments that cuts down on the noise and focuses on the events that are really important. It enables monitoring of “process creation”, but filters out known good processes that might otherwise fill up your logs. You can grab the file here. Save it to the same directory where you saved Sysmon:

https://raw.githubusercontent.com/SwiftOnSecurity/sysmon-config/master/sysmonconfig-export.xml

Once you’ve done that, activate the Sysmon monitoring service with this configuration file by running the following command as Administrator (right-click the Command Prompt icon and select More/Run as Administrator):

sysmon.exe -accepteula -i sysmonconfig-export.xml

Now sit back and relax until that popup happens again. Right after it does, go into the “Event Viewer” application (click on the Windows menu and type “Event Viewer”, or run ‘eventvwr.exe’). Now you have to find where the Sysmon events are located, since so many things log events.

The Sysmon events are under the path:

Applications and Services Logs\Microsoft\Windows\Sysmon\operational
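Before opening it up in the GUI, note that you can also pull the most recent Sysmon events from a script. A quick sketch using the built-in wevtutil tool (assuming the standard Sysmon channel name above):

import subprocess

# Dump the five most recent Sysmon events as text, newest first.
out = subprocess.run(
    ["wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
     "/c:5", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)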

When you open that up, you should see the top event is the one we are looking for. Actually, the very top event is launching the process “eventvwr.exe”, but the next one down is our event. It looks like this:

Drilling down into the details, we find that the offending thing causing those annoying popups is “officebackgroundtask.exe” in Office.

We can see it’s started by the “Schedule” service. This means we can go look at it with “autoruns.exe”, another Sysinternals tool that looks at all the things configured to automatically start when you start/login to your computer.

They are pink, which [update] is how Autoruns shows they are “unsigned” programs (Microsoft’s programs should, normally, always be signed, so this should be suspicious). I’m assuming the suspicious thing is that they run in the user’s context, rather than the system context, creating popup screens.

Autoruns allows you to do a bunch of things. You can click on the [X] box and disable it from running in the future. You can [right-click] in order to upload to Virus Total and check if it’s a known virus.

You can also double-click, to open the Task Scheduler, and see the specific configuration. You can see here that this thing is scheduled to run every hour:

Conclusion

So the conclusions are these.
To solve this particular problem of identifying what’s causing a process to flash a screen occasionally, use Sysmon.
To solve general problems like this, use the Sysinternals suite of applications.
I haven’t been, but I am now, using @SwiftOnSecurity’s Sysmon configuration just to monitor the security of my computers. I should probably install something to move a copy of the logs off the system.


Crash Course Computer Science with Carrie Anne Philbin

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crash-course-carrie-anne-philbin/

Get your teeth into the history of computer science with our Director of Education, Carrie Anne Philbin, and the team at YouTube’s incredible Crash Course channel.

Crash Course Computer Science Preview

Starting February 22nd, Carrie Anne Philbin will be hosting Crash Course Computer Science! In this series, we’re going to trace the origins of our modern computers, take a closer look at the ideas that gave us our current hardware and software, discuss how and why our smart devices just keep getting smarter, and even look towards the future!

The brainchild of Hank and John Green (the latter of whom is responsible for books such as The Fault in Our Stars and all of my resultant heartbroken tears), Crash Course is an educational YouTube channel specialising in courses for school-age tuition support.

As part of the YouTube Original Channel Initiative, and with their partners PBS Digital Studios, the team has completed courses in subjects such as physics, hosted by Dr. Shini Somara, astronomy with Phil Plait, and sociology with Nicole Sweeney.

Raspberry Pi Carrie Anne Philbin Crash Course

Oh, and they’ve recently released a new series on computer science with Carrie Anne Philbin, whom you may know as Raspberry Pi’s Director of Education and the host of YouTube’s Geek Gurl Diaries.

Computer Science with Carrie Anne

Covering topics such as RAM, Boolean logic, CPU design, and binary, the course is currently up to episode twelve of its run. Episodes are released every Tuesday, and there are lots more to come.

Crash Course Carrie Anne Philbin Raspberry Pi

Following the fast-paced, visual style of the Crash Course brand, Carrie Anne takes her viewers on a journey from early computing with Lovelace and Babbage through to the modern-day electronics that power our favourite gadgets such as tablets, mobile phones, and small single-board microcomputers…

The response so far

A few members of the Raspberry Pi team recently attended VidCon Europe in Amsterdam to learn more about making video content for our community – and also so I could exist in the same space as the Holy Trinity, albeit briefly.

At VidCon, Carrie Anne took part in an engaging and successful Women in Science panel with Sally Le Page, Viviane Lalande, Hana Shoib, Maddie Moate, and fellow Crash Course presenter Dr. Shini Somara. I could see that Crash Course Computer Science was going down well from the number of people who approached Carrie Anne to thank her for the course, from those who were learning for the first time to people who were rediscovering the subject.

Crash Course Carrie Anne Philbin Raspberry Pi

Take part in the conversation

Join in the conversation! Head over to YouTube, watch Crash Course Computer Science, and join the discussion in the comments.

Crash Course Carrie Anne Philbin Raspberry Pi

You can also follow Crash Course on Twitter for release updates, and subscribe on YouTube to get notifications of new content.

Oh, and who can spot the sneaky Raspberry Pi in the video introduction?

“Cheers!”

Crash Course Computer Science Outtakes

In which Carrie Anne presents a new sing-a-long format and faces her greatest challenge yet – signing off an episode. Want to find Crash Course elsewhere on the internet? Facebook – http://www.facebook.com/YouTubeCrashCourse Twitter – http://www.twitter.com/TheCrashCourse Tumblr – http://thecrashcourse.tumblr.com Support Crash Course on Patreon: http://patreon.com/crashcourse CC Kids: http://www.youtube.com/crashcoursekids Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV We’ve got merch!

The post Crash Course Computer Science with Carrie Anne Philbin appeared first on Raspberry Pi.

Making sweet, sweet music with PiSound

Post Syndicated from Jonic original https://www.raspberrypi.org/blog/making-sweet-sweet-music-pisound/

I’d say I am a passable guitarist. Ever since I learnt about the existence of the Raspberry Pi in 2012, I’ve wondered how I could use one as a guitar effects unit. Unfortunately, I’m also quite lazy and have therefore done precisely nothing to make one. Now, though, I no longer have to beat myself up about this. Thanks to the PiSound board from Blokas, musicians can connect all manner of audio gear to their Raspberry Pi, bringing their projects to a whole new level. Essentially, it transforms your Pi into a complete audio workstation! What musician wouldn’t want a piece of that?

PiSound: a soundcard HAT for the Raspberry Pi

Raspberry Pi with PiSound attached

The PiSound in situ: do those dials go all the way to eleven?

PiSound is a HAT for the Raspberry Pi 3 which acts as a souped-up sound card. It allows you to send and receive audio signals through its jacks, and to exchange MIDI input/output signals with compatible devices. It features two 6mm in/out jacks, two standard DIN-5 MIDI in/out sockets, potentiometers for volume and gain, and ‘The Button’ (with emphatic capitals) for activating audio manipulation patches. Following an incredibly successful Indiegogo campaign, the PiSound team is preparing the board for sale later in the year.

Setting the board up was simple, thanks to the excellent documentation on the PiSound site. First, I mounted the board on my Raspberry Pi’s GPIO pins and secured it with the supplied screws. Next, I ran one script in a terminal window on a fresh installation of Raspbian, which downloaded, installed, and set up all the software I needed to get going. All I had to do after that was connect my instruments and get to work creating patches for Pure Data, a popular visual programming interface for manipulating media streams.

PiSound with instruments and computer

Image from Blokas

Get creative with PiSound!

During my testing, I created some simple fuzz, delay, and tremolo guitar effects. The possibilities, though, are as broad as your imagination. I’ve come up with some ideas to inspire you:

  • You could create a web interface for the guitar effects, accessible over a local network on a smartphone or tablet.
  • How about controlling an interactive light show or projected visualisation on stage using the audio characteristics of the guitar signal?
  • Channel your inner Matt Bellamy and rig up some MIDI hardware on your guitar to trigger loops and samples while you play.
  • Use a tilt switch to increase the intensity of an effect when the angle of the guitar’s neck is changed (imagine you’re really going for it during a solo).
  • You could even use the audio input stream as a base for generating other non-audio results.

pisound – Audio & MIDI Interface for your Raspberry Pi

Indiegogo Campaign: https://igg.me/at/pisound More Info: http://www.blokas.io Sounds by Sarukas: http://bit.ly/2myN8lf

Now I have had a taste of what this incredible little board can do, I’m very excited to see what new things it will enable me to do as a performer. It’s compact and practical, too: as the entire thing is about the size of a standard guitar pedal, I could embed it into one of my guitars if I wanted to. Alternatively, I could get creative and design a custom enclosure for it.

Using Sonic Pi with PiSound

Community favourite Sonic Pi will also support the board very soon, as Sam Aaron and Ben Smith ably demonstrated at our fifth birthday party celebrations. This means you don’t even need to be able to play an instrument to make something awesome with this clever little HAT.

The Future of @Sonic_Pi with Sam Aaron & Ben Smith at #PiParty

Uploaded by Alan O’Donohoe on 2017-03-05.

I’m incredibly impressed with the hardware and the support on the PiSound website. It’s going to be my go-to HAT for advanced audio projects, and, when it finally launches later this year, I’ll have all the motivation I need to create the guitar effects unit I’ve always wanted.

Find out more about PiSound over at the Blokas website, and take a deeper look at the tech specs and other information over at the PiSound documentation site.

Disclaimer: I am personally a backer of the Indiegogo campaign, and Blokas very kindly supplied a beta board for this review.

The post Making sweet, sweet music with PiSound appeared first on Raspberry Pi.

Open Sourcing Athenz: Fine-Grained, Role-Based Access Control

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/160481899076


By Lee Boynton, Henry Avetisyan, Ken Fox, Itsik Figenblat, Mujib Wahab, Gurpreet Kaur, Usha Parsa, and Preeti Somal

Today, we are pleased to offer Athenz, an open-source platform for fine-grained access control, to the community. Athenz is a role-based access control (RBAC) solution, providing trusted relationships between applications and services deployed within an organization requiring authorized access.

If you need to grant access to a set of resources that your applications or services manage, Athenz provides both a centralized and a decentralized authorization model to do so. Whether you are using container or VM technology independently or on bare metal, you may need a dynamic and scalable authorization solution. Athenz supports moving workloads from one node to another and gives new compute resources authorization to connect to other services within minutes, as opposed to relying on IP and network ACL solutions that take time to propagate within a large system. Moreover, in very high-scale situations, you may run out of the limited number of network ACL rules that your hardware can support.

Prior to creating Athenz, we had multiple ways of managing permissions and access control across all services within Yahoo. To simplify, we built a fine-grained, role-based authorization solution that would satisfy the feature and performance requirements our products demand. Athenz was built with open source in mind so as to share it with the community and further its development.

At Yahoo, Athenz authorizes the dynamic creation of compute instances and containerized workloads, secures builds and deployment of their artifacts to our Docker registry, and among other uses, manages the data access from our centralized key management system to an authorized application or service.

Athenz provides a REST-based set of APIs modeled in Resource Description Language (RDL) to manage all aspects of the authorization system, and includes Java and Go client libraries to quickly and easily integrate your application with Athenz. It allows product administrators to manage what roles are allowed or denied to their applications or services in a centralized management system through a self-serve UI.

Access Control Models

Athenz provides two authorization access control models based on your applications’ or services’ performance needs. More commonly used, the centralized access control model is ideal for provisioning and configuration needs. In instances where performance is absolutely critical for your applications or services, we provide a unique decentralized access control model that provides on-box enforcement of authorization.  

Athenz’s authorization system utilizes two types of tokens: principal tokens (N-Tokens) and role tokens (Z-Tokens). The principal token is an identity token that identifies either a user or a service. A service generates its principal token using that service’s private key. Role tokens authorize a given principal to assume some number of roles in a domain for a limited period of time. Like principal tokens, they are signed to prevent tampering. The name “Athenz” is derived from “Auth” and the ‘N’ and ‘Z’ tokens.

Centralized Access Control: The centralized access control model requires any Athenz-enabled application to contact the Athenz Management Service directly to determine if a specific authenticated principal (user and/or service) has been authorized to carry out the given action on the requested resource. At Yahoo, our internal continuous delivery solution uses this model. A service receives a simple Boolean answer whether or not the request should be processed or rejected. In this model, the Athenz Management Service is the only component that needs to be deployed and managed within your environment. Therefore, it is suitable for provisioning and configuration use cases where the number of requests processed by the server is small and the latency for authorization checks is not important.
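To make that concrete, here is a rough sketch of what such a check might look like from Python. The endpoint path, port, and header name below are illustrative assumptions based on the description above, not documented Athenz API details:

import requests

ZMS_BASE = "https://zms.example.com:4443/zms/v1"  # hypothetical ZMS endpoint

def is_authorized(principal_token, action, resource):
    # Ask the Athenz Management Service whether the principal may perform
    # `action` on `resource`; the service replies with a simple Boolean answer.
    resp = requests.get(
        f"{ZMS_BASE}/access/{action}/{resource}",
        headers={"Athenz-Principal-Auth": principal_token},  # assumed header name
        timeout=2,
    )
    resp.raise_for_status()
    return resp.json().get("granted", False)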

The diagram below shows a typical control plane-provisioning request handled by an Athenz-protected service.


Athenz Centralized Access Control Model

Decentralized Access Control: This approach is ideal where the application is required to handle a large number of requests per second and latency is a concern. It’s far more efficient to check authorization on the host itself and avoid the synchronous network call to a centralized Athenz Management Service. Athenz provides a way to do this with its decentralized service, using a local policy engine library on the local box. At Yahoo, this is the approach we use for our centralized key management system. The authorization policies, which define which roles have been authorized to carry out specific actions on resources, are asynchronously updated on application hosts and used by the Athenz local policy engine to evaluate the authorization check. In this model, a principal needs to contact the Athenz Token Service first to retrieve an authorization role token for the request and submit that token as part of its request to the Athenz-protected service. The same role token can then be re-used for its lifetime.

The diagram below shows a typical decentralized authorization request handled by an Athenz-protected service.


Athenz Decentralized Access Control Model

With the power of an RBAC system in which you can choose the model to deploy according to your performance and latency needs, and the flexibility to use either or both models across a complex environment of hosting platforms or products, you can run your business with agility and at scale.

Looking to the Future

We are actively engaged in pushing the scale and reliability boundaries of Athenz. As we enhance Athenz, we look forward to working with the community on the following features:

  • Using local CA signed TLS certificates
  • Extending Athenz with a generalized model for service providers to launch instances with bootstrapped Athenz service identity TLS certificates
  • Integration with public cloud services like AWS. For example, launching an EC2 instance with a configured Athenz service identity or obtaining AWS temporary credentials based on authorization policies defined in ZMS.

Our goal is to integrate Athenz with other open source projects that require authorization support and we welcome contributions from the community to make that happen. It is available under Apache License Version 2.0. To evaluate Athenz, we provide both AWS AMI and Docker images so that you can quickly have a test development environment up and running with ZMS (Athenz Management Service), ZTS (Athenz Token Service), and UI services. Please join us on the path to making application authorization easy. Visit http://www.athenz.io to get started!

Netflix Focuses Piracy Takedown Efforts on “Orange is The New Black” Leak

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-focuses-piracy-takedown-efforts-on-orange-is-the-new-black-leak-170505/

Last Friday, Netflix became the key victim in one of the biggest piracy leaks in history.

A hacking group or person calling itself TheDarkOverlord (TDO) released the premiere episode of the fifth season of Netflix’s Orange is The New Black, followed by nine more episodes a few hours later.

Netflix hasn’t said much about the issue in public, aside from the generic response. “We are aware of the situation. A production vendor used by several major TV studios had its security compromised and the appropriate law enforcement authorities are involved.”

However, it appears that behind the scenes something has changed.

While browsing through Google’s public repository of piracy takedown requests, hosted by Lumen, we noticed that the anti-piracy vendor “IP Arrow” suddenly started to submit requests on Netflix’s behalf.

The first request from IP Arrow came in on Saturday, the day after the leak, and there have been at least a dozen more since.

What’s unusual about these notices is that they only target the leaked “Orange is The New Black” episodes, no other content. This is also clearly reflected in a statement by the anti-piracy firm, which comes with the request.

“This is submitted for my client Netflix These links are facilitating piracy of my client’s work. The work can be seen by visiting their site www.netflix.com. The item this is relating to is Orange Is The New Black Season 5,” it reads.

Although Netflix might not believe that the leak is a disaster for its business, which is also reflected in several opinion pieces published in recent days, the IP-Arrow notices suggest the company is focusing part of its takedown efforts specifically on containing the fallout.

Netflix isn’t new to anti-piracy work. With help from Vobile Inc the company started sending takedown requests to Google roughly a year ago. Unlike IP-Arrow’s requests, Vobile targets a wide variety of content.

A few weeks ago we also reported that Netflix has its own “Global Copyright Protection Group” which is tasked with fighting online piracy. Given the recent leaks, we assume that the group has plenty of work to do now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hackers Leak Netflix’s Orange is The New Black, Season 5 Premiere

Post Syndicated from Andy original https://torrentfreak.com/hackers-leak-netflixs-orange-is-the-new-black-season-5-premiere-170429/

Much to the disappointment of studios everywhere, movies and TV shows leak onto the Internet every single week.

However, if what is unfolding today lives up to its billing, we could be looking at the start of one of the most significant piracy leaks of recent times.

Earlier this evening, the first episode of the brand new season of Netflix’s Orange is the New Black was uploaded to The Pirate Bay, months ahead of its official June release date.

So how did this unreleased content fall into the wrong hands?

As seen from the torrent details uploaded to Pirate Bay, the leak is the work of a hacking entity calling itself TheDarkOverlord (TDO). An extraction of the .torrent file’s metadata reveals a 1.1GB file named:

‘Episode1/ORANGEep5001_HDSR_CTM_ProResProxy_8.15.16-H264_SD_16x9.mov’.

In information sent to TF, the group says that sometime during the closing months of 2016, it gained access to the systems of Larson Studios, an ADR (additional dialogue recording) studio based in Hollywood. The following screenshot, reportedly from the leak, indeed suggests a copy that was in production and possibly unfinished in some way.

After obtaining its haul, TDO says it entered into “negotiations” with the video services company over the fate of the liberated content.

“After we had a copy of their data safely in our possession, we asked that we be paid a small fee in exchange for non-disclosure. We approached them on the Eve of their Christmas,” a member of the group previously told us over an encrypted channel.

So who are TDO? According to several security reports, TDO is a fairly prolific hacking group (their spokesman says they are more than one) that has claimed responsibility for a number of attacks in recent months.

One, which targeted construction company Pre-Con Products Ltd, involved the leak of contracts and a video which purported to show a fatal accident. Another, concerning polyurethane and epoxy product company GS Polymers, Inc, resulted in a leak of data after the company reportedly showed a “disinterest” in “working” with TDO. The group has also targeted medical organizations and leaked gigabytes of data obtained from Gorilla Glue.

As is clear from its actions, TDO takes its business seriously and when the group allegedly contacted Larson Studios before Christmas, they had extortion (their word) in mind. In a lengthy business-like ‘contract’ shared with TorrentFreak, TDO laid out its terms for cooperation with the California-based company.

“This agreement of accord, assurances, and satisfaction is between Larson Studios (the ‘Client’) and thedarkoverlord, a subsidiary of TheDarkOverlord Solutions, a subsidiary of World Wide Web, LLC [WWW, LLC] (the ‘Proposer’),” the wordy contract begins.

In section 2 of the contract, headed “Description of Services,” TheDarkOverLord offers to “refrain from communicating in any method, design, or otherwise to any individual, corporation, computer, or other entity any knowledge, information, or otherwise,” which appears to be an offer not to leak the content obtained.

Unsurprisingly, there were a number of conditions. The subsequent section 3 reveals that the “services” come at a price – 50 bitcoins – plus potential late payment fees, at TDO’s discretion.


TDO informs TF that Larson Studios agreed to pay the ransom and even sent back the contract.

“They printed, signed, and scanned the contract back to us,” the group says.

A copy seen by TF does have a signature, but TDO claims that Larson failed to follow through with the all-important bitcoin payment by the deadline of 31st December. That resulted in follow-up contact with the company.

“A late fee was levied and they still didn’t hold up their end of the agreement,” TDO says.

In an earlier discussion with TDO after the group reached out to us, we tried to establish what makes a group like this tick. Needless to say, they gave very little away. We got the impression from news reports that the group is mostly motivated by money, possibly power, but to remove doubt we asked the question.

“Are you familiar with the famous American bank robber, Willie Sutton?” a spokesperson replied.

“In an interview, he was once asked ‘Why do you rob banks?’ To which he replied, ‘Because that’s where the money is.’ It’s said that this exchange led to the creation of Sutton’s law, which states that when diagnosing, one should consider the obvious. We’ll leave you to interpret what we’re motivated by.”

Later, the group stated that its only motivation is its “greed for internet money.”

TorrentFreak understands that the leak of this single episode could represent just the start of an even bigger drop of pre-release TV series and movies. TDO claims to be sitting on a massive trove of unreleased video material, all of it high-quality.

“The quality is almost publish quality. One will find small audio errors and video errors like lack of color correction, but things are mostly complete with most of the material,” TDO says.

TheDarkOverlord did not explain what it hopes to achieve by leaking this video content now, months after it was obtained. However, when questioned the group told us that the information shared with us thus far represents just “the tip of the iceberg.”

In the past few minutes the group has taken to its Twitter account, posting messages directed at Netflix who are likely to be watching events unfold.

This is a breaking news story, updates will follow

Update: The group has published a statement on Pastebin.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Near Zero Downtime Migration from MySQL to DynamoDB

Post Syndicated from YongSeong Lee original https://aws.amazon.com/blogs/big-data/near-zero-downtime-migration-from-mysql-to-dynamodb/

Many companies consider migrating from relational databases like MySQL to Amazon DynamoDB, a fully managed, fast, highly scalable, and flexible NoSQL database service. For example, DynamoDB can increase or decrease capacity based on traffic, in accordance with business needs. The total cost of servicing can be optimized more easily than for the typical media-based RDBMS.

However, migrations can have two common issues:

  • Service outage due to downtime, especially when customer service must be seamlessly available 24/7/365
  • Different key design between RDBMS and DynamoDB

This post introduces two methods of seamlessly migrating data from MySQL to DynamoDB, minimizing downtime and converting the MySQL key design into one more suitable for NoSQL.

AWS services

I’ve included sample code that uses the following AWS services:

  • AWS Database Migration Service (AWS DMS) can migrate your data to and from most widely used commercial and open-source databases. It supports homogeneous and heterogeneous migrations between different database platforms.
  • Amazon EMR is a managed Hadoop framework that helps you process vast amounts of data quickly. Build EMR clusters easily with preconfigured software stacks that include Hive and other business software.
  • Amazon Kinesis can continuously capture and retain a vast amount of data such as transactions, IT logs, or clickstreams for up to 7 days.
  • AWS Lambda helps you run your code without provisioning or managing servers. Your code can be automatically triggered by other AWS services such as Amazon Kinesis Streams.

Migration solutions

Here are the two options I describe in this post:

  1. Use AWS DMS

AWS DMS supports migration to a DynamoDB table as a target. You can use object mapping to restructure original data to the desired structure of the data in DynamoDB during migration.

  2. Use EMR, Amazon Kinesis, and Lambda with custom scripts

Consider this method when you need more complex conversion processes and greater flexibility. Fine-grained user control is needed for grouping MySQL records into fewer DynamoDB items, determining attribute names dynamically, adding business logic programmatically during migration, supporting more data types, or adding parallel control for one big table.

After the initial load/bulk-puts are finished, and the most recent real-time data is caught up by the CDC (change data capture) process, you can change the application endpoint to DynamoDB.

The method of capturing changed data in option 2 is covered in the AWS Database post Streaming Changes in a Database with Amazon Kinesis. All code in this post is available in the big-data-blog GitHub repo, including test code.

Solution architecture

The following diagram shows the overall architecture of both options.

Option 1:  Use AWS DMS

This section discusses how to connect to MySQL, read the source data, and then format the data for consumption by the target DynamoDB database using DMS.

Create the replication instance and source and target endpoints

Create a replication instance that has sufficient storage and processing power to perform the migration job, as mentioned in the AWS Database Migration Service Best Practices whitepaper. For example, if your migration involves a large number of tables, or if you intend to run multiple concurrent replication tasks, consider using one of the larger instances. The service consumes a fair amount of memory and CPU.

Create a MySQL user that DMS can use to connect to MySQL and read data; it needs the SUPER and REPLICATION CLIENT privileges. Enable the binary log and set the binlog_format parameter to ROW for CDC in the MySQL configuration. For more information about how to use DMS, see Getting Started in the AWS Database Migration Service User Guide.

mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'welcome1';
mysql> GRANT all ON <database name>.* TO 'repl'@'%';
mysql> GRANT SUPER, REPLICATION CLIENT ON *.* TO 'repl'@'%';

Before you begin to work with a DynamoDB database as a target for DMS, make sure that you create an IAM role for DMS to assume, and grant access to the DynamoDB target tables. Two endpoints must be created to connect the source and target. The following screenshot shows sample endpoints.

The following screenshot shows the details for one of the endpoints, source-mysql.
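
If you prefer to create the endpoints from a script rather than the console, the same setup can be done with boto3. The following is a minimal sketch, not part of the original walkthrough; the endpoint identifiers, host, credentials, and role ARN are placeholders.

import boto3

dms = boto3.client("dms")

# Source endpoint: the MySQL database that holds the data to migrate.
dms.create_endpoint(
    EndpointIdentifier="source-mysql",
    EndpointType="source",
    EngineName="mysql",
    ServerName="<mysql host>",
    Port=3306,
    Username="repl",
    Password="welcome1",
)

# Target endpoint: DynamoDB, reached through the IAM role created for DMS.
dms.create_endpoint(
    EndpointIdentifier="target-dynamodb",
    EndpointType="target",
    EngineName="dynamodb",
    DynamoDbSettings={"ServiceAccessRoleArn": "arn:aws:iam::<account id>:role/<dms dynamodb role>"},
)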

Create a task with an object mapping rule

In this example, assume that the MySQL table has a composite primary key (customerid + orderid + productid). You are going to restructure the key to the desired structure of the data in DynamoDB, using an object mapping rule.

In this case, the DynamoDB table has a hash key that is a combination of the customerid and orderid columns, and the sort key is the productid column. However, in an actual migration you should choose the partition key based on your data ingestion and access patterns; high-cardinality attributes usually work best. For more information about how to choose the right DynamoDB partition key, see the Choosing the Right DynamoDB Partition Key AWS Database blog post.

DMS automatically creates a corresponding attribute on the target DynamoDB table for the quantity column from the source table because rule-action is set to map-record-to-record and the column is not listed in the exclude-columns attribute list. For more information about map-record-to-record and map-record-to-document, see Using an Amazon DynamoDB Database as a Target for AWS Database Migration Service.

Migration starts immediately after the task is created, unless you clear the Start task on create option. I recommend enabling logging to make sure that you are informed about what is going on with the migration task in the background.

The following screenshot shows the task creation page.

You can use the console to specify the individual database tables to migrate and the schema to use for the migration, including transformations. On the Guided tab, use the Where section to specify the schema, table, and action (include or exclude). Use the Filter section to specify the column name in a table and the conditions to apply.

Table mappings also can be created in JSON format. On the JSON tab, check Enable JSON editing.

Here’s an example of an object mapping rule that determines where the source data is located in the target. If you copy the code, replace the values of the following attributes. For more examples, see Using an Amazon DynamoDB Database as a Target for AWS Database Migration Service.

  • schema-name
  • table-name
  • target-table-name
  • mapping-parameters
  • attribute-mappings
{
  "rules": [
   {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "mydatabase",
        "table-name": "purchase"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "object-mapping",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "map-record-to-record",
      "object-locator": {
        "schema-name": "mydatabase",
        "table-name": "purchase"
 
      },
      "target-table-name": "purchase",
      "mapping-parameters": {
        "partition-key-name": "customer_orderid",
        "sort-key-name": "productid",
        "exclude-columns": [
          "customerid",
          "orderid"           
        ],
        "attribute-mappings": [
          {
            "target-attribute-name": "customer_orderid",
            "attribute-type": "scalar",
            "attribute-sub-type": "string",
            "value": "${customerid}|${orderid}"
          },
          {
            "target-attribute-name": "productid",
            "attribute-type": "scalar",
            "attribute-sub-type": "string",
            "value": "${productid}"
          }
        ]
      }
    }
  ]
}

Start the migration task

If the target table specified in the target-table-name property does not exist in DynamoDB, DMS creates the table according to data type conversion rules for source and target data types. There are many metrics to monitor the progress of migration. For more information, see Monitoring AWS Database Migration Service Tasks.
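
If you want to check progress from a script as well as the console, the task status and statistics are available through the DMS API. Here is a minimal boto3 sketch, not part of the original post; the task identifier is a placeholder.

import time

import boto3

dms = boto3.client("dms")

def wait_for_task(task_id):
    # Poll the replication task until it stops or fails, printing progress.
    while True:
        tasks = dms.describe_replication_tasks(
            Filters=[{"Name": "replication-task-id", "Values": [task_id]}]
        )["ReplicationTasks"]
        task = tasks[0]
        stats = task.get("ReplicationTaskStats", {})
        print(task["Status"], stats.get("FullLoadProgressPercent"))
        if task["Status"] in ("stopped", "failed"):
            break
        time.sleep(60)

wait_for_task("mysql-to-dynamodb-task")  # hypothetical task identifier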

The following screenshot shows example events and errors recorded by CloudWatch Logs.

DMS replication instances that you used for the migration should be deleted once all migration processes are completed. Any CloudWatch logs data older than the retention period is automatically deleted.

Option 2: Use EMR, Amazon Kinesis, and Lambda

This section discusses an alternative option using EMR, Amazon Kinesis, and Lambda to provide more flexibility and precise control. If you have a MySQL replica in your environment, it would be better to dump data from the replica.

Change the key design

When you decide to change your database from an RDBMS to NoSQL, you need to find a key design that is more suitable for NoSQL, for performance as well as cost-effectiveness.

Similar to option #1, assume that the MySQL source has a composite primary key (customerid + orderid + productid). However, for this option, group the MySQL records into fewer DynamoDB items by customerid (hash key) and orderid (sort key). Also, remove the last column (productid) of the composite key by converting the values of the productid column in MySQL into attribute names in DynamoDB, with each attribute’s value set to the corresponding quantity.

This conversion method reduces the number of items. You can retrieve the same amount of information with fewer read capacity units, resulting in cost savings and better performance. For more information about how to calculate read/write capacity units, see Provisioned Throughput.
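
To make the conversion concrete, here is a small illustrative Python sketch, not part of the migration code itself, showing how several MySQL rows that share customerid and orderid collapse into a single DynamoDB item, with each productid becoming an attribute name and its quantity the attribute value.

from collections import defaultdict

# A few example rows as they might come out of the MySQL purchase table.
mysql_rows = [
    {"customerid": "customer1", "orderid": "order1", "productid": "product1", "quantity": 2},
    {"customerid": "customer1", "orderid": "order1", "productid": "product2", "quantity": 5},
]

items = defaultdict(dict)
for row in mysql_rows:
    key = (row["customerid"], row["orderid"])             # hash key + sort key
    items[key][row["productid"]] = str(row["quantity"])   # productid -> quantity

print(dict(items))
# {('customer1', 'order1'): {'product1': '2', 'product2': '5'}}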

Migration steps

Option 2 has two paths for migration, performed at the same time:

  • Batch-puts: Export MySQL data, upload it to Amazon S3, and import into DynamoDB.
  • Real-time puts: Capture changed data in MySQL, send the insert/update/delete transaction to Amazon Kinesis Streams, and trigger the Lambda function to put data into DynamoDB.

To keep the data consistent and intact, capturing and feeding data into Amazon Kinesis Streams should start before the batch-puts process. The Lambda function should stand by, and the stream should retain the captured data until the batch-puts process on EMR finishes. Here’s the order:

  1. Start real-time puts to Amazon Kinesis Streams.
  2. As soon as real-time puts commences, start batch-puts.
  3. After batch-puts finishes, trigger the Lambda function to execute put_item from Amazon Kinesis Streams to DynamoDB.
  4. Change the application endpoints from MySQL to DynamoDB.

Step 1:  Capture changing data and put into Amazon Kinesis Streams

First, create an Amazon Kinesis stream to retain transaction data from MySQL. Set the Data retention period value based on your estimate for the batch-puts migration process. For data integrity, the retention period should be long enough to hold all transactions until the batch-puts migration finishes. However, you do not necessarily need to select the maximum retention period; it depends on the amount of data to migrate.
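
If you script this step instead of using the console, the stream and its retention period can be created with boto3. The following is a minimal sketch; the stream name, shard count, and retention value are placeholders to adjust for your own migration.

import boto3

kinesis = boto3.client("kinesis")

kinesis.create_stream(StreamName="mysql-cdc-stream", ShardCount=1)
kinesis.get_waiter("stream_exists").wait(StreamName="mysql-cdc-stream")

# Default retention is 24 hours; extend it to cover the whole batch-puts window.
kinesis.increase_stream_retention_period(
    StreamName="mysql-cdc-stream",
    RetentionPeriodHours=72,
)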

In the MySQL configuration, set binlog_format to ROW to capture transactions by using the BinLogStreamReader module. The log_bin parameter must be set as well to enable the binlog. For more information, see the Streaming Changes in a Database with Amazon Kinesis AWS Database blog post.

 

[mysqld]
secure-file-priv = ""
log_bin=/data/binlog/binlog
binlog_format=ROW
server-id = 1
tmpdir=/data/tmp

The following sample code is a Python example that captures transactions and sends them to Amazon Kinesis Streams.

 

#!/usr/bin/env python
import json

import boto3
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
  DeleteRowsEvent,
  UpdateRowsEvent,
  WriteRowsEvent,
)

def main():
  kinesis = boto3.client("kinesis")

  stream = BinLogStreamReader(
    connection_settings= {
      "host": "<host IP address>",
      "port": <port number>,
      "user": "<user name>",
      "passwd": "<password>"},
    server_id=100,
    blocking=True,
    resume_stream=True,
    only_events=[DeleteRowsEvent, WriteRowsEvent, UpdateRowsEvent])

  for binlogevent in stream:
    for row in binlogevent.rows:
      event = {"schema": binlogevent.schema,
      "table": binlogevent.table,
      "type": type(binlogevent).__name__,
      "row": row
      }

      kinesis.put_record(StreamName="<Amazon Kinesis stream name>", Data=json.dumps(event), PartitionKey="default")
      print json.dumps(event)

if __name__ == "__main__":
  main()

The following code is sample JSON data generated by the Python script. The type attribute defines the transaction recorded by that JSON record:

  • WriteRowsEvent = INSERT
  • UpdateRowsEvent = UPDATE
  • DeleteRowsEvent = DELETE
{"table": "purchase_temp", "row": {"values": {"orderid": "orderidA1", "quantity": 100, "customerid": "customeridA74187", "productid": "productid1"}}, "type": "WriteRowsEvent", "schema": "test"}
{"table": "purchase_temp", "row": {"before_values": {"orderid": "orderid1", "quantity": 1, "customerid": "customerid74187", "productid": "productid1"}, "after_values": {"orderid": "orderid1", "quantity": 99, "customerid": "customerid74187", "productid": "productid1"}}, "type": "UpdateRowsEvent", "schema": "test"}
{"table": "purchase_temp", "row": {"values": {"orderid": "orderid100", "quantity": 1, "customerid": "customerid74187", "productid": "productid1"}}, "type": "DeleteRowsEvent", "schema": "test"}

Step 2. Dump data from MySQL to DynamoDB

The easiest way is to use DMS, which recently added Amazon S3 as a migration target. For an S3 target, both full load and CDC data are written in CSV format. However, CDC is not a good fit here because UPDATE and DELETE statements are not supported. For more information, see Using Amazon S3 as a Target for AWS Database Migration Service.

Another way to upload data to Amazon S3 is to use the INTO OUTFILE SQL clause and the aws s3 sync CLI command in parallel with your own script. The degree of parallelism depends on your server capacity and local network bandwidth. You might find a third-party tool useful, such as pt-archiver (part of the Percona Toolkit; see the appendix for details).

SELECT * FROM purchase WHERE <condition_1>
INTO OUTFILE '/data/export/purchase/1.csv' FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n';
SELECT * FROM purchase WHERE <condition_2>
INTO OUTFILE '/data/export/purchase/2.csv' FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n';
...
SELECT * FROM purchase WHERE <condition_n>
INTO OUTFILE '/data/export/purchase/n.csv' FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n';

I recommend the aws s3 sync command for this use case. This command works internally with the S3 multipart upload feature. Pattern matching can exclude or include particular files. In addition, if the sync process crashes in the middle of processing, you do not need to upload the same files again. The sync command compares the size and modified time of files between local and S3 versions, and synchronizes only local files whose size and modified time are different from those in S3. For more information, see the sync command in the S3 section of the AWS CLI Command Reference.

$ aws s3 sync /data/export/purchase/ s3://<your bucket name>/purchase/ 
$ aws s3 sync /data/export/<other path_1>/ s3://<your bucket name>/<other path_1>/
...
$ aws s3 sync /data/export/<other path_n>/ s3://<your bucket name>/<other path_n>/ 

After all data is uploaded to S3, put it into DynamoDB. There are two ways to do this:

  • Use Hive with an external table
  • Write MapReduce code

Hive with an external table

Create a Hive external table against the data on S3 and insert it into another external table against the DynamoDB table, using the org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler property. To improve productivity and scalability, consider using Brickhouse, a collection of UDFs for Hive.

The following sample code assumes that the Hive table for DynamoDB is created with a products column of type ARRAY<STRING>. The productid and quantity columns are aggregated, grouped by customerid and orderid, and inserted into the products column using the CollectUDAF function provided by Brickhouse.

hive> DROP TABLE purchase_ext_s3; 
--- To read data from S3 
hive> CREATE EXTERNAL TABLE purchase_ext_s3 (
customerid string,
orderid    string,
productid  string,
quantity   string) 
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' 
LOCATION 's3://<your bucket name>/purchase/';

hive> DROP TABLE purchase_ext_dynamodb; 
--- To connect to DynamoDB table  
hive> CREATE EXTERNAL TABLE purchase_ext_dynamodb (
      customerid STRING, orderid STRING, products ARRAY<STRING>)
      STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler' 
      TBLPROPERTIES ("dynamodb.table.name" = "purchase", 
      "dynamodb.column.mapping" = "customerid:customerid,orderid:orderid,products:products");

--- Batch-puts to DynamoDB using Brickhouse 
hive> add jar /<jar file path>/brickhouse-0.7.1-SNAPSHOT.jar ; 
hive> create temporary function collect as 'brickhouse.udf.collect.CollectUDAF';
hive> INSERT INTO purchase_ext_dynamodb 
select customerid as customerid , orderid as orderid
       ,collect(concat(productid,':' ,quantity)) as products
      from purchase_ext_s3
      group by customerid, orderid; 

Unfortunately, the MAP, LIST, BOOLEAN, and NULL data types are not supported by the DynamoDBStorageHandler class, so the ARRAY<STRING> data type has been chosen. The products column of ARRAY<STRING> type in Hive is mapped to a StringSet attribute in DynamoDB. The sample code mainly shows how Brickhouse works, and is intended for those who want to aggregate multiple records into one StringSet attribute in DynamoDB.
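
After the INSERT finishes, you can spot-check the result from outside Hive. The following is a hedged boto3 sketch, not part of the original post, that reads one migrated item back; the table name and key values are examples only.

import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.get_item(
    TableName="purchase",
    Key={"customerid": {"S": "customer1"}, "orderid": {"S": "order1"}},
)
print(response.get("Item"))
# e.g. {'customerid': {'S': 'customer1'}, 'orderid': {'S': 'order1'},
#       'products': {'SS': ['product1:2', 'product2:5']}}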

Python MapReduce with Hadoop Streaming

A mapper task reads each record from the input data on S3 and maps input key-value pairs to intermediate key-value pairs. It divides the source data from S3 into two parts (a key part and a value part) delimited by a TAB character ("\t"). Mapper output is sorted by the intermediate key (customerid and orderid) and sent to the reducer. Records are put into DynamoDB in the reducer step.

#!/usr/bin/env python
import sys
 
# get all lines from stdin
for line in sys.stdin:
    line = line.strip()
    cols = line.split(',')
# divide source data into key and attribute parts
# example output : "customer1,order1	product1,10"
    print '%s,%s\t%s,%s' % (cols[0], cols[1], cols[2], cols[3])

Generally, the reduce task receives the output produced after map processing (which is key/list-of-values pairs) and then performs an operation on the list of values against each key.

In this case, the reducer is written in Python and is based on STDIN/STDOUT with Hadoop Streaming. The enumeration data type is not available. The reducer receives data sorted and ordered by the intermediate key set in the mapper, customerid and orderid (cols[0], cols[1]) in this case, and stores all attributes for the specific key in the item_data dictionary. The attributes in the item_data dictionary are put, or flushed, into DynamoDB every time a new intermediate key arrives from sys.stdin.

#!/usr/bin/env python
import sys
import boto.dynamodb
 
# create connection to DynamoDB
current_keys = None
conn = boto.dynamodb.connect_to_region( '<region>', aws_access_key_id='<access key id>', aws_secret_access_key='<secret access key>')
table = conn.get_table('<dynamodb table name>')
item_data = {}

# input comes from STDIN emitted by Mapper
for line in sys.stdin:
    line = line.strip()
    dickeys, items  = line.split('\t')
    products = items.split(',')
    if current_keys == dickeys:
       item_data[products[0]]=products[1]  
    else:
        if current_keys:
          try:
              mykeys = current_keys.split(',') 
              item = table.new_item(hash_key=mykeys[0],range_key=mykeys[1], attrs=item_data )
              item.put() 
          except Exception, e:
              print 'Exception occurred! :', e.message,'==> Data:' , mykeys
        item_data = {}
        item_data[products[0]]=products[1]
        current_keys = dickeys

# put last data
if current_keys == dickeys:
    print 'Last one:' , current_keys #, item_data
    try:
        mykeys = dickeys.split(',')
        item = table.new_item(hash_key=mykeys[0] , range_key=mykeys[1], attrs=item_data )
        item.put()
    except Exception, e:
        print 'Exception occurred! :', e.message, '==> Data:' , mykeys

To run the MapReduce job, connect to the EMR master node and run a Hadoop streaming job. The hadoop-streaming.jar file location or name could be different, depending on your EMR version. Exception messages that occur while the reducers run are stored in the directory assigned with the --output option. Hash key and range key values are also logged to identify which data causes exceptions or errors.

$ hadoop fs -rm -r s3://<bucket name>/<output path>
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
           -input s3://<bucket name>/<input path> -output s3://<bucket name>/<output path>\
           -file /<local path>/mapper.py -mapper /<local path>/mapper.py \
           -file /<local path>/reducer.py -reducer /<local path>/reducer.py

In my migration experiment using the above scripts, with self-generated test data, I found the following results, including database size and the time taken to complete the migration.

  • Server: MySQL instance m4.2xlarge; EMR cluster with 1 x m3.xlarge master node and 2 x m4.4xlarge core nodes; DynamoDB table with 2,000 write capacity units
  • Data: 1,000,000,000 records; database file size (.ibd) 100.6 GB; total CSV file size 37 GB
  • Performance (time): export to CSV 6 min 10 sec; upload to S3 (sync) 3 min 30 sec; import to DynamoDB depends on the provisioned write capacity units

 

The following screenshot shows the performance results by write capacity.

Note that performance can vary depending on server capacity, network bandwidth, degree of parallelism, conversion logic, programming language, and other conditions. The MapReduce job consumes all provisioned write capacity units during the import, so the larger the EMR cluster and the more write capacity units on the DynamoDB table, the less time the import takes. Java-based MapReduce code would offer more flexibility, both in functionality and in access to the MapReduce framework.
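
Because import time is governed by the table’s write capacity, one practical pattern is to raise the provisioned write capacity just before the MapReduce job and lower it again afterwards. The following is a boto3 sketch of that idea, not part of the original scripts; the table name and capacity numbers are examples only.

import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(table_name, read_units, write_units):
    # Update provisioned throughput and wait until the table is ACTIVE again.
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )
    dynamodb.get_waiter("table_exists").wait(TableName=table_name)

set_capacity("purchase", 100, 2000)   # before running the Hadoop streaming job
# ... run the import ...
set_capacity("purchase", 100, 100)    # scale back down once the import is done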

Step 3: The AWS Lambda function updates DynamoDB by reading data from Amazon Kinesis

In the Lambda console, choose Create a Lambda function and the kinesis-process-record-python blueprint. Next, on the Configure triggers page, select the stream that you just created.

The Lambda function must have an IAM role with permissions to read from Amazon Kinesis and put items into DynamoDB.

The Lambda function can recognize the transaction type of the record by looking up the type attribute. The transaction type determines the method for conversion and update.

For example, when a JSON record is passed to the function, the function looks up the type attribute. It also checks whether an existing item in the DynamoDB table has the same key as the incoming record. If so, the existing item is retrieved and saved in a dictionary variable (item, in this case), and the new information is applied to that dictionary before it is put back into the DynamoDB table. This prevents the existing item from being overwritten by the incoming record.

from __future__ import print_function

import base64
import json
import boto3

print('Loading function')
client = boto3.client('dynamodb')

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    for record in event['Records']:
        # Amazon Kinesis data is base64-encoded so decode here
        payload = base64.b64decode(record['kinesis']['data'])
        print("Decoded payload: " + payload)
        data = json.loads(payload)
        
        # user logic for data triggered by WriteRowsEvent
        if data["type"] == "WriteRowsEvent":
            my_table = data["table"]
            my_hashkey = data["row"]["values"]["customerid"]
            my_rangekey = data["row"]["values"]["orderid"]
            my_productid = data["row"]["values"]["productid"]
            my_quantity = str( data["row"]["values"]["quantity"] )
            try:
                response = client.get_item( Key={'customerid':{'S':my_hashkey} , 'orderid':{'S':my_rangekey}} ,TableName = my_table )
                if 'Item' in response:
                    item = response['Item']
                    item[data["row"]["values"]["productid"]] = {"S":my_quantity}
                    result1 = client.put_item(Item = item , TableName = my_table )
                else:
                    item = { 'customerid':{'S':my_hashkey} , 'orderid':{'S':my_rangekey} , my_productid :{"S":my_quantity}  }
                    result2 = client.put_item( Item = item , TableName = my_table )
            except Exception, e:
                print( 'WriteRowsEvent Exception ! :', e.message  , '==> Data:' ,data["row"]["values"]["customerid"]  , data["row"]["values"]["orderid"] )
        
        # user logic for data triggered by UpdateRowsEvent
        if data["type"] == "UpdateRowsEvent":
            my_table = data["table"]
            
        # user logic for data triggered by DeleteRowsEvent    
        if data["type"] == "DeleteRowsEvent":
            my_table = data["table"]
            
            
    return 'Successfully processed {} records.'.format(len(event['Records']))
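
The UpdateRowsEvent and DeleteRowsEvent branches above are left as stubs for user logic. As one possible way to fill in the delete case, shown here as an illustration rather than the definitive implementation, you could remove the productid attribute from the matching item; the same pattern with SET instead of REMOVE would cover updates.

# Illustrative only: plugs into the handler above, reusing its DynamoDB client.
def handle_delete(data, client):
    values = data["row"]["values"]
    client.update_item(
        TableName=data["table"],
        Key={
            "customerid": {"S": values["customerid"]},
            "orderid": {"S": values["orderid"]},
        },
        UpdateExpression="REMOVE #p",
        ExpressionAttributeNames={"#p": values["productid"]},
    )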

Step 4:  Switch the application endpoint to DynamoDB

Application code needs to be refactored when you change from MySQL to DynamoDB. The following simple Java code snippets focus on the connection and query parts, because it is difficult to cover all cases for all applications. For more information, see Programming with DynamoDB and the AWS SDKs.

Query to MySQL

The following sample code shows a common way to connect to MySQL and retrieve data.

import java.sql.* ;
...
try {
    Connection conn = DriverManager.getConnection("jdbc:mysql://<host name>/<database name>" , "<user>" , "<password>");
    Statement stmt = conn.createStatement();
    String sql = "SELECT quantity as quantity FROM purchase WHERE customerid = '<customerid>' and orderid = '<orderid>' and productid = '<productid>'";
    ResultSet rs = stmt.executeQuery(sql);

    while(rs.next()){ 
       int quantity = rs.getInt("quantity");       //Retrieve by column name 
       System.out.print("quantity: " + quantity);  //Display values 
       }
} catch (SQLException ex) {
    // handle any errors
    System.out.println("SQLException: " + ex.getMessage());}
...
==== Output ====
quantity:1

Query to DynamoDB

To retrieve items from DynamoDB, follow these steps:

  1. Create an instance of the DynamoDB class.
  2. Create an instance of the Table class.
  3. Add the withHashKey and withRangeKeyCondition methods to an instance of the QuerySpec class.
  4. Execute the query method with the QuerySpec instance created previously. Items are retrieved in JSON format, so use the getJSON method to look up a specific attribute in an item.
...
DynamoDB dynamoDB = new DynamoDB( new AmazonDynamoDBClient(new ProfileCredentialsProvider()));

Table table = dynamoDB.getTable("purchase");

QuerySpec querySpec = new QuerySpec()
        .withHashKey("customerid" , "customer1")  // hash key name and its value 
        .withRangeKeyCondition(new RangeKeyCondition("orderid").eq("order1") ) ; // range key and its condition value 

ItemCollection<QueryOutcome> items = table.query(querySpec); 

Iterator<Item> iterator = items.iterator();          
while (iterator.hasNext()) {
    Item item = iterator.next();
    System.out.println("quantity: " + item.getJSON("product1"));
}
...
==== Output ====
quantity:1

Conclusion

In this post, I introduced two options for seamlessly migrating data from MySQL to DynamoDB and minimizing downtime during the migration. Option #1 used DMS, and option #2 combined EMR, Amazon Kinesis, and Lambda. I also showed you how to convert the key design in accordance with database characteristics to improve read/write performance and reduce costs. Each option has advantages and disadvantages, so the best option depends on your business requirements.

The sample code in this post is not a complete, efficient, and reliable data migration code base that can be reused across many different environments. Use it to get started, but design for the other variables in your actual migration.

I hope this post helps you plan and implement your migration and minimizes service outages. If you have questions or suggestions, please leave a comment below.

Appendix

To install the Percona Toolkit:

# Install Percona Toolkit

$ wget https://www.percona.com/downloads/percona-toolkit/3.0.2/binary/redhat/6/x86_64/percona-toolkit-3.0.2-1.el6.x86_64.rpm

$ yum install perl-IO-Socket-SSL

$ yum install perl-TermReadKey

$ rpm -Uvh percona-toolkit-3.0.2-1.el6.x86_64.rpm

# run pt-archiver

Example command:

$ pt-archiver --source h=localhost,D=blog,t=purchase --file '/data/export/%Y-%m-%d-%D.%t' --where "1=1" --limit 10000 --commit-each

 


About the Author

Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”

 

 

 


Pioneers: the second challenge is…

Post Syndicated from Olympia Brown original https://www.raspberrypi.org/blog/pioneers-second-challenge/

Pioneers, your next challenge is here!

Do you like making things? Do you fancy trying something new? Are you aged 11 to 16? The Pioneers programme is ready to challenge you to create something new using technology.

As you’ll know if you took part last time, Pioneers challenges are themed. So here’s the lovely Ana from ZSL London Zoo to reveal the theme of the next challenge:

Your next challenge, if you choose to accept it, is…

MakeYourIdeas The second Pioneers challenge is here! Wahoo! Have you registered your team yet? Make sure you do. Head to the Pioneers website for more details: http://www.raspberrypi.org/pioneers

Make it Outdoors

You have until the beginning of July to make something related to the outdoors. As Ana said, the outdoors is pretty big, so here are some ideas:

Resources and discounted kit

If you’re looking at all of these projects and thinking that you don’t know where to start, never fear! Our free resources offer a great starting point for any new project, and can help you to build on your existing skills and widen your scope for creating greatness.

We really want to see your creativity and ingenuity though, so we’d recommend using these projects as starting points rather than just working through the instructions. To help us out, the wonderful Pimoroni are offering 15 percent off kit for our Getting started with wearables and Getting started with picamera resources. You should also check out our new Poo near you resource for an example of a completely code-based project.



For this cycle of Pioneers, thanks to our friends at the Shell Centenary Scholarship Fund, we are making bursaries available to teams to cover the cost of these basic kits (one per team). This is for teens who haven’t taken part in digital making activities before, and for whom the financial commitment would be a barrier to taking part. Details about the bursaries and the discount will be sent to you when you register.

Your Pioneers team

We’ve introduced a few new things for this round of Pioneers, so pay special attention if you took part last time round!

Pioneers challenge: Make it Outdoors

We’re looking for UK-based teams of between two and five people, aged between 11 and 16, to work together to create something related to the outdoors. In our experience, there are three main ways to run a Pioneers team. It’s up to you to decide how you’ll proceed when it comes to your participation in Pioneers.

  • You could organise a Group that meets once or twice a week. We find this method works well for school-based teams that can meet at the end of a school day for an hour or two every week.
  • You could mentor a Squad that is largely informal, where the members probably already have a good idea of what they’re doing. A Squad tends to be more independent, and meetings may be sporadic, informal or online only. This option isn’t recommended if it’s your first competition like this, or if you’re not a techie yourself.
  • You could join a local Event at a technology hub near you. We’re hoping to run more and more of these events around the country as Pioneers evolves and grows. If you think you’d like to help us run a Pioneers Event, get in touch! We love to hear from people who want to spread their love of making, and we’ll support you as much as we possibly can to get your event rocking along. If you want to run a Pioneers Event, you will need to preregister on the Pioneers website so that we can get you all the support you need well before you open your doors.

#MakeYourIdeas

As always, we’re excited to watch the progress of your projects via social media channels such as Twitter, Instagram, and Snapchat. As you work on your build, make sure to share the ‘making of…’ stages with us using #MakeYourIdeas.

For inspiration from previous entries, here’s the winner announcement video for the last Pioneers challenge:

Winners of the first Pioneers challenge are…

After months of planning and making, the first round of Pioneers is over! We laid down the epic challenge of making us laugh. And boy, did the teams deliver. We can honestly say that my face hurt from all the laughing on judging day. Congratulations to everyone who took part.

Once you’ve picked a project, the first step is to register. What are you waiting for? Head to the Pioneers website to get started!

The post Pioneers: the second challenge is… appeared first on Raspberry Pi.

Introducing DnsControl – “DNS as Code” has Arrived

Post Syndicated from Craig Peterson original http://blog.serverfault.com/2017/04/11/introducing-dnscontrol-dns-as-code-has-arrived/

DNS at Stack Overflow is… complex.  We have hundreds of DNS domains and thousands of DNS records. We have gone from running our own BIND server to hosting DNS with multiple cloud providers, and we change things fairly often. Keeping everything up to date and synced at multiple DNS providers is difficult. We built DnsControl to allow us to perform updates easily and automatically across all providers we use.

The old way

Originally, our DNS was hosted by our own BIND servers, using artisanal, hand crafted zone files. Large changes involved liberal sed usage, and every change was pretty error prone. We decided to start using cloud DNS providers for performance reasons, but those each have their own web panels, which are universally painful to use. Web interfaces rarely have any import/export functionality, and generally lack change control, history tracking, or comments. We quickly decided that web panels were not how we wanted to manage our zones. 

Introducing DnsControl

DNSControl is the system we built to manage our DNS. It permits “describe once, use anywhere” DNS management. It consists of a few key components:

  1. A Domain Specific Language (DSL) for describing domains in a single, provider-independent way.
  2. An “interpreter” application that executes the DSL and creates a standardized representation of your desired DNS state.
  3. Back-end “providers” that sync the desired state to a DNS provider.

At the time of this writing we have 9 different providers implemented, with 3 more on the way shortly. We use it to manage our domains with our own BIND servers, as well as Route 53, Google Cloud DNS, name.com, Cloudflare, and more.

A sample might look like this description of stackoverflow.com:

D("stackoverflow.com", REG_NAMEDOTCOM, DnsProvider(R53), DnsProvider(GCLOUD),
    A("@", "198.252.206.16"),
    A("blog", "198.252.206.20"),
    CNAME("chat", "chat.stackexchange.com."),
    CNAME("www", "@", TTL(3600)),
    A("meta", "198.252.206.16")
)

This is just a small, simple example. The DSL is a fully featured way to express your DNS config. It is actually just JavaScript with some helpful functions. We have an examples page with more examples of the power of the language.

Running “dnscontrol preview” with this input will show what updates would be needed to bring DNS providers up to date with the new, desired configuration. “dnscontrol push” will actually make the changes.

This allows us to manage our DNS configuration as code. Storing it this way has a bunch of advantages:

  • We can use variables to store common IP addresses or repeated data. We can make complicated changes, like failing-over services between data centers, by changing a single variable. We can activate or deactivate our CDN, which involves thousands of record changes, by commenting or uncommenting a single line of code.
  • We are not locked into any single provider, since the automation can sync to any of them. Keeping records synchronized between different cloud providers requires no manual steps.
  • We store our DNS config in git. Our build server runs all changes. We have central logging, access control, and history for our DNS changes. We’re trying to apply DevOps best practices to an area that has not seen those benefits so much yet.

I think the biggest benefit to this tool though is the freedom it has given us with our DNS.  It has allowed us to:

  • Switch providers with no fear of breaking things. We have changed CDNs or DNS providers at least 4 times in the last two years, and it has never been scary at all.
  • Dual-host our DNS with multiple providers simultaneously. The tool keeps them in sync for us.
  • Test fail-over procedures before an emergency happens. We are confident we can point DNS at our secondary datacenter easily, and we can quickly switch providers if one is being DDOSed.

DNS configuration is often difficult and error-prone.  We hope DnsControl makes it easy and more reliable. It has for us.


Court Orders Pornhub to Expose Copyright Infringers

Post Syndicated from Ernesto original https://torrentfreak.com/court-orders-pornhub-to-expose-copyright-infringers-170405/

As one of the largest websites on the Internet to largely rely on user uploaded content, Pornhub is no stranger to copyright infringement allegations.

Similar to other video streaming services, the company maintains a DMCA takedown policy which allows rightsholders to remove content posted without permission.

“We take claims of copyright infringement seriously,” the company states, adding that it reserves the right to terminate the accounts of repeat infringers.

“Responses may include removing, blocking or disabling access to material claimed to be the subject of infringing activity, terminating the user’s access to www.pornhub.com, or all of the foregoing.”

Most porn production companies restrict their enforcement efforts to sending takedown requests, but for some that is not enough. The Seychelles-based company Foshan Ltd, known for the “Wankz” brand, has gone a step further and wants to know who uploaded its videos to the site.

Last week the company obtained a DMCA subpoena from a federal court in California, which orders the adult video portal to identify and expose the uploaders of more than 1,000 copies of its videos.

The request for a DMCA subpoena was granted by a court clerk a day after it was filed. As a result, Pornhub now has until May 1st to hand over the requested information unless it decides to appeal.

The subpoena

The subpoena is rather broad and compels Pornhub to hand over all the information it has available on the uploaders. This includes names, email addresses, IP addresses, user and posting histories, physical addresses, telephone numbers, and any other identifying or account information.

TorrentFreak reached out to Pornhub to find out how the company plans to respond and what personal information it has available on users. However, at the time of publication we were yet to hear back.

It is unclear what Foshan is planning to do if they obtain the personal information of the uploaders. It is likely, however, that they’re considering legal action against one or more persons, if the evidence is sufficient.

As far as we’re aware, this is the first time that a rightsholder has used a DMCA subpoena to obtain information about Pornhub uploaders. And since it’s a relatively cheap and easy way to expose infringers, this might not be the last.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Raspberry Turk: a chess-playing robot

Post Syndicated from Lorna Lynch original https://www.raspberrypi.org/blog/raspberry-turk-chess-playing-robot/

Computers and chess have been a potent combination ever since the appearance of the first chess-playing computers in the 1970s. You might even be able to play a game of chess on the device you are using to read this blog post! For digital makers, though, adding a Raspberry Pi into the mix can be the first step to building something a little more exciting. Allow us to introduce you to Joey Meyer‘s chess-playing robot, the Raspberry Turk.

The Raspberry Turk chess-playing robot

Image credit: Joey Meyer

Being both an experienced software engineer with an interest in machine learning, and a skilled chess player, it’s not surprising that Joey was interested in tinkering with chess programs. What is really stunning, though, is the scale and complexity of the build he came up with. Fascinated by a famous historical hoax, Joey used his skills in programming and robotics to build an open-source Raspberry Pi-powered recreation of the celebrated Mechanical Turk automaton.

You can see the Raspberry Turk in action on Joey’s YouTube channel:

Chess Playing Robot Powered by Raspberry Pi – Raspberry Turk

The Raspberry Turk is a robot that can play chess-it’s entirely open source, based on Raspberry Pi, and inspired by the 18th century chess playing machine, the Mechanical Turk. Website: http://www.raspberryturk.com Source Code: https://github.com/joeymeyer/raspberryturk

A historical hoax

Joey explains that he first encountered the Mechanical Turk through a book by Tom Standage. A famous example of mechanical trickery, the original Turk was advertised as a chess-playing automaton, capable of defeating human opponents and solving complex puzzles.

Image of the Mechanical Turk Automaton

A modern reconstruction of the Mechanical Turk 
Image from Wikimedia Commons

Its inner workings a secret, the Turk toured Europe for the best part of a century, confounding everyone who encountered it. Unfortunately, it turned out not to be a fabulous example of early robotic engineering after all. Instead, it was just an elaborate illusion. The awesome chess moves were not being worked out by the clockwork brain of the automaton, but rather by a human chess master who was cunningly concealed inside the casing.

Building a modern Turk

A modern version of the Mechanical Turk was constructed in the 1980s. However, the build cost $120,000. At that price, it would have been impossible for most makers to create their own version. Impossible, that is, until now: Joey uses a Raspberry Pi 3 to drive the Raspberry Turk, while a Raspberry Pi Camera Module handles computer vision.

Image of chess board and Raspberry Turk robot

The Raspberry Turk in the middle of a game 
Image credit: Joey Meyer

Joey’s Raspberry Turk is built into a neat wooden table. All of the electronics are housed in a box on one side. The chessboard is painted directly onto the table’s surface. In order for the robot to play, a Camera Module located in a 3D-printed housing above the table takes an image of the chessboard. The image is then analysed to determine which pieces are in which positions at that point. By tracking changes in the positions of the pieces, the Raspberry Turk can determine which moves have been made, and which piece should move next. To train the system, Joey had to build a large dataset to validate a computer vision model. This involved painstakingly moving pieces by hand and collecting multiple images of each possible position.

Look, no hands!

A key feature of the Mechanical Turk was that the automaton appeared to move the chess pieces entirely by itself. Of course, its movements were actually being controlled by a person hidden inside the machine. The Raspberry Turk, by contrast, does move the chess pieces itself. To achieve this, Joey used a robotic arm attached to the table. The arm is made primarily out of Actobotics components. Joey explains:

The motion is controlled by the rotation of two servos which are attached to gears at the base of each link of the arm. At the end of the arm is another servo which moves a beam up and down. At the bottom of the beam is an electromagnet that can be dynamically activated to lift the chess pieces.

Joey individually fitted the chess pieces with tiny sections of metal dowel so that the magnet on the arm could pick them up.

Programming the Raspberry Turk

The Raspberry Turk is controlled by a daemon process that runs a perception/action sequence, and the status updates automatically as the pieces are moved. The code is written almost entirely in Python. It is all available on Joey’s GitHub repo for the project, together with his notebooks on the project.

Image of Raspberry Turk chessboard with Python script alongside

Image credit: Joey Meyer

The AI backend that gives the robot its chess-playing ability is currently Stockfish, a strong open-source chess engine. Joey says he would like to build his own engine when he has time. For the moment, though, he’s confident that this AI will prove a worthy opponent.
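
As a rough illustration of how a Python program can ask Stockfish for a move (this is a sketch using the python-chess library, not Joey’s actual code, and the engine path is a placeholder):

import chess
import chess.engine

board = chess.Board()  # starting position; in the Turk this would come from the vision step
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
result = engine.play(board, chess.engine.Limit(time=1.0))
print("Stockfish plays:", result.move)  # the move the robot arm would then carry out
engine.quit()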

The project website goes into much more detail than we are able to give here. We’d definitely recommend checking it out. If you have been experimenting with any robotics or computer vision projects like this, please do let us know in the comments!

The post Raspberry Turk: a chess-playing robot appeared first on Raspberry Pi.

WWW Inventor Prefers Public Protest Over VPN Uptake

Post Syndicated from Andy original https://torrentfreak.com/www-inventor-prefers-public-protest-over-vpn-uptake-170405/

From being viewed as a somewhat niche interest product, VPNs were thrust into the mainstream in recent weeks. Long the go-to tools of privacy advocates and file-sharing enthusiasts alike, VPNs are now on the lips of countless non-tech savvy individuals.

That boost in awareness is largely down to recent moves by the Trump administration to repeal rules that forbid Internet service providers from selling the browsing histories of regular Internet users. After a not particularly long process, the amendments were written into law this week.

While the development possibly won’t have the massive short-term impact some are expecting, the changes have focused the minds of millions who are now aware that what they do online is far from secret. They see VPN use as a massive step forward in reclaiming their privacy online generally, and they’d be right.

Of interest, however, is the approach of Tim Berners-Lee, the man credited with inventing the world wide web. After being honored with the prestigious 2016 Turing Award this week for his “major contributions of lasting importance to computing”, Berners-Lee took time out to discuss a number of topics, with privacy high on the agenda.

“That bill was a disgusting bill, because when we use the web, we are so vulnerable,” he told The Guardian.

“There are things that people do on the web that reveal absolutely everything, more about them than they know themselves sometimes. Because so much of what we do in our lives that actually goes through those left-clicks, it can be ridiculously revealing. You have the right to go to a doctor in privacy where it’s just between you and the doctor. And similarly, you have to be able to go to the web.”

Describing privacy as a “core American value”, Berners-Lee says that if things start going bad with their current supplier, people will switch to more privacy-conscious ISPs. Others will seek out more radical technical measures, such as Tor and VPNs.

“People would start using Tor. They’d start going through proxies so that instead of your Internet traffic going straight to your house it goes to a VPN,” he says.

“Normal people in America will basically go into defense cybersecurity lockdown against their ISPs. Everything will get encrypted. People who care about it will find ways to deprive their ISPs of data. There’ll be a great market for people who provide that technology.”

Indeed, a number of VPN and security product providers have contacted TF indicating that searches for their products increased dramatically as news of the browser history repeal hits the mainstream. But for Berners-Lee, a man with technology in his bones, doing something in the physical world is preferable to electronic counter-measures.

“I’ve got [VPNs] available to me here, but I ought not to do that. Actually, you shouldn’t,” he says.

“[You should be] going to protest so the world outside becomes one where you don’t need to cheat to get around this problem, so that you don’t have to use skills as an expert to get around this problem.”

Of course, there have been protests from both citizens and companies (Private Internet Access spent a rumored $600,000 on a newspaper advert challenging the repeal last week) but it doesn’t appear that those in power are listening – or particularly care.

Whether we like it or not, the web is now largely financially sustained by the farming of our online activities and it seems likely that no amount of protest is going to be able to stop that juggernaut anytime soon.

File-sharers and other privacy advocates have known this for a much longer time than most and already realize that if people don’t look after their own privacy, someone else will do it for them. Protest is certainly good, but it doesn’t hurt to have some solid backup in the meantime.

Image: Paul Clarke (CC-BY-SA)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Harry Potter and the Real-Life Wizard Duel

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/real-life-wizard-duel/

Walk around the commons of Cambridge and you’re bound to see one or more of the Cambridge University Quidditch Club players mounted upon sticks with a makeshift quaffle. But try as they might, their broomsticks will never send them soaring through the air.

The same faux-wizardry charge can’t be levelled at Allen Pan‘s Real-Life Wizard Duel. For when the wand-wielding witches or wizards cast one of four Harry Potter spells, their opponent is struck accordingly… as if by magic.

Real Life Wizard Duel with ELECTRICITY | Sufficiently Advanced

Body shocking wands with speech recognition…It’s indistinguishable from magic! Follow Sufficiently Advanced! https://twitter.com/AnyTechnology https://www.facebook.com/sufficientlyadvanced https://www.instagram.com/sufficientlyadvanced/ Check out redRomina: https://www.youtube.com/user/redRomina Watch our TENS unit challenge! https://youtu.be/Ntovn4N9HNs These peeps helped film, check them out too!

Real spells, real consequences

Harry Potter GIF

Allen uses Transcutaneous Electrical Nerve Stimulation (TENS) machines to deliver the mighty blows to both himself and his opponent, setting off various sticky pads across the body via voice recognition.

The Google Cloud Speech Recognition API recognises one of five spells – Expelliarmus, Stupefy, Tarantallegra, Petrificus Totalus, and Protego – via a microphone plugged into a Raspberry Pi.

Harry Potter GIF

When the spell is pronounced correctly and understood by the Pi, it tells an Arduino to ‘shoot’ the spell out of the wand via an infrared LED. If the infrared receiver attached to the opponent recognises the spell, it sets off the TENS machine to deliver an electric current to the appropriate body part. Expelliarmus, for example, sets off the TENS connected to the arm, while calling out a successful Petrificus Totalus renders the opponent near immobilised as every pad is activated. For a moment’s rest, calling out “Protego” toward your own infrared receiver offers a few moments of protection against all spells aimed in your direction. Phew.
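
For a sense of how such a pipeline might hang together, here is a very rough Python sketch. It is purely illustrative and not Allen’s code: it uses the SpeechRecognition library’s Google recogniser as a stand-in for the Google Cloud Speech API, and assumes the Arduino is reached over a serial port whose name is a placeholder.

import serial
import speech_recognition as sr

# spell name -> single-byte command the Arduino turns into an IR code
SPELLS = {"expelliarmus": b"E", "stupefy": b"S", "tarantallegra": b"T",
          "petrificus totalus": b"P", "protego": b"G"}

arduino = serial.Serial("/dev/ttyACM0", 9600)  # placeholder serial port
recognizer = sr.Recognizer()

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    while True:
        audio = recognizer.listen(mic)
        try:
            phrase = recognizer.recognize_google(audio).lower()
        except (sr.UnknownValueError, sr.RequestError):
            continue
        for spell, code in SPELLS.items():
            if spell in phrase:
                arduino.write(code)  # the Arduino then pulses the IR LED for this spell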

“But people only die in proper duels, you know, with real wizards. The most you and Malfoy’ll be able to do is send sparks at each other. Neither of you knows enough magic to do any real damage. I bet he expected you to refuse, anyway.”
“And what if I wave my wand and nothing happens?”
“Throw it away and punch him on the nose,” Ron suggested.
Harry Potter and the Philosopher’s Stone

Defence Against the Dark Arts

Harry Potter Wizard Duel Raspberry Pi

To prevent abuse of the spells, each one has its own recharge time, with available spells indicated via LEDs on the wand.

In the realm of Harry Potter fan builds, this has to be a favourite. And while visitors to The Wizarding World of Harry Potter may feel the magical effect of reimagined Butterbeer as they wander around Hogsmeade, I’d definitely prefer to play Real Life Wizard Duel with Allen Pan.

The post Harry Potter and the Real-Life Wizard Duel appeared first on Raspberry Pi.