Tag Archives: cuba

Here, have some videos!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/easter-monday-2018/

Today is Easter Monday and as such, the drawbridge is up at Pi Towers. So while we spend time with family (and rather too much chocolate), here are some great Pi-themed videos from members of our community. Enjoy!

Eggies live stream!

Bluebird Birdhouse

Raspberry Pi and NoIR Camera installed in the roof of a bluebird house with IR LEDs. Five eggs are currently being incubated.

Doctor Who TARDIS doorbell

Raspberry pi Tardis

Raspberry pi Tardis doorbell

Google AIY with Tech-nic-Allie

Ok Google! AIY Voice Kit MagPi

Allie assembles this Google Home kit, which runs on a Raspberry Pi, then uses it to test her space knowledge with a little trivia game. Stay tuned to the end to see a few printed cases you can use instead of the cardboard one.

Buying a Coke with a Raspberry Pi rover

Buy a coke with raspberry pi rover

Mission date: March 26, 2018. My Raspberry Pi project: I use an LTE modem to connect to the internet, and Python for programming. The Raspberry Pi controls a Pi Camera, two servo motors, and two DC motors. (This video was recorded with a GoPro for upload to YouTube; I actually control the rover through the Pi Camera.)

Raspberry Pi security camera

🔴How to Make a Smart Security Camera With Movement Notification – Under 60$

I built my first security camera with motion detection, connected to my Raspberry Pi with MotionEyeOS. What you need: a Raspberry Pi 3 (I prefer the Pi 3), any webcam or Raspberry Pi camera, and a micro SD card (min 8GB). Useful links: download the motionEyeOS software here ➜ https://github.com/ccrisan/motioneyeos/releases How to do it: download motionEyeOS to your empty SD card (I flashed it via Etcher), and then, as I always do on my projects, run sudo apt update && sudo apt upgrade on the Pi.

Happy Easter!

The post Here, have some videos! appeared first on Raspberry Pi.

Introducing Our Content Director: Roderick

Post Syndicated from Yev original https://www.backblaze.com/blog/introducing-content-director-roderick/

As Backblaze continues to grow, and as we go down the path of sharing our stories, we found ourselves in need of someone who could wrangle our content calendar, write blog posts, and come up with interesting ideas that we could share with our readers and fans. We put out the call, and found Roderick! As you’ll read below, he has an incredibly interesting history, and we’re thrilled to have his perspective join our marketing team! Let’s learn a bit more about Roderick, shall we?

What is your Backblaze Title?
Content Director

Where are you originally from?
I was born in Southern California, but have lived a lot of different places, including Alaska, Washington, Oregon, Texas, New Mexico, Austria, and Italy.

What attracted you to Backblaze?
I met Gleb a number of years ago at the Failcon Conference in San Francisco. I spoke with him and was impressed with him and his description of the company. We connected on LinkedIn after the conference and I ultimately saw his post for this position about a month ago.

What do you expect to learn while being at Backblaze?
I hope to learn about Backblaze’s customers and dive deep into the latest in cloud storage and other technologies. I also hope to get to know my fellow employees.

Where else have you worked?
I’ve worked for Microsoft, Adobe, Autodesk, and a few startups. I’ve also consulted for Apple, HP, Stanford, the White House, and startups in the U.S. and abroad. I mentored at incubators in Silicon Valley, including IndieBio and Founders Space. I used to own vineyards and a food education and event center in the Napa Valley with my former wife, and worked in a number of restaurants, hotels, and wineries. Recently, I taught part-time at the Culinary Institute of America at Greystone in the Napa Valley. I’ve been a partner in a restaurant and currently am a partner in a mozzarella di bufala company in Marin County, where we have about 50 water buffalo, which are amazing animals. They are named after famous rock and roll vocalists. Our most active studs now are Sting and Van Morrison. I think singing “a fantabulous night to make romance ‘neath the cover of October skies” works for Van.

Where did you go to school?
I studied at Reed College, U.C. Berkeley, U.C. Davis, and the Università per Stranieri di Perugia in Italy. I put myself through college so was in and out of school a number of times to make money. Some of the jobs I held to earn money for college were cook, waiter, dishwasher, bartender, courier, teacher, bookstore clerk, head of hotel maintenance, bookkeeper, lifeguard, journalist, and commercial salmon fisherman in Alaska.

What’s your dream job?
I think my dream would be having a job that would continually allow me to learn new things and meet new challenges. I love to learn, travel, and be surprised by things I don’t know.

I love animals and sometimes think I should have become a veterinarian.

Favorite place you’ve traveled?
I lived and studied in Italy, and would have to say the Umbria region of Italy is perhaps my favorite place. I also worked in my father’s home country of Austria, which is incredibly beautiful.

Favorite hobby?
I love foreign languages, and have studied Italian, French, German, and a few others. I am a big fan of literature and theatre and read widely and have attended theatre productions all over the world. That was my motivation to learn other languages—so I could enjoy literature and theatre in the languages they were written in. I started scuba diving when I was very young because I wanted to be Jacques-Yves Cousteau and explore the oceans. I also sail, motorcycle, ski, bicycle, hike, play music, and hope to finish my pilot’s license someday.

Coke or Pepsi?
Red Burgundy

Favorite food?
Both my parents are chefs, so I was exposed to a lot of great food growing up. I would have to give more than one answer to that question: fresh baked bread and bouillabaisse. Oh, and white truffles.

Not sure we’ll be able to stock our cupboards with Red Burgundy, but we’ll see what our office admin can do! Welcome to the team!

The post Introducing Our Content Director: Roderick appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon ECS Events in February

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-events-in-february/

Here are some upcoming events for Amazon ECS this month:

Container World: Abby Fuller, senior AWS technical evangelist, will be speaking about Amazon ECS at Container World on Feb 21-23. Check out her schedule.

Microservices Day @ AWS NY Loft: Microservices Day is on Feb 24 as part of the DevOps | AWS Loft Architecture Week. Learn more about how to build and deploy microservices architectures on AWS. We will cover how to use Amazon ECS and AWS Lambda to build microservices. Sign up here.

Seattle AWS Architects & Engineers Meetup: Join us Feb 28 at SURF Incubator to learn more about AWS Batch and Amazon ECS. Food and drinks provided. RSVP here.

Excited about MXNet joining Apache!

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/excited-about-mxnet-joining-apache/

Post by Dr. Matt Wood

From Alexa to Amazon Go, we use deep learning extensively across all areas of Amazon, and we’ve tried a lot of deep learning engines along the way. One has emerged as the most scalable, efficient way to perform deep learning, and for these reasons, we have selected MXNet as our engine of choice at Amazon.

MXNet is an open source, state-of-the-art deep learning engine, which allows developers to build sophisticated, custom artificial intelligence systems. Training these systems is significantly faster in MXNet, due to its scale and performance. For example, for the popular image recognition network ResNet, MXNet has 2X the throughput of other engines, letting you train equivalent models in half the time. MXNet also shows close to linear scaling across hundreds of GPUs, while the performance of other engines shows diminishing returns at scale.

We have a significant team at Amazon working with the MXNet community to continue to evolve it. The team proposed MXNet joining the Apache Incubator to take advantage of the Apache Software Foundation’s process, stewardship, outreach, and community events. We’re excited to announce that it has been accepted.

We’re at the start of what we’ll be investing in Apache MXNet, and look forward to partnering with the community to keep extending its already significant utility.

If you’d like to get started with MXNet, take a look at the keynote presentation I gave at AWS re:Invent, and fire up an instance (or an entire cluster) of the AWS Deep Learning AMI, which includes MXNet with example code, pre-compiled and ready to rock. You should also watch Leo’s presentation and tutorial on recommendation modeling.
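As a rough sketch of how that last step might look from the AWS CLI: the AMI ID varies by region and over time, so look it up first, and note that the instance type and key name below are placeholder assumptions rather than values from this post.

# Hypothetical example: launch a GPU instance from the AWS Deep Learning AMI
aws ec2 run-instances \
  --image-id <deep-learning-ami-id> \
  --instance-type p2.xlarge \
  --key-name <your-key-pair> \
  --count 1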

You should follow @apachemxnet on Twitter, or check out the new Apache MXNet page for updates from the open source project.

 

International Phone Fraud Tactics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/international_p.html

This article outlines two different types of international phone fraud. The first can happen when you call an expensive country like Cuba:

My phone call never actually made it to Cuba. The fraudsters make money because the last carrier simply pretends that it connected to Cuba when it actually connected me to the audiobook recording. So it charges Cuban rates to the previous carrier, which charges the preceding carrier, which charges the preceding carrier, and the costs flow upstream to my telecom carrier. The fraudsters siphoning money from the telecommunications system could be anywhere in the world.

The second happens when phones are forced to dial international premium-rate numbers:

The crime ring wasn’t interested in reselling the actual [stolen] phone hardware so much as exploiting the SIM cards. By using all the phones to call international premium numbers, similar to 900 numbers in the U.S. that charge extra, they were making hundreds of thousands of dollars. Elsewhere — Pakistan and the Philippines being two common locations — organized crime rings have hacked into phone systems to get those phones to constantly dial either international premium numbers or high-rate countries like Cuba, Latvia, or Somalia.

Why is this kind of thing so hard to stop?

Stamping out international revenue share fraud is a collective action problem. “The only way to prevent IRFS fraud is to stop the money. If everyone agrees, if no one pays for IRFS, that disrupts it,” says Yates. That would mean, for example, the second-to-last carrier would refuse to pay the last carrier that routed my call to the audiobooks and the third-to-last would refuse to pay the second-to-last, and so on, all the way back up the chain to my phone company. But when has it been easy to get so many companies to do the same thing? It costs money to investigate fraud cases too, and some companies won’t think it’s worth the trade off. “Some operators take a very positive approach toward fraud management. Others see it as cost of business and don’t put a lot of resources or systems in to manage it,” says Yates.

The Apache Traffic Server Project’s Next Chapter

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/152602307661

By Bryan Call, Yahoo Distinguished Software Engineer, Apache Traffic Server PMC Chair 

This post also appears on the ATS blog, https://blogs.apache.org/trafficserver.

Last week, the ATS Community held a productive and informative Apache Traffic Server (ATS) Fall Summit, hosted by LinkedIn in Sunnyvale, CA. At a hackathon during the Summit, we fixed bugs and cleaned up code, users spent time with ATS experts and had their questions answered, and the next release candidate for ATS 7.0.0 was made public. There were talks on operations, new and upcoming features, and supporting products. More than 80 people registered for the event and we had a packed room with remote video conferencing.

I have been attending the ATS Summits since their inception in 2010 and have had the pleasure of being involved with the Apache Traffic Server Project for the last nine years. I was also part of the team at Yahoo that open sourced the code to Apache. Today, I am honored to serve as the new Chair and VP of the ATS Project, having been elected to the position by the ATS community a couple weeks ago.

Traffic Server was originally created by Inktomi and distributed as a commercial product. After Yahoo acquired Inktomi, Yahoo open sourced Traffic Server and submitted it to the Apache Incubator in July 2009.

Since graduating as Apache Traffic Server (an Apache Top-Level Project as of April 2010), many large and small companies use it for caching and proxying HTTP requests. ATS supports HTTP/2, HTTP/1.1, TLS, and many other standards. The Apache Committers on the project are actively involved with the Internet Engineering Task Force (IETF) – whose mission it is to “make the Internet work better by producing high quality, relevant technical documents that influence the way people design, use, and manage the Internet” – to make sure ATS is able to support the latest standards going forward.

Many companies have greatly benefited from the open sourcing of ATS; numerous industry colleagues and invested individuals have improved the project by fixing bugs and adding features, tests, and documentation. An example is Yahoo, which uses ATS for nearly all of its incoming HTTP/2 and HTTP/1.1 traffic. It is a common layer that all users go through before making a request to the origin server. Having a common layer has made it easier for Yahoo to deploy security fixes and updates extremely quickly. ATS is used as a caching proxy in locations worldwide and is also used to proxy requests for dynamic content from remote locations through already-established persistent connections. This decreases the latency for users when their cacheable content can be served, and connection establishments can be made to a nearby server.

The ATS PMC and I will focus on continuing to increase the ATS user base and having more developers contribute to the project. The ATS community welcomes other companies’ contributions and enhancements to the software through a well-established process with Apache. Unlike other commercial products, ATS has no limits or restrictions with accepting open source contributions.

Moving forward, we would also like to focus on three specific areas of ATS as a means of increasing the user base, while maintaining the performance advantage of the server: ease of use, features, and stability.

I support the further simplification of the configuration of ATS to make it so that end users can quickly get a server up with little effort. Common requirements should be easy to configure, while continuing to allow users to write custom plugins for more advanced requirements.
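As a sketch of what “easy to configure” can mean in practice, a basic reverse-proxy rule in ATS is a single line in remap.config (the hostnames below are placeholders, not a configuration from this post):

# remap.config: send requests for the public hostname to the origin server
map http://www.example.com/ http://origin.example.com/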

Adding new features to ATS is important and there are a lot of new drafts and standards currently being worked on in IETF with HTTP, TLS, and QUIC that will improve user experience. ATS will need to continue to support the latest standards that allow deployments of ATS to decrease the latency for the users. Having our developers attend the IETF meetings and participate in the decision-making is key to our ability to keep on top of these latest developments.

Stability is a fundamental requirement for a proxy server. Since all the incoming HTTP/2 and HTTP/1.1 traffic is handled by the server, it must be stable and resilient. We are continually working on improving our continuous integration and testing. We are making it easier for developers to write testing and run tests before making contributions to the code.

The ATS community is a welcoming group of people that encourages contributions and input from users, and I am excited to help lead the Project into its next chapter. Please feel free to join the mailing lists, attend one of our events such as the recent ATS Summit, or jump on IRC to talk to the users and developers of this project. We invite you to learn more about ATS at http://trafficserver.apache.org. 

Weekly roundup: Addled

Post Syndicated from Eevee original https://eev.ee/dev/2016/10/02/weekly-roundup-addled/

September was about continuing the three big things in particular, but, ah.

  • art: I finished a secret loophole commission that I can’t show yet; drew a birthday thing for someone; edited my avatar to be more seasonal; resolved to cross Inktober with daily Pokémon; and then was convinced to maybe try real ink instead. That’s a lot! Making up for not drawing anything for a couple weeks while I was obsessed with Isaac’s Descent, I guess.

  • doom: I spent a little more time fiddling with Sandy-styled maps while watching Liz Ryerson stream Doom stuff, but I still have my classic frustrating problems of drawing everything too small and not having a good idea for the overall shape/flow of the world. I also streamed a couple hours of exploring the new Oblige, which was interesting, at least to me.

  • blog: Some more progress towards upstreaming my fix for <summary> in Pelican Atom feeds. Started taking notes for a paid post. Worked on a MegaZeux post and wrote most of a Doom metrics one, but couldn’t finish either.

  • twitter: I wrote @calloutbot, which will either make perfect sense to you, or not.

I think I may have the flu, but without any respiratory symptoms. Just enough that I’m vaguely tired and sore and not quite able to plan larger stuff like, say, blog posts. Or Doom maps, perhaps. I spent several frustrating days wrangling with two different posts and not getting anywhere with either of them before I remembered a roommate had had the flu about one incubation period ago.

I am thus slightly behind on writing, and haven’t done much else mentally-intense either. It sucks and I’m annoyed, but I’m taking a few days off to draw and do other low-intensity stuff before I make a mad scramble to catch up.


Meanwhile, about those, um, three things. I slipped a bit. A lot.

  • Draft three chapters of this book, September: a second chapter

    No, that didn’t happen.

    But! I decided that Isaac’s Descent HD would make a really good final chapter, since it’s an entire real game written completely from scratch. That means it’s also going to be a whopper. I spent like a third of the month distracted by building Isaac’s Descent HD, which is a prerequisite for writing about how I built it, so that’s some really good progress nonetheless.

    The game isn’t too far along from a player perspective, but I did a lot of engine work and took a lot of notes about it. And of course I took all those notes during Ludum Dare, which I can reverse-engineer into a PICO-8 chapter. So while there’s less visible progress than I wanted, I have a ton more stuff to work with now.

  • Get veekun beta-worthy, September: most games dumped; lookup; core pages working; new site in publicly-available beta

    Ha ha none of this happened. I totally dropped the ball.

    I did work on veekun, but I got caught up in dealing with encounters, and then my brain stopped working so good, so overall it didn’t make a whole lot of progress.

  • Runed Awakening, September: blah blah it doesn’t even matter

    I didn’t touch Runed Awakening all month. Sob.

I should probably be learning a lesson here about biting off more than I can chew, but I adamantly refuse to learn anything.

It’s okay if new veekun isn’t done in time for Sun/Moon, which is looking like it’ll be the case — the old site layout and schema should still hobble along just fine as they are. It would’ve been a great time to breathe some life back into the site, is all. I’ll still try to get as much done as I can, so maybe there can still be a usable beta, but I don’t know what I can magic up in six weeks when I’m also doing several other things.

I don’t know what I’m going to do for October. I’ve got a lot of blogging to do now, I still want to find time to experiment with music, and I also want to do two ink drawings a day for Inktober.

Other than all that, I suppose I’ll still “focus” on these three big things and just see what happens.

AWS Pop-up Loft and Innovation Lab in Munich

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-pop-up-loft-and-innovation-lab-in-munich/

I’m happy to be able to announce that an AWS Pop-up Loft is opening in Munich on October 26th, with a full calendar of events and a brand-new AWS Innovation Lab, all created with the help of our friends at Intel and Nordcloud. Developers, entrepreneurs, and students come to AWS Lofts around the world to learn, code, collaborate, and ask questions. The Loft will provide developers and architects in Munich with access to local technical resources and expertise that will help them to build robust and successful cloud-powered applications.

Near Munich Königsplatz Station
This loft is located at Brienner Str 49, 80333 in Munich, close to Königsplatz Station and convenient to Stiglmaierplatz. Hours are 10 AM to 6 PM Monday through Friday, with special events in the evening.

During the day, you will have access to the Ask an Architect Bar, daily education sessions, Wi-Fi, a co-working space, coffee, and snacks, all at no charge. There will also be resources to help you to create, run, and grow your startup including educational sessions from local AWS partners, accelerators, and incubators.

Ask an Architect
Step up to the Ask an Architect Bar with your code, architecture diagrams, and your AWS questions at the ready! Simply walk in. You will have access to deep technical expertise and will be able to get guidance on AWS architecture, usage of specific AWS services and features, cost optimization, and more.

AWS Education Sessions
During the day, AWS Solution Architects, Product Managers, and Evangelists will be leading 60-minute educational sessions designed to help you to learn more about specific AWS services and use cases. You can attend these sessions to learn about Serverless Architectures, Mobile & Gaming, Databases, Big Data, Compute & Networking, Architecture, Operations, Security, Machine Learning, and more, all at no charge.

Startup Education Sessions
AWS startup community representatives, incubators, accelerators, startup scene influencers, and hot startup customers running on AWS will share best-practices, entrepreneurial know-how, and lessons learned. Pop in to learn the art of pitching, customer validation & profiling, PR for startups & corporations, and more.

Innovation Lab
The new AWS Innovation Lab is adjacent to the Munich Loft. With over 350 square meters of space, the Lab is designed to be a resource for mid-market and enterprise companies that are ready to grow their business. It will feature interactive demos, videos, and other materials designed to explain the benefits of digital transformation and cloud-powered innovation, with a focus on Big Data, mobile applications, and the fourth industrial revolution (Industry 4.0).

Come in and Say Hello
We look forward to using the Loft to meet and to connect with our customers, and expect that it will be a place that they visit on a regular basis. Please feel free to stop in and say hello to my colleagues at the Munich Loft if you happen to find yourself in the city!

Jeff;

Apache NetBeans Incubator Proposal

Post Syndicated from ris original http://lwn.net/Articles/700542/rss

Geertjan Wielenga posted a proposal to the Apache incubator list to adopt NetBeans, an open source development environment, tooling platform, and application framework. “NetBeans has been run by Oracle, with the majority of code contributions coming from Oracle. The specific reason for moving to Apache is to expand the diversity of contributors and to increase the level of meritocracy in NetBeans. Apache NetBeans will be actively seeking new contributors and will welcome them warmly and provide a friendly and productive environment for purposes of providing a development environment, tooling environment, and application framework.” (Thanks to Stephen Kitt)

I wish I enjoyed Pokémon Go

Post Syndicated from Eevee original https://eev.ee/blog/2016/07/31/i-wish-i-enjoyed-pok%C3%A9mon-go/

I’ve been trying really hard not to be a sourpuss about this, because everyone seems to enjoy it a lot and I don’t want to be the jerk pissing in their cornflakes.

And yet!

Despite all the potential of the game, despite all the fervor all across the world, it doesn’t tickle my fancy.

It seems like the sort of thing I ought to enjoy. Pokémon is kind of my jam, if you hadn’t noticed. When I don’t enjoy a Pokémon thing, something is wrong with at least one of us.

The app is broken

I’m not talking about the recent update that everyone’s mad about and that I haven’t even tried. They removed pawprints, which didn’t work anyway? That sucks, yeah, but I think it’s more significant that the thing is barely usable.

I’ve gone out hunting Pokémon several times with my partner and their husband. We wandered around for about an hour each time, and like clockwork, the game would just stop working for me every fifteen minutes. It would still run, and the screen would still update, but it would completely ignore all taps or swipes. The only fix seems to be killing it and restarting it, which takes like a week, and meanwhile the rest of my party has already caught the Zubat or whatever and is moving on.

For the brief moments when it works, it seems to be constantly confused about exactly where I am and which way I’m facing. Pokéstops (Poké Stops?) have massive icons when they’re nearby, and more than once I’ve had to mess around with the camera angle to be able to tap a nearby Pokémon, because a cluster of several already-visited Pokéstops are in the way. There’s also a strip along the bottom of the screen, surrounding the menu buttons, where tapping just does nothing at all.

I’ve had the AR Pokémon catching screen — the entire conceit of the game — lag so badly on multiple occasions that a Pokéball just stayed frozen in midair, and I couldn’t tell if I’d hit the Pokémon or not. There was also the time the Pokéball hit the Pokémon, landed on the ground, and… slowly rolled into the distance. For at least five minutes. I’m not exaggerating this time.

The game is much more responsive with AR disabled, so the Pokémon appear on a bland and generic background, which… seems to defeat the purpose of the game.

(Catching Pokémon doesn’t seem to have any real skill to it, either? Maybe I’m missing something, but I don’t understand how I’m supposed to gauge distance to an isolated 3D model and somehow connect this to how fast I flick my finger. I don’t really like “squishy” physics games like Angry Birds, and this is notably worse. It might as well be random.)

I had a better time just enjoying my party’s company and looking at actual wildlife, which in this case consists of cicadas and a few semi-wild rabbits that inexplicably live in a nearby park. I feel that something has gone wrong with your augmented reality game when it is worse than reality.

It’s not about Pokémon

Let’s see if my reasoning is sound, here.

In the mainline Pokémon games, you play as a human, but many of your important interactions are with Pokémon. You carry a number of Pokémon with you. When you encounter a Pokémon, you immediately send out your own. All the NPCs talk about how much they love Pokémon. There are overworld Pokémon hanging out. It’s pretty clear what the focus is. It’s right there on the title screen, even: both the word itself and an actual Pokémon.

Contrast this with Pokémon Go.

Most of the time, the only thing of interest on the screen is your avatar, a human. Once you encounter a Pokémon, you don’t send out your own; it’s just you, and it. In fact, once you catch a Pokémon, you hardly ever interact with it again. You can go look at its stats, assuming you can find it in your party of, what, 250?

The best things I’ve seen done with the app are AR screenshots of Pokémon in funny or interesting real-world places. It didn’t even occur to me that you can only do this with wild Pokémon until I played it. You can’t use the AR feature — again, the main conceit of the game — with your own Pokémon. How obvious is this? How can it not be possible? (If it is possible, it’s so well-hidden that several rounds of poking through the app haven’t revealed how to do it, which is still a knock for hiding the most obvious thing to want to do.)

So you are a human, and you wander around hoping you see Pokémon, and then you catch them, and then they are effectively just a sprite in a list until you feed them to your other Pokémon. And feed them you must, because the only way to level up a Pokémon is to feed them the corpses — sorry, “candies” — of their brethren. The Pokémon themselves aren’t involved in this process; they are passive consumers you fatten up.

If you’re familiar with Nuzlocke runs, you might be aware of just how attached players — or even passive audiences — can get to their Pokémon in mainline games. Yet in Pokémon Go, the critters themselves are just something to collect, just something to have, just something to sacrifice. No other form of interaction is offered.

In Pokémon X and Y, you can pet your Pokémon and feed them cakes, then go solve puzzles with them. They will love you in return. In Pokémon Go, you can swipe to make the model rotate.

There is some kind of battle system in here somewhere, but as far as I can tell, you only ever battle against gym leaders, who are jerks who’ve been playing the damn thing since it came out and have Pokémon whose CP have more digits than you even knew were possible. Also the battling is real-time with some kind of weird gestural interface, so it’s kind of a crapshoot whether you even do the thing you want, a far cry from the ostensibly strategic theme of the mainline games.

If I didn’t know any better, I’d think some no-name third-party company just took an existing product and poorly plastered Pokémon onto it.

There are very few Pokémon per given area

The game is limited to generation 1, the Red/Blue/Yellow series. And that’s fine.

I’ve seen about six of them.

Rumor has it that they are arranged very cleverly, with fire Pokémon appearing in deserts and water Pokémon appearing in waterfronts. That sounds really cool, except that I don’t live at the intersection of fifteen different ecosystems. How do you get ice Pokémon? Visit my freezer?

I freely admit, I’m probably not the target audience here; I don’t have a commute at all, and on an average day I have no reason to leave the house at all. I can understand that I might not see a huge variety, sure. But I’ve seen several friends lamenting that they don’t see much variety on their own commutes, or around the points of interest near where they live.

If you spend most of your time downtown in a major city, the game is probably great; if you live out in the sticks, it sounds a bit barren. It might be a little better if you could actually tell how to find Pokémon that are more than a few feet away — there used to be a distance indicator for nearby Pokémon, which I’m told even worked at one point, but it’s never worked since I first tried the game and it’s gone now.

Ah, of course, there’s always Pokévision, a live map of what Pokémon are where… which Niantic just politely asked to cease and desist.

It’s full of obvious “free-to-play” nudges

I put “free-to-play” in quotes because it’s a big ol’ marketing lie and I don’t know why the gaming community even tolerates the phrase. The game is obviously designed to be significantly worse if you don’t give them money, and there are little reminders of this everywhere.

The most obvious example: eggs rain from the sky, and are the only way to get Pokémon that don’t appear naturally nearby. You have to walk a certain number of kilometers to hatch an egg, much like the mainline games, which is cute.

Ah, but you also have to put an egg in an incubator for the steps to count. And you only start with one. And they’re given to you very rarely, and any beyond the one you start with only have limited uses at a time. And you can carry 9 eggs at a time.

Never fear! You can buy an extra (limited-use) incubator for the low, low price of $1.48. Or maybe $1.03. It’s hard to tell, since (following the usual pattern of flagrant dishonesty) you first have to turn real money into game-specific trinkets at one of several carefully obscured exchange rates.

The thing is, you could just sell a Pokémon game. Nintendo has done so quite a few times, in fact. But who would pay for Pokémon Go, in the state it’s in?

In conclusion

This game is bad and I wish it weren’t bad. If you enjoy it, that’s awesome, and I’m not trying to rain on your parade, really. I just wish I enjoyed it too.

Airbnb – Reinventing the Hospitality Industry on AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/airbnb-reinventing-the-hospitality-industry-on-aws/

Airbnb is a classic story of how a few people with a great idea can disrupt an entire industry. Since its launch in 2008, over 80 million guests have stayed on Airbnb in over 2 million homes in over 190 countries. They recently opened 4,000 homes in Cuba to travelers around the globe. The company was also an early adopter of AWS.
In the guest post below, Airbnb Engineering Manager Kevin Rice talks about how AWS was an important part of the company’s startup days, and how it stays that way today. —
Jeff;
PS – Learn more about how startups can use AWS to get their business going.

Early Days Our founders recognized that for Airbnb to succeed, they would need to move fast and stay lean. Critical to that was minimizing the time and resources devoted to infrastructure. Our teams needed to focus on getting the business off the ground, not on basic hosting tasks.
Fortunately, at the time, Amazon Web Services had built up a pretty mature offering of compute and storage services that allowed our staff to spin up servers without having to contact anyone or commit to minimum usage requirements. They decided to migrate nearly all of the company’s cloud computing functions to AWS. When you’re a small company starting out, you need to be as leveraged as possible with your available resources. The company’s employees wanted to focus on things that were unique to the success of the business.
Airbnb quickly adopted many of the essential services of AWS, such as Amazon EC2 and Amazon S3. The original MySQL database was migrated to the Amazon Relational Database Service (Amazon RDS) because RDS greatly simplifies so many of the time-consuming administrative tasks typically associated with databases, like automating replication and scaling procedures with a basic API call or through the AWS Management Console.

Sample Airbnb Listings for Barcelona, Spain as of March 23, 2016

Continuous Innovation A big part of our success is due to an intense focus on continual innovation. For us, an investment in AWS is really about making sure our engineers are focused on the things that are uniquely core to our business. Everything that we do in engineering is ultimately about creating great matches between people. Every traveler and every host is unique, and people have different preferences for what they want out of a travel experience.
So a lot of the work that we do in engineering is about matching the right people together for a real world, offline experience. Part of it is machine learning, part of it is search ranking, and part of it is fraud detection—getting bad people off of the site and verifying that people are who they say they are. Part of it is about the user interface and how we get explicit signals about your preferences. In addition, we build infrastructure that both enables these services and that supports our engineers to be productive and to safely deploy code any time of the day or night.
We’ve stayed with AWS through the years because we have a close relationship, which gives us insight and input into the AWS roadmap. For example, we considered building a key management system in house, then saw that the AWS Key Management Service could provide the functionality we were looking for to enhance security. Turning to KMS saved three engineers about six months of development time—valuable resources that we could redirect to other business challenges, like making our matching engine even better. Or take Amazon RDS, which we’ve now relied on for years. We take advantage of the RDS Multi-AZ deployments for failover, which would be really time-consuming to create in house. It’s a huge feature for us that protects our main data store.
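To give a flavor of what that looks like in practice (an illustrative sketch, not Airbnb’s actual configuration), Multi-AZ failover is a single flag when creating an RDS instance from the AWS CLI; every value below is a placeholder:

# Hypothetical example: a MySQL RDS instance with a synchronous standby in a second AZ
aws rds create-db-instance \
  --db-instance-identifier example-db \
  --db-instance-class db.m3.xlarge \
  --engine mysql \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password <your-password> \
  --multi-az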
Supporting Growth As we’ve grown from a startup to a company with a global presence, we’re still paying close attention to the value of our hosting platform. The flexibility AWS gives us is important. We experiment quickly and continuously with new ideas. We are constantly looking at ways to better serve our customers. We don’t always know what’s coming and what kind of technology we’ll need for new projects, and being able to go to AWS and get the hosting and services we need within a matter of minutes is huge.
We haven’t slowed down as we’ve gotten bigger, and we don’t intend to. We still view ourselves as a scrappy startup, and we’ll continue to need the same things we’ve always needed from AWS.
I should mention that we are looking for developers with AWS experience. Here are a couple of openings:

Software Engineer, Site Reliability.
Software Engineer, Production Infrastructure.

— Kevin Rice, Engineering Manager, Airbnb
 


Running an External Zeppelin Instance using S3 Backed Notebooks with Spark on Amazon EMR

Post Syndicated from Dominic Murphy original https://blogs.aws.amazon.com/bigdata/post/Tx2HJD3Z74J2U8U/Running-an-External-Zeppelin-Instance-using-S3-Backed-Notebooks-with-Spark-on-Am

Dominic Murphy is an Enterprise Solution Architect with Amazon Web Services

Apache Zeppelin is an open source GUI which creates interactive and collaborative notebooks for data exploration using Spark. You can use Scala, Python, SQL (using Spark SQL), or HiveQL to manipulate data and quickly visualize results. Zeppelin notebooks can be shared among several users, and visualizations can be published to external dashboards. Zeppelin uses the Spark settings on your cluster and can utilize Spark’s dynamic allocation of executors to let YARN estimate the optimal resource consumption.

With the 4.1.0 release, Amazon EMR introduced Zeppelin as an application that could be installed on an EMR cluster during setup. Zeppelin is installed on the master node of the EMR cluster and creates a Spark Context to run interactive Spark jobs on the EMR cluster where it’s installed. Also, Zeppelin notebooks are stored by default on the master node.

In this blog post, I will show you how to set up Zeppelin running “off-cluster” on a separate EC2 instance. You will be able to submit Spark jobs to an EMR cluster directly from your Zeppelin instance. By setting up Zeppelin off cluster, rather than on the master node of an EMR cluster, you will have the flexibility to choose which EMR cluster to submit jobs to, and can interact with your Zeppelin notebooks when your EMR cluster isn’t active. Finally, I will demonstrate how to store your Zeppelin notebooks on Amazon S3 for durable storage.

Getting started

Make sure you have these resources before beginning the tutorial:

AWS Command Line Interface installed

An SSH client

A key pair in the region where you’ll launch the Zeppelin instance

An S3 bucket in same region to store your Zeppelin notebooks, and to transfer files from EMR to your Zeppelin instance

IAM permissions to create S3 buckets, launch EC2 instances, and create EMR clusters

Create an EMR cluster

The first step is to set up an EMR cluster.

On the Amazon EMR console, choose Create cluster.

Choose Go to advanced options and enter the following options:

Vendor: Amazon

Release: emr-4.2.0

Applications: Ensure that Hadoop 2.6.0, Hive 1.0.0, and Spark 1.5.2 are selected. Deselect Pig and Hue.

In the Add steps section, for Step type, choose Custom JAR.  

Choose Configure and enter:

JAR location: command-runner.jar

Arguments: aws s3 cp /etc/hadoop/conf/ s3://<YOUR_S3_BUCKET>/hadoopconf --recursive

Action on Failure: Continue

Choose Add and add a second step by choosing Configure again.

JAR location: command-runner.jar

Arguments: aws s3 cp /etc/hive/conf/hive-site.xml s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml

Action on failure: Continue

Choose Add, Next.

On the Hardware Configuration page, select your VPC and the subnet where you want to launch the cluster, keep the default selection of one master and two core nodes of m3.xlarge, and choose Next.

On the General Options page, give your cluster a name (e.g., Spark-Cluster) and choose Next.

On the Security Options page, for EC2 key pair, select a key pair. Keep all other settings at the default values and choose Create cluster.

Your three-node cluster takes a few moments to start up. Your cluster is ready when the cluster status is Waiting.

Note: You need the master public DNS, subnet ID, security groups, and VPC ID for Master and Core/Task for use in subsequent steps. You can retrieve the first three from the EMR console, and the VPC ID from the EC2 Instances page.
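If you prefer scripting to clicking through the console, a roughly equivalent cluster can be created from the AWS CLI. This is a sketch that assumes the default EMR roles already exist in your account (they can be created with aws emr create-default-roles); substitute your own key pair, subnet, and bucket:

# Sketch: create the EMR cluster with the two conf-copying steps from above
aws emr create-cluster \
  --name "Spark-Cluster" \
  --release-label emr-4.2.0 \
  --applications Name=Hadoop Name=Hive Name=Spark \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --ec2-attributes KeyName=<your-key-pair>,SubnetId=<your-subnet-id> \
  --steps Type=CUSTOM_JAR,Name=CopyHadoopConf,Jar=command-runner.jar,ActionOnFailure=CONTINUE,Args=[aws,s3,cp,/etc/hadoop/conf/,s3://<YOUR_S3_BUCKET>/hadoopconf,--recursive] \
          Type=CUSTOM_JAR,Name=CopyHiveConf,Jar=command-runner.jar,ActionOnFailure=CONTINUE,Args=[aws,s3,cp,/etc/hive/conf/hive-site.xml,s3://<YOUR_S3_BUCKET>/hiveconf/hive-site.xml]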

Launch an EC2 instance with Apache Zeppelin

Launch an EC2 Zeppelin instance with a CloudFormation template.

In the CloudFormation console, choose Create Stack.

Choose Specify an Amazon S3 template URL, and enter the following

https://s3.amazonaws.com/aws-bigdata-blog/artifacts/zeppelin-yarn-on-ec2.json

Choose Next.

In the next page, give your stack a name and enter the following parameters:

EMRMasterSecurityGroup: Security group of EMR master.

EMRSlaveSecurityGroup: Security group of EMR core & task.

Instance Type: I recommend m3.xlarge for this procedure.

KeyName: Your key pair.

S3HadoopConfFolder: Replace <mybucket> with an S3 bucket from your account.

S3HiveConfFolder: Replace <mybucket> with an S3 bucket from your account.

SSHLocation: CIDR block that will be allowed to connect using SSH into the Zeppelin instance.

ZeppelinAccessLocation: CIDR block that will be allowed to connect to Zeppelin Web over port 8080.

ZeppelinSubnetId: Subnet where your EMR cluster launched.

ZeppelinVPCId: VPC where your EMR cluster launched.

Choose Next.

Optionally, specify a tag for your instance. Choose Next.

Review your choices, check the IAM acknowledgement, and choose Create.

Your stack will take several minutes to complete as it creates the EC2 instance and provisions Zeppelin and its prerequisites. While you are waiting, navigate to the S3 console and create a bucket for Zeppelin notebook storage. Create a folder in S3 for your Zeppelin user, and then a subfolder under it called notebook.

In the screen shot below, the Zeppelin storage bucket is called “zeppelin-bucket,” the Zeppelin user is “zeppelin-user,” and the notebook subfolder is in the user folder.
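If you’d rather script this step, here is a sketch using those same example names (S3 has no real folders, so the “folder” is just a zero-byte key ending in a slash):

# Create the notebook storage bucket, then the user/notebook prefix
aws s3 mb s3://zeppelin-bucket
aws s3api put-object --bucket zeppelin-bucket --key zeppelin-user/notebook/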

Return to the CloudFormation console. When the CloudFormation stack status returns CREATE_COMPLETE, your EC2 instance is ready.

Open the EC2 console to view your EC2 instance. Note the IP address and security group, as you will use them in subsequent steps.

Configure your EMR security group to allow traffic from Zeppelin instance

In the EMR console, select your cluster and navigate to the Cluster Details page.

For Security group for Master, select a security group. The default is ElasticMapReduce-master.

On the Security Group page, choose Inbound, Edit, Add Rule, All TCP. For Source, choose Custom IP, and in the next field enter the EC2 Zeppelin instance’s security group.

Repeat the above steps for Security groups for Core & Task.

Finalize the Zeppelin instance configuration

Connect to your Zeppelin EC2 instance using SSH. Note: if you are using PuTTY, you can follow the instructions in the Connecting to Your Linux Instance from Windows Using PuTTY topic.

## SSH as ec2-user to your instance
ssh -i <your key pair file> ec2-user@<your EC2 instance IP address>

Complete the zeppelin-env.sh settings with the S3 Bucket and S3 User Folder you entered earlier.

sudo nano /home/ec2-user/zeppelin/conf/zeppelin-env.sh
export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
export MASTER=yarn-client
export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
#
#
export ZEPPELIN_NOTEBOOK_STORAGE=org.apache.zeppelin.notebook.repo.S3NotebookRepo
export ZEPPELIN_NOTEBOOK_S3_BUCKET=<myZeppelinBucket>
export ZEPPELIN_NOTEBOOK_USER=<myZeppelinUser>

Edit your /home/ec2-user/zeppelin/conf/zeppelin-site.xml file. Navigate to the following section and replace the placeholder values with your S3 bucket and folder:

<!-- If you use S3 for storage, the following folder structure is necessary: bucket_name/username/notebook/ -->
<property>
<name>zeppelin.notebook.s3.user</name>
<value><myZeppelinUser></value>
<description>user name for S3 folder structure</description>
</property>
<property>
<name>zeppelin.notebook.s3.bucket</name>
<value><myZeppelinBucket></value>
<description>bucket name for notebook storage</description>
</property>

Start and test Zeppelin

Start your Zeppelin instance. From your /home/ec2-user/zeppelin directory, type:

sudo bin/zeppelin-daemon.sh start

You are now done with your SSH session.

Switch to your client’s browser window. Test your instance by navigating to http://<yourZeppelinInstanceIP>:8080/#/

On the Zeppelin homepage, choose Import Note and enter the location for the Zeppelin Tutorial JSON file as follows:

https://raw.githubusercontent.com/apache/incubator-zeppelin/master/notebook/2A94M5J1Z/note.json

Complete the import process and execute the notebook by choosing Run All Paragraphs.

After a few moments, you should see the Tutorial dashboard as follows:

In your Hadoop Resource Manager, you should see the Zeppelin application running on your EMR cluster.

 

Navigate to the S3 console and verify that you can see your notebook.json file in the following folder:

s3://<yourZeppelinBucket>/<userfolder>/notebook/<notebookid>/note.json
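Equivalently, you can check from the AWS CLI (same placeholders as the path above):

aws s3 ls s3://<yourZeppelinBucket>/<userfolder>/notebook/ --recursive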

Clean up

You can now clean up your instances to stop incurring charges:

Navigate to the CloudFormation console and choose Delete Stack.

Navigate to the EMR console, select your cluster, and choose Terminate.

Navigate to the S3 console, from the Actions menu choose Delete Bucket. Type the name of the S3 bucket you used for this exercise and choose Delete.

Summary

In this blog post, you learned how to create a Zeppelin instance on EC2 and configure it as a YARN client. You also configured Zeppelin to store notebooks durably in S3 rather than on a local disk, so you can shutdown or even terminate your instance and still persist your notebook data.

In this example you first created an EMR cluster, and then configured Zeppelin to submit jobs to that cluster. In a future post, I will examine submitting jobs to multiple EMR clusters from Zeppelin.

If you have a question or suggestion, please leave a comment below.

———————————–

Related:

Building a Recommendation Engine with Spark ML on Amazon EMR using Zeppelin

 

Running an External Zeppelin Instance using S3 Backed Notebooks with Spark on Amazon EMR

Post Syndicated from Dominic Murphy original https://blogs.aws.amazon.com/bigdata/post/Tx2HJD3Z74J2U8U/Running-an-External-Zeppelin-Instance-using-S3-Backed-Notebooks-with-Spark-on-Am

Dominic Murphy is an Enterprise Solution Architect with Amazon Web Services

Apache Zeppelin is an open source GUI which creates interactive and collaborative notebooks for data exploration using Spark. You can use Scala, Python, SQL (using Spark SQL), or HiveQL to manipulate data and quickly visualize results. Zeppelin notebooks can be shared among several users, and visualizations can be published to external dashboards. Zeppelin uses the Spark settings on your cluster and can utilize Spark’s dynamic allocation of executors to let YARN estimate the optimal resource consumption.

With the 4.1.0 release, Amazon EMR introduced Zeppelin as an application that could be installed on an EMR cluster during set up. Zeppelin is installed on the master node of the EMR cluster and creates a Spark Context to run interactive Spark jobs on the EMR cluster where it’s installed. Also, Zeppelin notebooks are stored by default on the master node.

In this blog post, I will show you how to set up Zeppelin running “off-cluster” on a separate EC2 instance. You will be able to submit Spark jobs to an EMR cluster directly from your Zeppelin instance. By setting up Zeppelin off cluster, rather than on the master node of an EMR cluster, you will have the flexibility to choose which EMR cluster to submit jobs to, and can interact with your Zeppelin notebooks when your EMR cluster isn’t active. Finally, I will demonstrate how to store your Zeppelin notebooks on Amazon S3 for durable storage.

Getting started

Make sure you have these resources before beginning the tutorial:

AWS Command Line Interface installed

An SSH client

A key pair in the region where you’ll launch the Zeppelin instance

An S3 bucket in same region to store your Zeppelin notebooks, and to transfer files from EMR to your Zeppelin instance

IAM permissions to create S3 buckets, launch EC2 instances, and create EMR clusters

Create an EMR cluster

The first step is to set up an EMR cluster.

On the Amazon EMR console, choose Create cluster.

Choose Go to advanced options and enter the following options:

Vendor: Amazon

Release: emr-4.2.0

Applications: Ensure that Hadoop 2.6.0, Hive 1.0.0, and Spark 1.5.2 are selected. Deselect Pig and Hue.

In the Add steps section, for Step type, choose Custom JAR.  

Choose Configure and enter:

JAR location: command-runner.jar

Arguments: aws s3 cp /etc/hadoop/conf/ s3://<YOUR_S3_BUCKET>/hadoopconf –recursive

Action on Failure: Continue

Choose Add and add a second step by choosing Configure again.

JAR location: command-runner.jar

Arguments: aws s3 cp /etc/hive/conf/hive-site.xml s3://<YOUR_S3_BUCKET/hiveconf/hive-site.xml

Action on failure: Continue

Choose Add, Next.

On the Hardware Configuration page, select your VPC and the subnet where you want to launch the cluster, keep the default selection of one master and two core nodes of m3.xlarge, and choose Next.

On the General Options page, give your cluster a name (e.g., Spark-Cluster) and choose Next.

On the Security Options page, for EC2 key pair, select a key pair. Keep all other settings at the default values and choose Create cluster.

Your three-node cluster takes a few moments to start up. Your cluster is ready when the cluster status is Waiting.

Note: You need the master public DNS, subnet ID, security groups, and VPC ID for Master and Core/Task for use in subsequent steps. You can retrieve the first three from the EMR console, and the VPC ID from the EC2 Instances page.

Launch an EC2 instance with Apache Zeppelin

Launch an EC2 Zeppelin instance with a CloudFormation template.

In the CloudFormation console, choose Create Stack.

Choose Specify an Amazon S3 template URL, and enter the following

https://s3.amazonaws.com/aws-bigdata-blog/artifacts/zeppelin-yarn-on-ec2.json

Choose Next.

In the next page, give your stack a name and enter the following parameters:

EMRMasterSecurityGroup: Security group of EMR master.

EMRSlaveSecurityGroup: Security group of EMR core & task.

Instance Type: I recommend m3.xlarge for this procedure.

KeyName: Your key pair.

S3HadoopConfFolder: Replace <mybucket> with an S3 bucket from your account.

S3HiveConfFolder: Replace <mybucket> with an S3 bucket from your account.

SSHLocation: CIDR block that will be allowed to connect using SSH into the Zeppelin instance.

ZeppelinAccessLocation: CIDR block that will be allowed to connect to Zeppelin Web over port 8080.

ZeppelinSubnetId: Subnet where your EMR cluster launched.

ZeppelinVPCId: VPC where your EMR cluster launched.

Choose Next.

Optionally, specify a tag for your instance. Choose Next.

Review your choices, check the IAM acknowledgement, and choose Create.
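Alternatively, the stack can be launched from the AWS CLI. This is a sketch assuming the template's parameter keys match the console labels above (I have not verified the exact key spellings against the template, and the stack name is a placeholder):

## Launch the Zeppelin stack; CAPABILITY_IAM mirrors the IAM acknowledgement
aws cloudformation create-stack --stack-name zeppelin-on-ec2 \
    --template-url https://s3.amazonaws.com/aws-bigdata-blog/artifacts/zeppelin-yarn-on-ec2.json \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=KeyName,ParameterValue=<your-key-pair> \
                 ParameterKey=EMRMasterSecurityGroup,ParameterValue=<sg-master> \
                 ParameterKey=EMRSlaveSecurityGroup,ParameterValue=<sg-core-task> \
                 ParameterKey=ZeppelinSubnetId,ParameterValue=<your-subnet-id> \
                 ParameterKey=ZeppelinVPCId,ParameterValue=<your-vpc-id>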

Your stack will take several minutes to complete as it creates the EC2 instance and provisions Zeppelin and its prerequisites. While you are waiting, navigate to the S3 console and create a bucket for Zeppelin notebook storage. Create a folder in S3 for your Zeppelin user, and then a subfolder under it called notebook.
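If you'd rather create this storage layout from the CLI, here is a sketch using the example names shown in the screen shot below; S3 “folders” are just zero-byte key prefixes:

## Create the notebook bucket and the user/notebook prefix
aws s3 mb s3://zeppelin-bucket
aws s3api put-object --bucket zeppelin-bucket --key zeppelin-user/notebook/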

In the screen shot below, the Zeppelin storage bucket is called “zeppelin-bucket,” the Zeppelin user is “zeppelin-user,” and the notebook subfolder is in the user folder.

Return to the CloudFormation console. When the CloudFormation stack status returns CREATE_COMPLETE, your EC2 instance is ready.

Open the EC2 console to view your EC2 instance. Note the IP address and security group, as you will use them in subsequent steps.

Configure your EMR security groups to allow traffic from the Zeppelin instance

In the EMR console, select your cluster and navigate to the Cluster Details page.

For Security group for Master, select a security group. The default is ElasticMapReduce-master.

On the Security Group page, choose Inbound, Edit, Add Rule, All TCP. For Source, choose Custom IP, and in the next field enter the EC2 Zeppelin instance’s security group.

Repeat the above steps for Security groups for Core & Task.
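The same rules can also be added from the CLI; a sketch with placeholder group IDs:

## Allow all TCP from the Zeppelin instance's security group to the EMR master
aws ec2 authorize-security-group-ingress --group-id <emr-master-sg-id> \
    --protocol tcp --port 0-65535 --source-group <zeppelin-instance-sg-id>

## Repeat for the core & task security group
aws ec2 authorize-security-group-ingress --group-id <emr-core-task-sg-id> \
    --protocol tcp --port 0-65535 --source-group <zeppelin-instance-sg-id>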

Finalize the Zeppelin instance configuration

Connect to your Zeppelin EC2 instance using SSH. Note: if you are using PuTTY, you can follow the instructions in the Connecting to Your Linux Instance from Windows Using PuTTY topic.

## SSH as ec2-user to your instance
ssh -i <your key pair file> ec2-user@<your EC2 instance IP address>

Complete the zeppelin-env.sh settings with the S3 Bucket and S3 User Folder you entered earlier.

sudo nano /home/ec2-user/zeppelin/conf/zeppelin-env.sh

## Point Zeppelin at Java, run against YARN, and use the Hadoop
## configuration copied from the EMR master earlier
export JAVA_HOME=/etc/alternatives/java_sdk_openjdk
export MASTER=yarn-client
export HADOOP_CONF_DIR=/home/ec2-user/hadoopconf
#
## Store notebooks durably in S3 rather than on the local disk
export ZEPPELIN_NOTEBOOK_STORAGE=org.apache.zeppelin.notebook.repo.S3NotebookRepo
export ZEPPELIN_NOTEBOOK_S3_BUCKET=<myZeppelinBucket>
export ZEPPELIN_NOTEBOOK_USER=<myZeppelinUser>

Edit your /home/ec2-user/zeppelin/conf/zeppelin-site.xml file. Navigate to the following section and replace the placeholder values with your S3 bucket and user folder:

<!-- If you use S3 for storage, the following folder structure is necessary: bucket_name/username/notebook/ -->
<property>
  <name>zeppelin.notebook.s3.user</name>
  <value><myZeppelinUser></value>
  <description>user name for S3 folder structure</description>
</property>
<property>
  <name>zeppelin.notebook.s3.bucket</name>
  <value><myZeppelinBucket></value>
  <description>bucket name for notebook storage</description>
</property>

Start and test Zeppelin

Start your Zeppelin instance. From your /home/ec2-user/zeppelin directory, type:

sudo bin/zeppelin-daemon.sh start
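Before closing the session, you can optionally confirm the daemon came up; the stock Zeppelin daemon script supports a status subcommand:

## Check that the Zeppelin daemon is running
sudo bin/zeppelin-daemon.sh status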

You are now done with your SSH session.

Switch to your client’s browser window. Test your instance by navigating to http://<yourZeppelinInstanceIP>:8080/#/

On the Zeppelin homepage, choose Import Note and enter the location for the Zeppelin Tutorial JSON file as follows:

https://raw.githubusercontent.com/apache/incubator-zeppelin/master/notebook/2A94M5J1Z/note.json

Complete the import process and execute the notebook by choosing Run All Paragraphs.

After a few moments, you should see the Tutorial dashboard as follows:

In your Hadoop Resource Manager, you should see the Zeppelin application running on your EMR cluster.


Navigate to the S3 console and verify that you can see your notebook.json file in the following folder:

s3://<yourZeppelinBucket>/<userfolder>/notebook/<notebookid>/note.json
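You can also verify this from the command line. A sketch (the notebook ID is generated by Zeppelin, so list recursively rather than guessing it):

## List the stored notebook files under your user's notebook prefix
aws s3 ls s3://<yourZeppelinBucket>/<userfolder>/notebook/ --recursive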

Clean up

You can now clean up your resources to stop incurring charges (CLI equivalents are sketched after these steps):

Navigate to the CloudFormation console and choose Delete Stack.

Navigate to the EMR console, select your cluster, and choose Terminate.

Navigate to the S3 console, from the Actions menu choose Delete Bucket. Type the name of the S3 bucket you used for this exercise and choose Delete.
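For reference, here are CLI equivalents of the cleanup steps above; the stack name and cluster ID are placeholders from this walkthrough:

## Delete the CloudFormation stack (terminates the Zeppelin EC2 instance)
aws cloudformation delete-stack --stack-name zeppelin-on-ec2

## Terminate the EMR cluster
aws emr terminate-clusters --cluster-ids <your-cluster-id>

## Remove the S3 bucket and all of its contents
aws s3 rb s3://<yourZeppelinBucket> --force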

Summary

In this blog post, you learned how to create a Zeppelin instance on EC2 and configure it as a YARN client. You also configured Zeppelin to store notebooks durably in S3 rather than on a local disk, so you can shut down or even terminate your instance and still persist your notebook data.

In this example you first created an EMR cluster, and then configured Zeppelin to submit jobs to that cluster. In a future post, I will examine submitting jobs to multiple EMR clusters from Zeppelin.

If you have a question or suggestion, please leave a comment below.

———————————–

Related:

Building a Recommendation Engine with Spark ML on Amazon EMR using Zeppelin


Help Fund Open-Wash-Free Zones

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/12/03/conservancy-supporter.html

Recently, I was forwarded an email from an executive at a 501(c)(6) trade association. In answering a question about accepting small donations for an “Open Source” project through their organization, the Trade Association Executive responded: “Accepting [small] donations [from individuals] is possible, but [is] generally not a sustainable way to raise funds for a project based on our experience. It’s extremely difficult … to raise any meaningful or reliable amounts.”

I was aghast, but not surprised. The current Zeitgeist of the broader
Open Source and Free Software community incubated his disturbing mindset.
Our community suffers now from regular and active cooption by for-profit
interests. The Trade Association Executive’s fundraising claim —
which probably even holds true in their subset of the community —
shows the primary mechanism of cooption: encourage funding only from a few,
big sources so they can slowly but surely dictate project policy.

Today, more revenue than ever goes to the development of code released
under licenses that respect software freedom. That belabored sentence
contains the key subtlety: most Free Software communities are not
receiving more funding than before, in fact, they’re probably receiving
less. Instead, Open Source became a fad, and now it’s “cool”
for for-profit companies to release code, or channel funds through some
trade associations to get the code they want written and released. This
problem is actually much worse than traditional open-washing. I’d call this for-profit cooption its own subtle
open-washing: picking a seemingly acceptable license for the software, but
“engineering” the “community” as a proxy group
controlled by for-profit interests.

This cooption phenomenon leaves the community-oriented efforts of Free
Software charities underfunded and (quite often) under attack. These same
companies that fund plenty of Open Source development also often oppose
copyleft. Meanwhile, the majority of
Free Software projects that predate the “Open Source Boom”
didn’t rise to worldwide fame and discover a funding bonanza. Such less
famous projects still struggle financially for the very basics. For
example, I participate in email threads nearly every day with Conservancy member projects who are just trying to figure out how to fund sending developers to a conference to give a talk about their project.

Thus, a sad kernel of truth hides in the Trade Association Executive’s
otherwise inaccurate statement: big corporate donations buy influence, and
a few of our traditionally community-oriented Free Software projects have
been “bought” in various ways with this influx of cash. The
trade associations seek to facilitate more of this. Unless we change our
behavior, the larger Open Source and Free Software community may soon look
much like the political system in the USA: where a few lobbyist-like
organizations control the key decision-making through funding. In such a
structure, who will stand up for those developers who
prefer copyleft? Who will make sure
individual developers receive the organizational infrastructure they need?
In short, who will put the needs of individual developers and users ahead
of for-profit companies?

Become a Conservancy Supporter!

The answer is simple: non-profit 501(c)(3) charities in our community. These organizations are required by IRS regulation to pass a public support test, which means they must seek large portions of their revenue from individuals in the general public and not receive too much from any small group of sources. Our society charges these organizations with the difficult but attainable tasks of (a) answering to the general public, and never to for-profit corporate donors, and (b) funding the organization via mechanisms appropriate to that charge. The best part is that you, the individual, have the strongest say in reaching those goals.

Those who favor for-profit corporate control of “Open Source”
projects will always insist that Free Software initiatives and plans just
cannot be funded effectively via small, individual donations. Please, for
the sake of software freedom, help us prove them wrong. There’s even an
easy way that you can do that. For just $10 a month, you can join the Conservancy Supporter program. You can help Conservancy stand up for Free Software projects who seek to keep project control in the hands of developers and users.

Of course, I realize you might not like my work at Conservancy. If you
don’t, then give to the FSF instead.
If you like neither Conservancy nor the FSF, then give to the GNOME Foundation. Just pick the 501(c)(3) non-profit charity in the Free
Software community that you like best and donate. The future of software
freedom depends on it.