Tag Archives: forest

Weekly roundup: Taking a breather

Post Syndicated from Eevee original https://eev.ee/dev/2017/08/09/weekly-roundup-taking-a-breather/

Nothing too special about this week; it went a little slow, but that’s been nice after the mad panic I was in at the end of July.

  • cc: I’m getting the hang of Unity and forming an uneasy truce with C#. Mostly did refactoring of some existing actor code, trying to move all the reading of controls to a single place so the rest of it can be reused for non-players.

  • fox flux: I put some work into a new forest background, which is already just… hilariously better than the one from the original game. Complex textures like leaves are one of my serious weak points, but this is forcing me to do it anyway and I’m slowly learning.

  • blog: I finished that post on Pokémon datamining, which ended up extraordinarily long and slightly late.

  • veekun: Dug into some missing stuff regarding items.

  • art: Spent a day or two doodling.

Still behind by one blog post (oops), and slacked on veekun a bit, but I’ve still got momentum.

AWS CloudFormation Supports Amazon Kinesis Analytics Applications

Post Syndicated from Ryan Nienhuis original https://aws.amazon.com/blogs/big-data/aws-cloudformation-supports-amazon-kinesis-analytics-applications/

You can now provision and manage resources for Amazon Kinesis Analytics applications using AWS CloudFormation.  Kinesis Analytics is the easiest way to process streaming data in real time with standard SQL, without having to learn new programming languages or processing frameworks. Kinesis Analytics enables you to query streaming data or build entire streaming applications using SQL. Using the service, you gain actionable insights and can respond to your business and customer needs promptly.

Customers can create CloudFormation templates that easily create or update Kinesis Analytics applications. Typically, a template is used as a way to manage code across different environments, or to prototype a new streaming data solution quickly.
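As a rough illustration of what such a template can contain, here is a minimal, hypothetical sketch that uses boto3 to validate and create a stack defining an AWS::KinesisAnalytics::Application resource. The stack name, application name, ARNs, and SQL are placeholders, and the referenced IAM role must already grant Kinesis Analytics access to the source stream; treat this as a starting point rather than a ready-made template.

    import textwrap

    import boto3

    # Hypothetical, minimal template: one Kinesis Analytics application reading
    # from an existing Kinesis stream. Names, ARNs, and the SQL are placeholders.
    TEMPLATE = textwrap.dedent("""
        AWSTemplateFormatVersion: '2010-09-09'
        Resources:
          DemoAnalyticsApp:
            Type: AWS::KinesisAnalytics::Application
            Properties:
              ApplicationName: demo-analytics-app
              ApplicationDescription: Example application managed by CloudFormation
              ApplicationCode: |
                CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" ("iotValue" INTEGER);
              Inputs:
                - NamePrefix: SOURCE_SQL_STREAM
                  KinesisStreamsInput:
                    ResourceARN: arn:aws:kinesis:us-east-1:123456789012:stream/RawStreamData
                    RoleARN: arn:aws:iam::123456789012:role/kinesis-analytics-demo-role
                  InputSchema:
                    RecordFormat:
                      RecordFormatType: JSON
                      MappingParameters:
                        JSONMappingParameters:
                          RecordRowPath: "$"
                    RecordColumns:
                      - Name: iotValue
                        SqlType: INTEGER
                        Mapping: "$.iotValue"
        """)

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.validate_template(TemplateBody=TEMPLATE)  # catch template errors before creating anything
    cfn.create_stack(StackName="kinesis-analytics-demo", TemplateBody=TEMPLATE)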

We have created two sample templates using past AWS Big Data Blog posts that referenced Kinesis Analytics.

For more information about the new feature, see the AWS CloudFormation User Guide.


TVStreamCMS Brings Pirate Streaming Site Clones to The Masses

Post Syndicated from Ernesto original https://torrentfreak.com/tvstreamcms-brings-pirate-streaming-site-clones-to-the-masses-170723/

In recent years many pirates have moved from more traditional download sites and tools, to streaming portals.

These streaming sites come in all shapes and sizes, and there is fierce competition among site owners to grab the most traffic. More traffic means more money, after all.

While building a streaming site from scratch is quite an operation, there are scripts on the market that allow virtually anyone to set up their own streaming index in just a few minutes.

TVStreamCMS is one of the leading players in this area. To find out more we spoke to one of the people behind the project, who prefers to stay anonymous, but for the sake of this article, we’ll call him Rick.

“The idea came up when I wanted to make my own streaming site. I saw that they make a lot of money, and many people had them,” Rick tells us.

After discovering that there were already a few streaming site scripts available, Rick saw an opportunity. None of the popular scripts at the time offered automatic updates with freshly pirated content, a gap that was waiting to be filled.

“I found out that TVStreamScript and others on ThemeForest like MTDB were available, but these were not automatized. Instead, they were kinda generic and hard to update. We wanted to make our own site, but as we made it, we also thought about reselling it.”

Soon after, TVStreamCMS was born. In addition to using it for his own project, Rick also decided to offer it to others who wanted to run their own streaming portal, for a monthly subscription fee.

TVStreamCMS website

According to Rick, the script’s automated content management system has been its key selling point. The buyers don’t have to update or change much themselves, as pretty much everything is automated.

This has generated hundreds of sales over the years, according to the developer. And several of the sites that run on the script are successfully “stealing” traffic from the original, such as gomovies.co, which ranks well above the real GoMovies in Google’s search results.

“Currently, a lot of the sites competing against the top level streaming sites are using our script. This includes 123movies.co, gomovies.co and putlockers.tv, keywords like yesmovies fmovies gomovies 123movies, even in different Languages like Portuguese, French and Italian,” Rick says.

The pirated videos that appear on these sites come from a database maintained by the TVStreamCMS team. These are hosted on their own servers, but also by third parties such as Google and Openload.

When we looked at one of the sites we noticed a few dead links, but according to Rick, these are regularly replaced.

“Dead links are maintained by our team, DMCA removals are re-uploaded, and so on. This allows users not to worry about re-uploading or adding content daily and weekly as movies and episodes release,” Rick explains.

While this all sounds fine and dandy for prospective pirates, there are some significant drawbacks.

Aside from the obvious legal risks that come with operating one of these sites, there is also a financial hurdle. The full package costs $399 plus a monthly fee of $99, and the basic option is $399 and $49 per month.

TVStreamCMS subscription plans

There are apparently plenty of site owners who don’t mind paying this kind of money. That said, not everyone is happy with the script. TorrentFreak spoke to a source at one of the larger streaming sites, who believes that these clones are misleading their users.

TVStreamCMS is not impressed by the criticism. They know very well what they are doing. Their users asked for these clone templates, and they are delivering them, so both sides can make more money.

“We’re in the business to make money and grow the sales,” Rick says.

“So we have made templates looking like 123movies, Yesmovies, Fmovies and Putlocker to accommodate the demands of the buyers. A similar design gets buyers traffic and is very, very effective for new sites, as users who come from Google they think it is the real website.”

The fact that 123Movies changed its name to GoMovies and recently changed to a GoStream.is URL, only makes it easier for clones to get traffic, according to the developer.

“This provides us with a lot of business because every time they change their name the buyers come back and want another site with the new name. GoMovies, for instance, and now Gostream,” Rick notes.

Of course, the infringing nature of the clone sites means that there are many copyright holders who would rather see the script and its associated sites gone. Previously, the Hollywood group FACT managed to shut down TVstreamScript, taking down hundreds of sites that relied on it, and it’s likely that TVStreamCMS is being watched too.

For now, however, more and more clones continue to flood the web with pirated streams.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code below simulates a streaming application, such as an IoT device, and writes random data and anomalies into a Kinesis stream. The code generates two temperature ranges: the hypothetical sensor’s normal operating range (10–20) and the anomaly range (100–120). Make sure to change the stream name in the two put_record calls and the Region in the connect_to_region call to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
    import json
    import random

    from boto import kinesis

    # Connect to the Region in which you created the Kinesis stream.
    kinesis_client = kinesis.connect_to_region("us-east-1")

    def get_data(iot_name, low_val, high_val):
        # One sensor reading with a random value in the given range.
        return {"iotName": iot_name, "iotValue": random.randint(low_val, high_val)}

    while True:
        if random.random() < 0.01:
            # Roughly 1% of records fall into the anomaly range (100-120).
            data = json.dumps(get_data("DemoSensor", 100, 120))
            kinesis_client.put_record("RawStreamData", data, "DemoSensor")
            print('***************************** anomaly ************************* ' + data)
        else:
            # Normal operating range (10-20).
            data = json.dumps(get_data("DemoSensor", 10, 20))
            kinesis_client.put_record("RawStreamData", data, "DemoSensor")
            print(data)

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES (a scripted boto3 alternative is sketched after this list):
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role. Eventually, the size of the attached policies causes IAM to reject the role, and it fails in a non-obvious way: the console essentially stops functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.
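If you prefer to script the delivery-stream setup from step 5 instead of clicking through the console, a rough boto3 equivalent is sketched below. The ARNs, stream name, and index are placeholders, and the IAM role must already allow Kinesis Firehose to write to both the Amazon ES domain and the backup S3 bucket.

    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    # Hypothetical names and ARNs; replace them with the role, domain, and
    # bucket from your own account. Buffering matches the console settings
    # used above (1 MB / 60 seconds).
    firehose.create_delivery_stream(
        DeliveryStreamName="anomaly-to-es",
        ElasticsearchDestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-demo-role",
            "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/my-es-domain",
            "IndexName": "anomaly",
            "TypeName": "record",
            "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 1},
            "S3BackupMode": "FailedDocumentsOnly",
            "S3Configuration": {
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-demo-role",
                "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
            },
        },
    )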

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;


  1. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot. Note that the query selects from the in-application source stream SOURCE_SQL_STREAM_001 that you connected earlier. Also note the ANOMALY_SCORE column: this is the value that the RANDOM_CUT_FOREST function calculates from the temperature ranges produced by the Python script, with the higher (anomaly) temperature range producing higher scores. In the SQL code, the first two blocks create the two in-application streams that hold the temporary data and the final result. The third block defines a pump (STREAM_PUMP_1) that runs the RANDOM_CUT_FOREST function over the raw source data, computes an anomaly score (ANOMALY_SCORE), and inserts the results into TEMP_STREAM. The final block loads the rows from TEMP_STREAM into DESTINATION_SQL_STREAM.
  2. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow because a single-instance domain has nowhere to place the replica copies of the index. To fix this, you can deploy two instances instead of one.

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query (a scripted query against the same index is sketched after this list).
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.
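If you would rather check the exported documents without Kibana, the following sketch queries the index directly over HTTP. It assumes the domain access policy allows your IP address (as configured earlier); the endpoint and index name are placeholders for your own values.

    import requests

    ES_ENDPOINT = "https://search-my-es-domain-abc123.us-east-1.es.amazonaws.com"  # your domain endpoint
    INDEX = "anomaly"  # the index name you gave the Firehose delivery stream

    # Ask for the five documents with the highest anomaly scores.
    query = {
        "size": 5,
        "sort": [{"ANOMALY_SCORE": {"order": "desc"}}],
    }

    resp = requests.post("{}/{}/_search".format(ES_ENDPOINT, INDEX), json=query)
    resp.raise_for_status()
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_source"])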


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.



About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

GameTale

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2060

Are you the parent of a child who is just a few years old?

Do you want to teach the little kid to like books, while all she or he wants is games?

There is now a way to have both!

Sure, there are a lot of gamebooks, but they are targeted at teenagers. Let me tell you about one that was written for children between three and nine years old.

It is the tale of Gremmy – the little gremlin who sets off on a big adventure. Gremmy may climb The Big Mountain, or perhaps travel down The Deep River. It may venture into The Enchanted Forest, unless you go with it into The Dark Cave instead. It will meet magical creatures and face ingenious choices…

It is a tale you can read to your kids. Lead them through a kingdom of magic and wonder, introduce them to its inhabitants, and let them make their own choices and see the funny and witty results. Nurture their curiosity and imagination, while also teaching them wise and important things.

The author – Nikola Raykov – is the youngest writer ever to win Bulgaria’s most prestigious award for children’s literature. The book has sold more copies in Bulgarian than is typical for a title by Stephen King or Paulo Coelho! It has since been published in Russian, Italian and Latvian as well. And now you can have the English translation.

Most gamebooks have few illustrations, typically black-and-white ones. GameTale is full of excellent full-color artwork, as a book for children should be. And it provides not only entertainment, but also value.

Don’t believe it? Take a look for yourself – the entire book is freely available on the author’s website, even before it is printed: to read and play, to download and enjoy. The same goes for all of its translations and the Bulgarian original. Yes, all those sales happened while the book was available to everybody. Letting readers see what they are buying has been its best advertisement.

Here is what the writer says:

“I believe it would be cruel if children weren’t able to enjoy my books because their parents could not afford them, and children’s authors should not be cruel. They should be gentle, caring and loving. The values we write about should not be just words on paper. We should be the living and breathing examples of those values, because what we write HAS to be true. Every good author will tell you that you cannot lie to your readers (or little listeners). They will catch you in a second. When you read a book, you can actually feel if the author is being honest about his or her inner self.”

“I DO believe that people are inherently good. If you have poured your heart into something, if you have tried your best, people will feel that and give you their unconditional support. There is no need to hide your work: people are not thieves! If you share, they will care, they will follow you, they will nag you about when your next book comes out, and yes, they will gladly support you because they will know that their children’s favorite author actually believes in the values he’s writing about. The same things they believe in – friendship, love and freedom!”

Nikola has started a campaign on Kickstarter. Its goal is to fund the printing of 1,000 copies of the book in English. And your donations get you rewards your kid will love!

Years ago, when I read this book, I felt like a kid. And now I envy you a little for the joy that you will get from it. 🙂 Do give it a try. There is nothing to lose, and a lot to win!

Introspection

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/

This month, IndustrialRobot has generously donated in order to ask:

How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?

Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.


The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.

I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.

Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!

The problem, I think, was that there was no point.

This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.

But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.

Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.

I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.

Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.

A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.

Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.

I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.

It’s been incredibly rewarding — far moreso than any “pure” tooling problem I’ve ever approached. Moreso than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.

I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.

I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I were making and didn’t have the clear scope of “do what I need for this game”.


It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.

That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.

How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.

It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.

I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.

A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.

I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?

The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.

It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.


I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.

If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.

Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.

And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.


I’m getting distracted again!

To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.

Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.

I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.

This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.

How to Enable the Use of Remote Desktops by Deploying Microsoft Remote Desktop Licensing Manager on AWS Microsoft AD

Post Syndicated from Ron Cully original https://aws.amazon.com/blogs/security/how-to-enable-the-use-of-remote-desktops-by-deploying-microsoft-remote-desktop-licensing-manager-on-aws-microsoft-ad/

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, now supports Microsoft Remote Desktop Licensing Manager (RD Licensing). By using AWS Microsoft AD as the directory for your Remote Desktop Services solution, you reduce the time it takes to deploy remote desktop solutions on Amazon EC2 for Windows Server instances, and you enable your users to use remote desktops with the credentials they already know. In this blog post, I explain how to deploy RD Licensing Manager on AWS Microsoft AD to enable your users to sign in to remote desktops by using credentials stored in an AWS Microsoft AD or an on-premises Active Directory (AD) domain.

Enable your AWS Microsoft AD users to open remote desktop sessions

To use RD Licensing, you must authorize RD Licensing servers in the same Active Directory domain as the Windows Remote Desktop Session Hosts (RD Session Hosts) by adding them to the Terminal Service Licensing Server security group in AD. This new release grants your AWS Microsoft AD administrative account permissions to do this. As a result, you can now deploy RD Session Hosts in the AWS Cloud without the extra time and effort to set up and configure your own AD domain on Amazon EC2 for Windows Server.

The following diagram illustrates the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD and shows what happens when users connect to remote desktops.


In detail, here is how the process works, as it is illustrated in the preceding diagram:

  1. Create an AWS Microsoft AD directory and create users in the directory. You can add user accounts (in this case jsmith) using Active Directory Users and Computers on an EC2 for Windows Server instance that you joined to the domain.
  2. Create EC2 for Windows Server instances to use as your RD Licensing servers (RDLS1 in the preceding diagram). Add the instances to the same domain to which you will join your Windows Remote Desktop Session Hosts (RD Session Hosts).
  3. Configure your EC2 for Windows Server instances as RD Licensing servers and add them to the Terminal Service Licensing Servers security group in AWS Microsoft AD. You can connect to the instances from the AWS Management Console to configure RD Licensing. You also can use Active Directory Users and Computers to add the RD Licensing servers to the security group, thereby authorizing the instances for RD Licensing.
  4. Install your Remote Desktop Services client access licenses (RDS CALs) on the RD Licensing server. You can connect to the instances from the AWS Management Console to install the RDS CALs.
  5. Create other hosts for use as RD Session Hosts (RDSH1 in the diagram). Add the hosts to the same domain as your RD Licensing servers.
  6. A user (in this case jsmith) attempts to open an RDS session.
  7. The RD Session Host requests an RDS CAL from the RD Licensing Server.
  8. The RD Licensing Server returns an RDS CAL to the RD Session Host.

Because the user exists in AWS Microsoft AD, authentication happens against AWS Microsoft AD. The order of authentication relative to session creation depends on whether you configure your RD Session Host for Network Level Authentication.

Enable your users to open remote desktop sessions with their on-premises credentials

If you have an on-premises AD domain with users, your users can open remote desktop sessions with their on-premises credentials if you create a forest trust from AWS Microsoft AD to your Active Directory. The trust enables using on-premises credentials without the need for complex directory synchronization or replication. The following diagram illustrates how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a. With the trust in place, AWS Microsoft AD refers the RD Session Host to the on-premises domain for authentication.
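For reference, the trust from Step 1a can also be created programmatically. The following boto3 sketch uses the Directory Service CreateTrust API; the directory ID, domain name, password, and DNS addresses are placeholders, and the matching trust must still be configured on the on-premises side.

    import boto3

    ds = boto3.client("ds", region_name="us-east-1")

    # Hypothetical values; the conditional forwarders should point at your
    # on-premises DNS servers. "One-Way: Outgoing" lets AWS Microsoft AD refer
    # authentication for on-premises users to your existing forest.
    ds.create_trust(
        DirectoryId="d-1234567890",
        RemoteDomainName="corp.example.com",
        TrustPassword="REPLACE_WITH_TRUST_PASSWORD",
        TrustDirection="One-Way: Outgoing",
        TrustType="Forest",
        ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],
    )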


Summary

In this post, I have explained how to authorize RD Licensing in AWS Microsoft AD to support EC2-based remote desktop sessions for AWS managed users and on-premises AD managed users. To learn more about how to use AWS Microsoft AD, see the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, please start a new thread on the Directory Service forum.

– Ron

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/


In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.
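As a quick taste of the new SDK, here is a hedged sketch based on its early module-level API (later releases moved to an EncryptionSDKClient, so check the documentation for your installed version). The KMS key ARN is a placeholder.

    import aws_encryption_sdk

    # A master key provider backed by a (placeholder) KMS customer master key.
    key_provider = aws_encryption_sdk.KMSMasterKeyProvider(
        key_ids=["arn:aws:kms:us-east-1:123456789012:key/example-key-id"]
    )

    ciphertext, _header = aws_encryption_sdk.encrypt(
        source=b"hello streaming world",
        key_provider=key_provider,
    )

    plaintext, _header = aws_encryption_sdk.decrypt(
        source=ciphertext,
        key_provider=key_provider,
    )
    assert plaintext == b"hello streaming world"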

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.
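A minimal boto3 sketch of the new call might look like the following; the directory ARN and object selector are placeholders for your own directory.

    import boto3

    cd = boto3.client("clouddirectory", region_name="us-east-1")

    resp = cd.list_object_parent_paths(
        DirectoryArn="arn:aws:clouddirectory:us-east-1:123456789012:directory/d-example",
        ObjectReference={"Selector": "/org/engineering/alice"},  # the child object
        MaxResults=10,
    )
    # Each entry lists one path from a root to the object, plus the identifiers along it.
    for path in resp["PathToObjectIdentifiersList"]:
        print(path["Path"])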

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.


February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight who sign in with their on-premises credentials, by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services, unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.
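The CLI flow described above has a straightforward SDK equivalent as well; the sketch below shows the boto3 call, assuming an instance profile named WebServerRole already exists and wraps the role you want to attach (the profile name and instance ID are placeholders).

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Attach the instance profile (and the IAM role inside it) to a running instance.
    ec2.associate_iam_instance_profile(
        InstanceId="i-0123456789abcdef0",
        IamInstanceProfile={"Name": "WebServerRole"},
    )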

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to update EC2 instances automatically that run Linux when Amazon Inspector discovers a missing security patch.


January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.
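A hedged sketch of such a remediation function is below; the event fields follow the CloudTrail event structure delivered by CloudWatch Events, and the public-group URIs are the standard S3 grantee groups.

```python
# Sketch of a Lambda function triggered by a CloudWatch Events rule that matches
# PutObject and PutObjectAcl CloudTrail events and resets public objects to private.
import boto3

s3 = boto3.client("s3")
PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def handler(event, context):
    params = event["detail"]["requestParameters"]
    bucket, key = params["bucketName"], params["key"]

    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    is_public = any(
        grant["Grantee"].get("URI") in PUBLIC_GROUPS for grant in acl["Grants"]
    )
    if is_public:
        # Reset the object ACL so only the owner retains access.
        s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
```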

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.
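As an illustration of the evaluation step (not the post's exact method), the sketch below flags any ingress rule on a group that strays from the HTTP/HTTPS control objective; the group ID is a placeholder, and in practice the check would run from a Lambda function triggered by CloudWatch Events or AWS Config.

```python
# Sketch: given a security group that should only allow HTTP and HTTPS,
# flag any other ingress rule so it can be alerted on or reverted.
import boto3

ALLOWED_PORTS = {80, 443}
ec2 = boto3.client("ec2")

def unexpected_rules(group_id):
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    violations = []
    for perm in group["IpPermissions"]:
        if perm.get("FromPort") not in ALLOWED_PORTS:
            violations.append(perm)
    return violations

if unexpected_rules("sg-0123456789abcdef0"):  # placeholder group ID
    print("Security group drifted from its control objective; alert or revert.")
```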

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

Zelda-inspired ocarina-controlled home automation

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zelda-home-automation/

Allen Pan has wired up his home automation system to be controlled by memorable tunes from the classic Zelda franchise.

Zelda Ocarina Controlled Home Automation – Zelda: Ocarina of Time | Sufficiently Advanced

With Zelda: Breath of the Wild out on the Nintendo Switch, I made a home automation system based off the Zelda series using the ocarina from The Legend of Zelda: Ocarina of Time.

Listen!

Released in 1998, The Legend of Zelda: Ocarina of Time is the best game ever and still an iconic entry in the retro gaming history books.

Very few games have stuck with me in the same way Ocarina has, and I think it’s fair to say that, with the continued success of the Zelda franchise, I’m not the only one who has a special place in their heart for Link, particularly in this musical outing.

Legend of Zelda: Ocarina of Time screenshot

Thanks to Cynosure Gaming‘s Ocarina of Time review for the image.

Allen, or Sufficiently Advanced, as his YouTube subscribers know him, has used a Raspberry Pi to detect and recognise key tunes from the game, with each tune being linked (geddit?) to a specific task. By playing Zelda’s Lullaby (E, G, D, E, G, D), for instance, Allen can lock or unlock the door to his house. Other tunes have different functions: Epona’s Song unlocks the car (for Ocarina noobs, Epona is Link’s horse sidekick throughout most of the game), and Minuet of Forest waters the plants.

So how does it work?

It’s a fairly simple setup based around note recognition. When certain notes are played in a specific sequence, the Raspberry Pi detects the tune via a microphone within the Amazon Echo-inspired body of the build, and triggers the action related to the specific task. The small speaker you can see in the video plays a confirmation tune, again taken from the video game, to show that the task has been completed.
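Allen hasn't published this exact code here, but the matching step can be imagined roughly like the sketch below; the note spellings for Epona's Song and the Minuet of Forest are assumptions, not taken from the project.

```python
# Illustrative sketch (not Allen's actual code): map the most recently detected
# ocarina notes to a home-automation action once a known tune appears.
SONGS = {
    ("E", "G", "D", "E", "G", "D"): "toggle_door_lock",  # Zelda's Lullaby
    ("D", "B", "A", "D", "B", "A"): "unlock_car",        # Epona's Song (assumed notes)
    ("D", "D", "B", "A", "B", "A"): "water_plants",      # Minuet of Forest (assumed notes)
}

def check_for_song(note_buffer):
    """Return the action for any song that matches the end of the note buffer."""
    for melody, action in SONGS.items():
        if tuple(note_buffer[-len(melody):]) == melody:
            return action
    return None
```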

Legend of Zelda Ocarina of Time Raspberry Pi Home Automation system setup image

As for the tasks themselves, Allen has built a small controller for each action, whether it be a piece of wood that presses down on his car key, a servomotor that adjusts the ambient temperature, or a water pump to hydrate his plants. Each controller has its own small ESP8266 wireless connectivity module that links back to the wireless-enabled Raspberry Pi, cutting down on the need for a ton of wires about the home.
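The hand-off from the Pi to those controllers could be as simple as one HTTP request per action; the hostnames and endpoints below are invented purely for illustration.

```python
# Hypothetical sketch of the Raspberry Pi side: once a tune is recognised,
# ping the matching ESP8266 controller on the local network.
import requests

CONTROLLERS = {
    "toggle_door_lock": "http://esp-door.local/toggle",
    "unlock_car":       "http://esp-car.local/unlock",
    "water_plants":     "http://esp-plants.local/water",
}

def trigger(action):
    url = CONTROLLERS.get(action)
    if url:
        requests.post(url, timeout=2)  # fire-and-forget; the ESP8266 does the rest
```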

And yes, before anybody says it, we’re sure that Allen is aware that using tone recognition is not the safest means of locking and unlocking your home. This is just for fun.

Do-it-yourself home automation

While we don’t necessarily expect everyone to brush up on their ocarina skills and build their own Zelda-inspired home automation system, the idea of using something other than voice or text commands to control home appliances is a fun one.

You could use facial recognition at the door to start the kettle boiling, or the detection of certain gasses to (ahem!) spray an air freshener.

We love to see what you all get up to with the Raspberry Pi. Have you built your own home automation system controlled by something other than your voice? Share it in the comments below.

 

The post Zelda-inspired ocarina-controlled home automation appeared first on Raspberry Pi.

How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-access-the-aws-management-console-using-aws-microsoft-ad-and-your-on-premises-credentials/

AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML).

In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

Background

AWS customers use on-premises AD to administer user accounts, manage group memberships, and control access to on-premises resources. If you are like many AWS Microsoft AD customers, you also might want to enable your users to sign in to the AWS Management Console using on-premises AD credentials to manage AWS resources such as Amazon EC2, Amazon RDS, and Amazon S3.

Enabling such sign-in permissions has four key benefits:

  1. Your on-premises AD group administrators can now manage access to AWS resources with standard AD administration tools instead of IAM.
  2. Your users need to remember only one identity to sign in to AD and the AWS Management Console.
  3. Because users sign in with their on-premises AD credentials, access to the AWS Management Console benefits from your AD-enforced password policies.
  4. When you remove a user from AD, AWS Microsoft AD and IAM automatically revoke their access to AWS resources.

IAM roles provide a convenient way to define permissions to manage AWS resources. By using an AD trust between AWS Microsoft AD and your on-premises AD, you can assign your on-premises AD users and groups to IAM roles. This gives the assigned users and groups the IAM roles’ permissions to manage AWS resources. By assigning on-premises AD groups to IAM roles, you can now manage AWS access through standard AD administrative tools such as AD Users and Computers (ADUC).

After you assign your on-premises users or groups to IAM roles, your users can sign in to the AWS Management Console with their on-premises AD credentials. From there, they can select from a list of their assigned IAM roles. After they select a role, they can perform the management functions that you assigned to the IAM role.

In the rest of this post, I show you how to accomplish this in four steps:

  1. Create an access URL.
  2. Enable AWS Management Console access.
  3. Assign on-premises users and groups to IAM roles.
  4. Connect to the AWS Management Console.

Prerequisites

The instructions in this blog post require you to have the following components running:

Note: You can assign IAM roles to user identities stored in AWS Microsoft AD. For this post, I focus on assigning IAM roles to user identities stored in your on-premises AD. This requires a forest trust relationship between your on-premises Active Directory and your AWS Microsoft AD directory.

Solution overview

For the purposes of this post, I am the administrator who manages both AD and IAM roles in my company. My company wants to enable all employees to use on-premises credentials to sign in to the AWS Management Console to access and manage their AWS resources. My company uses EC2, RDS, and S3. To manage administrative permissions to these resources, I created a role for each service that gives full access to the service. I named these roles EC2FullAccess, RDSFullAccess, and S3FullAccess.

My company has two teams with different responsibilities, and we manage users in AD security groups. Mary is a member of the DevOps security group and is responsible for creating and managing our RDS databases, running data collection applications on EC2, and archiving information in S3. John and Richard are members of the BIMgrs security group and use EC2 to run analytics programs against the database. Though John and Richard need access to the database and archived information, they do not need to operate those systems. They do need permission to administer their own EC2 instances.

To grant appropriate access to the AWS resources, I need to assign the BIMgrs security group in AD to the EC2FullAccess role in IAM, and I need to assign the DevOps group to all three roles (EC2FullAccess, RDSFullAccess, and S3FullAccess). Also, I want to make sure all our employees have adequate time to complete administrative actions after signing in to the AWS Management Console, so I increase the console session timeout from 60 minutes to 240 minutes (4 hours).

The following diagram illustrates the relationships between my company’s AD users and groups and my company’s AWS roles and services. The left side of the diagram represents my on-premises AD that contains users and groups. The right side represents the AWS Cloud that contains the AWS Management Console, AWS resources, IAM roles, and our AWS Microsoft AD directory connected to our on-premises AD via a forest trust relationship.

Diagram of on-premises AD users and groups mapped to IAM roles and AWS services

Let’s get started with the steps for this scenario. For this post, I have already created an AWS Microsoft AD directory and established a two-way forest trust from AWS Microsoft AD to my on-premises AD. To manage access to AWS resources, I have also created the following IAM roles:

  • EC2FullAccess: Provides full access to EC2 and has the AmazonEC2FullAccess AWS managed policy attached.
  • RDSFullAccess: Provides full access to RDS via the AWS Management Console and has the AmazonRDSFullAccess managed policy attached.
  • S3FullAccess: Provides full access to S3 via the AWS Management Console and has the AmazonS3FullAccess managed policy attached.

To learn more about how to create IAM roles and attach managed policies, see Attaching Managed Policies.

Note: You must include a Directory Service trust policy on all roles that require access by users who sign in to the AWS Management Console using Microsoft AD. To learn more, see Editing the Trust Relationship for an Existing Role.
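If you prefer to script the role setup rather than use the console, a hedged boto3 sketch looks like this; the role name and managed policy mirror the EC2FullAccess example above, and ds.amazonaws.com is the Directory Service principal the trust policy must allow.

```python
# Sketch: create the EC2FullAccess role with a trust policy that lets AWS
# Directory Service assume it on behalf of console users, then attach the
# AmazonEC2FullAccess managed policy.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ds.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="EC2FullAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="EC2FullAccess",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)
```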

Step 1 – Create an access URL

The first step to enabling access to the AWS Management Console is to create a unique Access URL for your AWS Microsoft AD directory. An Access URL is a globally unique URL. AWS applications, such as the AWS Management Console, use the URL to connect to the AWS sign-in page that is linked to your AWS Microsoft AD directory. The Access URL does not provide any other access to your directory. To learn more about Access URLs, see Creating an Access URL.

Follow these steps to create an Access URL:

  1. Navigate to the Directory Service Console and choose your AWS Microsoft AD Directory ID.
  2. On the Directory Details page, choose the Apps & Services tab, type a unique access alias in the Access URL box, and then choose Create Access URL to create an Access URL for your directory.
    Screenshot of creating an Access URL

Your directory Access URL should be in the following format: <access-alias>.awsapps.com. In this example, I am using https://example-corp.awsapps.com.

Step 2 – Enable AWS Management Console access

To allow users to sign in to AWS Management Console with their on-premises credentials, you must enable AWS Management Console access for your AWS Microsoft AD directory:

  1. From the Directory Service console, choose your AWS Microsoft AD Directory ID. Choose the AWS Management Console link in the AWS apps & services section.
    Screenshot of choosing the AWS Management Console link
  2. In the Enable AWS Management Console dialog box, choose Enable Access to enable console access for your directory.
    Screenshot of choosing Enable Access

This enables AWS Management Console access for your AWS Microsoft AD directory and provides you a URL that you can use to connect to the console. The URL is generated by appending “/console” to the end of the access URL that you created in Step 1: <access-alias>.awsapps.com/console. In this example, the AWS Management Console URL is https://example-corp.awsapps.com/console.
Screenshot of the URL to connect to the console

Step 3 – Assign on-premises users and groups to IAM roles

Before your users can use your Access URL to sign in to the AWS Management Console, you need to assign on-premises users or groups to IAM roles. This critical step enables you to control which AWS resources your on-premises users and groups can access from the AWS Management Console.

In my on-premises Active Directory, Mary is already a member of the DevOps group, and John and Richard are members of the BIMgrs group. I already set up the trust from AWS Microsoft AD to my on-premises AD, and I already created the EC2FullAccess, RDSFullAccess, and S3FullAccess roles that I will use.

I am now ready to assign on-premises groups to IAM roles. I do this by assigning the DevOps group to the EC2FullAccess, RDSFullAccess, and S3FullAccess IAM roles, and the BIMgrs group to the EC2FullAccess IAM role. Follow these steps to assign on-premises groups to IAM roles:

  1. Open the Directory Service details page of your AWS Microsoft AD directory and choose the AWS Management Console link on the Apps & services tab. Choose Continue to navigate to the Add Users and Groups to Roles page.
    Screenshot of Manage access to AWS Resources dialog box
  2. On the Add Users and Groups to Roles page, I see the three IAM roles that I have already configured (shown in the following screenshot). If you do not have any IAM roles with a Directory Service trust policy enabled, you can create new roles or enable Directory Service for existing roles.
  3. I will now assign the on-premises DevOps and BIMgrs groups to the EC2FullAccess role. To do so, I choose the EC2FullAccess IAM role link to navigate to the Role Detail page. Next, I choose the Add button to assign users or groups to the role, as shown in the following screenshot.
  4. In the Add Users and Groups to Role pop-up window, I select the on-premises Active Directory forest that contains the users and groups to assign. In this example, that forest is amazondomains.com. Note: If you do not use a trust to an on-premises AD and you create users and groups in your AWS Microsoft AD directory, you can choose the default this forest option to search for users in Microsoft AD.
  5. To assign an Active Directory group, choose the Group filter above the Search for field. Type the name of the Active Directory group in the search box and choose the search button (the magnifying glass). You can see that I was able to search for the DevOps group from my on-premises Active Directory.
  6. In this case, I added the on-premises groups, DevOps and BIMgrs, to the EC2FullAccess role. When finished, choose the Add button to assign users and groups to the IAM role. You have now successfully granted DevOps and BIMgrs on-premises AD groups full access to EC2. Users in these AD groups can now sign in to AWS Management Console using their on-premises credentials and manage EC2 instances.

From the Add Users and Groups to Roles page, I repeat the process to assign the remaining groups to the IAM roles. In the following screenshot, you can see that I have assigned the DevOps group to three roles and the BIMgrs group to only one role.

With my AD security groups assigned to my IAM roles, I can now add and delete on-premises users to the security groups to grant or revoke permissions to the IAM roles. Users in these security groups have access to all of their assigned roles.

  7. You can optionally set the login session length for your AWS Microsoft AD directory. The default length is 1 hour, but you can increase it up to 12 hours. In my example, I set the console session time to 240 minutes (4 hours).

Step 4 – Connect to the AWS Management Console

I am now ready for my users to sign in to the AWS Management Console with their on-premises credentials. I emailed my users the access URL I created in Step 2: https://example-corp.awsapps.com/console. Now my users can go to the URL to sign in to the AWS Management Console.

When Mary, who is a member of DevOps group, goes to the access URL, she sees a sign-in page to connect to the AWS Management Console. In the Username box, she can enter her sign-in name in three different ways:

Because the DevOps group is associated with three IAM roles, and because Mary is in the DevOps group, she can choose the role she wants from the list presented after she successfully logs in. The following screenshot shows this step.

If you also would like to secure the AWS Management Console with multi-factor authentication (MFA), you can add MFA to your AWS Microsoft AD configuration. To learn more about enabling MFA on Microsoft AD, see How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials.

Summary

AWS Microsoft AD makes it easier for you to connect to the AWS Management Console by using your on-premises credentials. It also enables you to reuse your on-premises AD security policies such as password expiration, password history, and account lockout policies while still controlling access to AWS resources.

To learn more about Directory Service, see the AWS Directory Service home page. If you have questions about this blog post, please start a new thread on the Directory Service forum.

– Vijay

How to Easily Log On to AWS Services by Using Your On-Premises Active Directory

Post Syndicated from Ron Cully original https://aws.amazon.com/blogs/security/how-to-easily-log-on-to-aws-services-by-using-your-on-premises-active-directory/

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition.

In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

To follow along, you must have already implemented an on-premises AD infrastructure. You will also need to have an AWS account with an Amazon Virtual Private Cloud (Amazon VPC). I start with some basic concepts to explain domainless logon. If you have prior knowledge of AD domain names, NetBIOS names, logon names, and AD trusts, you can skip the following “Concepts” section and move ahead to the “Interforest Trust with Domainless Logon” section.

Concepts: AD domain names, NetBIOS names, logon names, and AD trusts

AD directories are distributed hierarchical databases that run on one or more domain controllers. AD directories comprise a forest that contains one or more domains. Each forest has a root domain and a global catalog that runs on at least one domain controller. Optionally, a forest may contain child domains as a way to organize and delegate administration of objects. The domains contain user accounts each with a logon name. Domains also contain objects such as groups, computers, and policies; however, these are outside the scope of this blog post. When child domains exist in a forest, root domains are frequently unused for user accounts. The global catalog contains a list of all user accounts for all domains within the forest, similar to a searchable phonebook listing of all domain accounts. The following diagram illustrates the basic structure and naming of a forest for the company example.com.

Diagram of basic structure and naming of forest for example.com

Domain names

AD domains are Domain Name Service (DNS) names, and domain names are used to locate user accounts and other objects in the directory. A forest has one root domain, and its name consists of a prefix name and a suffix name. Often administrators configure their forest suffix to be the registered DNS name for their organization (for example, example.com) and the prefix is a name associated with their forest root domain (for example, us). Child domain names consist of a prefix followed by the root domain name. For example, let’s say you have a root domain us.example.com, and you created a child domain for your sales organization with a prefix of sales. The FQDN is the domain prefix of the child domain combined with the root domain prefix and the organization suffix, all separated by periods (“.”). In this example, the FQDN for the sales domain is sales.us.example.com.

NetBIOS names

NetBIOS is a legacy application programming interface (API) that worked over network protocols. NetBIOS names were used to locate services in the network and, for compatibility with legacy applications, AD associates a NetBIOS name with each domain in the directory. Today, NetBIOS names continue to be used as simplified names to find user accounts and services that are managed within AD and must be unique within the forest and any trusted forests (see “Interforest trusts” section that follows). NetBIOS names must be 15 or fewer characters long.

For this post, I have chosen the following strategy to ensure that my NetBIOS names are unique across all domains and all forests. For my root domain, I concatenate the root domain prefix with the forest suffix, without the .com and without the periods. In this case, usexample is the NetBIOS name for my root domain us.example.com. For my child domains, I concatenate the child domain prefix with the root domain prefix without periods. This results in salesus as the NetBIOS name for the child domain sales.us.example.com. For my example, I can use the NetBIOS name salesus instead of the FQDN sales.us.example.com when searching for users in the sales domain.
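That strategy amounts to concatenating the first two DNS labels of the domain, which a few lines of Python can illustrate (purely for illustration; AD does not derive NetBIOS names for you):

```python
# Tiny illustration of the naming strategy above: the NetBIOS name is the first
# two DNS labels of the domain concatenated, and must stay within 15 characters.
def netbios_name(fqdn):
    labels = fqdn.lower().split(".")  # "sales.us.example.com" -> ["sales", "us", ...]
    name = labels[0] + labels[1]      # root: "us"+"example"; child: "sales"+"us"
    if len(name) > 15:
        raise ValueError("NetBIOS names must be 15 or fewer characters")
    return name

assert netbios_name("us.example.com") == "usexample"
assert netbios_name("sales.us.example.com") == "salesus"
```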

Logon names

Logon names are used to log on to Active Directory and must be 20 or fewer characters long (for example, jsmith or dadams). Logon names must be unique within a domain, but they do not have to be unique between different domains in the same forest. For example, there can be only one dadams in the sales.us.example.com (salesus) domain, but there could also be a dadams in the hr.us.example.com (hrus) domain. When possible, it is a best practice for logon names to be unique across all forests and domains in your AD infrastructure. By doing so, you can typically use the AD logon name as a person’s email name (the local-part of an email address), and your forest suffix as the email domain (for example, [email protected]). This way, end users only have one name to remember for email and logging on to AD. Failure to use unique logon names results in some people having different logon and email names.

For example, let’s say there is a Daryl Adams in hrus with a logon name of dadams and a Dale Adams in salesus with a logon name of dadams. The company is using example.com as its email domain. Because email requires addresses to be unique, you can only have one dadams@example.com email address. Therefore, you would have to give one of these two people (let’s say Dale Adams) a different email address such as daleadams@example.com. Now Dale has to remember to log on to the network as dadams (the AD logon name) but have an email name of daleadams. If unique user names were assigned instead, Dale could have a logon name of daleadams and an email name of daleadams.

Logging on to AD

To allow AD to find user accounts in the forest during log on, users must include their logon name and the FQDN or the NetBIOS name for the domain where their account is located. Frequently, the computers used by people are joined to the same domain as the user’s account. The Windows desktop logon screen chooses the computer’s domain as the default domain for logon, so users typically only need to type their logon name and password. However, if the computer is joined to a different domain than the user, the user’s FQDN or NetBIOS name are also required.

For example, suppose jsmith has an account in sales.us.example.com, and the domain has a NetBIOS name salesus. Suppose jsmith tries to log on using a shared computer that is in the computers.us.example.com domain with a NetBIOS name of uscomputers. The computer defaults the logon domain to uscomputers, but jsmith does not exist in the uscomputers domain. Therefore, jsmith must type her logon name and her FQDN or NetBIOS name in the user name field of the Windows logon screen. Windows supports multiple syntaxes to do this, including NetBIOS\username (salesus\jsmith) and FQDN\username (sales.us.example.com\jsmith).

Interforest trusts

Most organizations have a single AD forest in which to manage user accounts, computers, printers, services, and other objects. Within a single forest, AD uses a transitive trust between all of its domains. A transitive trust means that within a trust, domains trust users, computers, and services that exist in other domains in the same forest. For example, a printer in printers.us.example.com trusts sales.us.example.com\jsmith. As long as jsmith is given permissions to do so, jsmith can use the printer in printers.us.example.com.

An organization at times might need two or more forests. When multiple forests are used, it is often desirable to allow a user in one forest to access a resource, such as a web application, in a different forest. However, trusts do not work between forests unless the administrators of the two forests agree to set up a trust.

For example, suppose a company that has a root domain of us.example.com has another forest in the EU with a root domain of eu.example.com. The company wants to let users from both forests share the same printers to accommodate employees who travel between locations. By creating an interforest trust between the two forests, this can be accomplished. In the following diagram, I illustrate that us.example.com trusts users from eu.example.com, and the forest eu.example.com trusts users from us.example.com through a two-way forest trust.

Diagram of a two-way forest trust

In rare cases, an organization may require three or more forests. Unlike domain trusts within a single forest, interforest trusts are not transitive. That means, for example, that if the forest us.example.com trusts eu.example.com, and eu.example.com trusts jp.example.com, us.example.com does not automatically trust jp.example.com. For us.example.com to trust jp.example.com, an explicit, separate trust must be created between these two forests.

When setting up trusts, there is a notion of trust direction. The direction of the trust determines which forest is trusting and which forest is trusted. In a one-way trust, one forest is the trusting forest, and the other is the trusted forest. The direction of the trust is from the trusting forest to the trusted forest. A two-way trust is simply two one-way trusts going in opposite directions; in this case, both forests are both trusting and trusted.

Microsoft Windows and AD use an authentication technology called Kerberos. After a user logs on to AD, Kerberos gives the user’s Windows account a Kerberos ticket that can be used to access services. Within a forest, the ticket can be presented to services such as web applications to prove who the user is, without the user providing a logon name and password again. Without a trust, the Kerberos ticket from one forest will not be honored in a different forest. In a trust, the trusting forest agrees to trust users who have logged on to the trusted forest, by trusting the Kerberos ticket from the trusted forest. With a trust, the user account associated with the Kerberos ticket can access services in the trusting forest if the user account has been granted permissions to use the resource in the trusting forest.

Interforest Trust with Domainless Logon

For many users, remembering domain names or NetBIOS names has been a source of numerous technical support calls. With the new updates to Microsoft AD, AWS applications such as Amazon WorkSpaces can be updated to support domainless logon through interforest trusts between Microsoft AD and your on-premises AD. Domainless logon eliminates the need for people to enter a domain name or a NetBIOS name to log on if their logon name is unique across all forests and all domains.

As described in the “Concepts” section earlier in this post, AD authentication requires a logon name to be presented with an FQDN or NetBIOS name. If AD does not receive an FQDN or NetBIOS name, it cannot find the user account in the forest. Windows can partially hide domain details from users if the Windows computer is joined to the same domain in which the user’s account is located. For example, if jsmith in salesus uses a computer that is joined to the sales.us.example.com domain, jsmith does not have to remember her domain name or NetBIOS name. Instead, Windows uses the domain of the computer as the default domain to try when jsmith enters only her logon name. However, if jsmith is using a shared computer that is joined to the computers.us.example.com domain, jsmith must log on by specifying her domain of sales.us.example.com or her NetBIOS name salesus.

With domainless logon, Microsoft AD takes advantage of global catalogs, and because most user names are unique across an entire organization, the need for an FQDN or NetBIOS name for most users to log on is eliminated.

Let’s look at how domainless logon works.

AWS applications that use Directory Service use a similar AWS logon page and identical logon process. Unlike a Windows computer joined to a domain, the AWS logon page is associated with a Directory Service directory, but it is not associated with any particular domain. When Microsoft AD is used, the User name field of the logon page accepts an FQDN\logon name, NetBIOS\logon name, or just a logon name. For example, the logon screen will accept sales.us.example.com\jsmith, salesus\jsmith, or jsmith.

In the following example, the company example.com has a forest in the US and EU, and one in AWS using Microsoft AD. To make NetBIOS names unique, I use my naming strategy described earlier in the section “NetBIOS names.” For the US root domain, the FQDN is us.example.com, and the NetBIOS name is usexample. For the EU, the FQDN is eu.example.com and the NetBIOS name is euexample. For AWS, the FQDN is aws.example.com and the NetBIOS name is awsexample. Continuing with my naming strategy, my unique child domains have the NetBIOS names salesus, hrus, saleseu, hreu. Each of the forests has a global catalog that lists all users from all domains within the forest. The following graphic illustrates the forest configuration.

Diagram of the forest configuration

As shown in the preceding diagram, the global catalog for the US forest contains a jsmith in sales and dadams in hr. For the EU, there is a dadams in sales and a tpella in hr, and the AWS forest has a bharvey. The users shown in green type (jsmith, tpella, and bharvey) have unique names across all forests in the trust and qualify for domainless logon. The two dadams shown in red do not qualify for domainless logon because the user name is not unique across all trusted forests.

As shown in the following diagram, when a user types in only a logon name (such as jsmith or dadams) without an FQDN or NetBIOS name, domainless logon simultaneously searches for a matching logon name in the global catalogs of the Microsoft AD forest (aws.example.com) and all trusted forests (us.example.com and eu.example.com). For jsmith, the domainless logon finds a single user account that matches the logon name in sales.us.example.com and adds the domain to the logon name before authenticating. If no accounts match the logon name, authentication fails before attempting to authenticate. If dadams in the EU attempts to use only his logon name, domainless logon finds two dadams users, one in hr.us.example.com and one in sales.eu.example.com. This ambiguity prevents domainless logon. To log on, dadams must provide his FQDN or NetBIOS name (in other words, sales.eu.example.com\dadams or saleseu\dadams).

Diagram showing when a user types in only a logon name without an FQDN or NetBIOS name
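Conceptually, the lookup behaves like the simplified model below, which only illustrates the uniqueness rule rather than how Microsoft AD is actually implemented.

```python
# Simplified model of the domainless logon lookup: search every trusted forest's
# global catalog for the bare logon name and proceed only if exactly one match exists.
GLOBAL_CATALOGS = {
    "us.example.com":  {"jsmith": "sales.us.example.com", "dadams": "hr.us.example.com"},
    "eu.example.com":  {"dadams": "sales.eu.example.com", "tpella": "hr.eu.example.com"},
    "aws.example.com": {"bharvey": "aws.example.com"},
}

def resolve(logon_name):
    matches = [catalog[logon_name] for catalog in GLOBAL_CATALOGS.values() if logon_name in catalog]
    if len(matches) == 1:
        return matches[0]  # unique: the domain is added silently before authentication
    return None            # ambiguous or unknown: the user must qualify the name

assert resolve("jsmith") == "sales.us.example.com"
assert resolve("dadams") is None  # two matches, so an FQDN or NetBIOS name is required
```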

Upon successful logon, the logon page caches in a cookie the logon name and domain that were used. In subsequent logons, the end user does not have to type anything except their password. Also, because the domain is cached, the global catalogs do not need to be searched on subsequent logons. This minimizes global catalog searching, maximizes logon performance, and eliminates the need for users to remember domains (in most cases).

To maximize security associated with domainless logon, all authentication failures result in an identical failure notification that tells the user to check their domain name, user name, and password before trying again. This prevents hackers from using error codes or failure messages to glean information about logon names and domains in your AD directory.

If you follow best practices so that all user names are unique across all domains and all forests, domainless logon eliminates the requirement for your users to remember their FQDN or NetBIOS name to log on. This simplifies the logon experience for end users and can reduce your technical support resources that you use currently to help end users with logging on.

Solution overview

In this example of domainless logon, I show how Amazon WorkSpaces can use your existing on-premises AD user accounts through Microsoft AD. This example requires:

  1. An AWS account with an Amazon VPC.
  2. An AWS Microsoft AD directory in your Amazon VPC.
  3. An existing AD deployment in your on-premises network.
  4. A secured network connection from your on-premises network to your Amazon VPC.
  5. A two-way AD trust between your Microsoft AD and your on-premises AD.

I configure Amazon WorkSpaces to use a Microsoft AD directory that exists in the same Amazon VPC. The Microsoft AD directory is configured to have a two-way trust to the on-premises AD. Amazon WorkSpaces uses Microsoft AD and the two-way trust to find users in your on-premises AD and create Amazon WorkSpaces instances. After the instances are created, I send end users an invitation to use their Amazon WorkSpaces. The invitation includes a link for them to complete their configuration and a link to download an Amazon WorkSpaces client to their directory. When the user logs in to their Amazon WorkSpaces account, the user specifies the login name and password for their on-premises AD user account. Through the two-way trust between Microsoft AD and the on-premises AD, the user is authenticated and gains access to their Amazon WorkSpaces desktop.

Getting started

Now that we have covered how the pieces fit together and you understand how FQDN, NetBIOS, and logon names are used, let’s walk through the steps to use Microsoft AD with domainless logon to your on-premises AD for Amazon WorkSpaces.

Step 1 – Set up your Microsoft AD in your Amazon VPC

If you already have a Microsoft AD directory running, skip to Step 2. If you do not have a Microsoft AD directory to use with Amazon WorkSpaces, you can create the directory in the Directory Service console and attach to it from the Amazon WorkSpaces console, or you can create the directory within the Amazon WorkSpaces console.

To create the directory from Amazon WorkSpaces (as shown in the following screenshot):

  1. Sign in to the AWS Management Console.
  2. Under All services, choose WorkSpaces from the Desktop & App Streaming section.
  3. Choose Get Started Now.
  4. Choose Launch next to Advanced Setup, and then choose Create Microsoft AD.

To create the directory from the Directory Service console:

  1. Sign in to the AWS Management Console.
  2. Under Security & Identity, choose Directory Service.
  3. Choose Get Started Now.
  4. Choose Create Microsoft AD.
    Screenshot of choosing "Create Microsoft AD"

In this example, I use example.com as my organization name. The Directory DNS is the FQDN for the root domain, and it is aws.example.com in this example. For my NetBIOS name, I follow the naming model I showed earlier and use awsexample. Note that the Organization Name shown in the following screenshot is required only when creating a directory from Amazon WorkSpaces; it is not required when you create a Microsoft AD directory from the AWS Directory Service workflow.

Screenshot of establishing directory details

For more details about Microsoft AD creation, review the steps in AWS Directory Service for Microsoft Active Directory (Enterprise Edition). After entering the required parameters, it may take up to 40 minutes for the directory to become active, so you might want to exit the console and come back later.

Note: First-time directory users receive 750 free directory hours.

Step 2 – Create a trust relationship between your Microsoft AD and on-premises domains

To create a trust relationship between your Microsoft AD and on-premises domains:

  1. From the AWS Management Console, open Directory Service.
  2. Locate the Microsoft AD directory to use with Amazon WorkSpaces and choose its Directory ID link (as highlighted in the following screenshot).
    Screenshot of Directory ID link
  3. Choose the Trust relationships tab for the directory and follow the steps in Create a Trust Relationship (Microsoft AD) to create the trust relationships between your Microsoft AD and your on-premises domains.

For details about creating the two-way trust to your on-premises AD forest, see Tutorial: Create a Trust Relationship Between Your Microsoft AD on AWS and Your On-Premises Domain.

Step 3 – Create Amazon Workspaces for on-premises users

For details about getting started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces. The following are the setup steps.

  1. From the AWS Management Console, choose WorkSpaces from the Desktop & App Streaming section.
  2. Choose Directories in the left pane.
  3. Locate and select the Microsoft AD directory that you set up in Steps 1 and 2.
  4. If the Registered status for the directory says No, open the Actions menu and choose Register.
    Screenshot of "Register" in "Actions" menu
  5. Wait until the Registered status changes to Yes. The status change should take only a few seconds.
  6. Choose WorkSpaces in the left pane.
  7. Choose Launch WorkSpaces.
  8. Select the Microsoft AD directory that you set up in Steps 1 and 2 and choose Next Step.
    Screenshot of choosing the Microsoft AD directory
  9. In the Select Users from Directory section, type a partial or full logon name, email address, or user name for an on-premises user for whom you want to create an Amazon WorkSpace and choose Search. The returned list of users should be the users from your on-premises AD forest.
  10. In the returned results, scroll through the list and select the users for whom to create an Amazon WorkSpace and choose Add Selected. You may repeat the search and select processes until up to 20 users appear in the Amazon WorkSpaces list at the bottom of the screen. When finished, choose Next Step.
    Screenshot of identifying users for whom to create a WorkSpace
  11. Select a bundle to be used for the Amazon WorkSpaces you are creating and choose Next Step.
  12. Choose the Running Mode, Encryption settings, and configure any Tags. Choose Next Step.
  13. Review the configuration of the Amazon WorkSpaces and click Launch WorkSpaces. It may take up to 20 minutes for the Amazon WorkSpaces to be available.
    Screenshot of reviewing the WorkSpaces configuration

Step 4 – Invite the users to log in to their Amazon Workspaces

  1. From the AWS Management Console, choose WorkSpaces from the Desktop & App Streaming section.
  2. Choose the WorkSpaces menu item in the left pane.
  3. Select the Amazon WorkSpaces you created in Step 3. Then choose the Actions menu and choose Invite User. A login email is sent to the users.
  4. Copy the text from the Invite screen, then paste the text into an email to the user.

Step 5 – Users log in to their Amazon WorkSpace

  1. The users receive their Amazon WorkSpaces invitations in email and follow the instructions to launch the Amazon WorkSpaces login screen.
  2. Each user enters their user name and password.
  3. After a successful login, future Amazon WorkSpaces logins from the same computer will present what the user last typed on the login screen. The user only needs to provide their password to complete the login. If only a login name were provided by the user in the last successful login, the domain for the user account is silently added to the subsequent login attempt.

To learn more about Directory Service, see the AWS Directory Service home page. If you have questions about Directory Service products, please post them on the Directory Service forum. To learn more about Amazon WorkSpaces, visit the Amazon WorkSpaces home page. For questions related to Amazon WorkSpaces, please post them on the Amazon WorkSpaces forum.

– Ron

Security and Privacy Guidelines for the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_pr.html

Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

  1. “Internet of Things (IoT) Security and Privacy Recommendations,” Broadband Internet Technical Advisory Group, Nov 2016.
  2. “IoT Security Guidance,” Open Web Application Security Project (OWASP), May 2016.
  3. “Strategic Principles for Securing the Internet of Things (IoT),” US Department of Homeland Security, Nov 2016.
  4. “Security,” OneM2M Technical Specification, Aug 2016.
  5. “Security Solutions,” OneM2M Technical Specification, Aug 2016.
  6. “IoT Security Guidelines Overview Document,” GSM Alliance, Feb 2016.
  7. “IoT Security Guidelines For Service Ecosystems,” GSM Alliance, Feb 2016.
  8. “IoT Security Guidelines for Endpoint Ecosystems,” GSM Alliance, Feb 2016.
  9. “IoT Security Guidelines for Network Operators,” GSM Alliance, Feb 2016.
  10. “Establishing Principles for Internet of Things Security,” IoT Security Foundation, undated.
  11. “IoT Design Manifesto,” May 2015.
  12. “NYC Guidelines for the Internet of Things,” City of New York, undated.
  13. “IoT Security Compliance Framework,” IoT Security Foundation, 2016.
  14. “Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development,” IoTIAP, Nov 2016.
  15. “IoT Trust Framework,” Online Trust Alliance, Jan 2017.
  16. “Five Star Automotive Cyber Safety Framework,” I am the Cavalry, Feb 2015.
  17. “Hippocratic Oath for Connected Medical Devices,” I am the Cavalry, Jan 2016.

Other, related, items:

  1. “We All Live in the Computer Now,” The Netgain Partnership, Oct 2016.
  2. “Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things,” Electronic Privacy Information Center, Jun 2013.
  3. “Internet of Things Software Update Workshop (IoTSU),” Internet Architecture Board, Jun 2016.

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.

Amazon Cloud Directory – A Cloud-Native Directory for Hierarchical Data

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-cloud-directory-a-cloud-native-directory-for-hierarchical-data/

Our customers have traditionally used directories (typically Active Directory Lightweight Directory Service or LDAP-based) to manage hierarchically organized data. Device registries, course catalogs, network configurations, and user directories are often represented as hierarchies, sometimes with multiple types of relationships between objects in the same collection. For example, a user directory could have one hierarchy based on physical location (country, state, city, building, floor, and office), a second one based on projects and billing codes, and a third based on the management chain. However, traditional directory technologies do not support the use of multiple relationships in a single directory; you’d have to create and maintain additional directories if you needed to do this.

Scale is another important challenge. The fundamental operations on a hierarchy involve locating the parent or the child object of a given object. Given that hierarchies can be used to represent large, nested collections of information, these fundamental operations must be as efficient as possible, regardless of how many objects there are or how deeply they are nested. Traditional directories can be difficult to scale, and the pain only grows if you are using two or more in order to represent multiple hierarchies.

New Amazon Cloud Directory
Today we are launching Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data as described above. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

Cloud Directory is a building block that already powers other AWS services including Amazon Cognito and AWS Organizations. Because it plays such a crucial role within AWS, it was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).

Amazon Cloud Directory is a managed service; you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and for scale, with efficient, batch-based read and write functions.

The long-lasting nature of a directory, combined with the scale and the diversity of use cases that it must support over its lifetime, brings another challenge to light. Experience has shown that static schemas lack the flexibility to adapt to the changes that arise with scale and with new use cases. In order to address this challenge and to make the directory future-proof, Cloud Directory is built around a model that explicitly makes room for change. You simply extend your existing schemas by adding new facets. This is a safe operation that leaves existing data intact so that existing applications will continue to work as expected. Combining schemas and facets allows you to represent multiple hierarchies within the same directory. For example, your first hierarchy could mirror your org chart. Later, you could add an additional facet to track some additional properties for each employee, perhaps a second phone number or a social network handle. After that, you could create a geographically oriented hierarchy within the same data: Countries, states, buildings, floors, offices, and employees.

As I mentioned, other parts of AWS already use Amazon Cloud Directory. Cognito User Pools use Cloud Directory to offer application-specific user directories with support for user sign-up, sign-in and multi-factor authentication. With Cognito Your User Pools, you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully-managed service that scales to support hundreds of millions of users. Similarly, AWS Organizations uses Cloud Directory to support creation of groups of related AWS accounts and makes good use of multiple hierarchies to enforce a wide range of policies.

Before we dive in, let’s take a quick look at some important Amazon Cloud Directory concepts:

Directories are named, and must have at least one schema. Directories store objects, relationships between objects, schemas, and policies.

Facets model the data by defining required and allowable attributes. Each facet provides an independent scope for attribute names; this allows multiple applications that share a directory to safely and independently extend a given schema without fear of collision or confusion.

Schemas define the “shape” of data stored in a directory by making reference to one or more facets. Each directory can have one or more schemas. Schemas exist in one of three phases (Development, Published, or Applied). Development schemas can be modified; Published schemas are immutable. Amazon Cloud Directory includes a collection of predefined schemas for people, organizations, and devices. The combination of schemas and facets leaves the door open to significant additions to the initial data model and subject area over time, while ensuring that existing applications will still work as expected.

Attributes are the actual stored data. Each attribute is named and typed; data types include Boolean, binary (blob), date/time, number, and string. Attributes can be mandatory or optional, and immutable or editable. The definition of an attribute can specify a rule that is used to validate the length and/or content of an attribute before it is stored or updated. Binary and string objects can be length-checked against minimum and maximum lengths. A rule can indicate that a string must have a value chosen from a list, or that a number is within a given range.

Objects are stored in directories, have attributes, and are defined by a schema. Each object can have multiple children and multiple parents, as specified by the schema. You can use the multiple-parent feature to create multiple, independent hierarchies within a single directory (sometimes known as a forest of trees).

Policies can be specified at any level of the hierarchy, and are inherited by child objects. Cloud Directory does not interpret or assign any meaning to policies, leaving this up to the application. Policies can be used to specify and control access permissions, user rights, device characteristics, and so forth.

Creating a Directory
Let’s create a directory! I start by opening up the AWS Directory Service Console and clicking on the first Create directory button:

I enter a name for my directory (users), choose the person schema (which happens to have two facets; more about this in a moment), and click on Next:

The predefined (AWS) schema will be copied to my directory. I give it a name and a version, and click on Publish:

Then I review the configuration and click on Launch:

The directory is created, and I can then write code to add objects to it.
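If you prefer to script this flow instead of clicking through the console, the same steps map onto the Cloud Directory CLI. The commands below are a minimal sketch, assuming a schema definition in person.json and substituting the ARNs returned by each step; the flags follow the Cloud Directory API names, but check aws clouddirectory help for the exact syntax in your CLI version:

  # Create a development schema, load a JSON definition into it, and publish it (sketch)
  aws clouddirectory create-schema --name person-schema
  aws clouddirectory put-schema-from-json --schema-arn <development-schema-arn> --document file://person.json
  aws clouddirectory publish-schema --development-schema-arn <development-schema-arn> --version 1

  # Create the directory from the published schema
  aws clouddirectory create-directory --name users --schema-arn <published-schema-arn>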

Pricing and Availability
Cloud Directory is available now in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions and you can start using it today.

Pricing is based on three factors: the amount of data that you store, the number of reads, and the number of writes (these prices are for US East (Northern Virginia)):

  • Storage – $0.25 / GB / month
  • Reads – $0.0040 for every 10,000 reads
  • Writes – $0.0043 for every 1,000 writes
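As a rough illustration, a directory in that Region holding 1 GB of data and serving 10 million reads and 1 million writes in a month would cost about $0.25 for storage, $4.00 for reads, and $4.30 for writes – roughly $8.55 in total.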

In the Works
We have some big plans for Cloud Directory!

While the priorities can change due to customer feedback, we are working on cross-region replication, AWS Lambda integration, and the ability to create new directories via AWS CloudFormation.

Jeff;

CES 2017: Trends For the Tech Savvy To Watch

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/ces-2017-trends-tech-savvy-watch/

This year’s Consumer Electronics Show (CES) just wrapped up in Las Vegas. The usual parade of cool tech toys created a lot of headlines this year, but there were some genuine trends to keep an eye on too. If you’re like us, you’re probably one of the first people around to adopt promising new technologies when they emerge. As early adopters we can sometimes lose sight of the forest for the trees when it comes to understanding what this means for everyone else, so we’re going to look at it through that prism.

Alexa everywhere

2017 promises to be a big year for voice-activated “smart home” devices. The final landscape for this is still to be determined – all the expected players have a foot in the door right now: Amazon, Apple, Google, Microsoft, and even some smaller players.

Amazon deserves props after a holiday season that saw its Echo and Echo Dot devices in high demand. The company has published an API, and Alexa is picking up plenty of support from third-party manufacturers. Amazon’s ambitions for Alexa clearly reach far beyond the Echo.

Electronics giant LG is building Alexa into a line of robots designed for domestic duties and a refrigerator that also sports interior fridge cams, for example. Ford is integrating Alexa support into its Sync 3 automotive interface. Televisions, lighting devices, and home security products are among the many devices to feature Alexa integration.

Alexa is the new hotness, but the real trend here is in voice-assisted connectivity around the home. Even if Alexa runs out of steam, this tech is here to stay. The Internet of Things and voice-activated interfaces are converging quickly – they haven’t fully merged yet, but that day is tantalizingly close. It’s still a niche, though, and it will stay one for as long as consumers have to piece different things together to get it all to work. That means there’s still room for disruption.

There’s especially ripe opportunity in underserved verticals. Take the home health market, for example: Natural language interfaces have huge implications for elderly and disabled care and assistance. Finding and developing solutions for those sorts of vertical markets is an awesome opportunity for the right players.

Of course, with great power comes great responsibility. The family of a six-year-old recently got stuck with a $160 bill after she told Alexa to order her cookies and a dollhouse. The family ended up donating the accidental order to charity. For what it’s worth, that problem can be avoided by activating a confirmation code feature in the Alexa software.

The Electric Vehicle (EV) Market Heats Up

One of the trickiest things at CES is separating hype from substance. Nowhere was that more apparent last week than at the unveiling of Faraday Future’s FF91, a new Electric Vehicle (EV) positioned to go toe-to-toe with Tesla’s EV fleet.

The FF91 EV can purportedly go 378 miles on a single charge and also possesses autonomous driving capabilities (although its vaunted self-parking abilities didn’t demo as well as planned). When or if it’ll make it into production is still a head-scratcher, however. Faraday Future says it’ll be out next year, assuming that the company is beyond the production and manufacturing woes that have plagued it up until now.

While new vehicles and vehicle concepts are still largely the domain of auto shows, some auto manufacturers used CES to float new concepts ahead of the Detroit Auto Show, which happens this week. Toyota, for example, showed off its Concept-i, a car with artificial intelligence and natural language processing (like Siri or Alexa) designed to learn from you and adapt.

As we mentioned, Alexa is integrated into Ford’s Sync 3 platform, too. You can already buy new cars with CarPlay and Android Auto, which make it a lot easier to stay connected, get directions, and entertain yourself on the morning commute simply by talking to your car instead of touching buttons. That’s a smart user interface change, but it’s still a potentially dangerous distraction for the driver. For this technology to succeed, it’s imperative that natural language interface designers make the experience as frictionless as possible.

Chrysler is making a play for future millennial families. We’re not making this up – they used “millennial” to describe the target market for this several times. The Portal concept is an electric minivan of sorts that’s chock-full of buzzwords: facial recognition, Wi-Fi, media sharing, ten charging ports, semi-autonomous driving abilities, and more.

2017 marks a pivot for car makers in this respect. For years the conventional wisdom was that millennials were a lost cause for auto makers – Uber and Zipcar were all they needed. It turns out that was totally wrong. Economic pressures and diverse lifestyles may have delayed millennials’ trek toward auto ownership, but they’re now turning out in big numbers to buy wheels. Millennial families will need family transportation just like the generations before them, going all the way back to the station wagon, which is why Chrysler says this “fifth-generation” family car will go into production sometime after 2018.

Volkswagen showed off its new I.D. concept car, a Golf-looking EV that also has all the requisite buzzwords. Speaking of buzzwords, what really excited us was the I.D. Buzz. This new EV resurrects the styling of the Hippy-era Microbus, with mood lighting, autonomous driving capabilities and a retractable steering wheel.

Rumors have persisted for years that VW was on the cusp of introducing a refreshed Microbus, but those rumors have never come to pass. And unfortunately, VW has no concrete plans to actually produce this – it seems to be a marketing effort to draw on nostalgic Boomer appeal, more than anything.

Both Buzz and Chrysler’s Portal do give us some insight about where auto makers are going when it comes to future generations of minivans: Electric, autonomous, customizable and more social than ever. If we are headed towards a future where vehicles drive themselves, family transportation will look very different than it is today.

Laptops At Both Extremes

CES saw the rollout of several new PC laptop models and concepts that will be hitting store shelves over the next several months.

Gamers looking for more real estate – a lot more real estate – were interested in Razer’s latest concept, Project Valerie. The laptop sports not one but three 4K displays that fold out on hinges. That’s 12K pixels of horizontal image space, mated to an Nvidia GeForce GTX 1080 graphics processor. A unibody aluminum chassis keeps it relatively thin (1.5 inches) when closed, but the entire rig weighs more than 12 pounds. Razer doesn’t have any immediate production plans; to add insult to injury, a prototype was stolen before the end of the show.

Unlike Razer, Acer has production plans – immediate plans – for its gargantuan 21-inch Predator 21X laptop, priced at $8,999 and headed to store shelves next month. It was announced last year, but Acer finally offered launch details last week. A 17-inch model is also coming soon.

Big gaming laptops make for pretty pictures and certainly have their place in the PC ecosystem, but they’re niche devices. After an initial ramp-up in 2-in-1s and low-powered laptops, Intel’s Kaby Lake processors are finally ready for the premium and mid-range laptop market. Kaby Lake efficiency improvements are helping PC makers build thinner and lighter laptops with better battery life, 4K video processing, faster solid state storage and more.

HP, Asus, MSI, Dell (and its gaming arm Alienware) were among the many companies with sleek new Kaby Lake-equipped models.

Gaming in the cloud with Nvidia

Nvidia, maker of premium graphics processors, offers GeForce Now cloud gaming to users of its Shield, an Android-based gaming handheld. That service is expanding to Windows and Mac in March.

Gaming as a Service, if you will, isn’t a new idea. OnLive pioneered the concept years ago. Gaikai followed, then was acquired by Sony in 2012. Nvidia’s had limited success with GeForce Now, but it’s been a single-platform offering up until now.

Nvidia has robust data centers to handle the processing and traffic, so best of luck to them as they scale up to meet demand. Gaming is very sensitive to network disruption – no gamer appreciates lag – so it’ll be interesting to see how GeForce Now scales to accommodate the new devices.

Mesh Networking

Mesh networking delivers more consistent, stronger network reception and performance than a conventional Wi-Fi router. Some of us have set up routers and extenders to fix dead spots – mesh networking works differently, using smart traffic routing and better radio management across multiple network base stations.

Eero, Ubiquiti, and even Google (with Google Wifi) are already offering mesh networking products, and this market segment looks to expand big in 2017. Netgear, Linksys, Asus, TP-Link and others are among those with new mesh networking setups. Mesh networking gear is still hampered by a higher price than plain old routers. That means the value isn’t there for some of us who have networking gear that gets the job done, even with shortcomings like dead zones or slow zones. But prices are coming down fast as more companies get into the market. If you have an 802.11ac router you’re happy with, stick with it for now, and move to a mesh networking setup for your next Wi-Fi upgrade.

Getting Your Feet Into VR

Our award for wackiest CES product has to go to the Cerevo Taclim: tactile-feedback shoes and wireless hand controllers that help you “feel” the surface you’re walking on – crunching snow underfoot, splashing through water. At an expected $1,000-$1,500 a pop, these probably won’t be next year’s Hatchimals, but it’s fun to imagine what game devs can do with the technology. Strap these to your feet, then break out your best Hadouken in Street Fighter VR!

CES isn’t the real world. Only a fraction of what’s shown off ever sees the light of day, but it’s always interesting to see the trend-focused consumer electronics market shift and change from year to year. At the end of the year we hope to look back and see how much of this stuff ended up resonating with the actual consumer the show is named for.

The post CES 2017: Trends For the Tech Savvy To Watch appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Running Jupyter Notebook and JupyterHub on Amazon EMR

Post Syndicated from Tom Zeng original https://aws.amazon.com/blogs/big-data/running-jupyter-notebook-and-jupyterhub-on-amazon-emr/

Tom Zeng is a Solutions Architect for Amazon EMR

Jupyter Notebook (formerly IPython) is one of the most popular user interfaces for running Python, R, Julia, Scala, and other languages to process and visualize data, perform statistical analysis, and train and run machine learning models. Jupyter notebooks are self-contained documents that can include live code, charts, narrative text, and more. The notebooks can be easily converted to HTML, PDF, and other formats for sharing.

Amazon EMR is a popular hosted big data processing service that allows users to easily run Hadoop, Spark, Presto, and other Hadoop ecosystem applications, such as Hive and Pig.

Spark and Hadoop applications are accessible from Python, Scala, and R, and running these languages in Jupyter on Amazon EMR makes it easy to take advantage of:

  • the big-data processing capabilities of Hadoop applications.
  • the large selection of Python and R packages for analytics and visualization.

JupyterHub is a multiple-user environment for Jupyter. You can use the following bootstrap action (BA) to install Jupyter and JupyterHub on Amazon EMR:

s3://aws-bigdata-blog/artifacts/aws-blog-emr-jupyter/install-jupyter-emr5.sh

These are the supported Jupyter kernels:

  • Python
  • R
  • Scala
  • Apache Toree (which provides the Spark, PySpark, SparkR, and SparkSQL kernels)
  • Julia
  • Ruby
  • JavaScript
  • CoffeeScript
  • Torch

The BA will install Jupyter, JupyterHub, and sample notebooks on the master node.

Commonly used Python and R data science and machine learning packages can be optionally installed on all nodes.

The following arguments can be passed to the BA:

--r Install the IRKernel for R.
--toree Install the Apache Toree kernel that supports Scala, PySpark, SQL, SparkR for Apache Spark.
--julia Install the IJulia kernel for Julia.
--torch Install the iTorch kernel for Torch (machine learning and visualization).
--ruby Install the iRuby kernel for Ruby.
--ds-packages Install the Python data science-related packages (scikit-learn pandas statsmodels).
--ml-packages Install the Python machine learning-related packages (theano keras tensorflow).
--python-packages Install specific Python packages (for example, ggplot and nilearn).
--port Set the port for Jupyter notebook. The default is 8888.
--password Set the password for the Jupyter notebook.
--localhost-only Restrict Jupyter to listen on localhost only. The default is to listen on all IP addresses.
--jupyterhub Install JupyterHub.
--jupyterhub-port Set the port for JupyterHub. The default is 8000.
--notebook-dir Specify the notebook folder. This could be a local directory or an S3 bucket.
--cached-install Use some cached dependency artifacts on S3 to speed up installation.
--ssl Enable SSL. For production, make sure to use your own certificate and key files.
--copy-samples Copy sample notebooks to the notebook folder.
--spark-opts User-supplied Spark options to override the default values.
--s3fs Use s3fs instead of s3nb (the default) for storing notebooks on Amazon S3. This argument can cause slowness if the S3 bucket has lots of files. The Upload file and Create folder menu options do not work with s3nb.

By default (with no --password and --port arguments), Jupyter will run on port 8888 with no password protection; JupyterHub will run on port 8000.  The --port and --jupyterhub-port arguments can be used to override the default ports to avoid conflicts with other applications.

The --r option installs the IRKernel for R. It also installs SparkR and sparklyr for R, so make sure Spark is one of the selected EMR applications to be installed. You’ll need the Spark application if you use the --toree argument.

If you used the --jupyterhub argument, sign in to JupyterHub with Linux user accounts. (Be sure to create passwords for the Linux users first.) The hadoop user, the default admin user for JupyterHub, can be used to set up other users. The --password option sets the password for Jupyter and for the hadoop user in JupyterHub.
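For example, creating an additional JupyterHub user on the master node is just a matter of creating a Linux account and giving it a password (the user name below is purely illustrative):

  # Run on the EMR master node; "analyst" is a hypothetical example user
  sudo adduser analyst
  sudo passwd analyst    # this is the password the user will enter on the JupyterHub sign-in page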

Jupyter on EMR allows users to save their work on Amazon S3 rather than on local storage on the EMR cluster (master node).

To store notebooks on S3, use:

--notebook-dir <s3://your-bucket/folder/>

To store notebooks in a directory different from the user’s home directory, use:

--notebook-dir <local directory>

The following example CLI command is used to launch a five-node (c3.4xlarge) EMR 5.2.0 cluster with the bootstrap action. The BA will install all the available kernels. It will also install the ggplot and nilearn Python packages and set:

  • the Jupyter port to 8880
  • the password to jupyter
  • the JupyterHub port to 8001
aws emr create-cluster --release-label emr-5.2.0 \
  --name 'emr-5.2.0 sparklyr + jupyter cli example' \
  --applications Name=Hadoop Name=Hive Name=Spark Name=Pig Name=Tez Name=Ganglia Name=Presto \
  --ec2-attributes KeyName=<your-ec2-key>,InstanceProfile=EMR_EC2_DefaultRole \
  --service-role EMR_DefaultRole \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=c3.4xlarge \
    InstanceGroupType=CORE,InstanceCount=4,InstanceType=c3.4xlarge \
  --region us-east-1 \
  --log-uri s3://<your-s3-bucket>/emr-logs/ \
  --bootstrap-actions \
    Name='Install Jupyter notebook',Path="s3://aws-bigdata-blog/artifacts/aws-blog-emr-jupyter/install-jupyter-emr5.sh",Args=[--r,--julia,--toree,--torch,--ruby,--ds-packages,--ml-packages,--python-packages,'ggplot nilearn',--port,8880,--password,jupyter,--jupyterhub,--jupyterhub-port,8001,--cached-install,--notebook-dir,s3://<your-s3-bucket>/notebooks/,--copy-samples]

Replace <your-ec2-key> with the name of your EC2 key pair and <your-s3-bucket> with the S3 bucket where you store notebooks. You can also change the instance types to suit your needs and budget.

If you are using the EMR console to launch a cluster, you can specify the bootstrap action as follows:

o_jupyter_1

When the cluster is available, set up the SSH tunnel and web proxy. The Jupyter notebook should be available at localhost:8880 (as specified in the example CLI command).
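As a rough sketch of the tunnel (your key file and master node address will differ), a local port forward from your machine to the master node makes the notebook reachable at localhost:8880, and the same approach works for the JupyterHub port; the EMR documentation also describes a SOCKS proxy (dynamic port forwarding) setup if you prefer a browser-wide proxy:

  # Forward the Jupyter port (8880 in this example) from the master node to your machine
  ssh -i ~/your-ec2-key.pem -N -L 8880:localhost:8880 hadoop@<master-public-dns>

  # Use a second tunnel (or an extra -L flag) for JupyterHub on port 8001
  ssh -i ~/your-ec2-key.pem -N -L 8001:localhost:8001 hadoop@<master-public-dns>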

o_jupyter_2

After you have signed in, you will see the home page, which displays the notebook files:

o_jupyter_3

If JupyterHub is installed, the Sign in page should be available at port 8001 (as specified in the CLI example):

o_jupyter_4

After you are signed in, you’ll see the JupyterHub and Jupyter home pages are the same. The JupyterHub URL, however, is /user/<username>/tree instead of /tree.

o_jupyter_5

The JupyterHub Admin page is used for managing users:

o_jupyter_6

You can install Jupyter extensions from the Nbextensions tab:

o_jupyter_7

If you specified the --copy-samples option in the BA, you should see sample notebooks on the home page. To try the samples, first open and run the CopySampleDataToHDFS.ipynb notebook to copy some sample data files to HDFS. In the CLI example, --python-packages,'ggplot nilearn' is used to install the ggplot and nilearn packages. You can verify those packages were installed by running the Py-ggplot and PyNilearn notebooks.

The CreateUser.ipynb notebook contains examples for setting up JupyterHub users.

The PySpark.ipynb and ScalaSpark.ipynb notebooks contain the Python and Scala versions of some machine learning examples from the Spark distribution (Logistic Regression, Neural Networks, Random Forest, and Support Vector Machines):

o_jupyter_8

PyHivePrestoHDFS.ipynb shows how to access Hive, Presto, and HDFS in Python. (Be sure to run the CreateHivePrestoS3Tables.ipynb first to create tables.) The %%time and %%timeit cell magics can be used to benchmark Hive and Presto queries (and other executable code):

o_jupyter_9

Here are some other sample notebooks for you to try.

SparkSQLParquetJSON.ipynb:

o_SparkSQLParquetJSON

plot_separating_hyperplane.ipynb:

o_plot_separating_hyperplane

R-SVMLinearNonLinear.ipynb:

o_R-SVMLinearNonLinear

plot_iris.ipynb:

o_plot_iris

Julia-IrisPlot.ipynb:

o_Julia-IrisPlot

PyIrisPlot.ipynb:

o_PyIrisPlot

R-RandomForestVisualization.ipynb:

o_R-RandomForestVisualization

GrangerCausality.ipynb:

o_GrangerCausality

SQLite.ipynb:

o_SQLite

GraphvizDot.ipynb:

o_GraphvizDot

Conclusion

Data scientists who run Jupyter and JupyterHub on Amazon EMR can use Python, R, Julia, and Scala to process, analyze, and visualize big data stored in Amazon S3. Jupyter notebooks can be saved to S3 automatically, so users can shut down and launch new EMR clusters, as needed. EMR makes it easy to spin up clusters with different sizes and CPU/memory configurations to suit different workloads and budgets. This can greatly reduce the cost of data-science investigations.

If you have questions about using Jupyter and JupyterHub on EMR or would like to share your use cases, please leave a comment below.


Related

Running sparklyr – RStudio’s R Interface to Spark on Amazon EMR

sparklyr_2

New – Amazon QuickSight Enterprise Edition

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-quicksight-enterprise-edition/

When I first wrote about Amazon QuickSight in 2015 (Amazon QuickSight – Fast & Easy to Use Business Intelligence for Big Data at 1/10th the Cost of Traditional Solutions), I mentioned that we would make the service available in Standard and Enterprise Editions.

Enterprise Edition
We launched the Standard Edition of Amazon QuickSight last month. Today we are launching the Enterprise Edition. Building on the capabilities of the Standard Edition, the Enterprise Edition adds Active Directory Integration and Encryption at Rest.

The Enterprise Edition supports user authentication via AWS Managed Microsoft Active Directory (AD). This allows your users to sign on to QuickSight using the credentials already in your AWS-hosted Microsoft AD or in a trusted on-premises AD. Either way, the Single Sign On (SSO) experience allows your users to get started more quickly and with less administrative overhead.

If you are responsible for administering your organization’s base of QuickSight users, you can bring thousands of users on board and manage their permissions with just a few clicks. You can manage this user base using your existing toolset and with respect to your existing governance policies.

Here’s how it all fits together:

QuickSight relies on SPICE (Super-fast, Parallel, In-memory Calculation Engine) for highly scalable ad hoc analytics. The Enterprise Edition of QuickSight encrypts data at rest within SPICE using keys managed by Amazon in order to provide an additional layer of protection for your data.

Take Enterprise Edition for a Spin
From the administrative side, setting up the Enterprise Edition of QuickSight is really easy, and requires that you sign in as an IAM user with the requisite set of permissions (visit Sign up for Amazon QuickSight with an Existing AWS Account and scroll down to Set Your IAM Policy to learn more).

You select the Enterprise Edition, choose the AWS Managed AD Directory that describes your user community, and authorize access to the directory. Then you add (if not already present) an alias for the directory and use this as the QuickSight account name. Finally, you pick the AD groups within the Managed AD or trusted forest and enable them for QuickSight access.

After you have completed the sign up process, users in the designated groups will be able to log in to QuickSight using the QuickSight account name (directory alias) and their existing AD credentials.

Password restrictions, timeouts, and user management are all handled through the appropriate AD (AWS or on-premises) settings in accord with your organization’s policies. You can manage group membership using your existing tools, adding and removing users and assigning user & admin roles as needed.

Pricing and Availability
Amazon QuickSight Enterprise Edition is now available in the US East (Northern Virginia) Region, with SPICE capacity available in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions. Pricing starts at $18 per month per user and includes 10 GB of SPICE capacity, which is pooled across all of the QuickSight users on the account (the single user QuickSight free tier and the 60 day, 4 user free trial apply to the Enterprise Edition as well). To learn more, visit the QuickSight Pricing page.
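To put that in concrete terms, an account with 20 Enterprise Edition users would run $360 per month and, because each user’s 10 GB allotment is pooled, would have 200 GB of shared SPICE capacity to work with.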

If you have a Microsoft AD instance provisioned in the US East (Northern Virginia) Region, you can take advantage of the free tier and free trial, and get started today at no cost.

 

Virtual Forest

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/virtual-forest/

The RICOH THETA S is a fairly affordable consumer 360° camera, which allows users to capture interesting locations and events for viewing through VR headsets and smartphone-equipped Google Cardboard viewers. When set up alongside a Raspberry Pi acting as a controller – plus a protective bubble, various cables, and good ol’ Mother Nature – the camera becomes a gateway to a serene escape from city life.

Virtual Forest

Ecologist Koen Hufkens, from the Richardson Lab in the Department of Organismic and Evolutionary Biology at Harvard University, decided to do exactly that, creating the Virtual Forest with the aim of “showing people how the forest changes throughout the seasons…and the beauty of the forest”.

The camera takes a still photograph every 15 minutes, uploading it for our viewing pleasure. The setup currently only supports daylight viewing, as the camera is not equipped for night vision, so check your watch first.
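Hufkens’ actual controller scripts are linked below; purely as an illustration of the scheduling side, a cron entry on the Pi could invoke a capture-and-upload script on that 15-minute cadence (the script path and name here are hypothetical):

  # /etc/cron.d/virtualforest -- hypothetical sketch; the real code lives in the GitHub repository linked below
  # Trigger the camera and push the resulting image every 15 minutes
  */15 * * * * pi /home/pi/virtualforest/capture_and_upload.sh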

one autumn day

360 view of a day in a North-Eastern Hardwood forest during autumn

The build cost somewhere in the region of $500; Hufkens provides a complete ingredients list here, with supporting code on GitHub. He also aims to improve the setup by using the new Nikon KeyMission, which can record video at 4K ultra-HD resolution.

The Virtual Forest has been placed deep within the heart of Harvard Forest, a university-owned plot of land used both by researchers and by the general public. If you live nearby, you could go look at it and possibly even appear in a photo. Please resist the urge to photobomb, though, because that would totally defeat the peopleless zen tranquility that we’re feeling here in Pi Towers.

The post Virtual Forest appeared first on Raspberry Pi.

Down the Amazon

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/photos/amazon.html

After BOSSA in Manaus/Brazil we took a very enjoyable boat trip down the Amazon, to Santarém and particularly Alter do Chão, a ridiculously amazing island paradise with glaring white sand in the middle of the jungle:

Tapajos 2

The town is located on the Tapajós River:

Tapajos 1

Tapajos 3

Up the river you find the Tapajós National Forest:

Tapajos 4

From there we went on to São Luís, a beautiful old colonial town:

Sao Luis 1

Sao Luis 3

Sao Luis 4

Sao Luis 2

A windy and wet catamaran ride from São Luís brings you to Alcântara, another old colonial town, now partly in ruins and deserted:

Alcantara 1

Alcantara 2

Alcantara 3