Tag Archives: controller

[$] Authentication and authorization in Samba 4

Post Syndicated from jake original https://lwn.net/Articles/747122/rss

Volker Lendecke is one of the first contributors to Samba,
having submitted his first patches in 1994. In addition to developing
other important file-sharing tools, he’s heavily involved in development of
the winbind service, which is implemented in winbindd. Although the core Active Directory (AD) domain controller
(DC) code was written by his colleague Stefan Metzmacher, winbind is a
crucial component of Samba’s AD functionality.
In his information-packed talk at FOSDEM 2018, Lendecke said he aimed to give a high-level overview of what AD and Samba authentication is, and in particular the communication pathways and trust relationships between the parts of Samba that authenticate a Samba user in an AD environment.

The Fisher Piano: make music in the air

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/air-piano/

Piano keys are so limiting! Why not swap them out for LEDs and the wealth of instruments in Pygame to build air keys, as demonstrated by Instructables maker 2fishy?

Raspberry Pi LED Light Schroeder Piano – Twinkle Little Star

Keys? Where we’re going you don’t need keys!

This project, created by either Yolanda or Ken Fisher (or both!), uses an array of LEDs and photoresistors to form a MIDI sequencer. Twelve LEDs replace piano keys, and another three change octaves and access the menu.

Each LED is paired with a photoresistor, which detects the emitted light to form a closed circuit. Interrupting the light beam — in this case with a finger — breaks the circuit, telling the Python program to perform an action.

We’re all hoping this is just the scaled-down prototype of a full-sized LED grand piano

Using Pygame, the 2fishy team can access 75 different instruments and 128 notes per instrument, making their wooden piano more than just a one-hit wonder.
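
The full code is on the Instructables page; as a rough illustration of the idea (not the 2fishy code itself), here is a minimal sketch using pygame.midi and RPi.GPIO. The pin number, note, instrument, and beam polarity are assumptions for illustration:

import pygame.midi
import RPi.GPIO as GPIO

BEAM_PIN = 17    # hypothetical GPIO pin wired to one photoresistor divider
NOTE = 60        # MIDI note 60 = middle C
INSTRUMENT = 0   # General MIDI program 0 = acoustic grand piano

GPIO.setmode(GPIO.BCM)
GPIO.setup(BEAM_PIN, GPIO.IN)

pygame.midi.init()
out = pygame.midi.Output(pygame.midi.get_default_output_id())
out.set_instrument(INSTRUMENT)

try:
    playing = False
    while True:
        # Whether a broken beam reads low or high depends on how the
        # photoresistor divider is wired; low is assumed here.
        broken = GPIO.input(BEAM_PIN) == GPIO.LOW
        if broken and not playing:
            out.note_on(NOTE, 127)
            playing = True
        elif not broken and playing:
            out.note_off(NOTE, 127)
            playing = False
finally:
    out.close()
    pygame.midi.quit()
    GPIO.cleanup()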

Piano building

The duo made the piano’s body out of plywood, hardboard, and dowels, and equipped it with a Raspberry Pi 2, a speaker, and the aforementioned LEDs and photoresistors.

A Raspberry Pi 2 and speaker sit within the wooden body, with LEDs and photoresistors in place of the keys.

A complete how-to for the build, including some rather fancy and informative schematics, is available at Instructables, where 2fishy received a bronze medal for their project. Congratulations!

Learn more

If you’d like to learn more about using Pygame, check out The MagPi’s Make Games with Python Essentials Guide, available both in print and as a free PDF download.

And for more music-based projects using a variety of tech, be sure to browse our free resources.

Lastly, if you’d like to see more piano-themed Raspberry Pi projects, take a look at our Big Minecraft Piano, these brilliant piano stairs, this laser-guided piano teacher, and our video below about the splendid Street Fighter duelling pianos we witnessed at Maker Faire.

Pianette: Piano Street Fighter at Maker Faire NYC 2016

Two pianos wired up as Playstation 2 controllers allow users to battle…musically! We caught up with makers Eric Redon and Cyril Chapellier of foobarflies a…

The post The Fisher Piano: make music in the air appeared first on Raspberry Pi.

The 4.15 kernel is out

Post Syndicated from corbet original https://lwn.net/Articles/744875/rss

Linus has released the 4.15 kernel.
After a release cycle that was unusual in so many (bad) ways, this
last week was really pleasant. Quiet and small, and no last-minute
panics, just small fixes for various issues. I never got a feeling
that I’d need to extend things by yet another week, and 4.15 looks
fine to me.

Some of the more significant features in this release include: the long-awaited CPU controller for the version-2 control-group interface, significant live-patching improvements, initial support for the RISC-V architecture, support for AMD’s secure encrypted virtualization feature, and the MAP_SYNC mechanism for working with nonvolatile memory. This release also, of course, includes mitigations for the Meltdown and Spectre variant-2 vulnerabilities, though, as Linus points out in the announcement, the work of dealing with these issues is not yet done.

Analyzing the Linux boot process (opensource.com)

Post Syndicated from corbet original https://lwn.net/Articles/744528/rss

Alison Chaiken looks in detail at how the kernel boots on opensource.com.
Besides starting buggy spyware, what function does early boot
firmware serve? The job of a bootloader is to make available to a newly
powered processor the resources it needs to run a general-purpose operating
system like Linux. At power-on, there not only is no virtual memory, but no
DRAM until its controller is brought up.

Turn your smartphone into a universal remote

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zero-universal-remote/

Honolulu-based software developer bbtinkerer was tired of never being able to find the TV remote. So he made his own using a Raspberry Pi Zero, and connected it to a web app accessible on his smartphone.

Finding a remote alternative

“I needed one because the remote in my house tends to go missing a lot,” explains Bernard, aka bbtinkerer, on the Instructables page for his Raspberry Pi Zero Universal Remote. “If I want the controller, I have to hunt down three people and hope one of them remembers that they took it.”

For the build, Bernard used a Raspberry Pi Zero, an IR LED and corresponding receiver, Raspbian Lite, and a neat little 3D-printed housing.

First, he soldered a circuit for the LED and resistors on a small piece of perf board. Then he assembled the hardware components. Finally, all he needed to do was to write the code to control his devices (including a tower fan), and to set up the app.

Bernard employed the Linux Infrared Remote Control (LIRC) package to control the television with the Raspberry Pi Zero, accessing the Zero via SSH. He gives a complete rundown of the installation process on Instructables.

Setting up a remote’s buttons with LIRC is a simple case of pressing them and naming their functions one by one. You’ll need the remote to set up the system, but after that, feel free to lock it in a drawer and use your smartphone instead.
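
In the standard LIRC workflow, that recording step is typically done with LIRC’s irrecord tool, which saves each button under a name of your choosing in lircd.conf. Once recorded, a code can be replayed from Python by shelling out to irsend, as in this sketch (the remote and key names are hypothetical):

import subprocess

def send_ir(remote, key):
    # Ask the LIRC daemon to transmit one recorded IR code via the IR LED.
    subprocess.run(["irsend", "SEND_ONCE", remote, key], check=True)

send_ir("tv", "KEY_POWER")  # names as recorded in lircd.conf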



Finally, Bernard created the web interface using Node.js, and again, because he’s lovely, he published the code for anyone wanting to build their own. Thanks, Bernard!

Life hacks

If you’ve used a Raspberry Pi to build a time-saving life hack like Bernard’s, be sure to share it with us. Other favourites of ours include fridge cameras, phone app doorbell notifications, and Alan’s ocarina home automation system. I’m not sure if this last one can truly be considered a time-saving life hack. It’s still cool though!

The post Turn your smartphone into a universal remote appeared first on Raspberry Pi.

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software that is already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today; diverse technologies that customers are using day in and day out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • How to create an automatically “resetting” demo environment for the sales team.
  • How to set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • The basics of OAuth and web single sign-on (SSO).
  • How to archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and effective Google searching to fix problems – excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language. Have basic understanding of data structures and program flow.
Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre or post-sales engineer.
The right extra credit:

There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?
    The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we”. We are truly a team.

    We are honest with each other and our customers, and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

    If this all sounds like you:

    1. Send an email to [email protected] with the position in the subject line.
    2. Tell us a bit about your Sales Engineering experience.
    3. Include your resume.

    The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    AWS Online Tech Talks – January 2018

    Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

    Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent, including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On, and more!

    January 2018 – Schedule

    Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

    Webinars featured this month are:

    Monday, January 22

    Analytics & Big Data
    11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

    Database
    01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

    Tuesday, January 23

    Artificial Intelligence
    09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

    Containers

    11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

    Serverless
    01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

    Wednesday, January 24

    DevOps
    09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

    Analytics & Big Data
    11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams
    Lvl 300
    Database
    01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

    Thursday, January 25

    Artificial Intelligence
    09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

    Mobile
    11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

    IoT
    01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

    Monday, January 29

    Enterprise
    11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices: Achieving Business Value with AWS Lvl 100

    Compute
    01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

    Tuesday, January 30

    Security, Identity & Compliance
    09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

    Storage
    11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

    Compute
    01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

    Wednesday, January 31

    Networking
    09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

    Enterprise
    11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

    Compute
    01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

    Thursday, February 1

    Security, Identity & Compliance
    09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

    Storage
    11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

    Musician’s White Noise YouTube Video Hit With Copyright Complaints

    Post Syndicated from Andy original https://torrentfreak.com/musicians-white-noise-youtube-video-hit-with-copyright-complaints-180105/

    When people upload original content to YouTube, there should be no problem with getting paid for that content, should it attract enough interest from the public.

    Those who upload infringing content get a much less easy ride, with their uploads getting flagged for abuse, potentially putting their accounts at risk.

    That’s what’s happened to Australia-based music technologist Sebastian Tomczak, who uploaded a completely non-infringing work to YouTube and now faces five separate copyright complaints.

    “I teach and work in a music department at a University here in Australia. I’ve got a PhD in chiptune, and my main research interests are various intersections of music / sound / tech e.g. arduino programming and DIY stuff, modular synthesis, digital production, sound design for games, etc,” Tomczak informs TF.

    “I started blogging about music around a decade ago or so, mainly to write about stuff I was interested in, researching or doing. At the time this would have been physical interaction, music controller design, sound design and composition involving computers.”

    One of Tomczak’s videos was a masterpiece entitled “10 Hours of Low Level White Noise” which features – wait for it – ten hours of low-level white noise.

    “The white noise video was part of a number of videos I put online at the time. I was interested in listening to continuous sounds of various types, and how our perception of these kinds of sounds and our attention changes over longer periods – e.g. distracted, focused, sleeping, waking, working etc,” Tomczak says.

    White noise is the sound created when all different frequencies are combined together into a kind of audio mush that’s a little baffling and yet soothing in the right circumstances. Some people use it to fall asleep a little easier, others to distract their attention away from irritating sounds in the environment, like an aircon system or fan, for example.

    The white noise made by Tomczak and presented in his video was all his own work.

    “I ‘created’ and uploaded the video in question. The video was created by generating a noise waveform of 10 hours length using the freeware software Audacity and the built-in noise generator. The resulting 10-hour audio file was then imported into ScreenFlow, where the text was added and then rendered as one 10-hour video file,” he explains.
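
    As a rough Python equivalent of that Audacity step (a sketch assuming numpy and scipy are installed, and rendering thirty seconds rather than ten hours), white noise is just a stream of independent random samples:

    import numpy as np
    from scipy.io import wavfile

    RATE = 44100   # samples per second
    SECONDS = 30   # Tomczak rendered ten hours; keep the sketch short

    # Low-level white noise: uniformly random samples at every tick,
    # scaled well below full amplitude.
    samples = np.random.uniform(-0.1, 0.1, RATE * SECONDS).astype(np.float32)
    wavfile.write("white_noise.wav", RATE, samples)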

    This morning, however, Tomczak received a complaint from YouTube after a copyright holder claimed that it had the rights to his composition. When he checked his YouTube account, yet more complaints greeted him. In fact, since July 2015, when the video was first uploaded, a total of five copyright complaints had been filed against Tomczak’s composition.

    As seen from the image below, posted by Tomczak to his Twitter account, the five complaints came from four copyright holders, with one feeling the need to file two separate complaints while citing two different works.

    The complaints against Tomczak’s white noise

    One company involved – Catapult Distribution – says that Tomczak’s composition infringes on the copyrights of “White Noise Sleep Therapy”, a client selling the title “Majestic Ocean Waves”. It also manages to do the same for the company’s “Soothing Baby Sleep” title. The other complaints come from Merlin Symphonic Distribution and Dig Dis for similar works.

    Under normal circumstances, Tomczak’s account could have been disabled by YouTube for so many infringements, but in all cases the copyright holders chose to monetize the musician’s ‘infringement’ instead, via the site’s ContentID system. In other words, after Tomczak created the video himself with his own efforts, copyright holders are now taking all the revenue. It’s a situation that Tomczak will now dispute with YouTube.

    “I’ve had quite a few copyright claims against me, usually based on cases where I’ve made long mixes of work, or longer pieces. Usually I don’t take them too seriously,” he explains.

    “In any of the cases where I think a given claim would be an issue, I would dispute it by saying I could either prove that I have made the work, have the original materials that generated the work, or could show enough of the components included in the work to prove originality. This has always been successful for me and I hope it will be in this case as well.”

    Sadly, this isn’t the only problem Tomczak’s had with YouTube’s copyright complaints system. A while back the musician was asked to take part in a video for his workplace but things didn’t go well.

    “I was asked to participate in a video for my workplace and the production team asked if they could use my music and I said ‘no problem’. A month later, the video was uploaded to one of our work channels, and then YouTube generated a copyright claim against me for my own music from the work channel,” he reveals.

    Tomczak says that to him, automated copyright claims are largely an annoyance, but if he were making enough money from YouTube, the system would be detrimental in the long run. He feels it’s something that YouTube should adjust, to ensure that false claims aren’t filed against uploads like his.

    While he tries to sort out this mess with YouTube, there is some good news. Other videos of his including “10 Hours of a Perfect Fifth“, “The First 106 Fifths Derived from a 3/2 Ratio” and “Hour-Long Octave Shift” all remain copyright-complaint free.

    For now…

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    The Raspberry Pi PiServer tool

    Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/piserver/

    As Simon mentioned in his recent blog post about Raspbian Stretch, we have developed a new piece of software called PiServer. Use this tool to easily set up a network of client Raspberry Pis connected to a single x86-based server via Ethernet. With PiServer, you don’t need SD cards, you can control all clients via the server, and you can add and configure user accounts — it’s ideal for the classroom, your home, or an industrial setting.

    PiServer diagram

    Client? Server?

    Before I go into more detail, let me quickly explain some terms.

    • Server — the server is the computer that provides the file system, boot files, and password authentication to the client(s).
    • Client — a client is a computer that retrieves boot files from the server over the network, and then uses a file system the server has shared. More than one client can connect to a server, but all clients use the same file system.
    • User — a user is a user name/password combination that allows someone to log into a client to access the file system on the server. Any user can log into any client with their credentials, and will always see the same server and share the same file system. Users do not have sudo capability on a client, meaning they cannot make significant changes to the file system and software.

    I see no SD cards

    Last year we described how the Raspberry Pi 3 Model B can be booted without an SD card over an Ethernet network from another computer (the server). This is called network booting or PXE (pronounced ‘pixie’) booting.

    Why would you want to do this?

    • A client computer (the Raspberry Pi) doesn’t need any permanent storage (an SD card) to boot.
    • You can network a large number of clients to one server, and all clients are exactly the same. If you log into one of the clients, you will see the same file system as if you logged into any other client.
    • The server can be run on an x86 system, which means you get to take advantage of the performance, network, and disk speed on the server.

    Sounds great, right? Of course, for the less technical, creating such a network is very difficult. For example, there’s setting up all the required DHCP and TFTP servers, and making sure they behave nicely with the rest of the network. If you get this wrong, you can break your entire network.

    PiServer to the rescue

    To make network booting easy, I thought it would be nice to develop an application which did everything for you. Let me introduce: PiServer!

    PiServer has the following functionalities:

    • It automatically detects Raspberry Pis trying to network boot, so you don’t have to work out their Ethernet addresses.
    • It sets up a DHCP server — the thing inside the router that gives all network devices an IP address — either in proxy mode or in full IP mode. No matter the mode, the DHCP server will only reply to the Raspberry Pis you have specified, which is important for network safety. (For a sense of what this automates, see the configuration sketch after this list.)
    • It creates user names and passwords for the server. This is great for a classroom full of Pis: just set up all the users beforehand, and everyone gets to log in with their passwords and keep all their work in a central place. Moreover, users cannot change the software, so educators have control over which programs their learners can use.
    • It uses a slightly altered Raspbian build which allows separation of temporary spaces, doesn’t have the default ‘pi’ user, and has LDAP enabled for log-in.
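
    For a sense of what PiServer is automating here, a hand-rolled proxy-mode setup corresponds roughly to a dnsmasq configuration like the one below. This is a sketch based on the manual network-boot approach; the broadcast address and TFTP root are assumptions for illustration:

    port=0
    dhcp-range=192.168.1.255,proxy
    log-dhcp
    enable-tftp
    tftp-root=/var/lib/tftpboot
    pxe-service=0,"Raspberry Pi Boot"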

    What can I do with PiServer?

    Serve a whole classroom of Pis

    In a classroom, PiServer allows all files for lessons or projects to be stored on a central x86-based computer. Each user can have their own account, and any files they create are also stored on the server. Moreover, the networked Pis don’t need to be connected to the internet. The teacher has centralised control over all Pis, and all Pis are user-agnostic, meaning there’s no need to match a person with a computer or an SD card.

    Build a home server

    PiServer could be used in the home to serve file systems for all Raspberry Pis around the house — either a single common Raspbian file system for all Pis or a different operating system for each. Hopefully, our extensive OS suppliers will provide suitable build files in future.

    Use it as a controller for networked Pis

    In an industrial scenario, it is possible to use PiServer to develop a network of Raspberry Pis (maybe even using Power over Ethernet (PoE)) such that the control software for each Pi is stored remotely on a server. This enables easy remote control and provisioning of the Pis from a central repository.

    How to use PiServer

    The client machines

    So that you can use a Pi as a client, you need to enable network booting on it. Power it up using an SD card with a Raspbian Lite image, and open a terminal window. Type in

    echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt

    and press Return. This adds the line program_usb_boot_mode=1 to the end of the config.txt file in /boot. Now power the Pi down and remove the SD card. The next time you connect the Pi to a power source, you will be able to network boot it.
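
    Before pulling the card, you can check that the one-time-programmable (OTP) boot bit was actually set; per the network boot documentation, the expected value is 3020000a:

    vcgencmd otp_dump | grep 17:
    17:3020000a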

    The server machine

    As a server, you will need an x86 computer on which you can install x86 Debian Stretch. Refer to Simon’s blog post for additional information on this. It is possible to use a Raspberry Pi to serve to the client Pis, but the file system will be slower, especially at boot time.

    Make sure your server has a good amount of disk space available for the file system — in general, we recommend at least 16GB SD cards for Raspberry Pis. The whole client file system is stored locally on the server, so the disk space requirement is fairly significant.

    Next, start PiServer by clicking on the start icon and then clicking Preferences > PiServer. This will open a graphical user interface — the wizard — that will walk you through setting up your network. Skip the introduction screen, and you should see a screen looking like this:

    PiServer GUI screenshot

    If you’ve enabled network booting on the client Pis and they are connected to a power source, their MAC addresses will automatically appear in the table shown above. When you have added all your Pis, click Next.

    PiServer GUI screenshot

    On the Add users screen, you can set up users on your server. These are pairs of user names and passwords that will be valid for logging into the client Raspberry Pis. Don’t worry, you can add more users at any point. Click Next again when you’re done.

    PiServer GUI screenshot

    The Add software screen allows you to select the operating system you want to run on the attached Pis. (You’ll have the option to assign an operating system to each client individually in the settings after the wizard has finished its job.) There are some automatically populated operating systems, such as Raspbian and Raspbian Lite. Hopefully, we’ll add more in due course. You can also provide your own operating system from a local file, or install it from a URL. For further information about how these operating system images are created, have a look at the scripts in /var/lib/piserver/scripts.

    Once you’re done, click Next again. The wizard will then install the necessary components and the operating systems you’ve chosen. This will take a little time, so grab a coffee (or decaffeinated drink of your choice).

    When the installation process is finished, PiServer is up and running — all you need to do is reboot the Pis to get them to run from the server.

    Shooting troubles

    If you have trouble getting clients connected to your network, there are a few things you can do to debug:

    1. If some clients are connecting but others are not, check whether you’ve enabled the network booting mode on the Pis that give you issues. To do that, plug an Ethernet cable into the Pi (with the SD card removed) — the LEDs on the Pi and connector should turn on. If that doesn’t happen, you’ll need to follow the instructions above to boot the Pi and edit its /boot/config.txt file.
    2. If you can’t connect to any clients, check whether your network is suitable: format an SD card, and copy bootcode.bin from /boot on a standard Raspbian image onto it. Plug the card into a client Pi, and check whether it appears as a new MAC address in the PiServer GUI. If it does, then the problem is a known issue, and you can head to our forums to ask for advice about it (the network booting code has a couple of problems which we’re already aware of). For a temporary fix, you can clone the SD card on which bootcode.bin is stored for all your clients.

    If neither of these things fixes your problem, our forums are the place to find help — there’s a host of people there who’ve got PiServer working. If you’re sure you have identified a problem that hasn’t been addressed on the forums, or if you have a request for new functionality, then please add it to the GitHub issues.

    The post The Raspberry Pi PiServer tool appeared first on Raspberry Pi.

    Instrumenting Web Apps Using AWS X-Ray

    Post Syndicated from Bharath Kumar original https://aws.amazon.com/blogs/devops/instrumenting-web-apps-using-aws-x-ray/

    This post was written by James Bowman, Software Development Engineer, AWS X-Ray

    AWS X-Ray helps developers analyze and debug distributed applications and underlying services in production. You can identify and analyze root-causes of performance issues and errors, understand customer impact, and extract statistical aggregations (such as histograms) for optimization.

    In this blog post, I will provide a step-by-step walkthrough for enabling X-Ray tracing in the Go programming language. You can use these steps to add X-Ray tracing to any distributed application.

    Revel: A web framework for the Go language

    This section will assist you with designing a guestbook application. Skip to the “Integrating with AWS X-Ray” section below if you already have a Go language application.

    Revel is a web framework for the Go language. It facilitates the rapid development of web applications by providing a predefined framework for controllers, views, routes, filters, and more.

    To get started with Revel, run revel new github.com/jamesdbowman/guestbook. A project base is then copied to $GOPATH/src/github.com/jamesdbowman/guestbook.

    $ tree -L 2
    .
    ├── README.md
    ├── app
    │ ├── controllers
    │ ├── init.go
    │ ├── routes
    │ ├── tmp
    │ └── views
    ├── conf
    │ ├── app.conf
    │ └── routes
    ├── messages
    │ └── sample.en
    ├── public
    │ ├── css
    │ ├── fonts
    │ ├── img
    │ └── js
    └── tests
    └── apptest.go

    Writing a guestbook application

    A basic guestbook application can consist of just two routes: one to sign the guestbook and another to list all entries.
    Let’s set up these routes by adding a Book controller, which can be routed to by modifying ./conf/routes.

    ./app/controllers/book.go:
    package controllers
    
    import (
        "math/rand"
        "time"
    
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/endpoints"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
        "github.com/revel/revel"
    )
    
    const TABLE_NAME = "guestbook"
    const SUCCESS = "Success.\n"
    const DAY = 86400
    
    var letters = []rune("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    
    func init() {
        rand.Seed(time.Now().UnixNano())
    }
    
    // randString returns a random string of len n, used for DynamoDB Hash key.
    func randString(n int) string {
        b := make([]rune, n)
        for i := range b {
            b[i] = letters[rand.Intn(len(letters))]
        }
        return string(b)
    }
    
    // Book controls interactions with the guestbook.
    type Book struct {
        *revel.Controller
        ddbClient *dynamodb.DynamoDB
    }
    
    // Signature represents a user's signature.
    type Signature struct {
        Message string
        Epoch   int64
        ID      string
    }
    
    // ddb returns the controller's DynamoDB client, instantiating a new client if necessary.
    func (c Book) ddb() *dynamodb.DynamoDB {
        if c.ddbClient == nil {
            sess := session.Must(session.NewSession(&aws.Config{
                Region: aws.String(endpoints.UsWest2RegionID),
            }))
            c.ddbClient = dynamodb.New(sess)
        }
        return c.ddbClient
    }
    
    // Sign allows users to sign the book.
    // The message is to be passed as application/json typed content, listed under the "message" top level key.
    func (c Book) Sign() revel.Result {
        var s Signature
    
        err := c.Params.BindJSON(&s)
        if err != nil {
            return c.RenderError(err)
        }
        now := time.Now()
        s.Epoch = now.Unix()
        s.ID = randString(20)
    
        item, err := dynamodbattribute.MarshalMap(s)
        if err != nil {
            return c.RenderError(err)
        }
    
        putItemInput := &dynamodb.PutItemInput{
            TableName: aws.String(TABLE_NAME),
            Item:      item,
        }
        _, err = c.ddb().PutItem(putItemInput)
        if err != nil {
            return c.RenderError(err)
        }
    
        return c.RenderText(SUCCESS)
    }
    
    // List allows users to list all signatures in the book.
    func (c Book) List() revel.Result {
        scanInput := &dynamodb.ScanInput{
            TableName: aws.String(TABLE_NAME),
            Limit:     aws.Int64(100),
        }
        res, err := c.ddb().Scan(scanInput)
        if err != nil {
            return c.RenderError(err)
        }
    
        messages := make([]string, 0)
        for _, v := range res.Items {
            messages = append(messages, *(v["Message"].S))
        }
        return c.RenderJSON(messages)
    }
    

    ./conf/routes:
    POST /sign Book.Sign
    GET /list Book.List

    Creating the resources and testing

    For the purposes of this blog post, the application will be run and tested locally. We will store and retrieve messages from an Amazon DynamoDB table. Use the following AWS CLI command to create the guestbook table:

    aws dynamodb create-table --region us-west-2 --table-name "guestbook" --attribute-definitions AttributeName=ID,AttributeType=S AttributeName=Epoch,AttributeType=N --key-schema AttributeName=ID,KeyType=HASH AttributeName=Epoch,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

    Now, let’s test our sign and list routes. If everything is working correctly, the following result appears:

    $ curl -d '{"message":"Hello from cURL!"}' -H "Content-Type: application/json" http://localhost:9000/book/sign
    Success.
    $ curl http://localhost:9000/book/list
    [
      "Hello from cURL!"
    ]%
    

    Integrating with AWS X-Ray

    Download and run the AWS X-Ray daemon

    The AWS X-Ray SDKs emit trace segments over UDP on port 2000. (This port can be configured.) In order for the trace segments to make it to the X-Ray service, the daemon must listen on this port and batch the segments in calls to the PutTraceSegments API.
    For information about downloading and running the X-Ray daemon, see the AWS X-Ray Developer Guide.

    Installing the AWS X-Ray SDK for Go

    To download the SDK from GitHub, run go get -u github.com/aws/aws-xray-sdk-go/... and the SDK will appear in your $GOPATH.

    Enabling the incoming request filter

    The first step to instrumenting an application with AWS X-Ray is to enable the generation of trace segments on incoming requests. The SDK conveniently provides an implementation of http.Handler which does exactly that. To ensure incoming web requests travel through this handler, we can modify app/init.go, adding a custom function to be run on application start.

    import (
        "github.com/aws/aws-xray-sdk-go/xray"
        "github.com/revel/revel"
    )
    
    ...
    
    func init() {
      ...
        revel.OnAppStart(installXRayHandler)
    }
    
    func installXRayHandler() {
        revel.Server.Handler = xray.Handler(xray.NewFixedSegmentNamer("GuestbookApp"), revel.Server.Handler)
    }
    

    The application will now emit a segment for each incoming web request, and the service graph appears in the X-Ray console:

    You can customize the name of the segment to make it more descriptive by providing an alternate implementation of SegmentNamer to xray.Handler. For example, you can use xray.NewDynamicSegmentNamer(fallback, pattern) in place of the fixed namer. This namer will use the host name from the incoming web request (if it matches pattern) as the segment name. This is often useful when you are trying to separate different instances of the same application.

    In addition, HTTP-centric information such as method and URL is collected in the segment’s http subsection:

    "http": {
        "request": {
            "url": "/book/list",
            "method": "GET",
            "user_agent": "curl/7.54.0",
            "client_ip": "::1"
        },
        "response": {
            "status": 200
        }
    },
    

    Instrumenting outbound calls

    To provide detailed performance metrics for distributed applications, the AWS X-Ray SDK needs to measure the time it takes to make outbound requests. Trace context is passed to downstream services using the X-Amzn-Trace-Id header. To draw a detailed and accurate representation of a distributed application, outbound call instrumentation is required.

    AWS SDK calls

    The AWS X-Ray SDK for Go provides a one-line AWS client wrapper that enables the collection of detailed per-call metrics for any AWS client. We can modify the DynamoDB client instantiation to include this line:

    // ddb returns the controller's DynamoDB client, instantiating a new client if necessary.
    func (c Book) ddb() *dynamodb.DynamoDB {
        if c.ddbClient == nil {
            sess := session.Must(session.NewSession(&aws.Config{
                Region: aws.String(endpoints.UsWest2RegionID),
            }))
            c.ddbClient = dynamodb.New(sess)
            xray.AWS(c.ddbClient.Client) // add subsegment-generating X-Ray handlers to this client
        }
        return c.ddbClient
    }
    

    We also need to ensure that the segment generated by our xray.Handler is passed to these AWS calls so that the X-Ray SDK knows to which segment these generated subsegments belong. In Go, the context.Context object is passed throughout the call path to achieve this goal. (In most other languages, some variant of ThreadLocal is used.) AWS clients provide a *WithContext method variant for each AWS operation, which we need to switch to:

    _, err = c.ddb().PutItemWithContext(c.Request.Context(), putItemInput)
    res, err := c.ddb().ScanWithContext(c.Request.Context(), scanInput)
    

    We now see much more detail in the Timeline view of the trace for the sign and list operations:

    We can use this detail to help diagnose throttling on our DynamoDB table. In the following screenshot, the purple in the DynamoDB service graph node indicates that our table is underprovisioned. The red in the GuestbookApp node indicates that the application is throwing faults due to this throttling.

    HTTP calls

    Although the guestbook application does not make any non-AWS outbound HTTP calls in its current state, there is a similar one-liner to wrap HTTP clients that make outbound requests. xray.Client(c *http.Client) wraps an existing http.Client (or nil if you want to use a default HTTP client). For example:

    resp, err := ctxhttp.Get(ctx, xray.Client(nil), "https://aws.amazon.com/")

    Instrumenting local operations

    X-Ray can also assist in measuring the performance of local compute operations. To see this in action, let’s create a custom subsegment inside the randString method:

    
    // randString returns a random string of len n, used for DynamoDB Hash key.
    func randString(ctx context.Context, n int) string {
        var s string // declare s outside the closure so the result survives the Capture call
        xray.Capture(ctx, "randString", func(innerCtx context.Context) error {
            b := make([]rune, n)
            for i := range b {
                b[i] = letters[rand.Intn(len(letters))]
            }
            s = string(b)
            return nil
        })
        return s
    }
    
    // we'll also need to change the callsite
    
    s.ID = randString(c.Request.Context(), 20)
    

    Summary

    By now, you are an expert on how to instrument your Go applications with X-Ray. Instrumenting your applications with X-Ray is an easy way to analyze and debug performance issues and understand customer impact. Please feel free to leave any feedback or comments below.

    For more information about advanced configuration of the AWS X-Ray SDK for Go, see the AWS X-Ray SDK for Go in the AWS X-Ray Developer Guide and the aws/aws-xray-sdk-go GitHub repository.

    For more information about some of the advanced X-Ray features such as histograms, annotations, and filter expressions, see the Analyzing Performance for Amazon Rekognition Apps Written on AWS Lambda Using AWS X-Ray blog post.

    Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud

    Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-amazon-freertos/

    I was recently reading an article on ReadWrite.com titled “IoT devices go forth and multiply, to increase 200% by 2021“, and while the article noted the benefit for consumers and the industry of this growth, two things in the article stuck with me. The first was the specific statement that read “researchers warned that the proliferation of IoT technology will create a new bevvy of challenges. Particularly troublesome will be IoT deployments at scale for both end-users and providers.” Not only was that sentence a mouthful, but it really addressed some of the challenges that come with building and deploying solutions in this exciting new technology area. The second sentiment in the article that stayed with me was that security issues could grow.

    So the article got me thinking: how can we create these cool IoT solutions using low-cost, efficient microcontrollers with a secure operating system that can easily connect to the cloud? Luckily, the answer came to me by way of an exciting new open-source based offering from AWS that I am happy to announce to you all today. Let’s all welcome Amazon FreeRTOS to the technology stage.

    Amazon FreeRTOS is an IoT microcontroller operating system that simplifies development, security, deployment, and maintenance of microcontroller-based edge devices. Amazon FreeRTOS extends the FreeRTOS kernel, a popular real-time operating system, with libraries that enable local and cloud connectivity, security, and (coming soon) over-the-air updates.

    So what are some of the great benefits of this exciting new offering, you ask? They are as follows:

    • Easily create solutions for Low Power Connected Devices: provides a common operating system (OS) and libraries that make the development of common IoT capabilities easy for devices — for example, over-the-air (OTA) updates (coming soon) and device configuration.
    • Secure Data and Device Connections: devices only run trusted software using the Code Signing service; Amazon FreeRTOS provides a secure connection to AWS using TLS, as well as the ability to securely store keys and sensitive data on the device.
    • Extensive Ecosystem: contains an extensive hardware and technology ecosystem that allows you to choose from a variety of qualified chipsets, including Texas Instruments, Microchip, NXP Semiconductors, and STMicroelectronics.
    • Cloud or Local Connections: devices can connect directly to the AWS Cloud or via AWS Greengrass.

     

    What’s cool is that it is easy to get started. 

    The Amazon FreeRTOS console allows you to select and download the software that you need for your solution.

    There is a Qualification Program that helps to assure you that the microcontroller you choose will run consistently across several hardware options.

    Finally, the Amazon FreeRTOS kernel is open source and freely available on GitHub for download.

    But I couldn’t leave you without at least showing you a few snapshots of the Amazon FreeRTOS Console.

    Within the Amazon FreeRTOS Console, I can select a predefined software configuration that I would like to use.

    If I want to have a more customized software configuration, Amazon FreeRTOS allows you to customize a solution that is targeted for your use by adding or removing libraries.

    Summary

    Thanks for checking out the new Amazon FreeRTOS offering. To learn more, go to the Amazon FreeRTOS product page or review the information provided about this exciting IoT-device-targeted operating system in the AWS documentation.

    Can’t wait to see what great new IoT systems will be enabled and created with it! Happy coding.

    Tara

     

    Presenting AWS IoT Analytics: Delivering IoT Analytics at Scale and Faster than Ever Before

    Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-aws-iot-analytics/

    One of the technology areas I thoroughly enjoy is the Internet of Things (IoT). Even as a child I used to infuriate my parents by taking apart the toys they would purchase for me to see how they worked and if I could somehow put them back together. It seems somehow I was destined to end up in the tough and ever-changing world of technology. Therefore, it’s no wonder that I am really enjoying learning and tinkering with IoT devices and technologies. It combines my love of development and software engineering with my curiosity around circuits, controllers, and other facets of the electrical engineering discipline; even though an electrical engineer I cannot claim to be.

    Despite all of the information that is collected by the deployment of IoT devices and solutions, I honestly never really thought about the need to analyze, search, and process this data until I came up against a scenario where it became of the utmost importance to be able to search and query through loads of sensor data for an anomaly occurrence. Of course, I understood the importance of analytics for businesses to make accurate decisions and predictions to drive the organization’s direction. But it didn’t occur to me initially how important it was to make analytics an integral part of my IoT solutions. Well, I learned my lesson just in time, because this re:Invent, a service is launching to make it easier for anyone to process and analyze IoT messages and device data.

     

    Hello, AWS IoT Analytics! AWS IoT Analytics is a fully managed service of AWS IoT that provides advanced data analysis of data collected from your IoT devices. With the AWS IoT Analytics service, you can process messages, gather and store large amounts of device data, and query your data. Also, the new AWS IoT Analytics service integrates with Amazon QuickSight for visualization of your data, and brings the power of machine learning through integration with Jupyter Notebooks.

    Benefits of AWS IoT Analytics

    • Helps with predictive analysis of data by providing access to pre-built analytical functions
    • Provides the ability to visualize analytical output from the service
    • Provides tools to clean up data
    • Can help identify patterns in the gathered data

    Be In the Know: IoT Analytics Concepts

    • Channel: archives the raw, unprocessed messages and collects data from MQTT topics.
    • Pipeline: consumes messages from channels and allows message processing.
      • Activities: perform transformations on your messages, including filtering attributes and invoking Lambda functions for advanced processing.
    • Data Store: used as a queryable repository for processed messages. Provides the ability to have multiple data stores for messages coming from different devices or locations, or filtered by message attributes.
    • Data Set: a data retrieval view from a data store; can be generated on a recurring schedule.

    Getting Started with AWS IoT Analytics

    First, I’ll create a channel to receive incoming messages.  This channel can be used to ingest data sent to the channel via MQTT or messages directed from the Rules Engine. To create a channel, I’ll select the Channels menu option and then click the Create a channel button.

    I’ll name my channel TaraIoTAnalyticsID and give the Channel an MQTT topic filter of Temperature. To complete the creation of my channel, I will click the Create Channel button.

    Now that I have my Channel created, I need to create a Data Store to receive and store the messages received on the Channel from my IoT device. Remember, you can set up multiple Data Stores for more complex solution needs, but I’ll just create one Data Store for my example. I’ll select Data Stores from the menu panel and click Create a data store.

     

    I’ll name my Data Store TaraDataStoreID, and once I click the Create the data store button, I will have successfully set up a Data Store to house messages coming from my Channel.

    Now that I have my Channel and my Data Store, I will need to connect the two using a Pipeline. I’ll create a simple pipeline that just connects my Channel and Data Store, but you can create a more robust pipeline to process and filter messages by adding Pipeline activities like a Lambda activity.

    To create a pipeline, I’ll select the Pipelines menu option and then click the Create a pipeline button.

    I will not add an Attribute for this pipeline, so I will click the Next button.

    As we discussed, there are additional pipeline activities that I can add to my pipeline for the processing and transformation of messages, but I will keep my first pipeline simple and hit the Next button.

    The final step in creating my pipeline is for me to select my previously created Data Store and click Create Pipeline.

    All that is left for me to take advantage of the AWS IoT Analytics service is to create an IoT rule that sends data to an AWS IoT Analytics channel.  Wow, that was a super easy process to set up analytics for IoT devices.
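
    To sanity-check the pipeline end to end, you could publish a test reading to the Temperature topic that the channel listens on. Here is a minimal sketch using boto3; the region and the payload shape are assumptions for illustration:

    import json
    import boto3

    # Publish a test message through AWS IoT; the rule forwards messages on
    # matching topics into the TaraIoTAnalyticsID channel created above.
    iot_data = boto3.client("iot-data", region_name="us-west-2")
    iot_data.publish(
        topic="Temperature",  # matches the channel's MQTT topic filter
        qos=0,
        payload=json.dumps({"deviceId": "sensor-1", "temperature": 72.5}),  # hypothetical reading
    )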

    If I wanted to create a Data Set as a result of queries run against my data for visualization with Amazon QuickSight, or integrate with Jupyter Notebooks to perform more advanced analytical functions, I can choose the Analyze menu option to bring up the screens to create data sets and access the Jupyter Notebook instances.

    Summary

    As you can see, it was a very simple process to set up advanced data analysis for AWS IoT. With AWS IoT Analytics, you have the ability to collect, visualize, process, query, and store large amounts of data generated from your AWS IoT connected devices. Additionally, you can access the AWS IoT Analytics service in a myriad of different ways: the AWS Command Line Interface (AWS CLI), the AWS IoT API, language-specific AWS SDKs, and AWS IoT Device SDKs.

    AWS IoT Analytics is available today for you to dig into the analysis of your IoT data. To learn more about AWS IoT and AWS IoT Analytics go to the AWS IoT Analytics product page and/or the AWS IoT documentation.

    Tara

    HackSpace magazine #1 is out now!

    Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-1/

    HackSpace magazine is finally here! Grab your copy of the new magazine for makers today, and try your hand at some new, exciting skills.

    HackSpace magazine issue 1 cover

    What is HackSpace magazine?

    HackSpace magazine is the newest publication from the team behind The MagPi. Chock-full of amazing projects, tutorials, features, and maker interviews, HackSpace magazine brings together the makers of the world every month, with you — the community — providing the content.

    HackSpace magazine is out now!

    The new magazine for the modern maker is out now! Learn more at https://hsmag.cc HackSpace magazine is the new monthly magazine for people who love to make things and those who want to learn. Grab some duct tape, fire up a microcontroller, ready a 3D printer and hack the world around you!

    Inside issue 1

    Fancy smoking bacon with your very own cold smoker? How about protecting your home with a mini trebuchet for your front lawn? Or maybe you’d like to learn from awesome creator Becky Stern how to get paid for making the things you love? No matter whether it’s handheld consoles, robot prosthetics, Christmas projects, or, er, duct tape — whatever your maker passion, issue 1 is guaranteed to tick your boxes!



    HackSpace magazine is packed with content from every corner of the maker world: from welding to digital making, and from woodwork to wearables. And whatever you enjoy making, we want to see it! So as you read through this first issue, imagine your favourite homemade projects on our pages, then make that a reality by emailing us the details via [email protected].

    Get your copy

    You can grab issue 1 of HackSpace magazine right now from WHSmith, Tesco, Sainsbury’s, and independent newsagents. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium and Brazil — ask your local newsagent whether they’ll be getting HackSpace magazine. Alternatively, you can get the new issue online from our store, or digitally via our Android or iOS apps. And don’t forget, as with all our publications, a free PDF of HackSpace magazine is available from release day.

We’re also offering money-saving subscriptions — find details on the magazine website. And if you’re a subscriber to The MagPi, your free copy of HackSpace magazine is on its way, with details of a super 50% discount on subscriptions! Could this be the Christmas gift you didn’t know you wanted?

    Share your makes and thoughts

    Make sure to follow HackSpace magazine on Facebook and Twitter, or email the team at [email protected] to tell us about your projects and share your thoughts about issue 1. We’ve loved creating this new magazine for the maker community, and we hope you enjoy it as much as we do.

    The post HackSpace magazine #1 is out now! appeared first on Raspberry Pi.

    How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 1

    Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-1/

    Most malware tries to compromise your systems by using a known vulnerability that the maker of the operating system has already patched. To help prevent malware from affecting your systems, two security best practices are to apply all operating system patches to your systems and actively monitor your systems for missing patches. In case you do need to recover from a malware attack, you should make regular backups of your data.

In today’s blog post (Part 1 of a two-part post), I show how to keep your Amazon EC2 instances that run Microsoft Windows up to date with the latest security patches by using Amazon EC2 Systems Manager. Tomorrow in Part 2, I show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    What you should know first

To follow along with the solution in this post, you need one or more EC2 instances. You may use existing instances or create new instances. For this blog post, I assume you are using an EC2 for Microsoft Windows Server 2012 R2 instance launched from an Amazon Machine Image (AMI). If you are not familiar with how to launch an EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. A private subnet is not directly accessible via the internet, and access to it requires either a VPN connection to your on-premises network or a jump host in a public subnet (a subnet with access to the internet). You must make sure that the EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager and Amazon Inspector. The following diagram shows how you should structure your Amazon Virtual Private Cloud (VPC). You should also be familiar with Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

Later on, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the AWS Identity and Access Management (IAM) user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user to pass their own IAM permissions to the AWS service when assigning tasks. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. This safeguard ensures that the user cannot use the creation of tasks to elevate their IAM privileges, because their own IAM privileges limit which tasks they can run against an EC2 instance. You should also authorize your IAM user to use EC2, Amazon Inspector, Amazon CloudWatch, and Systems Manager. You can achieve this by attaching the following AWS managed policies to the IAM user you are using for this example: AmazonInspectorFullAccess, AmazonEC2FullAccess, and AmazonSSMFullAccess.
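As a rough Boto3 sketch of that permission setup (not part of the official walkthrough), you could attach the managed policies and an inline iam:PassRole policy like this; the user name and the role ARN are hypothetical:

    import json
    import boto3

    iam = boto3.client('iam')
    user = 'patching-admin'  # hypothetical IAM user name

    # Attach the three AWS managed policies mentioned above.
    for arn in ['arn:aws:iam::aws:policy/AmazonInspectorFullAccess',
                'arn:aws:iam::aws:policy/AmazonEC2FullAccess',
                'arn:aws:iam::aws:policy/AmazonSSMFullAccess']:
        iam.attach_user_policy(UserName=user, PolicyArn=arn)

    # Allow the user to pass the (hypothetical) maintenance window role.
    iam.put_user_policy(
        UserName=user,
        PolicyName='PassMaintenanceWindowRole',
        PolicyDocument=json.dumps({
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Allow',
                'Action': 'iam:PassRole',
                'Resource': 'arn:aws:iam::123456789012:role/MaintenanceWindowRole'
            }]
        })
    )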

    Architectural overview

    The following diagram illustrates the components of this solution’s architecture.

    Diagram showing the components of this solution's architecture

    For this blog post, Microsoft Windows EC2 is Amazon EC2 for Microsoft Windows Server 2012 R2 instances with attached Amazon Elastic Block Store (Amazon EBS) volumes, which are running in your VPC. These instances may be standalone Windows instances running your Windows workloads, or you may have joined them to an Active Directory domain controller. For instances joined to a domain, you can be using Active Directory running on an EC2 for Windows instance, or you can use AWS Directory Service for Microsoft Active Directory.

    Amazon EC2 Systems Manager is a scalable tool for remote management of your EC2 instances. You will use the Systems Manager Run Command to install the Amazon Inspector agent. The agent enables EC2 instances to communicate with the Amazon Inspector service and run assessments, which I explain in detail later in this blog post. You also will create a Systems Manager association to keep your EC2 instances up to date with the latest security patches.

    You can use the EBS Snapshot Scheduler to schedule automated snapshots at regular intervals. You will use it to set up regular snapshots of your Amazon EBS volumes. EBS Snapshot Scheduler is a prebuilt solution by AWS that you will deploy in your AWS account. With Amazon EBS snapshots, you pay only for the actual data you store. Snapshots save only the data that has changed since the previous snapshot, which minimizes your cost.

    You will use Amazon Inspector to run security assessments on your EC2 for Windows Server instance. In this post, I show how to assess if your EC2 for Windows Server instance is vulnerable to any of the more than 50,000 CVEs registered with Amazon Inspector.

    In today’s and tomorrow’s posts, I show you how to:

    1. Launch an EC2 instance with an IAM role, Amazon EBS volume, and tags that Systems Manager and Amazon Inspector will use.
    2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.
    3. Take EBS snapshots by using EBS Snapshot Scheduler to automate snapshots based on instance tags.
    4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    Step 1: Launch an EC2 instance

    In this section, I show you how to launch your EC2 instances so that you can use Systems Manager with the instances and use instance tags with EBS Snapshot Scheduler to automate snapshots. This requires three things:

    • Create an IAM role for Systems Manager before launching your EC2 instance.
    • Launch your EC2 instance with Amazon EBS and the IAM role for Systems Manager.
    • Add tags to instances so that you can automate policies for which instances you take snapshots of and when.

    Create an IAM role for Systems Manager

Before launching your EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the EC2 instance you will launch. AWS already provides a preconfigured policy that you can use for your new role, called AmazonEC2RoleforSSM. The console steps follow, and a scripted sketch appears after them.

    1. Sign in to the IAM console and choose Roles in the navigation pane. Choose Create new role.
      Screenshot of choosing "Create role"
    2. In the role-creation workflow, choose AWS service > EC2 > EC2 to create a role for an EC2 instance.
      Screenshot of creating a role for an EC2 instance
    3. Choose the AmazonEC2RoleforSSM policy to attach it to the new role you are creating.
      Screenshot of attaching the AmazonEC2RoleforSSM policy to the new role you are creating
    4. Give the role a meaningful name (I chose EC2SSM) and description, and choose Create role.
      Screenshot of giving the role a name and description

    Launch your EC2 instance

    To follow along, you need an EC2 instance that is running Microsoft Windows Server 2012 R2 and that has an Amazon EBS volume attached. You can use any existing instance you may have or create a new instance.

    When launching your new EC2 instance, be sure that:

    • The operating system is Microsoft Windows Server 2012 R2.
    • You attach at least one Amazon EBS volume to the EC2 instance.
    • You attach the newly created IAM role (EC2SSM).
    • The EC2 instance can connect to the internet through a network address translation (NAT) gateway or a NAT instance.
    • You create the tags shown in the following screenshot (you will use them later).

    If you are using an already launched EC2 instance, you can attach the newly created role as described in Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.
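Scripted, the launch might look like the following sketch. The AMI and subnet IDs are hypothetical placeholders (use a Windows Server 2012 R2 AMI and a private subnet with NAT access), and the Patch Group tag anticipates the next step:

    import boto3

    ec2 = boto3.client('ec2')

    # IDs below are placeholders; the root EBS volume comes with the AMI.
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',
        InstanceType='t2.medium',
        MinCount=1,
        MaxCount=1,
        SubnetId='subnet-0123456789abcdef0',
        IamInstanceProfile={'Name': 'EC2SSM'},
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'Patch Group', 'Value': 'Windows Servers'}]
        }]
    )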

    Add tags

    The final step of configuring your EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this blog post and to configure Amazon Inspector in Part 2. For this example, I add a tag key, Patch Group, and set the value to Windows Servers. I could have other groups of EC2 instances that I treat differently by having the same tag key but a different tag value. For example, I might have a collection of other servers with the Patch Group tag key with a value of IAS Servers.

    Screenshot of adding tags

    Note: You must wait a few minutes until the EC2 instance becomes available before you can proceed to the next section.

    At this point, you now have at least one EC2 instance you can use to configure Systems Manager, use EBS Snapshot Scheduler, and use Amazon Inspector.

    Note: If you have a large number of EC2 instances to tag, you may want to use the EC2 CreateTags API rather than manually apply tags to each instance.
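For example, a small Boto3 sketch along those lines, with hypothetical instance IDs:

    import boto3

    ec2 = boto3.client('ec2')

    # Tag many instances in one call; the instance IDs are hypothetical.
    ec2.create_tags(
        Resources=['i-0aaaaaaaaaaaaaaa0', 'i-0bbbbbbbbbbbbbbb0'],
        Tags=[{'Key': 'Patch Group', 'Value': 'Windows Servers'}]
    )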

    Step 2: Configure Systems Manager

    In this section, I show you how to use Systems Manager to apply operating system patches to your EC2 instances, and how to manage patch compliance.

    To start, I will provide some background information about Systems Manager. Then, I will cover how to:

    • Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
    • Associate a Systems Manager patch baseline with your instance to define which patches Systems Manager should apply.
    • Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
    • Monitor patch compliance to verify the patch state of your instances.

    Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is Amazon EC2 Systems Manager?

Patch management is an important measure to prevent malware from infecting your systems. Most malware attacks look for vulnerabilities that are publicly known and in most cases already patched by the maker of the operating system. These publicly known vulnerabilities are well documented, and therefore easier for an attacker to exploit than a new vulnerability the attacker would have to discover.

    Patches for these new vulnerabilities are available through Systems Manager within hours after Microsoft releases them. There are two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your EC2 instance. Second, you must install the Systems Manager agent on your EC2 instance. If you have used a recent Microsoft Windows Server 2012 R2 AMI published by AWS, Amazon has already installed the Systems Manager agent on your EC2 instance. You can confirm this by logging in to an EC2 instance and looking for Amazon SSM Agent under Programs and Features in Windows. To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see the documentation about installing the Systems Manager agent. If you forgot to attach the newly created role when launching your EC2 instance or if you want to attach the role to already running EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

To make sure your EC2 instance receives operating system patches from Systems Manager, you will use the default patch baseline provided and maintained by AWS, and you will define a maintenance window so that you control when your EC2 instances should receive patches. For the maintenance window to be able to run any tasks, you also must create a new role for Systems Manager. This role is a different kind of role from the one you created earlier: this time, Systems Manager rather than EC2 will assume it. Earlier we created the EC2SSM role with the AmazonEC2RoleforSSM policy, which allowed the Systems Manager agent on our instance to communicate with the Systems Manager service. Here we need a new role with the AmazonSSMMaintenanceWindowRole policy to make sure the Systems Manager service is able to execute commands on our instance.

    Create the Systems Manager IAM role

    To create the new IAM role for Systems Manager, follow the same procedure as in the previous section, but in Step 3, choose the AmazonSSMMaintenanceWindowRole policy instead of the previously selected AmazonEC2RoleforSSM policy.

    Screenshot of creating the new IAM role for Systems Manager

    Finish the wizard and give your new role a recognizable name. For example, I named my role MaintenanceWindowRole.

    Screenshot of finishing the wizard and giving your new role a recognizable name

    By default, only EC2 instances can assume this new role. You must update the trust policy to enable Systems Manager to assume this role.

    To update the trust policy associated with this new role:

    1. Navigate to the IAM console and choose Roles in the navigation pane.
    2. Choose MaintenanceWindowRole and choose the Trust relationships tab. Then choose Edit trust relationship.
    3. Update the policy document by copying the following policy and pasting it in the Policy Document box. As you can see, I have added the ssm.amazonaws.com service to the list of allowed Principals that can assume this role. Choose Update Trust Policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "",
         "Effect": "Allow",
         "Principal": {
            "Service": [
               "ec2.amazonaws.com",
               "ssm.amazonaws.com"
            ]
         },
         "Action": "sts:AssumeRole"
      }
   ]
}
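Alternatively, the same trust policy update can be scripted. A minimal Boto3 sketch, assuming the role name used above:

    import json
    import boto3

    iam = boto3.client('iam')

    trust = {
        'Version': '2012-10-17',
        'Statement': [{
            'Sid': '',
            'Effect': 'Allow',
            'Principal': {'Service': ['ec2.amazonaws.com',
                                      'ssm.amazonaws.com']},
            'Action': 'sts:AssumeRole'
        }]
    }

    # Replace the role's trust policy so Systems Manager can assume it.
    iam.update_assume_role_policy(RoleName='MaintenanceWindowRole',
                                  PolicyDocument=json.dumps(trust))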

    Associate a Systems Manager patch baseline with your instance

    Next, you are going to associate a Systems Manager patch baseline with your EC2 instance. A patch baseline defines which patches Systems Manager should apply. You will use the default patch baseline that AWS manages and maintains. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your EC2 instance.

    Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Managed Instances. Your new EC2 instance should be available there. If your instance is missing from the list, verify the following:

    1. Go to the EC2 console and verify your instance is running.
    2. Select your instance and confirm you attached the Systems Manager IAM role, EC2SSM.
    3. Make sure that you deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram at the start of this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
    4. Check the Systems Manager Agent logs for any errors.

Now that you have confirmed that Systems Manager can manage your EC2 instance, it is time to associate the AWS-maintained patch baseline with your EC2 instance (a scripted sketch follows these steps):

    1. Choose Patch Baselines under Systems Manager Services in the navigation pane of the EC2 console.
    2. Choose the default patch baseline as highlighted in the following screenshot, and choose Modify Patch Groups in the Actions drop-down.
      Screenshot of choosing Modify Patch Groups in the Actions drop-down
3. In the Patch group box, enter the same value you entered under the Patch Group tag of your EC2 instance in “Step 1: Launch an EC2 instance.” In this example, the value I enter is Windows Servers. Choose the check mark icon next to the patch group and choose Close.
  Screenshot of modifying the patch group
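The same association can be made programmatically. A sketch, assuming the default Windows patch baseline and the patch group used in this post:

    import boto3

    ssm = boto3.client('ssm')

    # Look up the AWS-provided default patch baseline for Windows.
    baseline_id = ssm.get_default_patch_baseline(
        OperatingSystem='WINDOWS')['BaselineId']

    # Associate the 'Windows Servers' patch group with that baseline.
    ssm.register_patch_baseline_for_patch_group(
        BaselineId=baseline_id,
        PatchGroup='Windows Servers')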

    Define a maintenance window

    Now that you have successfully set up a role and have associated a patch baseline with your EC2 instance, you will define a maintenance window so that you can control when your EC2 instances should receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your EC2 instances do not all reboot at the same time. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs.

To define a maintenance window (a scripted sketch follows these steps):

    1. Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Maintenance Windows. Choose Create a Maintenance Window.
      Screenshot of starting to create a maintenance window in the Systems Manager console
    2. Select the Cron schedule builder to define the schedule for the maintenance window. In the example in the following screenshot, the maintenance window will start every Saturday at 10:00 P.M. UTC.
    3. To specify when your maintenance window will end, specify the duration. In this example, the four-hour maintenance window will end on the following Sunday morning at 2:00 A.M. UTC (in other words, four hours after it started).
4. Systems Manager completes all tasks that are in process, even if the maintenance window ends. In my example, I am choosing to prevent new tasks from starting within one hour of the end of my maintenance window because I estimated my patch operations might take longer than one hour to complete. Confirm the creation of the maintenance window by choosing Create maintenance window.
      Screenshot of completing all boxes in the maintenance window creation process
    5. After creating the maintenance window, you must register the EC2 instance to the maintenance window so that Systems Manager knows which EC2 instance it should patch in this maintenance window. To do so, choose Register new targets on the Targets tab of your newly created maintenance window. You can register your targets by using the same Patch Group tag you used before to associate the EC2 instance with the AWS-provided patch baseline.
      Screenshot of registering new targets
    6. Assign a task to the maintenance window that will install the operating system patches on your EC2 instance:
      1. Open Maintenance Windows in the EC2 console, select your previously created maintenance window, choose the Tasks tab, and choose Register run command task from the Register new task drop-down.
      2. Choose the AWS-RunPatchBaseline document from the list of available documents.
      3. For Parameters:
        1. For Role, choose the role you created previously (called MaintenanceWindowRole).
        2. For Execute on, specify how many EC2 instances Systems Manager should patch at the same time. If you have a large number of EC2 instances and want to patch all EC2 instances within the defined time, make sure this number is not too low. For example, if you have 1,000 EC2 instances, a maintenance window of 4 hours, and 2 hours’ time for patching, make this number at least 500.
    3. For Stop after, specify the number of errors after which Systems Manager should stop.
        4. For Operation, choose Install to make sure to install the patches.
          Screenshot of stipulating maintenance window parameters
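If you prefer to automate these six steps, the following Boto3 sketch mirrors them under the same assumptions as above (patch group Windows Servers, role MaintenanceWindowRole); the account ID in the role ARN is a placeholder:

    import boto3

    ssm = boto3.client('ssm')

    # Steps 1-4: a four-hour window every Saturday at 10:00 P.M. UTC,
    # with no new tasks starting in the final hour (Cutoff=1).
    window_id = ssm.create_maintenance_window(
        Name='windows-patching',
        Schedule='cron(0 22 ? * SAT *)',
        Duration=4,
        Cutoff=1,
        AllowUnassociatedTargets=False)['WindowId']

    # Step 5: target instances by their Patch Group tag.
    target_id = ssm.register_target_with_maintenance_window(
        WindowId=window_id,
        ResourceType='INSTANCE',
        Targets=[{'Key': 'tag:Patch Group',
                  'Values': ['Windows Servers']}])['WindowTargetId']

    # Step 6: run AWS-RunPatchBaseline with Operation=Install.
    ssm.register_task_with_maintenance_window(
        WindowId=window_id,
        Targets=[{'Key': 'WindowTargetIds', 'Values': [target_id]}],
        TaskArn='AWS-RunPatchBaseline',
        TaskType='RUN_COMMAND',
        ServiceRoleArn='arn:aws:iam::123456789012:role/MaintenanceWindowRole',
        MaxConcurrency='500',
        MaxErrors='50',
        TaskInvocationParameters={
            'RunCommand': {'Parameters': {'Operation': ['Install']}}})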

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. Note that if you don’t want to wait, you can adjust the schedule to run sooner by choosing Edit maintenance window on the Maintenance Windows page of Systems Manager. After your maintenance window has run, you can check the status of the maintenance tasks Systems Manager has performed by going to the Maintenance Windows page of Systems Manager and selecting your maintenance window.

    Screenshot of the maintenance window successfully created

    Monitor patch compliance

    You also can see the overall patch compliance of all EC2 instances that are part of defined patch groups by choosing Patch Compliance under Systems Manager Services in the navigation pane of the EC2 console. You can filter by Patch Group to see how many EC2 instances within the selected patch group are up to date, how many EC2 instances are missing updates, and how many EC2 instances are in an error state.

    Screenshot of monitoring patch compliance
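The same compliance data is available from the API. A small sketch that summarizes patch state per instance in the Windows Servers patch group:

    import boto3

    ssm = boto3.client('ssm')

    # Per-instance patch state for one patch group.
    states = ssm.describe_instance_patch_states_for_patch_group(
        PatchGroup='Windows Servers')['InstancePatchStates']

    for s in states:
        print(s['InstanceId'],
              'installed:', s['InstalledCount'],
              'missing:', s['MissingCount'],
              'failed:', s['FailedCount'])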

    In this section, you have set everything up for patch management on your instance. Now you know how to patch your EC2 instance in a controlled manner and how to check if your EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all EC2 instances you manage.

    Summary

    In Part 1 of this blog post, I have shown how to configure EC2 instances for use with Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also have shown how to use Systems Manager to keep your Microsoft Windows–based EC2 instances up to date. In Part 2 of this blog post tomorrow, I will show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any CVEs.

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the EC2 forum or the Amazon Inspector forum, or contact AWS Support.

    – Koen

    Pip: digital creation in your pocket from Curious Chip

    Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pip-curious-chip/

    Get your hands on Pip, the handheld Raspberry Pi–based device for aspiring young coders and hackers from Curious Chip.

    A GIF of Pip - Curious Chip - Pip handheld device - Raspberry Pi

Pip is a handheld gaming console from Curious Chip which you can now back on Kickstarter. Using the Raspberry Pi Compute Module 3, Pip allows users to code, hack, and play wherever they are.

    We created Pip so that anyone can tinker with technology. From beginners to those who know more — Pip makes it easy, simple, and fun!

    For gaming

    Pip’s smart design may well remind you of a certain handheld gaming console released earlier this year. With its central screen and detachable side controllers, Pip has a size and shape ideal for gaming.

    A GIF of Pip - Curious Chip - Pip handheld device - Raspberry Pi

    Those who have used a Raspberry Pi with the Raspbian OS might be familiar with Minecraft Pi, a variant of the popular Minecraft game created specifically for Pi users to play and hack for free. Users of Pip will be able to access Minecraft Pi from the portable device and take their block-shaped creations with them wherever they go.

And if that’s not enough, Pip’s Pi brain allows coders to create their own games using Scratch, in addition to giving access to a growing library of games in Curious Chip’s online arcade.

    Digital making

    Pip’s GPIO pins are easily accessible, so that you can expand upon your digital making skills with physical computing projects. Grab your Pip and a handful of jumper leads, and you will be able to connect and control components such as lights, buttons, servomotors, and more!

    A smiling girl with Pip and a laptop

    You can also attach any of the range of HAT add-on boards available on the market, such as our own Sense HAT, or ones created by Pimoroni, Adafruit, and others. And if you’re looking to learn a new coding language, you’re in luck: Pip supports Python, HTML/CSS, JavaScript, Lua, and PHP.

    Maker Pack and add-ons

    Backers can also pledge their funds for additional hardware, such as the Maker Pack, an integrated camera, or a Pip Breadboard Kit.

    PipHAT and Breadboard add-ons - Curious Chip - Pip handheld device - Raspberry Pi

    The breadboard and the optional PipHAT are also compatible with any Raspberry Pi 2 and 3. Nice!

    Curiosity from Curious Chip

    Users of Pip can program their device via Curiosity, a tool designed specifically for this handheld device.

    Pip’s programming tool is called Curiosity, and it’s hosted on Pip itself and accessed via WiFi from any modern web browser, so there’s no software to download and install. Curiosity allows Pip to be programmed using a number of popular programming languages, including JavaScript, Python, Lua, PHP, and HTML5. Scratch-inspired drag-and-drop block programming is also supported with our own Google Blockly–based editor, making it really easy to access all of Pip’s built-in functionality from a simple, visual programming language.

    Back the project

    If you’d like to back Curious Chip and bag your own Pip, you can check out their Kickstarter page here. And if you watch their promo video closely, you may see a familiar face from the Raspberry Pi community.

    Are you planning on starting your own Raspberry Pi-inspired crowd-funded campaign? Then be sure to tag us on social media. We love to see what the community is creating for our little green (or sometimes blue) computer.

    The post Pip: digital creation in your pocket from Curious Chip appeared first on Raspberry Pi.

    I Still Prefer Eclipse Over IntelliJ IDEA

    Post Syndicated from Bozho original https://techblog.bozho.net/still-prefer-eclipse-intellij-idea/

    Over the years I’ve observed an inevitable shift from Eclipse to IntelliJ IDEA. Last year they were almost equal in usage, and I have the feeling things are swaying even more towards IDEA.

    IDEA is like the iPhone of IDEs – its users tell you that “you will feel how much better it is once you get used to it”, “are you STILL using Eclipse??”, “IDEA is so much better, I thought everyone has switched”, etc.

    I’ve been using mostly Eclipse for the past 12 years, but in some cases I did use IDEA – when I was writing Scala, when I was writing Android, and most recently – when Eclipse failed to be ready for the Java 9 release, so after half a day of trying to get it working, I just switched to IDEA until Eclipse finally gets a working Java 9 version (with Maven and the rest of the stuff).

    But I will get back to Eclipse again, soon. And I still prefer it. Not just because of all the key combinations I’ve internalized (you can reuse those in IDEA), but because there are still things I find worse in IDEA. Of course, IDEA has so much more cool features like code improvement suggestions and actually working plugins for everything. But at least some of the problems I see have to do with the more basic development workflow and experience. And you can’t compensate for those with sugarcoating. So here they are:

• Projects are not automatically built (by default), so you can end up with compilation errors that you don’t see until you open a non-compiling file or run a build. And turning the autobuild on makes my machine crawl. I know I need an upgrade, but that’s not the point – not having “build on change” was a huge surprise to me the first time I tried IDEA. I recently complained about that on Twitter, and it turns out “it’s a feature”. The rationale seems to be that if you use refactoring, that shouldn’t happen. Well, there are dozens of cases when it does happen: refactoring by adding a method parameter, by changing the type of a parameter, by removing a parameter (where the IDE can’t infer which parameter is removed based on the types), by changing return types. Also, a change in Maven/Gradle dependencies may introduce compilation issues that you don’t get to see. This is not a reasonable default at all, and I think the performance issues are the only reason it’s still the default. I think this makes the experience much worse.
    • You can have only one project per screen. Maybe there are those small companies with greenfield projects where you only need one. But I’ve never been in a situation, where you don’t at least occasionally need a separate project. Be it an “experiments” one, a “tools” one, or whatever. And no, multi-module maven projects (which IDEA handles well) are not sufficient. So each time you need to step out of your main project, you launch another screen. Apart from the bad usability, it’s double the memory, double the fun.
• Speaking of memory, it seems to take more memory than Eclipse. I don’t have representative benchmarks of that, and I know that my 8 GB RAM home machine is way too small for development nowadays, but still.
• It feels less responsive and clunky. There is some minor delay that I can’t define well, but “I feel it”. I read somewhere that they were excessively repainting the screen elements, so that might be the explanation. Eclipse feels smoother (I know that’s not a proper argument, but I can’t be more precise).
• Due to some extra cleverness, I have “unused methods” and “never assigned fields” all around the project. It uses Spring, so these methods and fields are controller methods and autowired fields. Maybe some Spring plugin would take care of that, but Spring is not the only framework that uses reflection. Even getters and setters on POJOs get the unused warnings. What’s the problem with those warnings? That warnings are devalued. They don’t mean anything now. There isn’t a “yellow” indicator on the class either, so you don’t actually see the number of warnings you have. Eclipse displays warnings better, and it produces far fewer false positives.
    • The call hierarchy is slightly worse. But since that’s the most important IDE feature for me (alongside refactoring), it matters. It doesn’t give you the call hierarchy of default constructors that are not explicitly defined. Also, from what I’ve seen IDEA users don’t often use the call hierarchy feature. “Find usage” I think predates the call hierarchy, and is also much more visible through the UI, so some of the IDEA users don’t even know what a call hierarchy is. And repeatedly do “find usage”. That’s only partly the IDE’s fault.
• No search in the output console. Come on, why do I have an IDE where I have to copy the output and paste it into a text editor in order to search? Now, to clarify, the console does have search. But when I run my (Spring Boot) application, it outputs stuff in a panel at the bottom that is not the console and doesn’t have search.
    • CTRL+arrows by default jumps over whole words, and not camel cased words. This is configurable, but is yet another odd default. You almost always want to be able to traverse your variables word by word (in camel case), rather than skipping over the whole variable (method/class) name.
• A few years ago when I used it for Scala, the project never actually compiled. But I guess that’s more Scala’s fault than the IDE’s.

    Apart from the first two, the rest are not major issues, I agree. But they add up. Ultimately, it’s a matter of personal choice whether you can turn a blind eye to these issues. But I’m getting back to Eclipse again. At some point I will propose improvements in the IntelliJ IDEA backlog and will check it again in a few years, I guess.

    The post I Still Prefer Eclipse Over IntelliJ IDEA appeared first on Bozho's tech blog.

    HackSpace: a new magazine for makers

    Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace/

    HackSpace is the new monthly magazine for people who love to make things and those who want to learn. Grab some duct tape, fire up a microcontroller, ready a 3D printer and hack the world around you!

    This is HackSpace magazine!

HackSpace is the new monthly magazine for the modern maker. Learn more at http://hsmag.cc. Launching on 23 November, the magazine will be packed with projects for fixers and tinkerers of all abilities. We’ll teach you new techniques and give you refreshers on familiar ones, from 3D printing, laser cutting, and woodworking to electronics and the Internet of Things.

    HackSpace magazine

    Each month, HackSpace will feature tutorials and projects to help you build and learn. Whether you’re into 3D printing, woodworking, or weird and wonderful IoT projects, HackSpace will help you get more out of hardware hacking by giving you the ideas and skills to take your builds to the next level.

    HackSpace is a community magazine written by makers for makers, and we want your input. So if there’s something you want to see in the magazine, tell us about it. And if you have a great project that you believe deserves a place within a future issue, then show it to us.

    The front cover of HackSpace magazine issue 1

    Get your free copy

    Eager to get your hands on HackSpace? Sign up for a free copy of issue 1 by visiting the website! You have until 17 November to do so. Moreover, if you’re the manager of a hack- and makerspace, you can also sign up for a whole box of free copies for your members to enjoy by filling in the details of your venue here.

    We want HackSpace magazine to be available to as many people as possible, so we’ll be releasing a free PDF of every monthly issue alongside the print version. You won’t have to wait for us to release articles online — everything will be available free of charge from day one!

    The front cover of HackSpace magazine issue 1

    Get your monthly copy

For those who’d rather have the hard copy of HackSpace for their home library, garden shed, or coffee table, a rolling subscription starts at just £4.00 a month, and costs even less if you’re already a subscriber to The MagPi magazine.

    You will also be able to purchase this new magazine from selected newsagents in the UK from 23 November onward, and in the USA and Australia a few weeks later.

    The post HackSpace: a new magazine for makers appeared first on Raspberry Pi.

    Introducing AWS Directory Service for Microsoft Active Directory (Standard Edition)

    Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/introducing-aws-directory-service-for-microsoft-active-directory-standard-edition/

    Today, AWS introduced AWS Directory Service for Microsoft Active Directory (Standard Edition), also known as AWS Microsoft AD (Standard Edition), which is managed Microsoft Active Directory (AD) that is performance optimized for small and midsize businesses. AWS Microsoft AD (Standard Edition) offers you a highly available and cost-effective primary directory in the AWS Cloud that you can use to manage users, groups, and computers. It enables you to join Amazon EC2 instances to your domain easily and supports many AWS and third-party applications and services. It also can support most of the common use cases of small and midsize businesses. When you use AWS Microsoft AD (Standard Edition) as your primary directory, you can manage access and provide single sign-on (SSO) to cloud applications such as Microsoft Office 365. If you have an existing Microsoft AD directory, you can also use AWS Microsoft AD (Standard Edition) as a resource forest that contains primarily computers and groups, allowing you to migrate your AD-aware applications to the AWS Cloud while using existing on-premises AD credentials.

    In this blog post, I help you get started by answering three main questions about AWS Microsoft AD (Standard Edition):

    1. What do I get?
    2. How can I use it?
    3. What are the key features?

    After answering these questions, I show how you can get started with creating and using your own AWS Microsoft AD (Standard Edition) directory.

    1. What do I get?

    When you create an AWS Microsoft AD (Standard Edition) directory, AWS deploys two Microsoft AD domain controllers powered by Microsoft Windows Server 2012 R2 in your Amazon Virtual Private Cloud (VPC). To help deliver high availability, the domain controllers run in different Availability Zones in the AWS Region of your choice.
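If you script your infrastructure, directory creation can also be done through the API. A minimal Boto3 sketch, where the domain name, password, VPC, and subnet IDs are hypothetical placeholders (the two subnets must be in different Availability Zones):

    import boto3

    ds = boto3.client('ds')

    # Create a Standard Edition managed Microsoft AD.
    ds.create_microsoft_ad(
        Name='corp.example.com',
        ShortName='CORP',
        Password='Sup3rS3cret!',   # placeholder admin password; store securely
        Description='Primary directory',
        Edition='Standard',
        VpcSettings={
            'VpcId': 'vpc-0123456789abcdef0',
            'SubnetIds': ['subnet-0aaaaaaaaaaaaaaa0',
                          'subnet-0bbbbbbbbbbbbbbb0']
        }
    )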

    As a managed service, AWS Microsoft AD (Standard Edition) configures directory replication, automates daily snapshots, and handles all patching and software updates. In addition, AWS Microsoft AD (Standard Edition) monitors and automatically recovers domain controllers in the event of a failure.

    AWS Microsoft AD (Standard Edition) has been optimized as a primary directory for small and midsize businesses with the capacity to support approximately 5,000 employees. With 1 GB of directory object storage, AWS Microsoft AD (Standard Edition) has the capacity to store 30,000 or more total directory objects (users, groups, and computers). AWS Microsoft AD (Standard Edition) also gives you the option to add domain controllers to meet the specific performance demands of your applications. You also can use AWS Microsoft AD (Standard Edition) as a resource forest with a trust relationship to your on-premises directory.

    2. How can I use it?

    With AWS Microsoft AD (Standard Edition), you can share a single directory for multiple use cases. For example, you can share a directory to authenticate and authorize access for .NET applications, Amazon RDS for SQL Server with Windows Authentication enabled, and Amazon Chime for messaging and video conferencing.

The following diagram shows some of the use cases for your AWS Microsoft AD (Standard Edition) directory, including the ability to grant your users access to external cloud applications and allow your on-premises AD users to manage and have access to resources in the AWS Cloud.

Diagram showing some ways you can use AWS Microsoft AD (Standard Edition)

    Use case 1: Sign in to AWS applications and services with AD credentials

    You can enable multiple AWS applications and services such as the AWS Management Console, Amazon WorkSpaces, and Amazon RDS for SQL Server to use your AWS Microsoft AD (Standard Edition) directory. When you enable an AWS application or service in your directory, your users can access the application or service with their AD credentials.

    For example, you can enable your users to sign in to the AWS Management Console with their AD credentials. To do this, you enable the AWS Management Console as an application in your directory, and then assign your AD users and groups to IAM roles. When your users sign in to the AWS Management Console, they assume an IAM role to manage AWS resources. This makes it easy for you to grant your users access to the AWS Management Console without needing to configure and manage a separate SAML infrastructure.

    Use case 2: Manage Amazon EC2 instances

    Using familiar AD administration tools, you can apply AD Group Policy objects (GPOs) to centrally manage your Amazon EC2 for Windows or Linux instances by joining your instances to your AWS Microsoft AD (Standard Edition) domain.

    In addition, your users can sign in to your instances with their AD credentials. This eliminates the need to use individual instance credentials or distribute private key (PEM) files. This makes it easier for you to instantly grant or revoke access to users by using AD user administration tools you already use.

    Use case 3: Provide directory services to your AD-aware workloads

    AWS Microsoft AD (Standard Edition) is an actual Microsoft AD that enables you to run traditional AD-aware workloads such as Remote Desktop Licensing Manager, Microsoft SharePoint, and Microsoft SQL Server Always On in the AWS Cloud. AWS Microsoft AD (Standard Edition) also helps you to simplify and improve the security of AD-integrated .NET applications by using group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD).

    Use case 4: SSO to Office 365 and other cloud applications

    You can use AWS Microsoft AD (Standard Edition) to provide SSO for cloud applications. You can use Azure AD Connect to synchronize your users into Azure AD, and then use Active Directory Federation Services (AD FS) so that your users can access Microsoft Office 365 and other SAML 2.0 cloud applications by using their AD credentials.

    Use case 5: Extend your on-premises AD to the AWS Cloud

    If you already have an AD infrastructure and want to use it when migrating AD-aware workloads to the AWS Cloud, AWS Microsoft AD (Standard Edition) can help. You can use AD trusts to connect AWS Microsoft AD (Standard Edition) to your existing AD. This means your users can access AD-aware and AWS applications with their on-premises AD credentials, without needing you to synchronize users, groups, or passwords.

    For example, your users can sign in to the AWS Management Console and Amazon WorkSpaces by using their existing AD user names and passwords. Also, when you use AD-aware applications such as SharePoint with AWS Microsoft AD (Standard Edition), your logged-in Windows users can access these applications without needing to enter credentials again.

    3. What are the key features?

    AWS Microsoft AD (Standard Edition) includes the features detailed in this section.

    Extend your AD schema

    With AWS Microsoft AD, you can run customized AD-integrated applications that require changes to your directory schema, which defines the structures of your directory. The schema is composed of object classes such as user objects, which contain attributes such as user names. AWS Microsoft AD lets you extend the schema by adding new AD attributes or object classes that are not present in the core AD attributes and classes.

    For example, if you have a human resources application that uses employee badge color to assign specific benefits, you can extend the schema to include a badge color attribute in the user object class of your directory. To learn more, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

    Create user-specific password policies

    With user-specific password policies, you can apply specific restrictions and account lockout policies to different types of users in your AWS Microsoft AD (Standard Edition) domain. For example, you can enforce strong passwords and frequent password change policies for administrators, and use less-restrictive policies with moderate account lockout policies for general users.

    Add domain controllers

    You can increase the performance and redundancy of your directory by adding domain controllers. This can help improve application performance by enabling directory clients to load-balance their requests across a larger number of domain controllers.
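For instance, a hedged sketch of scaling out via the API, with a hypothetical directory ID:

    import boto3

    ds = boto3.client('ds')

    # Scale the directory from two to three domain controllers.
    ds.update_number_of_domain_controllers(
        DirectoryId='d-1234567890',
        DesiredNumber=3)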

    Encrypt directory traffic

    You can use AWS Microsoft AD (Standard Edition) to encrypt Lightweight Directory Access Protocol (LDAP) communication between your applications and your directory. By enabling LDAP over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS, you encrypt your LDAP communications end to end. This helps you to protect sensitive information you keep in your directory when it is accessed over untrusted networks.

    Improve the security of signing in to AWS services by using multi-factor authentication (MFA)

    You can improve the security of signing in to AWS services, such as Amazon WorkSpaces and Amazon QuickSight, by enabling MFA in your AWS Microsoft AD (Standard Edition) directory. With MFA, your users must enter a one-time passcode (OTP) in addition to their AD user names and passwords to access AWS applications and services you enable in AWS Microsoft AD (Standard Edition).

    Get started

    To get started, use the Directory Service console to create your first directory with just a few clicks. If you have not used Directory Service before, you may be eligible for a 30-day limited free trial.

    Summary

    In this blog post, I explained what AWS Microsoft AD (Standard Edition) is and how you can use it. With a single directory, you can address many use cases for your business, making it easier to migrate and run your AD-aware workloads in the AWS Cloud, provide access to AWS applications and services, and connect to other cloud applications. To learn more about AWS Microsoft AD, see the Directory Service home page.

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

    – Peter