Tag Archives: alarms

Circadia Sunrise Lamp Alarm

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/circadia-sunrise-lamp/

Florian loves sleeping and, like many of us, he doesn’t enjoy waking up. Alarm clocks irritate him, and radio alarms can be a musical disappointment, depending on the station.

For many, the lack of sunlight during winter months makes waking up even more of a struggle, with no bright glare through the curtains helping to prise our eyelids apart.

Iiiii… don’t knooooow-aaaaa… what the words reaaaaally aaaaaare…

Picking up on the concept of sunrise alarm clocks, and wanting to incorporate music into the idea, Florian decided to build the Circadia Sunrise Lamp. 

Video: Circadia – sunrise medley (Circadia sunrise lamp project: https://sites.google.com/site/fpgaandco/sunrise)

Standing just under two metres tall, the lamp consists of three parts: the top section, housing a 3D-printed omnidirectional speaker system and orbiting text display; the midsection, home to 288 independently controlled RGB NeoPixel LEDs; and the bottom section, snugly fitting a midwoofer, Raspberry Pi, audio amp, and power supplies.


Florian spent two years, on and off, working on the lamp and it’s fair to say that once he started getting to grips with the Python code, and was able to see the visual results, he became hooked on adding more and more themes. From Manila Sunrise to Sumatra Rain, each theme boasts its own colour cycle and soundtrack, all lasting approximately 40 minutes from start to refreshingly wonderful complete awakening. Florian writes:

[The lamp] makes it quite a bit easier for me to get out of bed every morning (with a silly grin on my face). It’s really surprisingly effective and hard to describe. Rather than being resentful that it is already time to get up, I am now more inclined to be eager to get going. If someone had told me how well this actually works I would have put a sunrise lamp in my bedroom years ago. 

But he didn’t stop there.

As the lamp’s main purpose is to wake Florian up in the morning, it was inevitably spending the majority of the day idle. To tackle this, Florian incorporated a music-reactive light show, plus an interactive version of Tetris because, to quote from makers the world over, “Why not?”

Video: Circadia – Tetris (Circadia sunrise lamp project: https://sites.google.com/site/fpgaandco/sunrise)

Florian, in all his brilliant maker glory, has provided an in-depth blog of the Circadia Sunrise Lamp, documenting the process, the successes, and the failures of the build, as well as his continued development of new themes.

We’ve seen a few different sunrise lamps, alarm clocks, and light shows over the years, all using a Raspberry Pi. But this one, combining elegant physical style with well-coded functionality, is certainly one of our favourites.

The post Circadia Sunrise Lamp Alarm appeared first on Raspberry Pi.

New – Auto Scaling for EC2 Spot Fleets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-ec2-spot-fleets/

The EC2 Spot Fleet model (see Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with one Request for more information) allows you to create a fleet of EC2 instances with a single request. You simply specify the fleet’s target capacity, enter a bid price per hour, and choose the instance types that you would like to have as part of your fleet.

Behind the scenes, AWS will maintain the desired target capacity (expressed in terms of instances or a vCPU count) by launching Spot instances that result in the best prices for you. Over time, as instances in the fleet are terminated due to rising prices, replacement instances will be launched using the specifications that result in the lowest price at that point in time.
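
To make this concrete, here is a minimal boto3 sketch of a "request and maintain" Spot Fleet request of the kind described above; the AMI, key pair, subnet, and IAM fleet role are placeholders rather than values from the original post.

    # Minimal Spot Fleet request sketch: one fleet, two candidate instance types.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_fleet(
        SpotFleetRequestConfig={
            "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role",  # placeholder role
            "SpotPrice": "0.10",     # bid price per instance hour
            "TargetCapacity": 2,     # desired capacity (instances, or vCPUs if you weight the specs)
            "Type": "maintain",      # "request and maintain", so the fleet can later be scaled
            "LaunchSpecifications": [
                {
                    "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
                    "InstanceType": "m4.large",
                    "KeyName": "my-key",                     # placeholder key pair
                    "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
                },
                {
                    "ImageId": "ami-0123456789abcdef0",
                    "InstanceType": "c4.large",
                    "KeyName": "my-key",
                    "SubnetId": "subnet-0123456789abcdef0",
                },
            ],
        }
    )

    print("Spot Fleet request id:", response["SpotFleetRequestId"])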

New Auto Scaling
Today we are enhancing the Spot Fleet model with the addition of Auto Scaling. You can now arrange to scale your fleet up and down based on an Amazon CloudWatch metric. The metric can originate from an AWS service such as EC2, Amazon EC2 Container Service, or Amazon Simple Queue Service (SQS). Alternatively, your application can publish a custom metric and you can use it to drive the automated scaling. Either way, using these metrics to control the size of your fleet gives you very fine-grained control over application availability, performance, and cost even as conditions and loads change. Here are some ideas to get you started:

  • Containers – Scale container-based applications running on Amazon ECS using CPU or memory usage metrics.
  • Batch Jobs – Scale queue-driven batch jobs based on the number of messages in an SQS queue.
  • Spot Fleets – Scale a fleet based on Spot Fleet metrics such as MaxPercentCapacityAllocation.
  • Web Service – Scale web services based on measured response time and average requests per second.

You can set up Auto Scaling using the Spot Fleet Console, the AWS Command Line Interface (CLI), AWS CloudFormation, or by making API calls using one of the AWS SDKs.
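
As a rough illustration of the SDK route, the sketch below registers an existing Spot Fleet request as an Application Auto Scaling target and attaches a scale-up and a scale-down step policy; the fleet request ID and the adjustment sizes are illustrative placeholders.

    # Register the fleet's TargetCapacity as a scalable target, then add step policies.
    import boto3

    aas = boto3.client("application-autoscaling", region_name="us-east-1")

    fleet_id = "sfr-11111111-2222-3333-4444-555555555555"   # placeholder Spot Fleet request id
    resource_id = "spot-fleet-request/" + fleet_id

    aas.register_scalable_target(
        ServiceNamespace="ec2",
        ResourceId=resource_id,
        ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
        MinCapacity=2,
        MaxCapacity=10,
    )

    scale_up = aas.put_scaling_policy(
        PolicyName="ScaleUp",
        ServiceNamespace="ec2",
        ResourceId=resource_id,
        ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 3}],
        },
    )

    scale_down = aas.put_scaling_policy(
        PolicyName="ScaleDown",
        ServiceNamespace="ec2",
        ResourceId=resource_id,
        ScalableDimension="ec2:spot-fleet-request:TargetCapacity",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [{"MetricIntervalUpperBound": 0, "ScalingAdjustment": -3}],
        },
    )

    # The returned policy ARNs are what CloudWatch alarms invoke as their actions.
    print(scale_up["PolicyARN"], scale_down["PolicyARN"])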

I started by launching a fleet. I used the request type Request and maintain in order to be able to scale the fleet up and down:

My fleet was up and running within a minute or so:

Then (for illustrative purposes) I created an SQS queue, put some messages in it, and defined a CloudWatch alarm (AppQueueBackingUp) that would fire if there were 10 or more messages visible in the queue:

I also defined an alarm (AppQueueNearlyEmpty) that would fire if the queue was just about empty (2 messages or fewer).

Finally, I attached the alarms to the ScaleUp and ScaleDown policies for my fleet:
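
For readers who prefer to script the same setup, here is a rough boto3 sketch of the two queue-depth alarms, with the fleet's scaling policy ARNs (placeholders below) supplied as alarm actions to mirror the attachment step.

    # Two SQS queue-depth alarms: backed up past 10 messages, or drained to 2 or fewer.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    queue_dimension = [{"Name": "QueueName", "Value": "my-app-queue"}]  # placeholder queue name

    cw.put_metric_alarm(
        AlarmName="AppQueueBackingUp",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=queue_dimension,
        Statistic="Average",
        Period=60,
        EvaluationPeriods=1,
        Threshold=10,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:placeholder:ScaleUp"],
    )

    cw.put_metric_alarm(
        AlarmName="AppQueueNearlyEmpty",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=queue_dimension,
        Statistic="Average",
        Period=60,
        EvaluationPeriods=1,
        Threshold=2,
        ComparisonOperator="LessThanOrEqualToThreshold",
        AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:placeholder:ScaleDown"],
    )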

Before I started writing this post, I put 5 messages into the SQS queue. With the fleet launched and the scaling policies in place, I added 5 more, and then waited for the alarm to fire:

Then I checked in on my fleet, and saw that the capacity had been increased as expected. This was visible in the History tab (“New targetCapacity: 5”):

To wrap things up I purged all of the messages from my queue, watered my plants, and returned to find that my fleet had been scaled down as expected (“New targetCapacity: 2”):

Available Now
This new feature is available now and you can start using it today in all regions where Spot instances are supported.


Jeff;

 

Run Containerized Microservices with Amazon EC2 Container Service and Application Load Balancer

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/

This is a guest post from Sathiya Shunmugasundaram, Gnani Dathathreya, and Jeff Storey from the Capital One Technology team.

—————–

At Capital One, we are rapidly embracing cloud-native microservices architectures and applying them to both existing and new workloads. To advance microservices adoption, increase the efficiency of our cloud resources, and decouple the application layer from the underlying infrastructure, we are starting to use Docker to containerize the workloads and Amazon EC2 Container Service (Amazon ECS) to manage them.

Docker gives us environment consistency and allows us to spin up new containers in the location of our choice (Dev / QA / Performance / Prod) in seconds, versus the minutes, hours, or days it used to take us in the past.

Amazon ECS gives us a platform to manage Docker containers with no hassle. We chose ECS because of its simplicity in deploying and managing containers. Our API platform is an early adopter and ECS-based deployments quickly became the norm for managing the lifecycle of stateless workloads.

Container orchestration has traditionally been challenging. With the Elastic Load Balancing Classic Load Balancer, we had limitations routing to multiple ports on the same server and were unable to route to services based on context, which meant that we needed one load balancer per service. Using open source software like Consul, NGINX, and Registrator, it was possible to achieve dynamic service discovery and context-based routing, but at the cost of the added complexity and expense of running these additional components.

This post shows how the arrival of the Application Load Balancer has significantly simplified Docker-based deployments on ECS and enabled delivery of microservices in the cloud with enterprise-class capabilities like service discovery, health checks, and load balancing.

Overview of Application Load Balancer

With the announcement of the new Application Load Balancer, we can take advantage of several out-of-the-box features that are readily integrated with ECS. With the dynamic port mapping option for containers, we can simply register a service with a load balancer, and ECS transparently manages the registration and de-registration of Docker containers. We no longer need to know the host port ahead of time, as the load balancer automatically detects it and dynamically reconfigures itself. In addition to the port mapping, we also get all the features of traditional load balancers, like health checking, connection draining, and access logs, to name a few.

Similar to EC2 Auto Scaling, ECS also has the ability to auto scale services based on CloudWatch alarms. This functionality is critical as it allows us to scale in or scale out based on demand. This feature, coupled with the new Application Load Balancer, gives us fully featured container orchestration. The pace at which we can now spin up new applications on ECS has greatly improved and we can spend much less time managing orchestration tools.

The Application Load Balancer also introduces path-based routing. This feature allows us to easily map different URL paths to different services. In a traditional monolithic application, URL paths are often used to denote different parts of the application, for example, http://www.example.com/service1 and http://www.example.com/service2.

Traditionally this was done using a context root in an application server or by using different load balancers for each service. With the new path-based routing, these parts of the application can be split into individual ECS-backed services without changing the existing URL patterns. This makes migrating applications seamless, as clients can continue to call the same URLs. A large monolithic application with many subcontexts, like www.example.com/orders and www.example.com/inventory, can be refactored into smaller microservices, and each such path can be directed to a different target group of servers, such as an ECS service with Docker containers.
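
As an illustration (not code from the original post), the following boto3 sketch adds two path-based rules to an existing Application Load Balancer listener, sending /orders* and /inventory* traffic to different ECS-backed target groups; all ARNs are placeholders.

    # Path-based routing: one listener, two rules, two target groups.
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/placeholder"
    orders_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/placeholder"
    inventory_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/inventory/placeholder"

    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": orders_tg}],
    )

    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=20,
        Conditions=[{"Field": "path-pattern", "Values": ["/inventory*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": inventory_tg}],
    )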

Key features of Application Load Balancers include:

  • Path-based routing – URL-based routing policies enable using the same ELB URL to route to different microservices
  • Multiple ports – Routing to multiple ports on the same server
  • AWS integration – Integrated with many AWS services, such as ECS, IAM, Auto Scaling, and CloudFormation
  • Application monitoring – Improved metrics and health checks for the application

Core components of Application Load Balancers include:

  • Load balancer – The entry point for clients
  • Listener – Listens for requests from clients on a specific protocol/port and forwards them to one or more target groups based on rules
  • Rule – Determines how to route a request: based on a path condition and priority, it matches the request to one or more target groups
  • Target – The entity that runs the backend servers; currently, EC2 instances are the available target type. The same EC2 instance can be registered multiple times with different ports
  • Target group – Identifies a set of backend targets to which requests can be routed based on a rule. Health checks can be defined per target group, and the same load balancer can have many target groups

ALB Components

Sample application architecture

In the following example, we show two services with three tasks deployed on two ECS container instances. This shows the ability to route to multiple ports on the same host. Also, there is only one Application Load Balancer that provides path-based routing for both ECS services, simplifying the architecture and reducing costs.

Sample App

Configuring Application Load Balancers

The following steps create and configure an Application Load Balancer using the AWS Management Console. The same steps can also be performed with the AWS CLI or an AWS SDK; a rough SDK sketch of the final service-registration step appears after the walkthrough.

    1. Create an Application Load Balancer using the AWS Console
      • Log in to the EC2 console (https://console.aws.amazon.com/ec2)
      • Select Load Balancers
      • Select Create Load Balancer
      • ALB Console

      • Choose Application Load Balancer
    2. Configure Load Balancer

      • For Name, type a name for your load balancer.
      • For Scheme, an Internet-facing load balancer routes requests from clients over the Internet to targets. An internal load balancer routes requests to targets using private IP addresses.
      • For Listeners, the default is a listener that accepts HTTP traffic on port 80. You can keep the default listener settings, modify the protocol or port of the listener, or choose Add to add another listener.
      • For VPC, select the same VPC that you used for the container instances on which you intend to run your service.
      • For Available subnets, select at least two subnets from different Availability Zones, and choose the icon in the Actions column.
    3. Configure Security Groups
      • You must assign a security group to your load balancer that allows inbound traffic to the ports that you specified for your listeners.

    4. Configure Routing
      • For Target group, keep the default, New target group.
      • For Name, type a name for the new target group.
      • Set Protocol and Port as needed.
      • For Health checks, keep the default health check settings.

      ALB Routing

    5. Register Targets
      • Your load balancer distributes traffic between the targets that are registered to its target groups. When you associate a target group with an Amazon ECS service, Amazon ECS automatically registers and deregisters containers with your target group. Because Amazon ECS handles target registration, you do not add targets to your target group at this time.

      ALB Targets

      • Click Next:Review
      • Click Create
    6. Registering Docker containers as targets
      • If you don’t already have an ECS cluster, open the Amazon ECS console first run wizard at https://console.aws.amazon.com/ecs/home#/firstRun.
      • From the ECS cluster’s Services tab, click Create
      • Create Service

      • Provide the Task definition for service
      • Provide a Service Name and number of tasks
      • Click the Configure ELB button
      • Choose Application Load Balancer
      • ECS Service ALB

      • Choose ecsServiceRole as the IAM role
      • Choose the Application Load Balancer created above
      • Select a container that you want the load balancer to use and click Add to ELB
      • ECS Service ALB

      • Select the target group name that was created above
      • Click Save and then Create Service
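
For reference, here is the rough SDK sketch mentioned before the walkthrough: it creates an ECS service that registers its containers with the Application Load Balancer's target group. The cluster, task definition, and target group ARN are placeholders.

    # Create an ECS service fronted by an ALB target group.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.create_service(
        cluster="my-cluster",                 # placeholder cluster name
        serviceName="web-service",
        taskDefinition="web-task:1",          # placeholder task definition family:revision
        desiredCount=3,
        role="ecsServiceRole",                # IAM role that lets ECS register targets with the ALB
        loadBalancers=[
            {
                "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/placeholder",
                "containerName": "web",       # container name from the task definition
                "containerPort": 80,          # container port; the host port can remain dynamic
            }
        ],
    )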

Cleaning up

  • Using the ECS Console, go to your Cluster and Service, update the number of tasks to 0, and then delete the service.
  • Using the EC2 Console, select the load balancer created above and delete it.

Conclusion
Docker has given our developers a simplified application automation mechanism. Amazon ECS and Application Load Balancers make it easy to deliver these applications without needing to manage dynamic service discovery, load balancing, or container orchestration. Using ECS and Application Load Balancers, new services can be deployed in less than 30 minutes and existing services can be updated with newer versions in less than a minute. ECS automatically takes care of rolling updates, and the Application Load Balancer takes care of registering new versions quickly and deregistering existing containers gracefully. This not only improves the agility of teams, but also reduces the overall time to market.

New MagPi Essentials book: simple electronics

Post Syndicated from Russell Barnes original https://www.raspberrypi.org/blog/new-magpi-essentials-book-simple-electronics/

Less than a month has passed since we released Hacking & Making in Minecraft and we’re back again with our seventh Essentials book!

Simple Electronics with GPIO Zero is dedicated to helping you build your own electronics projects in easy steps – everything from push buttons to Raspberry Pi robots, and from laser-powered trip wires to motion-sensing alarms.


Those GPIO pins aren’t as daunting as they might first appear!

The book boasts 12 chapters and 100+ pages of GPIO Zero – but wait, hang on… just download the free PDF and get reading already! If you can’t grab it straight away, here are a few of the chapter highlights:

  • Program LED lights
  • Add push buttons to your project
  • Build a motion-sensing alarm
  • Create your own distance rangefinder
  • Make a laser-powered tripwire
  • Build a Raspberry Pi robot
  • and much more!

We think our latest Essentials book is a great introduction to using the GPIO pins on your Raspberry Pi and programming them with the fab GPIO Zero Python library. It unlocks a whole new world of potential for your projects and it’s much easier to learn than you might think!
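
As a taste of the kind of project the book covers, here is a minimal motion-sensing alarm written with GPIO Zero; the pin numbers are just an example, so adjust them to match your own wiring.

    # Sound a buzzer whenever the PIR sensor detects movement.
    from gpiozero import MotionSensor, Buzzer
    from signal import pause

    pir = MotionSensor(4)     # PIR motion sensor on GPIO 4 (example pin)
    buzzer = Buzzer(17)       # buzzer on GPIO 17 (example pin)

    pir.when_motion = buzzer.on       # raise the alarm while motion is detected
    pir.when_no_motion = buzzer.off   # silence it when the room is still again

    pause()                           # keep the script alive, waiting for events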

You can also buy Simple Electronics with GPIO Zero in our app for Android and iOS. The print version is coming soon too. In fact, we’re just off to have a word with the printers now…

Simple Electronics with GPIO Zero is freely licensed under Creative Commons (BY-NC-SA 3.0). You can download the PDF for free now and forever, but buying digitally supports the Raspberry Pi Foundation’s charitable aims.

The post New MagPi Essentials book: simple electronics appeared first on Raspberry Pi.

Powerful AWS Platform Features, Now for Containers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/powerful-aws-platform-features-now-for-containers/

Containers are great but they come with their own management challenges. Our customers have been using containers on AWS for quite some time to run workloads ranging from microservices to batch jobs. They told us that managing a cluster, including the state of the EC2 instances and containers, can be tricky, especially as the environment grows. They also told us that integrating the capabilities you get with the AWS platform, such as load balancing, scaling, security, monitoring, and more, with containers is a key requirement. Amazon ECS was designed to meet all of these needs and more.

We created Amazon ECS  to make it easy for customers to run containerized applications in production. There is no container management software to install and operate because it is all provided to you as a service. You just add the EC2 capacity you need to your cluster and upload your container images. Amazon ECS takes care of the rest, deploying your containers across a cluster of EC2 instances and monitoring their health. Customers such as Expedia and Remind have built Amazon ECS into their development workflow, creating PaaS platforms on top of it. Others, such as Prezi and Shippable, are leveraging ECS to eliminate operational complexities of running containers, allowing them to spend more time delivering features for their apps.

AWS has highly reliable and scalable fully-managed services for load balancing, auto scaling, identity and access management, logging, and monitoring. Over the past year, we have continued to natively integrate the capabilities of the AWS platform with your containers through ECS, giving you the same capabilities you are used to on EC2 instances.

Amazon ECS recently delivered container support for application load balancing (Today), IAM roles (July), and Auto Scaling (May). We look forward to bringing more of the AWS platform to containers over time.

Let’s take a look at the new capabilities!

Application Load Balancing
Load balancing and service discovery are essential parts of any microservices architecture. Because Amazon ECS uses Elastic Load Balancing, you don’t need to manage and scale your own load balancing layer. You also get direct access to other AWS services that support ELB such as AWS Certificate Manager (ACM) to automatically manage your service’s certificates and Amazon API Gateway to authenticate callers, among other features.

Today, I am happy to announce that ECS supports the new application load balancer, a high-performance load balancing option that operates at the application layer and allows you to define content-based routing rules. The application load balancer includes two features that simplify running microservices on ECS: dynamic ports and the ability for multiple services to share a single load balancer.

Dynamic ports make it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
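
To illustrate the idea (this is a sketch, not the exact task definition from the console walkthrough), the boto3 call below registers a task definition that opts into dynamic ports by setting the host port to 0; the family name and image are placeholders.

    # A task definition with a dynamic host port (hostPort 0).
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.register_task_definition(
        family="web-task",                 # placeholder family name
        containerDefinitions=[
            {
                "name": "web",
                "image": "nginx:latest",   # placeholder image
                "memory": 128,
                "portMappings": [
                    {"containerPort": 80, "hostPort": 0}  # 0 = pick an unused host port at schedule time
                ],
            }
        ],
    )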

Previously, there was a one-to-one mapping between ECS services and load balancers. Now, a load balancer can be shared with multiple services, using path-based routing. Each service can define its own URI, which can be used to route traffic to that service. In addition, you can create an environment variable with the service’s DNS name, supporting basic service discovery. For example, a stock service could be http://example.com/stock and a weather service could be http://example.com/weather, both served from the same load balancer. A news portal could then use the load balancer to access both the stock and weather services.

IAM Roles for ECS Tasks
In Amazon ECS, you have always been able to use IAM roles for your Amazon EC2 container instances to simplify the process of making API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.

With the introduction of the recently launched IAM roles for ECS tasks, you can secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to, let’s say, S3 and another task that uses an IAM role to access a DynamoDB table, both running on the same EC2 instance.
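
As a rough sketch of what this looks like with the SDK: create a role that ecs-tasks.amazonaws.com can assume, attach a policy, and reference the role from the task definition via taskRoleArn. The role name and the S3 read-only managed policy are illustrative choices, not requirements.

    # Create a task-level IAM role and reference it from a task definition.
    import json
    import boto3

    iam = boto3.client("iam")

    role = iam.create_role(
        RoleName="s3-reader-task-role",   # placeholder role name
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "ecs-tasks.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }],
        }),
    )

    iam.attach_role_policy(
        RoleName="s3-reader-task-role",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example managed policy
    )

    # The role is assigned per task, not per container instance.
    ecs = boto3.client("ecs", region_name="us-east-1")
    ecs.register_task_definition(
        family="s3-reader-task",
        taskRoleArn=role["Role"]["Arn"],
        containerDefinitions=[{"name": "app", "image": "amazonlinux", "memory": 128}],
    )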

Service Auto Scaling
The third feature I want to highlight is Service Auto Scaling. With Service Auto Scaling and Amazon CloudWatch alarms, you can define scaling policies to scale your ECS services in the same way that you scale your EC2 instances up and down. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster when demand is lower, all automatically and in real time.

You simply choose the desired, minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone–aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.

Available Now
These features are available now and you can start using them today!


Jeff;

How AWS Powered Amazon’s Biggest Day Ever

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/how-aws-powered-amazons-biggest-day-ever/

The second annual Prime Day was another record-breaking success for Amazon, surpassing the global order volumes of Black Friday, Cyber Monday, and Prime Day 2015.

According to a report published by Slice Intelligence, Amazon accounted for 74% of all US consumer e-commerce on Prime Day 2016. This one-day-only global shopping event, exclusively for Amazon Prime members, saw record-high levels of traffic, including double the number of orders on the Amazon Mobile App compared to Prime Day 2015. Members around the world purchased more than 2 million toys, more than 1 million pairs of shoes, and more than 90,000 TVs in one day (see Amazon’s Prime Day is the Biggest Day Ever for more stats). An event of this scale requires infrastructure that can easily scale up to match the surge in traffic.

Scaling AWS-Style
The Amazon retail site uses a fleet of EC2 instances to handle web traffic. To serve the massive increase in customer traffic for Prime Day, the Amazon retail team increased the size of their EC2 fleet, adding capacity that was equal to all of AWS and Amazon.com back in 2009. Resources were drawn from multiple AWS regions around the world.

The morning of July 11th was cool and a few morning clouds blanketed Amazon’s Seattle headquarters. As 8 AM approached, the Amazon retail team was ready for the first of 10 global Prime Day launches. Across the Pacific, it was almost midnight. In Japan, mobile phones, tablets, and laptops glowed in anticipation of Prime Day deals. As traffic began to surge in Japan, CloudWatch metrics reflected the rising fleet utilization as CloudFront endpoints and ElastiCache nodes lit up with high-velocity mobile and web requests. This wave of traffic then circled the globe, arriving in Europe and the US over the course of 40 hours and generating 85 billion clickstream log entries. Orders surpassed Prime Day 2015 by more than 60% worldwide and more than 50% in the US alone. On the mobile side, more than one million customers downloaded and used the Amazon Mobile App for the first time.

As part of Prime Day, Amazon.com saw a significant uptick in their use of 38 different AWS services including:

To further illustrate the scale of Prime Day and the opportunity for other AWS customers to host similar large-scale, single-day events, let’s look at Prime Day through the lens of several AWS services:

  • Amazon Mobile Analytics events increased 1,661% compared to the same day the previous week.
  • Amazon’s use of CloudWatch metrics increased 400% worldwide on Prime Day, compared to the same day the previous week.
  • DynamoDB served over 56 billion extra requests worldwide on Prime Day compared to the same day the previous week.

Running on AWS
The AWS team treats Amazon.com just like any of our other big customers. The two organizations are business partners and communicate through defined support plans and channels. Sticking to this somewhat formal discipline helps the AWS team to improve the support plans and the communication processes for all AWS customers.

Running the Amazon website and mobile app on AWS makes short-term, large scale global events like Prime Day technically feasible and economically viable. When I joined Amazon.com back in 2002 (before the site moved to AWS), preparation for the holiday shopping season involved a lot of planning, budgeting, and expensive hardware acquisition. This hardware helped to accommodate the increased traffic, but the acquisition process meant that Amazon.com sat on unused and lightly utilized hardware after the traffic subsided. AWS enables customers to add the capacity required to power big events like Prime Day, and enables this capacity to be acquired in a much more elastic, cost-effective manner. All of the undifferentiated heavy lifting required to create an online event at this scale is now handled by AWS so the Amazon retail team can focus on delivering the best possible experience for its customers.

Lessons Learned
The Amazon retail team was happy that Prime Day was over, and ready for some rest, but they shared some of what they learned with me:

  • Prepare – Planning and testing are essential. Use historical metrics to help forecast and model future traffic, and to estimate your resource needs accordingly. Prepare for failures with GameDay exercises – intentionally breaking various parts of the infrastructure and the site in order to simulate several failure scenarios (read Resilience Engineering – Learning to Embrace Failure to learn more about GameDay exercises at Amazon).
  • Automate – Reduce manual efforts and automate everything. Take advantage of services that can scale automatically in response to demand – Route53 to automatically scale your DNS, Auto Scaling to scale your EC2 capacity according to demand, and Elastic Load Balancing for automatic failover and to balance traffic across multiple regions and availability zones (AZs).
  • Monitor – Use Amazon CloudWatch metrics and alarms liberally. CloudWatch monitoring helps you stay on top of your usage to ensure the best experience for your customers.
  • Think Big – Using AWS gave the team the resources to create another holiday season. Confidence in your infrastructure is what enables you to scale your big events.

As I mentioned before, nothing is stopping you from envisioning and implementing an event of this scale and scope!

I would encourage you to think big, and to make good use of our support plans and services. Our Solutions Architects and Technical Account Managers are ready to help, as are our APN Consulting Partners. If you are planning for a large-scale one-time event, give us a heads-up and we’ll work with you before and during the event.


Jeff;

PS – What did you buy on Prime Day?

Emoji Ticker

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/emoji-ticker/

What was my reaction when I first saw this scrolling emoji ticker project? 😍🙌👏


Up until recently I’ve been a bit reluctant to adopt emoji characters in my everyday communication. But ever since they’ve been elevated to greater prominence on phones and on services such as Slack, I’ve given in completely. If I had the creative energy and patience, I’d write this whole post with emoji (though it mightn’t make it past Liz’s editorial discretion)!

This is where Dean comes in. Dean is a community member who helped us out at Maker Faire Bay Area in 2015. Normally a web developer, he rolled up his sleeves and took on the responsibility for a fun physical project for his company’s office. He works at Yeti; they built the app Chelsea Handler: Gotta Go!, which they describe as “a way to generate excuses and set them as alarms. It’s the perfect solution for bad dates, awkward convos with your in-laws, boring meetings and whatever else you might want to hit the eject button on.”


Each hilarious excuse has its own emoji character, and Dean wanted the office’s Raspberry Pi-driven LED matrix ticker to show which emojis were being used by the users of the app. After some turbulence with wiring up the hardware and some clever web implementation, he was lighting up the office with 🐻 👮 and 📞, using a blend of Python for the network requests and C for driving the LED matrix.

Dean documented the experience on the Yeti blog, where he offers a few takeaways: collaborate, use documentation but stay flexible, and know when to ask for help. His most valuable lesson? He says it was “the value of code modularity, or the practice of breaking a project into function-specific components (i.e. functions for rendering on the LED matrix, classes for communicating with the Gotta Go server).”

Dean, 🙏 for sharing!

The post Emoji Ticker appeared first on Raspberry Pi.

Automatic Scaling with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/

My colleague Mayank Thakkar sent a nice guest post that describes how to scale Amazon ECS clusters and services.

You’ve always had the option to scale clusters automatically with Amazon EC2 Container Service (Amazon ECS). Now, with the new Service Auto Scaling feature and Amazon CloudWatch alarms, you can use scaling policies to scale ECS services as well. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster when demand is lower, all automatically and in real time.

This post shows how you can use this new feature, along with automatic cluster resizing, to match demand.

Service Auto Scaling overview

Out-of-the-box scaling for ECS services has been a top request and today we are pleased to announce this feature. The process to create services that scale automatically has been made very easy, and is supported by the ECS console, CLI, and SDK. You choose the desired, minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone–aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.

In addition to the above, ECS also makes it very easy to run your ECS tasks on a multi-AZ cluster. The Auto Scaling group for the ECS cluster manages the availability of the cluster across multiple zones to give you the resiliency and dependability that you are looking for, and ECS manages the task distribution across these zones, allowing you to focus on your business logic.

The benefits include:

  1. Match deployed capacity to the incoming application load, using scaling policies for both the ECS service and the Auto Scaling group in which the ECS cluster runs. Scaling up cluster instances and service tasks when needed and safely scaling them down when demand subsides, keeps you out of the capacity guessing game. This provides you high availability with lowered costs in the long run.
  2. Multi-AZ clusters make your ECS infrastructure highly available, keeping it safeguarded from potential zone failure. The Availability Zone–aware ECS scheduler manages, scales, and distributes the tasks across the cluster, thus making your architecture highly available.

Service Auto Scaling Walkthrough

This post walks you through the process of using these features and creating a truly scalable, highly available, microservices architecture. To achieve these goals, we show how to:

  1. Spin up an ECS cluster, within an Auto Scaling group, spanning 2 (or more) zones.
  2. Set up an ECS service over the cluster and define the desired number of tasks.
  3. Configure an Elastic Load Balancing load balancer in front of the ECS service. This serves as an entry point for the workload.
  4. Set up CloudWatch alarms to scale in and scale out the ECS service.
  5. Set up CloudWatch alarms to scale in and scale out the ECS cluster. (Note that these alarms are separate from the ones created in the previous step.)
  6. Create scaling policies for the ECS service, defining scaling actions while scaling out and scaling in.
  7. Create scaling policies for the Auto Scaling group in which the ECS cluster is running. These policies are used to scale in and scale out the ECS cluster.
  8. Test the highly available, scalable ECS service, along with the scalable cluster by gradually increasing the load and followed by decreasing the load.

In this post, we walk you through setting up one ECS service on the cluster. However, this pattern can also be applied to multiple ECS services running on the same cluster.

Please note: You are responsible for any AWS costs incurred as a result of running this example.

Conceptual diagram

Set up Service Auto Scaling with ECS

Before you set up the scaling, you should have an ECS service running on a multi-AZ (2 zone) cluster, fronted by a load balancer.

Set up CloudWatch alarms

  1. In the Amazon CloudWatch console, set up a CloudWatch alarm, to be used during scale in and scale out of the ECS service. This walkthrough uses CPUUtilization (from the ECS, ClusterName, ServiceName category), but you can use other metrics if you wish. (Note: Alternatively, you can set up these alarms in the ECS Console when configuring scaling policies for your service.)
  2. Name the alarm ECSServiceScaleOutAlarm and set the threshold for CPUUtilization to 75.
  3. Under the Actions section, delete the notifications. For this walkthrough, you’ll configure an action through the ECS and Auto Scaling consoles.
  4. Repeat the two steps above to create the scale in alarm, setting the CPUUtilization threshold to 25 and the operator to <=.
  5. In the Alarms section, you should see your scale in alarm in the ALARM state. This is expected, as there is currently no load on the ECS service.
  6. Follow the same actions as in the previous step to set up CloudWatch alarms on the ECS cluster. This time, use CPUReservation as the metric (from ECS, ClusterName). Create two alarms, as in the previous step, one to scale out the ECS cluster and the other to scale in. Name them ECSClusterScaleOutAlarm and ECSClusterScaleInAlarm (or whatever names you like).

Note: This is a cluster-specific metric (as opposed to a metric specific to a single cluster/service pair), which makes the pattern useful even in multiple ECS service scenarios. The ECS cluster is always scaled according to the load on the cluster, irrespective of where it originates.

Because scaling ECS services is much faster than scaling an ECS cluster, we recommend keeping the ECS cluster scaling alarm more responsive than the ECS service alarm. This ensures that you always have extra cluster capacity available during scaling events, to accommodate instantaneous peak loads. Keep in mind that running this extra EC2 capacity increases your cost, so find the balance between reserve cluster capacity and cost, which will vary from application to application.
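
The same four alarms can be scripted; the sketch below is a rough boto3 approximation using the thresholds from the walkthrough (for the cluster alarms, tune the thresholds to be more responsive, per the note above). Alarm actions are omitted here because the scaling policies they trigger are attached in the next two sections.

    # Service alarms on CPUUtilization, cluster alarms on CPUReservation.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    service_dims = [
        {"Name": "ClusterName", "Value": "my-cluster"},   # placeholder cluster name
        {"Name": "ServiceName", "Value": "my-service"},   # placeholder service name
    ]
    cluster_dims = [{"Name": "ClusterName", "Value": "my-cluster"}]

    def put_alarm(name, metric, dims, threshold, operator):
        # Thin wrapper so the four nearly identical alarms stay readable.
        cw.put_metric_alarm(
            AlarmName=name,
            Namespace="AWS/ECS",
            MetricName=metric,
            Dimensions=dims,
            Statistic="Average",
            Period=60,
            EvaluationPeriods=1,
            Threshold=threshold,
            ComparisonOperator=operator,
        )

    put_alarm("ECSServiceScaleOutAlarm", "CPUUtilization", service_dims, 75, "GreaterThanOrEqualToThreshold")
    put_alarm("ECSServiceScaleInAlarm", "CPUUtilization", service_dims, 25, "LessThanOrEqualToThreshold")
    put_alarm("ECSClusterScaleOutAlarm", "CPUReservation", cluster_dims, 75, "GreaterThanOrEqualToThreshold")
    put_alarm("ECSClusterScaleInAlarm", "CPUReservation", cluster_dims, 25, "LessThanOrEqualToThreshold")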

Add scaling policies on the ECS service

Add a scale out and a scale in policy on the ECS service created earlier; an illustrative SDK sketch of the same configuration appears after these steps.

  1. Sign in to the ECS console, choose the cluster that your service is running on, choose Services, and select the service.
  2. On the service page, choose Auto Scaling, Update.
  3. Make sure the Number of Tasks is set to 2. This is the default number of tasks that your service will be running.
  4. On the Update Service page, under Optional configurations, choose Configure Service Auto Scaling.
  5. On the Service Auto Scaling (optional) page, under Scaling, choose Configure Service Auto Scaling to adjust your service’s desired count. For both Minimum number of tasks and Desired number of tasks, enter 2. For Maximum number of tasks, enter 10. Because you mapped port 80 of the host (EC2 instance) to port 80 of the ECS container when you created the ECS service, make sure that you set the same numbers for both the Auto Scaling group and the ECS tasks.
  6. Under the Automatic task scaling policies section, choose Add Scaling Policy.
  7. On the Add Policy page, enter a value for Policy Name. For Execute policy when, enter the scale out CloudWatch alarm created earlier (ECSServiceScaleOutAlarm). For Take the action, choose Add 100 percent. Choose Save.
  8. Repeat the two steps above to create the scale in policy, using the scale in CloudWatch alarm created earlier (ECSServiceScaleInAlarm). For Take the action, choose Remove 50 percent. Choose Save.
  9. On the Service Auto Scaling (optional) page, choose Save.
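
Here is the SDK sketch mentioned above: the ECS service's desired count is registered as a scalable target, and the two step policies mirror the Add 100 percent and Remove 50 percent actions. Cluster and service names are placeholders.

    # Service Auto Scaling for an ECS service via Application Auto Scaling.
    import boto3

    aas = boto3.client("application-autoscaling", region_name="us-east-1")
    resource_id = "service/my-cluster/my-service"   # placeholder cluster/service

    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    scale_out = aas.put_scaling_policy(
        PolicyName="ECSServiceScaleOutPolicy",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "PercentChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 100}],
        },
    )

    scale_in = aas.put_scaling_policy(
        PolicyName="ECSServiceScaleInPolicy",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "PercentChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [{"MetricIntervalUpperBound": 0, "ScalingAdjustment": -50}],
        },
    )

    # Add these policy ARNs to the AlarmActions of ECSServiceScaleOutAlarm and
    # ECSServiceScaleInAlarm to wire each alarm to its policy.
    print(scale_out["PolicyARN"], scale_in["PolicyARN"])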

Add scaling policies on the ECS cluster

Add a scale out and a scale in policy on the ECS cluster (Auto Scaling group).

  1. Sign in to the Auto Scaling console and select the Auto Scaling Group which was created for this walkthrough.
  2. Choose Details, Edit.
  3. Make sure the Desired and Min are set to 2, and Max is set to 10. Choose Save.
  4. Choose Scaling Policies, Add Policy.
  5. First, create the scale out policy. Enter a value for Name. For Execute policy when, choose the scale out alarm (ECSClusterScaleOutAlarm) created earlier. For Take the action, choose Add 100 percent of group and then choose Create.
  6. Repeat the above step to add the scale in policy, using the scale in alarm (ECSClusterScaleInAlarm) and setting Take the action as Remove 50 percent of group.

You should be able to see the scale in and scale out policies for your Auto Scaling group. Using these policies, the Auto Scaling group can increase or decrease the size of the cluster on which the ECS service is running.
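
A short boto3 sketch of the equivalent Auto Scaling group policies, assuming a placeholder group name; the returned policy ARNs are what the ECSClusterScaleOutAlarm and ECSClusterScaleInAlarm alarms would invoke as their actions.

    # Percent-based scaling policies on the cluster's Auto Scaling group.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")
    group = "my-ecs-cluster-asg"   # placeholder Auto Scaling group name

    scale_out = asg.put_scaling_policy(
        AutoScalingGroupName=group,
        PolicyName="ECSClusterScaleOutPolicy",
        AdjustmentType="PercentChangeInCapacity",
        ScalingAdjustment=100,     # "Add 100 percent of group"
        Cooldown=300,
    )

    scale_in = asg.put_scaling_policy(
        AutoScalingGroupName=group,
        PolicyName="ECSClusterScaleInPolicy",
        AdjustmentType="PercentChangeInCapacity",
        ScalingAdjustment=-50,     # "Remove 50 percent of group"
        Cooldown=300,
    )

    print(scale_out["PolicyARN"], scale_in["PolicyARN"])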

Note: You may set the cluster scaling policies in such a way that you keep some additional cluster capacity in reserve. This helps your ECS service scale up faster, but at the same time, depending on your demand, it keeps some EC2 instances underutilized.

This completes the Auto Scaling configuration of the ECS service and the Auto Scaling group, which in this case, will be triggered from the different CloudWatch alarms. You can always use a different combination of CloudWatch alarms to drive each of these policies for more sophisticated scaling policies.

Now that you have the service running on a cluster that has capacity to scale out, send traffic to the load balancer; this should trigger the scale out alarm.

Load test the ECS service scaling

Now, load test the ECS service using the Apache ab utility and make sure that the scaling configuration is working (see the Create a load-testing instance section). On the CloudWatch console, you can see your service scale up and down. Because the Auto Scaling group is set up with two Availability Zones, you should be able to see five EC2 instances in each zone. Also, because the ECS service scheduler is Availability Zone–aware, the tasks would be distributed across those two zones too.

You can further test the high availability by terminating your EC2 instances manually from the EC2 console. The Auto Scaling group and ECS service scheduler should bring up additional EC2 instances, followed by tasks.

Additional Considerations

  • Reserve capacity. As discussed before, keeping some additional ECS cluster capacity in reserve helps the ECS service to scale out much faster, without waiting for the cluster’s newly provisioned instances to warm up. This can easily be achieved by either changing the values on which CloudWatch alarms are triggered, or by changing the parameters of the scaling policy itself.
  • Instance termination protection. While scaling in, in some cases, a decrease in available ECS cluster capacity might force some tasks to be terminated or relocated from one host to another. This can be mitigated by either tweaking ECS cluster scale in policies to be less responsive to demand or by gracefully allowing tasks to finish on an EC2 host, before it is terminated. This can easily be achieved by tapping into the Auto Scaling Lifecycle events or instance termination protection, which is a topic for a separate post.

Although we have used the AWS console to create this walkthrough, you can always use the AWS SDK or the CLI to achieve the same result.

Conclusion

When you run a mission-critical microservices architecture, keeping your TCO down is critical, along with having the ability to deploy the workload across multiple zones and to adjust ECS service and cluster capacity in response to load variations. Using the procedure outlined in this post, which leverages two-dimensional scaling (of both the ECS service and the cluster it runs on), you can achieve all of these goals.