Managing AWS Lambda Function Concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/

One of the key benefits of serverless applications is the ease with which they can scale to meet traffic demands or requests, with little to no need for capacity planning. In AWS Lambda, which is the core of the serverless platform at AWS, the unit of scale is a concurrent execution. This refers to the number of executions of your function code that are happening at any given time.

Thinking about concurrent executions as a unit of scale is a fairly unusual concept. In this post, I dive deeper into this idea and talk about how you can make use of per function concurrency limits in Lambda.

Understanding concurrency in Lambda

Instead of diving right into the guts of how Lambda works, here’s an appetizing analogy: a magical pizza.
Yes, a magical pizza!

This magical pizza has some unique properties:

  • It has a fixed maximum number of slices, such as 8.
  • Slices automatically re-appear after they are consumed.
  • When you take a slice from the pizza, it does not re-appear until it has been completely consumed.
  • One person can take multiple slices at a time.
  • You can easily ask to have the number of slices increased, but they remain fixed at any point in time otherwise.

Now that the magical pizza’s properties are defined, here’s a hypothetical situation of some friends sharing this pizza.

Shawn, Kate, Daniela, Chuck, Ian, and Avleen get together every Friday to share a pizza and catch up on their week. As there are just six of them, they can all easily enjoy a slice of pizza at a time. As they finish each slice, it re-appears in the pizza pan and they can take another slice. Given the magical properties of their pizza, they can continue to eat all they want, but with two very important constraints:

  • If any of them take too many slices at once, the others may not get as much as they want.
  • If they take too many slices, they might also eat too much and get sick.

One particular week, some of the friends are hungrier than the rest, taking two slices at a time instead of just one. If more than two of them try to take two pieces at a time, this can cause contention for pizza slices. Some of them would wait hungry for the slices to re-appear. They could ask for a pizza with more slices, but then run the same risk again later if more hungry friends join than planned for.

What can they do?

If the friends agreed to accept a limit on the maximum number of slices they each eat concurrently, both of these issues are avoided. Some could have a maximum of two of the eight slices, and others could have limits that were higher or lower, as long as the total stayed at or under eight slices being eaten at one time. This would keep anyone from going hungry or eating too much. The six friends can happily enjoy their magical pizza without worry!

Concurrency in Lambda

Concurrency in Lambda actually works similarly to the magical pizza model. Each AWS Account has an overall AccountLimit value that is fixed at any point in time, but can be easily increased as needed, just like the count of slices in the pizza. As of May 2017, the default limit is 1000 “slices” of concurrency per AWS Region.

Also like the magical pizza, each concurrency “slice” can only be consumed individually one at a time. After consumption, it becomes available to be consumed again. Services invoking Lambda functions can consume multiple slices of concurrency at the same time, just like the group of friends can take multiple slices of the pizza.
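
As a rough rule of thumb, for event sources that invoke a function once per request, the concurrency a function consumes is approximately its invocation rate multiplied by its average duration: for example, 10 requests per second against a function that averages 3 seconds per invocation works out to roughly 30 concurrent executions.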

Let’s take our example of the six friends and bring it back to AWS services that commonly invoke Lambda:

  • Amazon S3
  • Amazon Kinesis
  • Amazon DynamoDB
  • Amazon Cognito

In a single account with the default concurrency limit of 1000 concurrent executions, any of these four services could invoke enough functions to consume the entire limit or some part of it. Just like with the pizza example, there is the possibility for two issues to pop up:

  • One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity, starving the others and causing failed invocations.
  • A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.

For Lambda functions that are launched in a VPC, you have the potential to consume the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC. For information about elastic network interface limits, see the Network Interfaces section in the Amazon VPC Limits topic.

One way to solve both of these problems is applying a concurrency limit to the Lambda functions in an account.

Configuring per function concurrency limits

You can now set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.

If you set a concurrency limit for a specific function, then that function’s concurrency limit allocation is deducted from the shared pool and assigned to that specific function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps to make sure that future functions have capacity to be consumed.

Going back to the example of the consuming services, you could set throttles for the functions as follows:

Amazon S3 function = 350
Amazon Kinesis function = 200
Amazon DynamoDB function = 200
Amazon Cognito function = 150
Total = 900

With the 100 units reserved for functions without configured concurrency limits, this totals the account limit of 1000.

Here’s how this works. To start, create a basic Lambda function that is invoked via Amazon API Gateway. This Lambda function returns a single “Hello World” statement with an added sleep time between 2 and 5 seconds. The sleep time simulates an API providing some sort of capability that can take a varied amount of time. The goal here is to show how an API under load can reach its concurrency limit, and what happens when it does.
To create the example function

  1. Open the Lambda console.
  2. Choose Create Function.
  3. For Author from scratch, enter the following values:
    1. For Name, enter a value (such as concurrencyBlog01).
    2. For Runtime, choose Python 3.6.
    3. For Role, choose Create new role from template and enter a name aligned with this function, such as concurrencyBlogRole.
  4. Choose Create function.
  5. The function is created with some basic example code. Replace that code with the following:

import time
from random import randint

def lambda_handler(event, context):
    # Pick a random 2-5 second sleep to simulate an API call of varying duration
    seconds = randint(2, 5)
    time.sleep(seconds)
    return {
        "statusCode": 200,
        "body": "Hello world, slept " + str(seconds) + " seconds",
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
            "Access-Control-Allow-Methods": "GET,OPTIONS",
        },
    }

  6. Under Basic settings, set Timeout to 10 seconds. While this function should only ever take up to 5-6 seconds (with the 5-second max sleep), this gives you a little bit of room if it takes longer.

  7. Choose Save at the top right.

At this point, your function is configured for this example. Test it and confirm this in the console:

  1. Choose Test.
  2. Enter a name (it doesn’t matter for this example).
  3. Choose Create.
  4. In the console, choose Test again.
  5. You should see output similar to the following:

Now configure API Gateway so that you have an HTTPS endpoint to test against.

  1. In the Lambda console, choose Configuration.
  2. Under Triggers, choose API Gateway.
  3. Open the API Gateway icon now shown as attached to your Lambda function:

  4. Under Configure triggers, leave the default values for API Name and Deployment stage. For Security, choose Open.
  5. Choose Add, Save.

API Gateway is now configured to invoke Lambda at the Invoke URL shown under its configuration. You can take this URL and test it in any browser or command line, using tools such as “curl”:


$ curl https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Hello world, slept 2 seconds

Throwing load at the function

Now start throwing some load against your API Gateway + Lambda function combo. Right now, your function is only limited by the total amount of concurrency available in the account. For this example account, you might have 850 unreserved concurrency out of a full account limit of 1000, because a few concurrency limits have already been configured (plus the 100 units of concurrency reserved for functions without configured limits). You can find all of this information on the main Dashboard page of the Lambda console:

For generating load in this example, use an open source tool called “hey” (https://github.com/rakyll/hey), which works similarly to ApacheBench (ab). Run the test from an Amazon EC2 instance running the default Amazon Linux AMI, launched from the EC2 console. For more help with configuring an EC2 instance, follow the steps in the Launch Instance Wizard.

After the EC2 instance is running, SSH into the host and run the following:


sudo yum install go
go get -u github.com/rakyll/hey

“hey” is easy to use. For these tests, specify a total number of requests (5,000) and a concurrency of 50 against the API Gateway URL as follows (replace the URL here with your own):


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

The output from “hey” tells you interesting bits of information:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

Summary:
Total: 381.9978 secs
Slowest: 9.4765 secs
Fastest: 0.0438 secs
Average: 3.2153 secs
Requests/sec: 13.0891
Total data: 140024 bytes
Size/request: 28 bytes

Response time histogram:
0.044 [1] |
0.987 [2] |
1.930 [0] |
2.874 [1803] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
3.817 [1518] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
4.760 [719] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
5.703 [917] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
6.647 [13] |
7.590 [14] |
8.533 [9] |
9.477 [4] |

Latency distribution:
10% in 2.0224 secs
25% in 2.0267 secs
50% in 3.0251 secs
75% in 4.0269 secs
90% in 5.0279 secs
95% in 5.0414 secs
99% in 5.1871 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0003 secs, 0.0000 secs, 0.0332 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0046 secs
req write: 0.0000 secs, 0.0000 secs, 0.0005 secs
resp wait: 3.2149 secs, 0.0438 secs, 9.4472 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0004 secs

Status code distribution:
[200] 4997 responses
[502] 3 responses

You can see a helpful histogram and latency distribution. Remember that this Lambda function has a random sleep period in it and so isn’t entirely representative of a real-life workload. Those three 502s warrant digging deeper, but could be due to Lambda cold-start timing combined with the “seconds” variable hitting its maximum of 5, causing the Lambda functions to time out. AWS X-Ray and the Amazon CloudWatch logs generated by both API Gateway and Lambda could help you troubleshoot this.

Configuring a concurrency reservation

Now that you’ve established that you can generate this load against the function, I show you how to limit it and protect a backend resource from being overloaded by all of these requests.

  1. In the console, choose Configure.
  2. Under Concurrency, for Reserve concurrency, enter 25.

  3. Choose Save in the top right corner.

You could also set this with the AWS CLI using the Lambda put-function-concurrency command or see your current concurrency configuration via Lambda get-function. Here’s an example command:


$ aws lambda get-function --function-name concurrencyBlog01 --output json --query Concurrency
{
"ReservedConcurrentExecutions": 25
}
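
You could apply the same 25-execution reservation from the CLI with put-function-concurrency; for example, against the function created earlier in this post:

$ aws lambda put-function-concurrency --function-name concurrencyBlog01 --reserved-concurrent-executions 25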

Either way, you’ve set the concurrency reservation to 25 for this function. This acts as both a reservation and a limit: 25 concurrent executions are always available to this function, and anything above 25 results in the throttling of the Lambda function. Depending on the invoking service, throttling can result in a number of different outcomes, as shown in the documentation on Throttling Behavior. This change also reduces your unreserved account concurrency for other functions by 25.

Rerun the same load generation as before and see what happens. Previously, you tested at 50 concurrency, which worked just fine. By limiting the Lambda functions to 25 concurrency, you should see rate limiting kick in. Run the same test again:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

While this test runs, refresh the Monitoring tab on your function detail page. You see the following warning message:

This is great! It means that your throttle is working as configured and you are now protecting your downstream resources from too much load from your Lambda function.

Here is the output from a new “hey” command:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Summary:
Total: 379.9922 secs
Slowest: 7.1486 secs
Fastest: 0.0102 secs
Average: 1.1897 secs
Requests/sec: 13.1582
Total data: 164608 bytes
Size/request: 32 bytes

Response time histogram:
0.010 [1] |
0.724 [3075] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
1.438 [0] |
2.152 [811] |∎∎∎∎∎∎∎∎∎∎∎
2.866 [11] |
3.579 [566] |∎∎∎∎∎∎∎
4.293 [214] |∎∎∎
5.007 [1] |
5.721 [315] |∎∎∎∎
6.435 [4] |
7.149 [2] |

Latency distribution:
10% in 0.0130 secs
25% in 0.0147 secs
50% in 0.0205 secs
75% in 2.0344 secs
90% in 4.0229 secs
95% in 5.0248 secs
99% in 5.0629 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0004 secs, 0.0000 secs, 0.0537 secs
DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0184 secs
req write: 0.0000 secs, 0.0000 secs, 0.0016 secs
resp wait: 1.1892 secs, 0.0101 secs, 7.1038 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0005 secs

Status code distribution:
[502] 3076 responses
[200] 1924 responses

This looks fairly different from the last load test run. A large percentage of these requests failed fast because the concurrency throttle rejected them (the requests on the 0.724-second line). The timing shown in the histogram represents the entire round trip from the EC2 instance through API Gateway, including Lambda rejecting the invocation. It’s also important to note that this example was configured with an edge-optimized endpoint in API Gateway. Under Status code distribution, you can see that 3076 of the 5000 requests failed with a 502, showing that the Lambda backend behind API Gateway failed the request.

Other uses

Managing function concurrency can be useful in a few other ways beyond just limiting the impact on downstream services and providing a reservation of concurrency capacity. Here are two other uses:

  • Emergency kill switch
  • Cost controls

Emergency kill switch

On occasion, due to issues with applications I’ve managed in the past, I’ve had a need to disable a certain function or capability of an application. By setting the concurrency reservation and limit of a Lambda function to zero, you can do just that.

With the reservation set to zero, every invocation of the Lambda function is throttled. You could then work on the related parts of the infrastructure or application that aren’t working, and then reconfigure the concurrency limit to allow invocations again.
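
As a minimal sketch of that workflow from the CLI (the function name here is just a placeholder), you can zero out the reservation and later delete it to return the function to the unreserved pool:

$ aws lambda put-function-concurrency --function-name myFunction --reserved-concurrent-executions 0
$ aws lambda delete-function-concurrency --function-name myFunction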

Cost controls

While I mentioned how you might want to use concurrency limits to control the downstream impact to services or databases that your Lambda function might call, another resource that you might be cautious about is money. Setting the concurrency throttle is another way to help control costs during development and testing of your application.

You might want to prevent a function from performing a recursive action too quickly, or keep a development workload from generating too much concurrency. You might also want to protect development resources connected to this function from generating too much cost, such as APIs that your Lambda function calls.

Conclusion

Concurrent executions as a unit of scale are a distinctive characteristic of Lambda functions. Placing limits on how many concurrency “slices” your function can consume can prevent a single function from consuming all of the available concurrency in an account. Limits can also prevent a function from overwhelming a backend resource that isn’t as scalable.

Unlike monolithic applications or even microservices where there are mixed capabilities in a single service, Lambda functions encourage a sort of “nano-service” of small business logic directly related to the integration model connected to the function. I hope you’ve enjoyed this post and configure your concurrency limits today!

MQTT 5: Introduction to MQTT 5

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-introduction-to-mqtt-5/

MQTT 5 Introduction

Introduction to MQTT 5

Welcome to our brand new blog post series MQTT 5 – Features and Hidden Gems. Without doubt, the MQTT protocol is the most popular and best-received Internet of Things protocol today (see the Google Trends chart below), supporting large-scale use cases ranging from connected cars, manufacturing systems, logistics, and military use cases to enterprise chat applications, mobile apps, and constrained IoT devices. Of course, with huge numbers of production deployments, the wish list for future versions of the MQTT protocol grew bigger and bigger.

MQTT 5 is by far the most extensive and most feature-rich update to the MQTT protocol specification ever. We are going to explore all hidden gems and protocol features with use case discussion and useful background information – one blog post at a time.

Be sure to read the MQTT Essentials Blog Post series first before diving into our new MQTT 5 series. To get the most out of the new blog posts, it’s important to have a basic understanding of the MQTT 3.1.1 protocol as we are going to highlight key changes as well as all improvements.

Roguelike Simulator

Post Syndicated from Eevee original https://eev.ee/release/2017/12/09/roguelike-simulator/

Screenshot of a monochromatic pixel-art game designed to look mostly like ASCII text

On a recent game night, glip and I stumbled upon bitsy — a tiny game maker for “games where you can walk around and talk to people and be somewhere.” It’s enough of a genre to have become a top tag on itch, so we flicked through a couple games.

What we found were tiny windows into numerous little worlds, ill-defined yet crisply rendered in chunky two-colored pixels. Indeed, all you can do is walk around and talk to people and be somewhere, but the somewheres are strangely captivating. My favorite was the last days of our castle, with a day on the town in a close second (though it cheated and extended the engine a bit), but there are several hundred of these tiny windows available. Just single, short, minimal, interactive glimpses of an idea.

I’ve been wanting to do more of that, so I gave it a shot today. The result is Roguelike Simulator, a game that condenses the NetHack experience into about ninety seconds.


Constraints breed creativity, and bitsy is practically made of constraints — the only place you can even make any decisions at all is within dialogue trees. There are only three ways to alter the world: the player can step on an ending tile to end the game, step on an exit tile to instantly teleport to a tile on another map (or not), or pick up an item. That’s it. You can’t even implement keys; the best you can do is make an annoying maze of identical rooms, then have an NPC tell you the solution.

In retrospect, a roguelike — a genre practically defined by its randomness — may have been a poor choice.

I had a lot of fun faking it, though, and it worked well enough to fool at least one person for a few minutes! Some choice hacks follow. Probably play the game a couple times before reading them?

  • Each floor reveals itself, of course, by teleporting you between maps with different chunks of the floor visible. I originally intended for this to be much more elaborate, but it turns out to be a huge pain to juggle multiple copies of the same floor layout.

  • Endings can’t be changed or randomized; even the text is static. I still managed to implement multiple variants on the “ascend” ending! See if you can guess how. (It’s not that hard.)

  • There are no Boolean operators, but there are arithmetic operators, so in one place I check whether you have both of two items by multiplying together how many of each you have.

  • Monsters you “defeat” are actually just items you pick up. They’re both drawn in the same color, and you can’t see your inventory, so you can’t tell the difference.

Probably the best part was writing the text, which is all completely ridiculous. I really enjoy writing a lot of quips — which I guess is why I like Twitter — and I’m happy to see they’ve made people laugh!


I think this has been a success! It’s definitely made me more confident about making smaller things — and about taking the first idea I have and just running with it. I’m going to keep an eye out for other micro game engines to play with, too.

CrimeStoppers Campaign Targets Pirate Set-Top Boxes & Their Users

Post Syndicated from Andy original https://torrentfreak.com/crimestoppers-campaign-targets-pirate-set-top-boxes-their-users-171209/

While many people might believe CrimeStoppers to be an official extension of the police in the UK, the truth is a little more subtle.

CrimeStoppers is a charity that operates a service through which members of the public can report crime anonymously, either using a dedicated phone line or via a website. Callers are not required to give their name, meaning that for those concerned about reprisals or becoming involved in a case for other sensitive reasons, it’s the perfect buffer between them and the authorities.

The people at CrimeStoppers deal with all kinds of crime but perhaps a little surprisingly, they’ve just got involved in the set-top box controversy in the UK.

“Advances in technology have allowed us to enjoy on-screen entertainment in more ways than ever before, with ever increasing amounts of exciting and original content,” the CrimeStoppers campaign begins.

“However, some people are avoiding paying for this content by using modified streaming hardware devices, like a set-top box or stick, in conjunction with software such as illegal apps or add-ons, or illegal mobile apps which allow them to watch new movie releases, TV that hasn’t yet aired, and subscription sports channels for free.”

The campaign has been launched in partnership with the Intellectual Property Office and unnamed “industry partners”. Who these companies are isn’t revealed but given the standard messages being portrayed by the likes of ACE, Premier League and Federation Against Copyright Theft lately, it wouldn’t be a surprise if some or all of them were involved.

Those messages are revealed in a series of four video ads, each taking a different approach towards discouraging the public from using devices loaded with pirate software.

The first video clearly targets the consumer, dispelling the myth that watching pirate video isn’t against the law. It is, that’s not in any doubt, but from the constant tone of the video, one could be forgiven for thinking it’s an extremely serious crime rather than something which is likely to be a civil matter, if anything at all.

It also warns people who are configuring and selling pirate devices that they are breaking the law. Again, this is absolutely true but this activity is clearly several magnitudes more serious than simply viewing. The video blurs the boundaries for what appears to be dramatic effect, however.

Selling and watching is illegal

The second video is all about demonizing the people and groups who may offer set-top boxes to the public.

Instead of portraying the hundreds of “cottage industry” suppliers behind many set-top box sales in the UK, the CrimeStoppers video paints a picture of dark organized crime being the main driver. By buying from these people, the charity warns, criminals are being welcomed in.

“It is illegal. You could also be helping to fund organized crime and bringing it into your community,” the video warns.

Are you funding organized crime?

The third video takes another approach, warning that set-top boxes have few if any parental controls. This could lead to children being exposed to inappropriate content, the charity warns.

“What are your children watching? Does it worry you?” the video asks.

Of course, the same can be said about the Internet, period. Web browsers don’t filter what content children have access to unless parents take pro-active steps to configure special services or software for the purpose.

There’s always the option to supervise children, of course, but Netflix is probably a safer option for those with a preference to stand off. It’s also considerably more expensive, a fact that won’t have escaped users of these devices.

Got kids? Take care….

Finally, video four picks up a theme that’s becoming increasingly common in anti-piracy campaigns – malware and identity theft.

“Why risk having your identity stolen or your bank account or home network hacked. If you access entertainment or sports using dodgy streaming devices or apps, or illegal addons for Kodi, you are increasing the risks,” the ad warns.

Danger….Danger….

Perhaps of most interest is that this entire campaign, which almost certainly has Big Media behind the scenes in advisory and financial capacities, barely mentions the entertainment industries at all.

Indeed, the success of the whole campaign hinges on people worrying about the supposed ill effects of illicit streaming on them personally and then feeling persuaded to inform on suppliers and others involved in the chain.

“Know of someone supplying or promoting these dodgy devices or software? It is illegal. Call us now and help stop crime in your community,” the videos warn.

That CrimeStoppers has taken on this campaign at all is a bit of a head-scratcher, given the bigger crime picture. Struggling with severe budget cuts, police in the UK are already de-prioritizing a number of crimes, leading to something called “screening out”, a process through which victims are given a crime number but no investigation is carried out.

This means that in 2016, 45% of all reported crimes in Greater Manchester weren’t investigated and a staggering 57% of all recorded domestic burglaries weren’t followed up by the police. But it gets worse.

“More than 62pc of criminal damage and arson offenses were not investigated, along with one in three reported shoplifting incidents,” MEN reports.

Given this backdrop, how will police suddenly find the resources to follow up lots of leads from the public and then subsequently prosecute people who sell pirate boxes? Even if they do, will that be at the expense of yet more “screening out” of other public-focused offenses?

No one is saying that selling pirate devices isn’t a crime or at least worthy of being followed up, but is this niche likely to be important to the public when they’re being told that nothing will be done when their homes are emptied by intruders? “NO” says a comment on one of the CrimeStoppers videos on YouTube.

“This crime affects multi-million dollar corporations, I’d rather see tax payers money invested on videos raising awareness of crimes committed against the people rather than the 0.001%,” it concludes.

Looking Forward to 2018

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/12/07/looking-forward-to-2018.html

Let’s Encrypt had a great year in 2017. We more than doubled the number of active (unexpired) certificates we service to 46 million, we just about tripled the number of unique domains we service to 61 million, and we did it all while maintaining a stellar security and compliance track record. Most importantly though, the Web went from 46% encrypted page loads to 67% according to statistics from Mozilla – a gain of 21 percentage points in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.

While we’re proud of what we accomplished in 2017, we are spending most of the final quarter of the year looking forward rather than back. As we wrap up our own planning process for 2018, I’d like to share some of our plans with you, including both the things we’re excited about and the challenges we’ll face. We’ll cover service growth, new features, infrastructure, and finances.

Service Growth

We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018.

Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.

One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server.

Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News.

New Features

We’ve got some exciting features planned for 2018.

First, we’re planning to introduce an ACME v2 protocol API endpoint and support for wildcard certificates along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.

Later in 2018 we plan to introduce ECDSA root and intermediate certificates. ECDSA is generally considered to be the future of digital signature algorithms on the Web due to the fact that it is more efficient than RSA. Let’s Encrypt will currently sign ECDSA keys from subscribers, but we sign with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.
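
For context, here is a subscriber-side sketch of what signing ECDSA keys means in practice (this is generic OpenSSL usage, not a Let’s Encrypt-specific procedure, and the domain is a placeholder): generate an ECDSA key and a CSR, then submit the CSR through any ACME client that accepts one.

openssl ecparam -genkey -name prime256v1 -noout -out ec-key.pem
openssl req -new -key ec-key.pem -subj "/CN=example.com" -out csr.pem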

Infrastructure

Our CA infrastructure is capable of issuing millions of certificates per day with multiple redundancy for stability and a wide variety of security safeguards, both physical and logical. Our infrastructure also generates and signs nearly 20 million OCSP responses daily, and serves those responses nearly 2 billion times per day. We expect issuance and OCSP numbers to double in 2018.

Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.

When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.

We’ll need to add a few additional compute servers in 2018, and we’ll also start aging out hardware in 2018 for the first time since we launched. We’ll age out about ten 2u compute servers and replace them with new 1u servers, which will save space and be more energy efficient while providing better reliability and performance.

We’ll also add another infrastructure operations staff member, bringing that team to a total of six people. This is necessary in order to make sure we can keep up with demand while maintaining a high standard for security and compliance. Infrastructure operations staff are systems administrators responsible for building and maintaining all physical and logical CA infrastructure. The team also manages a 24/7/365 on-call schedule and they are primary participants in both security and compliance audits.

Finances

We pride ourselves on being an efficient organization. In 2018 Let’s Encrypt will secure a large portion of the Web with a budget of only $3.0M. For an overall increase in our budget of only 13%, we will be able to issue and service twice as many certificates as we did in 2017. We believe this represents an incredible value and that contributing to Let’s Encrypt is one of the most effective ways to help create a more secure and privacy-respecting Web.

Our 2018 fundraising efforts are off to a strong start with Platinum sponsorships from Mozilla, Akamai, OVH, Cisco, Google Chrome and the Electronic Frontier Foundation. The Ford Foundation has renewed their grant to Let’s Encrypt as well. We are seeking additional sponsorship and grant assistance to meet our full needs for 2018.

We had originally budgeted $2.91M for 2017 but we’ll likely come in under budget for the year at around $2.65M. The difference between our 2017 expenses of $2.65M and the 2018 budget of $3.0M consists primarily of the additional infrastructure operations costs previously mentioned.

Support Let’s Encrypt

We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.

We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping: you rarely hear the competing side, and thus the side you do hear may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s the government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios for what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5 Mbps to 25 Mbps. That’s because while most consumers have multiple choices at 5 Mbps, fewer consumers have multiple choices at 25 Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime time TV viewing hours when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC for permission first. And with the new heavy-handed FCC rules, they’ve become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

Running Windows Containers on Amazon ECS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/running-windows-containers-on-amazon-ecs/

This post was developed and written by Jeremy Cowan, Thomas Fuller, Samuel Karp, and Akram Chetibi.

Containers have revolutionized the way that developers build, package, deploy, and run applications. Initially, containers only supported code and tooling for Linux applications. With the release of Docker Engine for Windows Server 2016, Windows developers have started to realize the gains that their Linux counterparts have experienced for the last several years.

This week, we’re adding support for running production workloads in Windows containers using Amazon Elastic Container Service (Amazon ECS). Now, Amazon ECS provides an ECS-optimized Windows Server Amazon Machine Image (AMI). This AMI is based on the EC2 Windows Server 2016 Datacenter AMI and provides improved instance and container launch time performance. It includes Docker 17.06.2-ee-5 Enterprise Edition and version 1.16 of the ECS agent, which now runs as a native Windows service.

In this post, I discuss the benefits of this new support, and walk you through getting started running Windows containers with Amazon ECS.

When AWS released the Windows Server 2016 Base with Containers AMI, the ECS agent ran as a process, which made it difficult to monitor and manage. As a service, the agent can be health-checked, managed, and restarted no differently than other Windows services. The AMI also includes pre-cached images for Windows Server Core 2016 and Windows Server Nano Server 2016. By caching the images in the AMI, launching new Windows containers is significantly faster. When Docker images include a layer that’s already cached on the instance, Docker re-uses that layer instead of pulling it from the Docker registry.

The ECS agent and an accompanying ECS PowerShell module used to install, configure, and run the agent come pre-installed on the AMI. This guarantees there is a specific platform version available on the container instance at launch. Because the software is included, you don’t have to download it from the internet. This saves startup time.

The Windows-compatible ECS-optimized AMI also reports CPU and memory utilization and reservation metrics to Amazon CloudWatch. Using the CloudWatch integration with ECS, you can create alarms that trigger dynamic scaling events to automatically add or remove capacity to your EC2 instances and ECS tasks.

Getting started

To help you get started running Windows containers on ECS, I’ve forked the ECS reference architecture to build an ECS cluster composed of Windows instances instead of Linux instances. You can pull the latest version of the reference architecture for Windows.

The reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. Within the stack, the ecs-windows-cluster.yaml file contains the instructions for bootstrapping the Windows instances and configuring the ECS cluster. To configure the instances outside of AWS CloudFormation (for example, through the CLI or the console), you can add the following commands to your instance’s user data:

Import-Module ECSTools
Initialize-ECSAgent

Or

Import-Module ECSTools
Initialize-ECSAgent -Cluster MyCluster -EnableIAMTaskRole

If you don’t specify a cluster name when you initialize the agent, the instance is joined to the default cluster.

Adding -EnableIAMTaskRole when initializing the agent adds support for IAM roles for tasks. Previously, enabling this setting meant running a complex script and setting an environment variable before you could assign roles to your ECS tasks.

When you enable IAM roles for tasks on Windows, it consumes port 80 on the host. If you have tasks that listen on port 80 on the host, I recommend configuring a service for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be routed to another host port on your container instances. For more information, see Service Load Balancing.

Create a cluster

To create a new ECS cluster, choose Launch stack, or pull the GitHub project to your local machine and run the following command:

aws cloudformation create-stack --template-body file://<path to master-windows.yaml> --stack-name <name>
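
While the stack builds, you can check its status with a query along these lines (reuse whatever stack name you passed above):

aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].StackStatus'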

Upload your container image

Now that you have a cluster running, step through how to build and push an image into a container repository. You use a repository hosted in Amazon Elastic Container Registry (Amazon ECR) for this, but you could also use Docker Hub. To build and push an image to a repository, install Docker on your Windows* workstation. You also create a repository and assign the necessary permissions to the account that pushes your image to Amazon ECR. For detailed instructions, see Pushing an Image.

* If you are building an image that is based on Windows layers, then you must use a Windows environment to build and push your image to the registry.
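
As a rough sketch of that build-and-push flow (the repository name, region, and account ID below are placeholders; see the ECR documentation for the authoritative steps):

# Prints a docker login command; run that command to authenticate the Docker CLI
aws ecr get-login --no-include-email --region us-east-1
aws ecr create-repository --repository-name windows-simple-iis
docker build -t windows-simple-iis .
docker tag windows-simple-iis:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/windows-simple-iis:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/windows-simple-iis:latest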

Write your task definition

Now that your image is built and ready, the next step is to run your Windows containers using a task.

Start by creating a new task definition based on the windows-simple-iis image from Docker Hub.

  1. Open the ECS console.
  2. Choose Task Definitions, Create new task definition.
  3. Scroll to the bottom of the page and choose Configure via JSON.
  4. Copy and paste the following JSON into that field.
  5. Choose Save, Create.
{
   "family": "windows-simple-iis",
   "containerDefinitions": [
   {
     "name": "windows_sample_app",
     "image": "microsoft/iis",
     "cpu": 100,
     "entryPoint":["powershell", "-Command"],
     "command":["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html><head><title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center><h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p></body></html>'; C:\\ServiceMonitor.exe w3svc"],
     "portMappings": [
     {
       "protocol": "tcp",
       "containerPort": 80,
       "hostPort": 8080
     }
     ],
     "memory": 500,
     "essential": true
   }
   ]
}
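
If you prefer the CLI, a minimal equivalent is to save the JSON above to a file (the file name is a placeholder) and register it directly:

aws ecs register-task-definition --cli-input-json file://windows-simple-iis.json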

You can now go back into the Task Definition page and see windows-simple-iis as an available task definition.

There are a few important aspects of the task definition file to note when working with Windows containers. First, the hostPort is configured as 8080, which is necessary because the ECS agent currently uses port 80 to enable IAM roles for tasks required for least-privilege security configurations.

There are also some fairly standard task parameters that are intentionally not included. For example, network mode is not available with Windows at the time of this release, so keep that setting blank to allow Docker to configure WinNAT, the only option available today.

Also, some parameters work differently with Windows than they do with Linux. The CPU limits that you define in the task definition are absolute, whereas on Linux they are weights. For information about other task parameters that are supported or possibly different with Windows, see the documentation.

Run your containers

At this point, you are ready to run containers. There are two options to run containers with ECS:

  1. Task
  2. Service

A task is typically a short-lived process that ECS creates. It can’t be configured to actively monitor or scale. A service is meant for longer-running containers and can be configured to use a load balancer, minimum/maximum capacity settings, and a number of other knobs and switches to help ensure that your code keeps running. In both cases, you are able to pick a placement strategy and a specific IAM role for your container.

  1. Select the task definition that you created above and choose Action, Run Task.
  2. Leave the settings on the next page to the default values.
  3. Select the ECS cluster created when you ran the CloudFormation template.
  4. Choose Run Task to start the process of scheduling a Docker container on your ECS cluster.
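
Roughly the same thing from the CLI is a single run-task call (the cluster name is a placeholder for the cluster that the CloudFormation stack created):

aws ecs run-task --cluster <your-windows-cluster> --task-definition windows-simple-iis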

You can now go to the cluster and watch the status of your task. It may take 5–10 minutes for the task to go from PENDING to RUNNING, mostly because it takes time to download all of the layers necessary to run the microsoft/iis image. After the status is RUNNING, you should see the following results:

You may have noticed that the example task definition is named windows-simple-iis:2. This is because I created a second version of the task definition, which is one of the powerful capabilities of using ECS. You can make the task definitions part of your source code and then version them. You can also roll out new versions and practice blue/green deployments, switching between versions to reduce downtime and improve the velocity of your deployments!

After the task has moved to RUNNING, you can see your website hosted in ECS. Find the public IP or DNS for your ECS host. Remember that you are hosting on port 8080. Make sure that the security group allows ingress from your client IP address to that port and that your VPC has an internet gateway associated with it. You should see a page that looks like the following:

This is a nice start to deploying a simple single-instance task, but what if you had a Web API that needs to scale out and in based on usage? This is where you could define a service and collect CloudWatch data to add and remove instances of the task. You could also use CloudWatch alarms to add more ECS container instances and keep up with the demand. The former is built into the configuration of your service.

  1. Select the task definition and choose Create Service.
  2. Associate a load balancer.
  3. Set up Auto Scaling.

The following screenshot shows an example where you would add an additional task instance when the CPU Utilization CloudWatch metric is over 60% on average over three consecutive measurements. This may not be aggressive enough for your requirements; it’s meant to show you the option to scale tasks the same way you scale ECS instances with an Auto Scaling group. The difference is that these tasks start much faster because all of the base layers are already on the ECS host.
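
If you would rather define that scaling behaviour in code than in the console, a rough boto3 sketch using Application Auto Scaling might look like this; the resource names, capacity limits, and adjustments are placeholders.

import boto3

autoscaling = boto3.client('application-autoscaling')

# Make the service's desired task count scalable
autoscaling.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/windows-cluster/windows-simple-iis-svc',   # placeholder
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=1,
    MaxCapacity=4,
)

# Step scaling policy: add one task whenever the attached CloudWatch alarm fires
autoscaling.put_scaling_policy(
    PolicyName='scale-out-on-cpu',
    ServiceNamespace='ecs',
    ResourceId='service/windows-cluster/windows-simple-iis-svc',
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='StepScaling',
    StepScalingPolicyConfiguration={
        'AdjustmentType': 'ChangeInCapacity',
        'StepAdjustments': [{'MetricIntervalLowerBound': 0, 'ScalingAdjustment': 1}],
        'Cooldown': 60,
    },
)

The CloudWatch alarm itself (CPUUtilization above 60% for three consecutive periods, as in the screenshot) is created separately and pointed at the policy ARN returned by put_scaling_policy.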

Do not confuse task dynamic scaling with ECS instance dynamic scaling. To add additional hosts, see Tutorial: Scaling Container Instances with CloudWatch Alarms.

Conclusion

This is just scratching the surface of the flexibility that you get from using containers and Amazon ECS. For more information, see the Amazon ECS Developer Guide and ECS Resources.

– Jeremy, Thomas, Samuel, Akram

Game night 1: Lisa, Lisa, MOOP

Post Syndicated from Eevee original https://eev.ee/blog/2017/12/05/game-night-1-lisa-lisa-moop/

For the last few weeks, glip (my partner) and I have spent a couple hours most nights playing indie games together. We started out intending to play a short list of games that had been recommended to glip, but this turns out to be a nice way to wind down, so we’ve been keeping it up and clicking on whatever looks interesting in the itch app.

Most of the games are small and made by one or two people, so they tend to be pretty tightly scoped and focus on a few particular kinds of details. I’ve found myself having brain thoughts about all that, so I thought I’d write some of them down.

I also know that some people (cough) tend not to play games they’ve never heard of, even if they want something new to play. If that’s you, feel free to play some of these, now that you’ve heard of them!

Also, I’m still figuring the format out here, so let me know if this is interesting or if you hope I never do it again!

First up:

  • Lisa: The Painful
  • Lisa: The Joyful
  • MOOP

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Lisa: The Painful

long · classic rpg · dec 2014 · lin/mac/win · $10 on itch or steam · website

(cw: basically everything??)

Lisa: The Painful is true to its name. I hesitate to describe it as fun, exactly, but I’m glad we played it.

Everything about the game is dark. It’s a (somewhat loose) sequel to another game called Lisa, whose titular character ultimately commits suicide; her body hanging from a noose is the title screen for this game.

Ah, but don’t worry, it gets worse. This game takes place in a post-apocalyptic wasteland, where every female human — women, children, babies — is dead. You play as Brad (Lisa’s brother), who has discovered the lone exception: a baby girl he names Buddy and raises like a daughter. Now, Buddy has been kidnapped, and you have to go rescue her, presumably from being raped.

Ah, but don’t worry, it gets worse.


I’ve had a hard time putting my thoughts in order here, because so much of what stuck with me is the way the game entangles the plot with the mechanics.

I love that kind of thing, but it’s so hard to do well. I can’t really explain why, but I feel like most attempts to do it fall flat — they have a glimmer of an idea, but they don’t integrate it well enough, or they don’t run nearly as far as they could have. I often get the same feeling as, say, a hyped-up big moral choice that turns out to be picking “yes” or “no” from a menu. The idea is there, but the execution is so flimsy that it leaves no impact on me at all.

An obvious recent success here is Undertale, where the entire story is about violence and whether you choose to engage or avoid it (and whether you can do that). If you choose to eschew violence, not only does the game become more difficult, it arguably becomes a different game entirely. Granted, the contrast is lost if you (like me) tried to play as a pacifist from the very beginning. I do feel that you could go further with the idea than Undertale, but Undertale itself doesn’t feel incomplete.

Christ, I’m not even talking about the right game any more.

Okay, so: this game is a “classic” RPG, by which I mean, it was made with RPG Maker. (It’s kinda funny that RPG Maker was designed to emulate a very popular battle style, and now the only games that use that style are… made with RPG Maker.) The main loop, on the surface, is standard RPG fare: you walk around various places, talk to people, solve puzzles, recruit party members, and get into turn-based fights.

Now, Brad is addicted to a drug called Joy. He will regularly go into withdrawal, which manifests in the game as a status effect that cuts his stats (even his max HP!) dramatically.

It is really, really, incredibly inconvenient. And therein lies the genius here. The game could have simply told me that Brad is an addict, and I don’t think I would’ve cared too much. An addiction to a fantasy drug in a wasteland doesn’t mean anything to me, especially about this tiny sprite man I just met, so I would’ve filed this away as a sterile fact and forgotten about it. By making his addiction affect me, I’m now invested in it. I wish Brad weren’t addicted, even if only because it’s annoying. I found a party member once who turned out to have the same addiction, and I felt dread just from seeing the icon for the status effect. I’ve been looped into the events of this story through the medium I use to interact with it: the game.

It’s a really good use of games as a medium. Even before I’m invested in the characters, I’m invested in what’s happening to them, because it impacts the game!

Incidentally, you can get Joy as an item, which will temporarily cure your withdrawal… but you mostly find it by looting the corpses of grotesque mutant flesh horrors you encounter. I don’t think the game would have the player abruptly mutate out of nowhere, but I wasn’t about to find out, either. We never took any.


Virtually every staple of the RPG genre has been played with in some way to tie it into the theme/setting. I love it, and I think it works so well precisely because it plays with expectations of how RPGs usually work.

Most obviously, the game is a sidescroller, not top-down. You can’t jump freely, but you can hop onto one-tile-high boxes and climb ropes. You can also drop off of ledges… but your entire party will take fall damage, which gets rapidly more severe the further you fall.

This wouldn’t be too much of a problem, except that healing is hard to come by for most of the game. Several hub areas have campfires you can sleep next to to restore all your health and MP, but when you wake up, something will have happened to you. Maybe just a weird cutscene, or maybe one of your party members has decided to leave permanently.

Okay, so use healing items instead? Good luck; money is also hard to come by, and honestly so are shops, and many of the healing items are woefully underpowered.

Grind for money? Good luck there, too! While the game has plenty of battles, virtually every enemy is a unique overworld human who only appears once, and then is dead, because you killed him. Only a handful of places have unlimited random encounters, and grinding is not especially pleasant.

The “best” way to get a reliable heal is to savescum — save the game, sleep by the campfire, and reload if you don’t like what you wake up to.

In a similar vein, there’s a part of the game where you’re forced to play Russian Roulette. You choose a party member; he and an opponent will take turns shooting themselves in the head until someone finds a loaded chamber. If your party member loses, he is dead. And you have to keep playing until you win three times, so there’s no upper limit on how many people you might lose. I couldn’t find any way to influence who won, so I just had to savescum for a good half hour until I made it through with minimal losses.

It was maddening, but also a really good idea. Games don’t often incorporate the existence of saves into the gameplay, and when they do, they usually break the fourth wall and get all meta about it. Saves are never acknowledged in-universe here (aside from the existence of save points), but surely these parts of the game were designed knowing that the best way through them is by reloading. It’s rarely done, it can easily feel unfair, and it drove me up the wall — but it was certainly painful, as intended, and I kinda love that.

(Naturally, I’m told there’s a hard mode, where you can only use each save point once.)

The game also drives home the finality of death much better than most. It’s not hard to overlook the death of a redshirt, a character with a bit part who simply doesn’t appear any more. This game permanently kills your party members. Russian Roulette isn’t even the only way you can lose them! Multiple cutscenes force you to choose between losing a life or some other drastic consequence. (Even better, you can try to fight the person forcing this choice on you, and he will decimate you.) As the game progresses, you start to encounter enemies who can simply one-shot murder your party members.

It’s such a great angle. Just like with Brad’s withdrawal, you don’t want to avoid their deaths because it’d be emotional — there are dozens of party members you can recruit (though we only found a fraction of them), and most of them you only know a paragraph about — but because it would inconvenience you personally. Chances are, you have your strongest dudes in your party at any given time, so losing one of them sucks. And with few random encounters, you can’t just grind someone else up to an appropriate level; it feels like there’s a finite amount of XP in the game, and if someone high-level dies, you’ve lost all the XP that went into them.


The battles themselves are fairly straightforward. You can attack normally or use a special move that costs MP. SP? Some kind of points.

Two things in particular stand out. One I mentioned above: the vast majority of the encounters are one-time affairs against distinct named NPCs, who you then never see again, because they are dead, because you killed them.

The other is the somewhat unusual set of status effects. The staples like poison and sleep are here, but don’t show up all that often; more frequent are statuses like weird, drunk, stink, or cool. If you do take Joy (which also cures depression), you become joyed for a short time.

The game plays with these in a few neat ways, besides just Brad’s withdrawal. Some party members have a status like stink or cool permanently. Some battles are against people who don’t want to fight at all — and so they’ll spend most of the battle crying, purely for flavor impact. Seeing that for the first time hit me pretty hard; until then we’d only seen crying as a mechanical side effect of having sand kicked in one’s face.


The game does drag on a bit. I think we poured 10 in-game hours into it, which doesn’t count time spent reloading. It doesn’t help that you walk not super fast.

My biggest problem was with getting my bearings; I’m sure we spent a lot of that time wandering around accomplishing nothing. Most of the world is focused around one of a few hub areas, and once you’ve completed one hub, you can move onto the next one. That’s fine. Trouble is, you can go any of a dozen different directions from each hub, and most of those directions will lead you to very similar-looking hills built out of the same tiny handful of tiles. The connections between places are mostly cave entrances, which also largely look the same. Combine that with needing to backtrack for puzzle or progression reasons, and it’s incredibly difficult to keep track of where you’ve been, what you’ve done, and where you need to go next.

I don’t know that the game is wrong here; the aesthetic and world layout are fantastic at conveying a desolate wasteland. I wouldn’t even be surprised if the navigation were deliberately designed this way. (On the other hand, assuming every annoyance in a despair-ridden game is deliberate might be giving it too much credit.) But damn it’s still frustrating.

I felt a little lost in the battle system, too. Towards the end of the game, Brad in particular had over a dozen skills he could use, but I still couldn’t confidently tell you which were the strongest. New skills sometimes appear in the middle of the list or cost less than previous skills, and the game doesn’t outright tell you how much damage any of them do. I know this is the “classic RPG” style, and I don’t think it was hugely inconvenient, but it feels weird to barely know how my own skills work. I think this puts me off getting into new RPGs, just generally; there’s a whole new set of things I have to learn about, and games in this style often won’t just tell me anything, so there’s this whole separate meta-puzzle to figure out before I can play the actual game effectively.

Also, the sound could use a little bit of… mastering? Some music and sound effects are significantly louder and screechier than others. Painful, you could say.


The world is full of side characters with their own stuff going on, which is also something I love seeing in games; too often, the whole world feels like an obstacle course specifically designed for you.

Also, many of those characters are, well, not great people. Really, most of the game is kinda fucked up. Consider: the weird status effect is most commonly inflicted by the “Grope” skill. It makes you feel weird, you see. Oh, and the currency is porn magazines.

And then there are the gangs, the various spins on sex clubs, the forceful drug kingpins, and the overall violence that permeates everything (you stumble upon an alarming number of corpses). The game neither condones nor condemns any of this; it simply offers some ideas of how people might behave at the end of the world. It’s certainly the grittiest interpretation I’ve seen.

I don’t usually like post-apocalypses, because they try to have these very hopeful stories, but then at the end the world is still a blighted hellscape so what was the point of any of that? I like this game much better for being a blighted hellscape throughout. The story is worth following to see where it goes, not just because you expect everything wrapped up neatly at the end.

…I realize I’ve made this game sound monumentally depressing throughout, but it manages to pack in a lot of funny moments as well, from the subtle to the overt. In retrospect, it’s actually really good at balancing the mood so it doesn’t get too depressing. If nothing else, it’s hilarious to watch this gruff, solemn, battle-scarred, middle-aged man pedal around on a kid’s bike he found.


An obvious theme of the game is despair, but the more I think about it, the more I wonder if ambiguity is a theme as well. It certainly fits the confusing geography.

Even the premise is a little ambiguous. Is/was Olathe a city, a country, a whole planet? Did the apocalypse affect only Olathe, or the whole world? Does it matter in an RPG, where the only world that exists is the one mapped out within the game?

Towards the end of the game, you catch up with Buddy, but she rejects you, apparently resentful that you kept her hidden away for her entire life. Brad presses on anyway, insisting on protecting her.

At that point I wasn’t sure I was still on Brad’s side. But he’s not wrong, either. Is he? Maybe it depends on how old Buddy is — but the game never tells us. Her sprite is a bit smaller than the men’s, but it’s hard to gauge much from small exaggerated sprites, and she might just be shorter. In the beginning of the game, she was doing kid-like drawings, but we don’t know how much time passed after that. Everyone seems to take for granted that she’s capable of bearing children, and she talks like an adult. So is she old enough to be making this decision, or young enough for parent figure Brad to overrule her? What is the appropriate age of agency, anyway, when you’re the last girl/woman left more than a decade after the end of the world?

Can you repopulate a species with only one woman, anyway?


Well, that went on a bit longer than I intended. This game has a lot of small touches that stood out to me, and they all wove together very well.

Should you play it? I have absolutely no idea.

FINAL SCORE: 1 out of 6 chambers

Lisa: The Joyful

fairly short · classic rpg · aug 2015 · lin/mac/win · $5 on itch or steam

Surprise! There’s a third game to round out this trilogy.

Lisa: The Joyful is much shorter, maybe three hours long — enough to be played in a night rather than over the better part of a week.

This one picks up immediately after the end of Painful, with you now playing as Buddy. It takes a drastic turn early on: Buddy decides that, rather than hide from the world, she must conquer it. She sets out to murder all the big bosses and become queen.

The battle system has been inherited from the previous game, but battles are much more straightforward this time around. You can’t recruit any party members; for much of the game, it’s just you and a sword.

There is a catch! Of course.

The catch is that you do not have enough health to survive most boss battles without healing. With no party members, you cannot heal via skills. I don’t think you could buy healing items anywhere, either. You have a few when the game begins, but once you run out, that’s it.

Except… you also have… some Joy. Which restores you to full health and also makes you crit with every hit. And drops off of several enemies.

We didn’t even recognize Joy as a healing item at first, since we never used it in Painful; its description simply says that it makes you feel nothing, and we’d assumed the whole point of it was to stave off withdrawal, which Buddy doesn’t experience. Luckily, the game provided a hint in the form of an NPC who offers to switch on easy mode:

What’s that? Bad guys too tough? Not enough jerky? You don’t want to take Joy!? Say no more, you’ve come to the right place!

So the game is aware that it’s unfairly difficult, and it’s deliberately forcing you to take Joy, and it is in fact entirely constructed around this concept. I guess the title is a pretty good hint, too.

I don’t feel quite as strongly about Joyful as I do about Painful. (Admittedly, I was really tired and starting to doze off towards the end of Joyful.) Once you get that the gimmick is to force you to use Joy, the game basically reduces to a moderate-difficulty boss rush. Other than that, the only thing that stood out to me mechanically was that Buddy learns a skill where she lifts her shirt to inflict flustered as a status effect — kind of a lingering echo of how outrageous the previous game could be.

You do get a healthy serving of plot, which is nice and ties a few things together. I wouldn’t say it exactly wraps up the story, but it doesn’t feel like it’s missing anything either; it’s exactly as murky as you’d expect.

I think it’s worth playing Joyful if you’ve played Painful. It just didn’t have the same impact on me. It probably doesn’t help that I don’t like Buddy as a person. She seems cold, violent, and cruel. Appropriate for the world and a product of her environment, I suppose.

FINAL SCORE: 300 Mags

MOOP

fairly short · inventory game · nov 2017 · win · free on itch

Finally, as something of a palate cleanser, we have MOOP: a delightful and charming little inventory game.

I don’t think “inventory game” is a real genre, but I mean the kind of game where you go around collecting items and using them in the right place. Puzzle-driven, but with “puzzles” that can largely be solved by simply trying everything everywhere. I’d put a lot of point and click adventures in the same category, despite having a radically different interface. Is that fair? Yes, because it’s my blog.

MOOP was almost certainly also made in RPG Maker, but it breaks the mold in a very different way by not being an RPG. There are no battles whatsoever, only interactions on the overworld; you progress solely via dialogue and puzzle-solving. Examining something gives you a short menu of verbs — use, talk, get — reminiscent of interactive fiction, or perhaps the graphical “adventure” games that took inspiration from interactive fiction. (God, “adventure game” is the worst phrase. Every game is an adventure! It doesn’t mean anything!)

Everything about the game is extremely chill. I love the monochrome aesthetic combined with a large screen resolution; it feels like I’m peeking into an alternate universe where the Game Boy got bigger but never gained color. I played halfway through the game before realizing that the protagonist (Moop) doesn’t have a walk animation; they simply slide around. Somehow, it works.

The puzzles are a little clever, yet low-pressure; the world is small enough that you can examine everything again if you get stuck, and there’s no way to lose or be set back. The music is lovely, too. It just feels good to wander around in a world that manages to make sepia look very pretty.

The story manages to pack a lot into a very short time. It’s… gosh, I don’t know. It has a very distinct texture to it that I’m not sure I’ve seen before. The plot weaves through several major events that each have very different moods, and it moves very quickly — but it’s well-written and doesn’t feel rushed or disjoint. It’s lighthearted, but takes itself seriously enough for me to get invested. It’s fucking witchcraft.

I think there was even a non-binary character! Just kinda nonchalantly in there. Awesome.

What a happy, charming game. Play if you would like to be happy and charmed.

FINAL SCORE: 1 waxing moon

The re:Invent 2017 Containers After-party Guide

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/the-reinvent-2017-containers-after-party-guide/

Feeling uncontainable? re:Invent 2017 might be over, but the containers party doesn’t have to stop. Here are some ways you can keep learning about containers on AWS.

Learn about containers in Austin and New York

Come join AWS this week at KubeCon in Austin, Texas! We’ll be sharing best practices for running Kubernetes on AWS and talking about Amazon ECS, AWS Fargate, and Amazon EKS. Want to take Amazon EKS for a test drive? Sign up for the preview.

We’ll also be talking Containers at the NYC Pop-up Loft during AWS Compute Evolved: Containers Day on December 13th. Register to attend.

Join an upcoming webinar

Didn’t get to attend re:Invent or want to hear a recap? Join our upcoming webinar, What You Missed at re:Invent 2017, on December 11th from 12:00 PM – 12:40 PM PT (3:00 PM – 3:40 PM ET). Register to attend.

Start (or finish) a workshop

All of the containers workshops given at re:Invent are available online. Get comfortable, fire up your browser, and start building!

re:Watch your favorite talks

All of the keynote and breakouts from re:Invent are available to watch on our YouTube playlist. Slides can be found as they are uploaded on the AWS Slideshare. Just slip into your pajamas, make some popcorn, and start watching!

Learn more about what’s new

Andy Jassy announced two big updates to the container landscape at re:Invent, AWS Fargate and Amazon EKS. Here are some resources to help you learn more about all the new features and products we announced, why we built them, and how they work.

AWS Fargate

AWS Fargate is a technology that allows you to run containers without having to manage servers or clusters.

Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to configure and operate your own Kubernetes clusters.

We hope you had a great re:Invent and look forward to seeing what you build on AWS in 2018!

– The AWS Containers Team

The Pi Towers Secret Santa Babbage

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/secret-santa-babbage/

Tired of pulling names out of a hat for office Secret Santa? Upgrade your festive tradition with a Raspberry Pi, thermal printer, and everybody’s favourite microcomputer mascot, Babbage Bear.

Raspberry Pi Babbage Bear Secret Santa

The name’s Santa. Secret Santa.

It’s that time of year again, when the cosiness gets turned up to 11 and everyone starts thinking about jolly fat men, reindeer, toys, and benevolent home invasion. At Raspberry Pi, we’re running a Secret Santa pool: everyone buys a gift for someone else in the office. Obviously, the person you buy for has to be picked in secret and at random, or the whole thing wouldn’t work. With that in mind, I created Secret Santa Babbage to do the somewhat mundane task of choosing gift recipients. This could’ve just been done with some names in a hat, but we’re Raspberry Pi! If we don’t make a Python-based Babbage robot wearing a jaunty hat and programmed to spread Christmas cheer, who will?

Secret Santa Babbage

Ho ho ho!

Mecha-Babbage Xmas shenanigans

The script the robot runs is pretty basic: a list of names entered as comma-separated strings is shuffled at the press of a GPIO button, then a name is popped off the end and stored as a variable. The name is matched to a photo of the person stored on the Raspberry Pi, and a thermal printer pinched from Alex’s super awesome PastyCam (blog post forthcoming, maybe) prints out the picture and name of the person you will need to shower with gifts at the Christmas party. (Well, OK — with one gift. No more than five quid’s worth. Nothing untoward.) There’s also a redo function, just in case you pick yourself: press another button and the last picked name — still stored as a variable — is appended to the list again, which is shuffled once more, and a new name is popped off the end.
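
Stripped of the printing and photo-matching, the core of that flow is only a handful of lines. Here’s an illustrative sketch using GPIO Zero; the names and GPIO pin numbers are made up, not the ones in the real build.

import random
from signal import pause
from gpiozero import Button

names = ["Alice", "Bob", "Carol", "Dave"]   # placeholder names
last_pick = None

def pick_name():
    global last_pick
    random.shuffle(names)                   # shuffle on every button press
    last_pick = names.pop()                 # pop a name off the end
    print("You are buying for:", last_pick)

def redo():
    global last_pick
    if last_pick is not None:
        names.append(last_pick)             # put the last pick back...
        pick_name()                         # ...and draw again

pick_button = Button(17)                    # assumed pins; the real bear may differ
redo_button = Button(27)
pick_button.when_pressed = pick_name
redo_button.when_pressed = redo
pause()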

Secret Santa Babbage prototyping

Prototyping!

As the build was a bit of a rush job undertaken at the request of our ‘Director of Vibe’ Emily, there are a few things I’d like to improve about this functionality that I didn’t get around to — more on that later. To add some extra holiday spirit to the project at the last minute, I used Pygame to play a WAV file of Santa’s jolly laugh while Babbage chooses a name for you. The file is included in the GitHub repo along with everything else, because ‘tis the season, etc., etc.
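
That part really is just a few lines of Pygame; a minimal sketch (with a placeholder filename) would be:

import pygame

pygame.mixer.init()
laugh = pygame.mixer.Sound("santa_laugh.wav")   # placeholder filename
laugh.play()
while pygame.mixer.get_busy():                  # wait for the clip to finish
    pygame.time.wait(100)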

Secret Santa Babbage prototyping

Editor’s note: Considering these desk adornments, Mark’s Secret Santa gift-giver has a lot to go on.

Writing the code for Xmas Mecha-Babbage was fairly straightforward, though it uses some tricky bits for managing the thermal printer. You’ll need to install the drivers to make it go, as well as the CUPS package for managing the print hosting. You can find instructions for these things here, thanks to the wonderful Adafruit crew. Also, for reasons I couldn’t fathom, this will all only work on a Pi 2 and not a Pi 3, as there are some compatibility issues with the thermal printer otherwise. (I also tested the script on a Pi Zero W…no dice.)

Building a Christmassy throne

The hardest (well, fiddliest) parts of making the whole build were constructing the throne and wiring the bear. Using MakerCase, Inkscape, a bit of ingenuity, and a laser cutter, I was able to rig up a Christmassy plywood throne which has a hole through the seat so I could run the wires down from Babbage and to the Pi inside. I finished the throne by rubbing a couple of fingers of beeswax into it; as well as making the wood shine just a little bit and protecting it against getting wet, this had the added bonus of making it smell awesome.

Secret Santa Babbage inside

Next year’s iteration will be mulled wine–scented.

I next soldered two LEDs to some lengths of wire, and then ran the wires through holes at the top of the throne and down the back along a small channel I had carved with a narrow chisel to connect them to the Pi’s GPIO pins. The green LED will remain on as long as Babbage is running his program, and the red one will light up while he is processing your request. Once the red LED goes off again, the next person can have a go. I also laser-cut a final piece of wood to overlay the back of Babbage’s Xmas throne and cover the wiring a bit.
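
With GPIO Zero, driving those two status LEDs is about as simple as it gets; a sketch along these lines (pin numbers assumed) would do it:

from gpiozero import LED

status_led = LED(5)    # green: lit while the program is running (pin assumed)
busy_led = LED(6)      # red: lit while a request is being processed (pin assumed)

status_led.on()        # program is up

busy_led.on()          # start of a request
# ... choose a name and print the ticket here ...
busy_led.off()         # ready for the next person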

Creating a Xmas cyborg bear

Taking two 6 mm tactile buttons, I clipped the spiky metal legs off one side of each (the buttons were going into a stuffed Christmas toy, after all) and soldered a length of wire to each of the remaining legs. Next, I made a small incision into Babbage with my trusty Swiss army knife (in a place that actually made me cringe a little) and fed the buttons up into his paws. At some point in this process I was standing in the office wrestling with the bear and muttering to myself, which elicited some very strange looks from my colleagues.

Secret Santa Babbage throne

Poor Babbage…

One thing to note here is to make sure the wires remain attached at the solder points while you push them up into Babbage’s paws. The first time I tried it, I snapped one of my connections and had to start again. It helped to remove some stuffing to make a tunnel and then replace it afterwards, and to use a fingertip to support the joints as you poke the wire in. Finally, a couple of squirts of hot glue to keep Babbage’s furry cheeks firmly on the seat, and done!

Secret Santa Babbage

Next year: Game of Thrones–inspired candy cane throne

The Secret Santa Babbage masterpiece

The whole build process was the perfect holiday mix of cheerful and macabre, and while getting the thermal printer to work was a little time-consuming, the finished product definitely raised some smiles around the office and added a bit of interesting digital flavour to a staid office tradition. And it also helped people who are new to the office or from other branches of the Foundation to know for whom they will be buying a gift.

Secret Santa Babbage

Ready to dispense Christmas cheer!

There are a few ways in which I’ll polish this project before next year, such as having the script write the names to external text files to create a record that will persist in case of a reboot, and maybe having Secret Santa Babbage play you a random Christmas carol when you squeeze his paw instead of just laughing merrily every time. (I also thought about adding electric shocks for those people who are on the naughty list, but HR said no. Bah, humbug!)

Make your own

The code and laser cut plans for the whole build are available here. If you plan to make your own, let us know which stuffed toy you will be turning into a Secret Santa cyborg! And if you’ve been working on any other Christmas-themed Raspberry Pi projects, we’d like to see those too, so tag us on social media to share the festive maker cheer.

The post The Pi Towers Secret Santa Babbage appeared first on Raspberry Pi.

AWS Contributes to Milestone 1.0 Release and Adds Model Serving Capability for Apache MXNet

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-contributes-to-milestone-1-0-release-and-adds-model-serving-capability-for-apache-mxnet/

Post by Dr. Matt Wood

Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine, including the introduction of a new model-serving capability for MXNet. The new capabilities in MXNet provide the following benefits to users:

1) MXNet is easier to use: The model server for MXNet is a new capability introduced by AWS, and it packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and thus easy to integrate into applications. The 1.0 release also includes an advanced indexing capability that enables users to perform matrix operations in a more intuitive manner.

  • Model Serving enables set up of an API endpoint for prediction: It saves developers time and effort by condensing the task of setting up an API endpoint for running and integrating prediction functionality into an application to just a few lines of code. It bridges the barrier between Python-based deep learning frameworks and production systems through a Docker container-based deployment model.
  • Advanced indexing for array operations in MXNet: It is now more intuitive for developers to leverage the powerful array operations in MXNet. They can use the advanced indexing capability by leveraging existing knowledge of NumPy/SciPy arrays. For example, it now supports an MXNet NDArray or a NumPy ndarray as the index, e.g. a[mx.nd.array([1, 2], dtype='int32')] (a short runnable sketch follows this list).
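
As a rough, runnable illustration of that indexing style (the shapes and values below are chosen arbitrarily):

import mxnet as mx

a = mx.nd.arange(12).reshape((3, 4))          # a 3x4 NDArray
idx = mx.nd.array([1, 2], dtype='int32')      # rows to select
rows = a[idx]                                 # advanced indexing with an NDArray index
print(rows.asnumpy())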

2) MXNet is faster: The 1.0 release includes implementation of cutting-edge features that optimize the performance of training and inference. Gradient compression enables users to train models up to five times faster by reducing communication bandwidth between compute nodes without loss in convergence rate or accuracy. For speech recognition acoustic modeling, such as that used for the Alexa voice service, this feature can reduce network bandwidth by up to three orders of magnitude during training. With support for the NVIDIA Collective Communications Library (NCCL), users can train a model 20% faster on multi-GPU systems.

  • Optimize network bandwidth with gradient compression: In distributed training, each machine must communicate frequently with others to update the weight-vectors and thereby collectively build a single model, leading to high network traffic. The gradient compression algorithm enables users to train models up to five times faster by compressing the model changes communicated by each instance (see the short sketch after this list).
  • Optimize the training performance by taking advantage of NCCL: NCCL implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs. NCCL provides communication routines that are optimized to achieve high bandwidth over the interconnect between GPUs. MXNet supports NCCL to train models about 20% faster on multi-GPU systems.
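
For readers curious what enabling gradient compression looks like in code, a sketch along the lines below should be close. A distributed kvstore only makes sense inside a training job launched with MXNet’s distributed tooling, and the threshold value here is an assumption, so treat this purely as an illustration.

import mxnet as mx

# Create a distributed, synchronous kvstore and turn on 2-bit gradient compression
kv = mx.kv.create('dist_sync')
kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})
# ... pass kv to your trainer/module as usual ...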

3) MXNet provides easy interoperability: MXNet now includes a tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for users to take advantage of MXNet’s scalability and performance.

  • Migrate Caffe models to MXNet: It is now possible to easily migrate Caffe code to MXNet, using the new source code translation tool for converting Caffe code to MXNet code.

MXNet has helped developers and researchers make progress with everything from language translation to autonomous vehicles and behavioral biometric security. We are excited to see the broad base of users that are building production artificial intelligence applications powered by neural network models developed and trained with MXNet. For example, the autonomous driving company TuSimple recently piloted a self-driving truck on a 200-mile journey from Yuma, Arizona to San Diego, California using MXNet. This release also includes a full-featured and performance-optimized version of the Gluon programming interface. Its ease of use, combined with the extensive set of tutorials, has led to significant adoption among developers new to deep learning. The flexibility of the interface has driven interest within the research community, especially in the natural language processing domain.

Getting started with MXNet
Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.

To get started with the Model Server for Apache MXNet, install the library with the following command:

$ pip install mxnet-model-server

The Model Server library has a Model Zoo with 10 pre-trained deep learning models, including the SqueezeNet 1.1 object classification model. You can start serving the SqueezeNet model with just the following command:

$ mxnet-model-server \
  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model \
  --service dms/model_service/mxnet_vision_service.py
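
Once the server is up, predictions are a plain HTTP POST away. The sketch below uses the Python requests library; the port, path, and form-field name are assumptions rather than documented values, so check the server’s startup output for the exact prediction endpoint.

import requests

# Port, path, and field name are placeholders; confirm them in the server output.
with open('kitten.jpg', 'rb') as image:
    response = requests.post(
        'http://127.0.0.1:8080/squeezenet/predict',
        files={'input0': image},
    )
print(response.json())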

Learn more about the Model Server and view the source code, reference examples, and tutorials here: https://github.com/awslabs/mxnet-model-server/

-Dr. Matt Wood

GPIO expander: access a Pi’s GPIO pins on your PC/Mac

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/gpio-expander/

Use the GPIO pins of a Raspberry Pi Zero while running Debian Stretch on a PC or Mac with our new GPIO expander software! With this tool, you can easily access a Pi Zero’s GPIO pins from your x86 laptop without using SSH, and you can also take advantage of your x86 computer’s processing power in your physical computing projects.

A Raspberry Pi zero connected to a laptop - GPIO expander

What is this magic?

Running our x86 Stretch distribution on a PC or Mac, whether installed on the hard drive or as a live image, is a great way of taking advantage of a well controlled and simple Linux distribution without the need for a Raspberry Pi.

The downside of not using a Pi, however, is that there aren’t any GPIO pins with which your Scratch or Python programs could communicate. This is a shame, because it means you are limited in your physical computing projects.

I was thinking about this while playing around with the Pi Zero’s USB booting capabilities, having seen people employ the Linux gadget USB mode to use the Pi Zero as an Ethernet device. It struck me that, using the udev subsystem, we could create a simple GUI application that automatically pops up when you plug a Pi Zero into your computer’s USB port. Then the Pi Zero could be programmed to turn into an Ethernet-connected computer running pigpio to provide you with remote GPIO pins.

So we went ahead and built this GPIO expander application, and your PC or Mac can now have GPIO pins which are accessible through Scratch or the GPIO Zero Python library. Note that you can only use this tool to access the Pi Zero.

You can also install the application on the Raspberry Pi. Theoretically, you could connect a number of Pi Zeros to a single Pi and (without a USB hub) use a maximum of 140 pins! But I’ve not tested this — one for you, I think…

Making the GPIO expander work

If you’re using a PC or Mac and you haven’t set up x86 Debian Stretch yet, you’ll need to do that first. An easy way to do it is to download a copy of the Stretch release from this page and image it onto a USB stick. Boot from the USB stick (on most computers, you just need to press F10 during booting and select the stick when asked), and then run Stretch directly from the USB key. You can also install it to the hard drive, but be aware that installing it will overwrite anything that was on your hard drive before.

Whether on a Mac, PC, or Pi, boot through to the Stretch desktop, open a terminal window, and install the GPIO expander application:

sudo apt install usbbootgui

Next, plug in your Raspberry Pi Zero (don’t insert an SD card), and after a few seconds the GUI will appear.

A screenshot of the GPIO expander GUI

The Raspberry Pi USB programming GUI

Select GPIO expansion board and click OK. The Pi Zero will now be programmed as a locally connected Ethernet port (if you run ifconfig, you’ll see the new interface usb0 coming up).

What’s really cool about this is that your plugged-in Pi Zero is now running pigpio, which allows you to control its GPIOs through the network interface.

With Scratch 2

To utilise the pins with Scratch 2, just click on the start bar and select Programming > Scratch 2.

In Scratch, click on More Blocks, select Add an Extension, and then click Pi GPIO.

Two new blocks will be added: the first is used to set the output pin, the second is used to get the pin value (it is true if the pin is read high).

This is a simple application using a Pibrella I had hanging around:

A screenshot of a Scratch 2 program - GPIO expander

With Python

This is a Python example using the GPIO Zero library to flash an LED:

$ export GPIOZERO_PIN_FACTORY=pigpio
$ export PIGPIO_ADDR=fe80::1%usb0
$ python3
>>> from gpiozero import LED
>>> led = LED(17)
>>> led.blink()
A Raspberry Pi zero connected to a laptop - GPIO expander

The pinout command line tool is your friend

Note that in the code above the IP address of the Pi Zero is an IPv6 address and is shortened to fe80::1%usb0, where usb0 is the network interface created by the first Pi Zero.

With pigs directly

Another option you have is to use the pigpio library and the pigs application and redirect the output to the Pi Zero network port running IPv6. To do this, you’ll first need to set an environment variable for the redirection:

$ export PIGPIO_ADDR=fe80::1%usb0
$ pigs bc2 0x8000
$ pigs bs2 0x8000

With the commands above, you should be able to flash the LED on the Pi Zero.

The secret sauce

I know there’ll be some people out there who would be interested in how we put this together. And I’m sure many people are interested in the ‘buildroot’ we created to run on the Pi Zero — after all, there are lots of things you can create if you’ve got a Pi Zero on the end of a piece of IPv6 string! For a closer look, find the build scripts for the GPIO expander here and the source code for the USB boot GUI here.

And be sure to share your projects built with the GPIO expander by tagging us on social media or posting links in the comments!

The post GPIO expander: access a Pi’s GPIO pins on your PC/Mac appeared first on Raspberry Pi.

Coalition Against Piracy Wants Singapore to Block Streaming Piracy Software

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-wants-singapore-to-block-streaming-piracy-software-171204/

Earlier this year, major industry players including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE), a huge coalition set to tackle piracy on a global scale.

Shortly after the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts Disney, Fox, HBO Asia, NBCUniversal, Premier League, Turner Asia-Pacific, A&E Networks, BBC Worldwide, National Basketball Association, Viacom International, and others among its members.

In several recent reports, CAP has homed in on the piracy situation in Singapore. Describing the phenomenon as “rampant”, the group says that around 40% of locals engage in the practice, many of them through unlicensed streaming. Now CAP, in line with its anti-streaming stance, wants the government to do more – much more.

Since a large proportion of illicit streaming takes place through set-top devices, CAP’s 21 members want the authorities to block the software inside them that enables piracy, Straits Times reports.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane.

“They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

There are no precise details on CAP’s demands but it is far from clear how any government could effectively block software.

Blocking access to the software package itself would prove all but impossible, so that would leave blocking the infrastructure the software uses. While that would be relatively straightforward technically, the job would be large and fast-moving, particularly when dozens of apps and addons would need to be targeted.

However, CAP is also calling on the authorities to block pirate streams from entering Singapore. The country already has legislation in place that can be used for site-blocking, so that is not out of the question. It’s notable that the English Premier League is part of the CAP coalition and following legal action taken in the UK earlier this year, now has plenty of experience in blocking streams, particularly of live broadcasts.

While that is a game of cat-and-mouse, TorrentFreak sources that have been monitoring the Premier League’s actions over the past several months report that the soccer outfit has become more effective over time. Its blocks can still be evaded but it can be hard work for those involved. That kind of expertise could prove invaluable to CAP.

“The Premier League is currently engaged in its most comprehensive global anti-piracy programme,” a spokesperson told ST. “This includes supporting our broadcast partners in South-east Asia with their efforts to prevent the sale of illicit streaming devices.”

In common with other countries around the world, the legality of using ‘pirate’ streaming boxes is somewhat unclear in Singapore. A Bloomberg report cites a local salesman who reports sales of 10 to 20 boxes on a typical weekend, rising to 300 a day during electronic fairs. He believes the devices are legal, since they don’t download full copies of programs.

While that point is yet to be argued in court (previously an Intellectual Property Office of Singapore spokesperson said that copyright owners could potentially go after viewers), it seems unlikely that those selling the devices will be allowed to continue completely unhindered. The big question is how current legislation can be successfully applied.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Glenn’s Take on re:Invent Part 2

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-part-2/

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We’ve got a lot of exciting announcements this week. I’m going to check in to the Architecture blog with my take on what’s interesting about some of the announcements from a cloud architectural perspective. My first post can be found here.

The Media and Entertainment industry has been a rapid adopter of AWS due to the scale, reliability, and low costs of our services. This has enabled customers to create new, online, digital experiences for their viewers ranging from broadcast to streaming to Over-the-Top (OTT) services that can be a combination of live, scheduled, or ad-hoc viewing, while supporting devices ranging from high-def TVs to mobile devices. Creating an end-to-end video service requires many different components often sourced from different vendors with different licensing models, which creates a complex architecture and a complex environment to support operationally.

AWS Media Services
Based on customer feedback, we have developed AWS Media Services to help simplify distribution of video content. AWS Media Services is comprised of five individual services that can either be used together to provide an end-to-end service or individually to work within existing deployments: AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, AWS Elemental MediaStore and AWS Elemental MediaTailor. These services can help you with everything from storing content safely and durably to setting up a live-streaming event in minutes without having to be concerned about the underlying infrastructure and scalability of the stream itself.

In my role, I participate in many AWS and industry events and often work with the production and event teams that put these shows together. With all the logistical tasks they have to deal with, the biggest question is often: “Will the live stream work?” Compounding this fear is the reality that, as users, we are also quick to jump on social media and make noise when a live stream drops while we are following along remotely. Worse is when I see event organizers actively choosing not to live stream content because of the risk of failure and exposure, leading them to take the safe option and not stream at all.

With AWS Media Services addressing many of the issues around putting together a high-quality media service, live streaming, and providing access to a library of content through a variety of mechanisms, I can’t wait to see more event teams use live streaming without the concern and worry I’ve seen in the past. I am excited for what this also means for non-media companies, as video becomes an increasingly common way of sharing information and adding a more personalized touch to internally- and externally-facing content.

AWS Media Services will allow you to focus more on the content and not worry about the platform. Awesome!

Amazon Neptune
As a civilization, we have been developing new ways to record and store information and model the relationships between sets of information for more than a thousand years. Government census data, tax records, births, deaths, and marriages were all recorded on media ranging from knotted cords in the Inca civilization and clay tablets in ancient Babylon to written texts in Western Europe during the late Middle Ages.

One of the first challenges of computing was figuring out how to store and work with vast amounts of information in a programmatic way, especially as the volume of information was increasing at a faster rate than ever before. We have seen different generations of how to organize this information in some form of database, ranging from flat files to the Information Management System (IMS) used in the 1960s for the Apollo space program, to the rise of the relational database management system (RDBMS) in the 1970s. These innovations drove a lot of subsequent innovations in information management and application development as we were able to move from thousands of records to millions and billions.

Today, as architects and developers, we have a vast variety of database technologies to select from, which have different characteristics that are optimized for different use cases:

  • Relational databases are well understood after decades of use in the majority of companies who required a database to store information. Amazon Relational Database Service (Amazon RDS) supports many popular relational database engines such as MySQL, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. We have even brought the traditional RDBMS into the cloud world through Amazon Aurora, which provides MySQL and PostgreSQL support with the performance and reliability of commercial-grade databases at 1/10th the cost.
  • Non-relational databases (NoSQL) provided a simpler method of storing and retrieving information that was often faster and more scalable than traditional RDBMS technology. The concept of non-relational databases has existed since the 1960s but really took off in the early 2000s with the rise of web-based applications that required performance and scalability that relational databases struggled with at the time. AWS published this Dynamo whitepaper in 2007, with DynamoDB launching as a service in 2012. DynamoDB has quickly become one of the critical design elements for many of our customers who are building highly-scalable applications on AWS. We continue to innovate with DynamoDB, and this week launched global tables and on-demand backup at re:Invent 2017. DynamoDB excels in a variety of use cases, such as tracking of session information for popular websites, shopping cart information on e-commerce sites, and keeping track of gamers’ high scores in mobile gaming applications, for example.
  • Graph databases focus on the relationship between data items in the store. With a graph database, we work with nodes, edges, and properties to represent data, relationships, and information. Graph databases are designed to make it easy and fast to traverse and retrieve complex hierarchical data models. Graph databases share some concepts from the NoSQL family of databases such as key-value pairs (properties) and the use of a non-SQL query language such as Gremlin. Graph databases are commonly used for social networking, recommendation engines, fraud detection, and knowledge graphs. We released Amazon Neptune to help simplify the provisioning and management of graph databases as we believe that graph databases are going to enable the next generation of smart applications.
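
To make the last item a little more concrete, here is a minimal traversal sketch using the gremlinpython client. The endpoint URL is a placeholder and the tiny property graph is invented purely for illustration; any Gremlin-compatible endpoint, such as a Neptune cluster, would work the same way.

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')  # placeholder
g = traversal().withRemote(conn)

g.addV('person').property('name', 'alice').next()     # add two nodes
g.addV('person').property('name', 'bob').next()
g.V().has('name', 'alice').addE('knows').to(          # add a relationship (edge)
    __.V().has('name', 'bob')).next()

# Traverse the relationship: who does alice know?
print(g.V().has('name', 'alice').out('knows').values('name').toList())
conn.close()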

A common use case I am hearing every week as I talk to customers is how to incorporate chatbots within their organizations. Amazon Lex and Amazon Polly have made it easy for customers to experiment and build chatbots for a wide range of scenarios, but one of the missing pieces of the puzzle was how to model decision trees and knowledge graphs so the chatbot could guide the conversation in an intelligent manner.

Graph databases are ideal for this particular use case, and having Amazon Neptune simplifies the deployment of a graph database while providing high performance, scalability, availability, and durability as a managed service. Security of your graph database is critical. To help ensure this, you can run Amazon Neptune within your Amazon Virtual Private Cloud (Amazon VPC) and encrypt your data at rest using encryption integrated with AWS Key Management Service (AWS KMS). Neptune also supports AWS Identity and Access Management (AWS IAM) to help further protect and restrict access.

Our customers now have the choice of many different database technologies to ensure that they can optimize each application and service for their specific needs. Just as DynamoDB has unlocked and enabled many new workloads that weren’t possible in relational databases, I can’t wait to see what new innovations and capabilities are enabled from graph databases as they become easier to use through Amazon Neptune.

Look for more on DynamoDB and Amazon S3 from me on Monday.

 

Glenn at Tour de Mont Blanc

Seven Years of Hadopi: Nine Million Piracy Warnings, 189 Convictions

Post Syndicated from Andy original https://torrentfreak.com/seven-years-of-hadopi-nine-million-piracy-warnings-189-convictions-171201/

More than seven years ago, it was predicted that the next big thing in anti-piracy enforcement would be the graduated response scheme.

Commonly known as “three strikes” or variants thereof, these schemes were promoted as educational in nature, with alleged pirates receiving escalating warnings designed to discourage further infringing behavior.

In the fall of 2010, France became one of the pioneers of the warning system, and now, more than seven years later, a new report from the country’s ‘Hadopi’ anti-piracy agency has revealed the extent of its operations.

Between July 2016 and June 2017, Hadopi sent a total of 889 cases to court, a 30% uplift on the 684 cases handed over during the same period 2015/2016. This boost is notable, not least since the use of peer-to-peer protocols (such as BitTorrent, which Hadopi closely monitors) is declining in favor of streaming methods.

When all the seven years of the scheme are added together ending August 31, 2017, the numbers are even more significant.

“Since the launch of the graduated response scheme, more than 2,000 cases have been sent to prosecutors for possible prosecution,” Hadopi’s report reads.

“The number of cases sent to the prosecutor’s office has increased every year, with a significant increase in the last two years. Three-quarters of all the cases sent to prosecutors have been sent since July 2015.”

In all, the Hadopi agency has sent more than nine million first warning notices to alleged pirates since 2012, with more than 800,000 follow-up warnings on top, 200,000 of them during 2016-2017. But perhaps of most interest is the number of French citizens who, despite all the warnings, carried on with their pirating behavior and ended up prosecuted as a result.

Since the program’s inception, 583 court decisions have been handed down against pirates. While 394 of them resulted in a small fine, a caution, or other community-based punishment, 189 citizens walked away with a criminal conviction.

These can include fines of up to 1,500 euros or in more extreme cases, up to three years in prison and/or a 300,000 euro fine.

While this approach looks set to continue into 2018, Hadopi’s report highlights the need to adapt to a changing piracy landscape, one which requires a multi-faceted approach. In addition to tracking pirates, Hadopi also has a mission to promote legal offerings while educating the public. However, it is fully aware that these strategies alone won’t be enough.

To that end, the agency is calling for broader action, such as faster blocking of sites, expanding to the blocking of mirror sites, tackling unauthorized streaming platforms and, of course, dealing with the “fully-loaded” set-top box phenomenon that’s been sweeping the world for the past two years.

The full report can be downloaded here (pdf, French)


Glenn’s Take on re:Invent 2017 Part 1

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-2017-part-1/

GREETINGS FROM LAS VEGAS

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We have a lot of exciting announcements this week. I’m going to post to the AWS Architecture blog each day with my take on what’s interesting about some of the announcements from a cloud architectural perspective.

Why not start at the beginning? At the Midnight Madness launch on Sunday night, we announced Amazon Sumerian, our platform for VR, AR, and mixed reality. The hype around VR/AR has existed for many years, though for me, it is a perfect example of how a working end-to-end solution often requires innovation from multiple sources. For AR/VR to be successful, we need many components to come together in a coherent manner to provide a great experience.

First, we need lightweight, high-definition goggles with motion tracking that are comfortable to wear. Second, we need to track movement of our body and hands in a 3-D space so that we can interact with virtual objects in the virtual world. Third, we need to build the virtual world itself and populate it with assets and define how the interactions will work and connect with various other systems.

There has been rapid development of the physical devices for AR/VR, ranging from iOS devices to Oculus Rift and HTC Vive, which provide excellent capabilities for the first and second components defined above. With the launch of Amazon Sumerian we are solving for the third area, which will help developers easily build their own virtual worlds and start experimenting and innovating with how to apply AR/VR in new ways.

Already, within 48 hours of Amazon Sumerian being announced, I have had multiple discussions with customers and partners around some cool use cases: training simulations, remote-operator controls, and new ideas for interacting with complex visual data sets. These start bringing concepts straight out of sci-fi movies into the real (virtual) world. I am really excited to see how Sumerian will unlock the creative potential of developers and where this will lead.

Amazon MQ
I am a huge fan of distributed architectures where asynchronous messaging is the backbone of connecting the discrete components together. Amazon Simple Queue Service (Amazon SQS) is one of my favorite services due to its simplicity, scalability, performance, and the incredible flexibility of how you can use Amazon SQS in so many different ways to solve complex queuing scenarios.
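To make that concrete, here is a minimal boto3 sketch of the basic SQS cycle (the queue name and message body are made up for illustration): send a message, long-poll for work, and delete each message once it has been processed.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # placeholder queue

# Producer: enqueue a unit of work.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: long-poll for up to 20 seconds, process, then delete.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])

for message in messages:
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```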

While Amazon SQS is easy to use when building cloud-native applications on AWS, many of our customers running existing applications on-premises require support for different messaging protocols such as Java Message Service (JMS), .NET Messaging Service (NMS), Advanced Message Queuing Protocol (AMQP), MQ Telemetry Transport (MQTT), Simple (or Streaming) Text Oriented Messaging Protocol (STOMP), and WebSockets. One of the most popular message brokers for on-premises applications is Apache ActiveMQ. With the release of Amazon MQ, you can now run Apache ActiveMQ on AWS as a managed service, similar to what we did with Amazon ElastiCache back in 2012. For me, there are two compelling, major benefits that Amazon MQ provides:

  • Integrate existing applications with cloud-native applications without having to change a line of application code, provided they use one of the supported messaging protocols. This removes one of the biggest blockers for integration between the old and the new (see the sketch after this list).
  • Remove the complexity of configuring Multi-AZ resilient message broker services, as Amazon MQ provides out-of-the-box redundancy by always storing messages redundantly across Availability Zones. Protection is provided against anything from the failure of a single broker through to the complete failure of an Availability Zone.
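As a rough sketch of that first benefit (the broker endpoint, credentials, and queue name below are placeholders, and the stomp.py client library stands in for whatever STOMP client your application already uses), pointing an existing STOMP producer at an Amazon MQ broker is largely a configuration change:

```python
import stomp  # stomp.py client library

# Placeholders: use the STOMP+SSL endpoint shown for your broker in the
# Amazon MQ console, along with the broker user you configured.
BROKER_HOST = "b-EXAMPLE-1.mq.us-east-1.amazonaws.com"
BROKER_PORT = 61614  # STOMP+SSL listener on Amazon MQ's ActiveMQ brokers

conn = stomp.Connection(host_and_ports=[(BROKER_HOST, BROKER_PORT)])
conn.set_ssl(for_hosts=[(BROKER_HOST, BROKER_PORT)])  # brokers require TLS
conn.connect("broker-user", "broker-password", wait=True)

# The producing code itself is unchanged from a self-managed ActiveMQ setup.
conn.send(destination="/queue/orders", body='{"order_id": 42}')
conn.disconnect()
```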

I believe that Amazon MQ is a major component in the tools required to help you migrate your existing applications to AWS. Having set up cross-data-center Apache ActiveMQ clusters myself in the past, and then tested them to make sure they behave as expected during critical failure scenarios, I know how much technical staff working on migrations to AWS will benefit from being able to deploy a fully redundant, managed Apache ActiveMQ cluster within minutes.

Who would have thought I would have been so excited to revisit Apache ActiveMQ in 2017 after using SQS for many, many years? Choice is a wonderful thing.

Amazon GuardDuty
Maintaining application and information security in the modern world is increasingly complex and constantly evolving as new threats emerge. This is due to the scale, variety, and distribution of services required in a competitive online world.

At Amazon, security is our number one priority. Thus, we are always looking at how we can increase security detection and protection while simplifying the implementation of advanced security practices for our customers. As a result, we released Amazon GuardDuty, which provides intelligent threat detection by using a combination of multiple information sources, transactional telemetry, and machine learning models developed by AWS. One of the benefits of Amazon GuardDuty that I appreciate most is that enabling the service requires zero software, agents, sensors, or network choke points, all of which can impact the performance or reliability of the service you are trying to protect. Amazon GuardDuty works by monitoring your VPC Flow Logs, AWS CloudTrail events, and DNS logs, as well as combining other sources of threat intelligence that AWS aggregates from our own internal and external sources.

The use of machine learning in Amazon GuardDuty allows it to identify changes in behavior that could be suspicious and require additional investigation. Amazon GuardDuty works across all of your AWS accounts, allowing for aggregated analysis and centralized management of detected threats across accounts. This is important for our larger customers, who can be running many hundreds of AWS accounts across their organization, as a single, common view of threat detection across their organizational use of AWS is critical to ensuring they are protecting themselves.

Detection, though, is only the beginning of what Amazon GuardDuty enables. When a threat is identified, you can configure remediation scripts or trigger Lambda functions with custom logic, letting you build automated responses to a variety of common threats. Speed of response matters when a security incident may be taking place. For example, Amazon GuardDuty might detect that an Amazon Elastic Compute Cloud (Amazon EC2) instance is potentially compromised due to traffic from a known set of malicious IP addresses. Upon detection of the compromised EC2 instance, we could apply an access control entry restricting outbound traffic for that instance, which stops the loss of data until a security engineer can assess what has occurred.
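Here is a rough sketch of that pattern, and only a sketch: the event wiring (a CloudWatch Events rule matching GuardDuty findings), the quarantine security group, and the fields read from the event are my own assumptions for illustration. Instead of an access control entry, this variant isolates the instance by swapping its security groups for an empty quarantine group, which has a similar effect of cutting off traffic.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: a pre-created security group with no inbound or outbound rules,
# used only to quarantine suspect instances.
QUARANTINE_SG = "sg-0aaaabbbbccccdddd"

def handler(event, context):
    """Lambda handler invoked by a CloudWatch Events rule for GuardDuty findings."""
    finding = event.get("detail", {})
    instance_id = (
        finding.get("resource", {}).get("instanceDetails", {}).get("instanceId")
    )

    # Only act on findings that actually reference an EC2 instance.
    if not instance_id:
        return {"action": "ignored", "reason": "no EC2 instance in finding"}

    # Replace the instance's security groups with the quarantine group so no
    # new connections are allowed until a security engineer investigates.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    return {
        "action": "quarantined",
        "instanceId": instance_id,
        "findingType": finding.get("type"),
    }
```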

Whether you are a customer running a single service in a single account, a global customer with hundreds of accounts and thousands of applications, or a startup with hundreds of microservices and hourly release cycles in a DevOps world, I recommend enabling Amazon GuardDuty. We have a 30-day free trial available for all new customers of this service. Because it only monitors events, no change is required to your architecture within AWS.
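Enabling it is a single API call per account and Region. A minimal boto3 sketch of that call (nothing here is specific to any one environment):

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Turn on GuardDuty for this account and Region: no agents to install and no
# changes to the workloads being monitored.
detector = guardduty.create_detector(Enable=True)
print("GuardDuty detector ID:", detector["DetectorId"])
```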

Stay tuned for tomorrow’s post on AWS Media Services and Amazon Neptune.

 

Glenn during the Tour du Mont Blanc

Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-amazon-freertos/

I was recently reading an article on ReadWrite.com titled “IoT devices go forth and multiply, to increase 200% by 2021”, and while the article noted the benefits of this growth for consumers and the industry, two things in it stuck with me. The first was the specific statement that read “researchers warned that the proliferation of IoT technology will create a new bevvy of challenges. Particularly troublesome will be IoT deployments at scale for both end-users and providers.” Not only was that sentence a mouthful, but it really addressed some of the challenges that come with building and deploying solutions in this exciting new technology area. The second sentiment in the article that stayed with me was that security issues could grow.

So the article got me thinking: how can we create these cool IoT solutions using low-cost, efficient microcontrollers with a secure operating system that can easily connect to the cloud? Luckily, the answer came to me by way of an exciting new open-source-based offering from AWS that I am happy to announce to you all today. Let’s all welcome Amazon FreeRTOS to the technology stage.

Amazon FreeRTOS is an IoT microcontroller operating system that simplifies development, security, deployment, and maintenance of microcontroller-based edge devices. Amazon FreeRTOS extends the FreeRTOS kernel, a popular real-time operating system, with libraries that enable local and cloud connectivity, security, and (coming soon) over-the-air updates.

So what are some of the great benefits of this exciting new offering, you ask? They are as follows:

  • Easily Create Solutions for Low-Power Connected Devices: provides a common operating system (OS) and libraries that make the development of common IoT capabilities easy for devices, for example over-the-air (OTA) updates (coming soon) and device configuration.
  • Secure Data and Device Connections: devices only run trusted software signed using the Code Signing service; Amazon FreeRTOS provides a secure connection to AWS using TLS, as well as the ability to securely store keys and sensitive data on the device.
  • Extensive Ecosystem: an extensive hardware and technology ecosystem that allows you to choose from a variety of qualified chipsets, including those from Texas Instruments, Microchip, NXP Semiconductors, and STMicroelectronics.
  • Cloud or Local Connections: devices can connect directly to the AWS Cloud or locally via AWS Greengrass.

 

What’s cool is that it is easy to get started. 

The Amazon FreeRTOS console allows you to select and download the software that you need for your solution.

There is a Qualification Program that helps assure you that Amazon FreeRTOS will run consistently on the qualified microcontroller hardware you choose.

Finally, the Amazon FreeRTOS kernel is an open-source FreeRTOS operating system that is freely available for download on GitHub.

But I couldn’t leave you without at least showing you a few snapshots of the Amazon FreeRTOS Console.

Within the Amazon FreeRTOS Console, I can select a predefined software configuration that I would like to use.

If I want a more customized software configuration, Amazon FreeRTOS allows me to tailor the solution to my use case by adding or removing libraries.

Summary

Thanks for checking out the new Amazon FreeRTOS offering. To learn more, go to the Amazon FreeRTOS product page or review the information provided about this exciting IoT device-targeted operating system in the AWS documentation.

Can’t wait to see what great new IoT systems will be enabled and created with it! Happy Coding.

Tara