Tag Archives: mvc

Migrating .NET Classic Applications to Amazon ECS Using Windows Containers

Post Syndicated from Sundar Narasiman original https://aws.amazon.com/blogs/compute/migrating-net-classic-applications-to-amazon-ecs-using-windows-containers/

This post contributed by Sundar Narasiman, Arun Kannan, and Thomas Fuller.

AWS recently announced the general availability of Windows container management for Amazon Elastic Container Service (Amazon ECS). Docker containers and Amazon ECS make it easy to run and scale applications on a virtual machine by abstracting the complex cluster management and setup needed.

Classic .NET applications are developed with .NET Framework 4.7.1 or older and can run only on a Windows platform. These include Windows Communication Foundation (WCF) services, ASP.NET Web Forms, and ASP.NET MVC web apps or web APIs.

Why classic ASP.NET?

ASP.NET MVC 4.6 and older versions of ASP.NET occupy a significant footprint in the enterprise web application space. As enterprises move towards microservices for new or existing applications, containers are one of the stepping stones for migrating from monolithic to microservices architectures. Additionally, support for Windows containers in Windows 10 and Windows Server 2016, along with the Visual Studio tooling for Docker, simplifies the containerization of ASP.NET MVC apps.

Getting started

In this post, you pick an ASP.NET 4.6.2 MVC application and get step-by-step instructions for migrating to ECS using Windows containers. The detailed steps, AWS CloudFormation template, Microsoft Visual Studio solution, ECS service definition, and ECS task definition are available in the aws-ecs-windows-aspnet GitHub repository.

To help you get started running Windows containers, here is the reference architecture for Windows containers on GitHub: ecs-refarch-cloudformation-windows. This reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. The CloudFormation YAML templates in this reference architecture were used as the basis for the single JSON CloudFormation stack that is used in the migration steps below.

Steps for Migration

The code and templates to implement this migration can be found on GitHub: https://github.com/aws-samples/aws-ecs-windows-aspnet.

  1. Make sure your development environment has the latest version and updates for Visual Studio 2017, Windows 10, and Docker for Windows Stable.
  2. Next, containerize the ASP.NET application and test it locally. Windows container images are generally larger than Linux container images, because the Windows base image itself is large, typically greater than 9 GB.
  3. After the application is containerized, push the container image to Amazon Elastic Container Registry (Amazon ECR). Images stored in ECR are compressed to improve pull times and reduce storage costs. In this case, ECR compresses the image to around 1 GB, roughly a 90% reduction.
  4. Create a CloudFormation stack using the template in the ‘CloudFormation template’ folder. This creates an ECS service, a task definition (referring to the containerized ASP.NET application), and the other related components mentioned in the ECS reference architecture for Windows containers.
  5. After the stack is created, verify the successful creation of the ECS service, the ECS instances, the running tasks (with the threshold mentioned in the task definition), and the Application Load Balancer’s successful health checks against the running containers.
  6. Navigate to the Application Load Balancer URL and confirm that the containerized ASP.NET MVC app renders successfully in the browser.

Key Notes

  • Windows container images generally occupy a large amount of space (on the order of a few GB).
  • Not all task definition parameters available for Linux containers are available for Windows containers. For more information, see Windows Task Definitions.
  • An Application Load Balancer can be configured to route requests to one or more ports on each container instance in a cluster. The dynamic port mapping allows you to have multiple tasks from a single service on the same container instance.
  • IAM roles for Windows tasks require extra configuration. For more information, see Windows IAM Roles for Tasks. For this post, configuration was handled by the CloudFormation template.
  • The ECS container agent log file can be accessed for troubleshooting Windows containers: C:\ProgramData\Amazon\ECS\log\ecs-agent.log

Summary

In this post, you migrated an ASP.NET MVC application to ECS using Windows containers.

The logical next step is to automate the activities for migration to ECS and build a fully automated continuous integration/continuous deployment (CI/CD) pipeline for Windows containers. This can be orchestrated by leveraging services such as AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Amazon ECR, and Amazon ECS. You can learn more about how this is done in the Set Up a Continuous Delivery Pipeline for Containers Using AWS CodePipeline and Amazon ECS post.

If you have questions or suggestions, please comment below.

Haas: MVCC and VACUUM

Post Syndicated from corbet original https://lwn.net/Articles/741842/rss

Robert Haas gets into the details of how PostgreSQL concurrency works and why an occasional VACUUM is necessary. “The second approach to providing transactions with atomicity and isolation is multi-version concurrency control (MVCC). The basic idea is simple: instead of locking a row that we want to update, let’s just create a new version of it which, initially, is visible only to the transaction which created it. Once the updating transaction commits, we’ll make the new row visible to all new transactions that start after that point, while existing transactions continue to see the old row.”

Enabling Two-Factor Authentication For Your Web Application

Post Syndicated from Bozho original https://techblog.bozho.net/enabling-two-factor-authentication-web-application/

It’s almost always a good idea to support two-factor authentication (2FA), especially for back-office systems. 2FA comes in many different forms, including SMS codes, TOTP (time-based one-time passwords), and hardware tokens.

Enabling them requires a similar flow:

  • The user goes to their profile page (skip this if you want to force 2FA upon registration)
  • Clicks “Enable two-factor authentication”
  • Enters some data to enable the particular 2FA method (phone number, TOTP verification code, etc.)
  • The next time they log in, in addition to the username and password, the login form requests the 2nd factor (verification code) and sends it along with the credentials

I will focus on Google Authenticator, which uses TOTP (time-based one-time passwords) to generate a sequence of verification codes. The idea is that the server and the client application share a secret key. Based on that key and on the current time, both come up with the same code. Of course, clocks are not perfectly synced, so there’s a window of a few codes that the server accepts as valid.
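
To make that concrete, here is a minimal sketch of the TOTP computation itself, per RFC 6238. The GoogleAuth library used in the code below implements all of this for you (including the Base32 encoding of the secret); the 30-second step, 6-digit codes, and ±1-step drift window here are common defaults, not values taken from the library’s configuration:

import java.nio.ByteBuffer;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp {

    // Generates the 6-digit TOTP code for a given secret and Unix time (RFC 6238)
    public static int code(byte[] secret, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30; // 30-second time step
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0F; // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7F) << 24)
                | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8)
                | (hash[offset + 3] & 0xFF);
        return binary % 1_000_000; // keep the last 6 digits
    }

    // The server accepts codes from adjacent time steps to tolerate clock drift
    public static boolean verify(byte[] secret, int submittedCode, long epochSeconds) throws Exception {
        for (int drift = -1; drift <= 1; drift++) {
            if (code(secret, epochSeconds + drift * 30L) == submittedCode) {
                return true;
            }
        }
        return false;
    }
}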

How to implement that with Java (on the server)? Using the GoogleAuth library. The flow is as follows:

  • The user goes to their profile page
  • Clicks “Enable two-factor authentication”
  • The server generates a secret key, stores it as part of the user profile and returns a URL to a QR code
  • The user scans the QR code with their Google Authenticator app thus creating a new profile in the app
  • The user enters the verification code shown in the app in a field that has appeared together with the QR code, and clicks “confirm”
  • The server marks 2FA as enabled in the user profile
  • If the user doesn’t scan the code or doesn’t complete the verification, the user profile will contain just an orphaned secret key, but 2FA won’t be marked as enabled
  • There should be an option to later disable 2FA from the user profile page

The most important bit from a theoretical point of view here is the sharing of the secret key. The crypto is symmetric, so both sides (the authenticator app and the server) have the same key. It is shared via a QR code that the user scans. If an attacker has control of the user’s machine at that point, the secret can be leaked and the 2FA can thus be abused by the attacker as well. But that’s not in the threat model – in other words, if the attacker has access to the user’s machine, the damage is already done anyway.

Upon login, the flow is as follows:

  • The user enters username and password and clicks “Login”
  • Using an AJAX request the page asks the server whether this email has 2FA enabled
  • If 2FA is not enabled, just submit the username & password form
  • If 2FA is enabled, the login form is not submitted, but instead an additional field is shown to let the user input the verification code from the authenticator app
  • After the user enters the code and presses login, the form can be submitted – either using the same login button, or a new “verify” button, or the verification input + button could be an entirely new screen (hiding the username/password inputs)
  • The server then checks again whether the user has 2FA enabled and, if so, verifies the verification code. If it matches, login is successful. If not, login fails and the user is allowed to re-enter the credentials and the verification code. Note that you can return different responses depending on whether the username/password or the verification code is wrong. You can also verify the credentials before even showing the verification code input – that is arguably better, because it doesn’t reveal to a potential attacker that the user uses 2FA.

While I’m speaking of username and password, this can apply to any other authentication method. After you get a success confirmation from an OAuth / OpenID Connect / SAML provider, or after you get a token from SecureLogin, you can request the second factor (code).

In code, the above processes look as follows (using Spring MVC; I’ve merged the controller and service layers for brevity – you can replace the @AuthenticationPrincipal bit with your own way of supplying the currently logged-in user details to the controllers). The methods are assumed to live in a controller mapped to “/user/”:

@RequestMapping(value = "/init2fa", method = RequestMethod.POST)
@ResponseBody
public String initTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    GoogleAuthenticatorKey googleAuthenticatorKey = googleAuthenticator.createCredentials();
    user.setTwoFactorAuthKey(googleAuthenticatorKey.getKey());
    dao.update(user);
    return GoogleAuthenticatorQRGenerator.getOtpAuthURL(GOOGLE_AUTH_ISSUER, user.getEmail(), googleAuthenticatorKey);
}

@RequestMapping(value = "/confirm2fa", method = RequestMethod.POST)
@ResponseBody
public boolean confirmTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token, @RequestParam("code") int code) {
    User user = getLoggedInUser(token);
    boolean result = googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code);
    user.setTwoFactorAuthEnabled(result);
    dao.update(user);
    return result;
}

@RequestMapping(value = "/disable2fa", method = RequestMethod.GET)
@ResponseBody
public void disableTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    user.setTwoFactorAuthKey(null);
    user.setTwoFactorAuthEnabled(false);
    dao.update(user);
}

@RequestMapping(value = "/requires2fa", method = RequestMethod.POST)
@ResponseBody
public boolean login(@RequestParam("email") String email) {
    // TODO consider verifying the password here in order not to reveal that a given user uses 2FA
    return userService.getUserDetailsByEmail(email).isTwoFactorAuthEnabled();
}

On the client side it’s simple AJAX requests to the above methods (sidenote: I kind of feel the term AJAX is no longer trendy, but I don’t know what else to call them. Async? Background? JavaScript?).

$("#two-fa-init").click(function() {
    $.post("/user/init2fa", function(qrImage) {
	$("#two-fa-verification").show();
	$("#two-fa-qr").prepend($('<img>',{id:'qr',src:qrImage}));
	$("#two-fa-init").hide();
    });
});

$("#two-fa-confirm").click(function() {
    var verificationCode = $("#verificationCode").val().replace(/ /g,'')
    $.post("/user/confirm2fa?code=" + verificationCode, function() {
       $("#two-fa-verification").hide();
       $("#two-fa-qr").hide();
       $.notify("Successfully enabled two-factor authentication", "success");
       $("#two-fa-message").html("Successfully enabled");
    });
});

$("#two-fa-disable").click(function() {
    $.post("/user/disable2fa", function(qrImage) {
       window.location.reload();
    });
});

The login form code depends very much on the existing login form you are using, but the point is to call /requires2fa with the email (and password) to check whether 2FA is enabled, and then show a verification code input.
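
For reference, the server-side part of the login step might look roughly like the following sketch. Here userService.authenticate and completeLogin are hypothetical placeholders for your existing credential check and session/cookie setup; googleAuthenticator.authorize is the same GoogleAuth call used in confirmTwoFactorAuth above:

@RequestMapping(value = "/login", method = RequestMethod.POST)
@ResponseBody
public String login(@RequestParam("email") String email,
        @RequestParam("password") String password,
        @RequestParam(value = "code", required = false) Integer code,
        HttpServletResponse response) {
    User user = userService.authenticate(email, password); // hypothetical credential check
    if (user == null) {
        return "failure";
    }
    // if 2FA is enabled, a missing or invalid verification code fails the login
    if (user.isTwoFactorAuthEnabled()
            && (code == null || !googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code))) {
        return "failure";
    }
    completeLogin(response, user); // hypothetical: set the auth cookies / security context
    return "ok";
}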

Overall, the implementation of two-factor authentication is simple, and I’d recommend it for most systems where security is more important than simplicity of the user experience.

The post Enabling Two-Factor Authentication For Your Web Application appeared first on Bozho's tech blog.

timeShift(GrafanaBuzz, 1w) Issue 16

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/06/timeshiftgrafanabuzz-1w-issue-16/

Welcome to another issue of TimeShift. In addition to the roundup of articles and plugin updates, we had a big announcement this week – Early Bird tickets to GrafanaCon EU are now available! We’re also accepting CFPs through the end of October, so if you have a topic in mind, don’t wait until the last minute, please send it our way. Speakers who are selected will receive a comped ticket to the conference.


Early Bird Tickets Now Available

We’ve released a limited number of Early Bird tickets before General Admission tickets are available. Take advantage of this discount before they’re sold out!

Get Your Early Bird Ticket Now

Interested in speaking at GrafanaCon? We’re looking for technical and non-technical talks of all sizes. Submit a CFP Now.


From the Blogosphere

Get insights into your Azure Cosmos DB: partition heatmaps, OMS, and More: Microsoft recently announced the ability to access a subset of Azure Cosmos DB metrics via Azure Monitor API. Grafana Labs built an Azure Monitor Plugin for Grafana 4.5 to visualize the data.

How to monitor Docker for Mac/Windows: Brian was tired of guessing about the performance of his development machines and test environment. Here, he shows how to monitor Docker with Prometheus to get a better understanding of a dev environment in his quest to monitor all the things.

Prometheus and Grafana to Monitor 10,000 servers: This article covers enokido’s process of choosing a monitoring platform. He identifies three possible solutions, outlines the pros and cons of each, and discusses why he chose Prometheus.

GitLab Monitoring: It’s fascinating to see Grafana dashboards with production data from companies around the world. For instance, we’ve previously highlighted the huge number of dashboards Wikimedia publicly shares. This week, we found that GitLab also has public dashboards to explore.

Monitoring a Docker Swarm Cluster with cAdvisor, InfluxDB and Grafana | The Laboratory: It’s important to know the state of your applications in a scalable environment such as Docker Swarm. This video covers an overview of Docker, VM’s vs. containers, orchestration and how to monitor Docker Swarm.

Introducing Telemetry: Actionable Time Series Data from Counters: Learn how to use counters from multiple disparate sources, devices, operating systems, and applications to generate actionable time series data.

ofp_sniffer Branch 1.2 (docker/influxdb/grafana) Upcoming Features: This video demo shows off some of the upcoming features for OFP_Sniffer, an OpenFlow sniffer to help network troubleshooting in production networks.


Grafana Plugins

Plugin authors add new features and bugfixes all the time, so it’s important to always keep your plugins up to date. To update plugins from on-prem Grafana, use the Grafana-cli tool; if you are using Hosted Grafana, you can update with 1 click! If you have questions or need help, hit up our community site, where the Grafana team and members of the community are happy to help.

UPDATED PLUGIN

PNP for Nagios Data Source – The latest release for the PNP data source has some fixes and adds a mathematical factor option.

Update

UPDATED PLUGIN

Google Calendar Data Source – This week, there was a small bug fix for the Google Calendar annotations data source.

Update

UPDATED PLUGIN

BT Plugins – Our friends at BT have been busy. All of the BT plugins in our catalog received an update this week. The plugins are the Status Dot Panel, the Peak Report Panel, the Trend Box Panel and the Alarm Box Panel.

Changes include:

  • Custom dashboard links now work in Internet Explorer.
  • The Peak Report panel no longer supports click-to-sort.
  • The Status Dot panel tooltips now look like Grafana tooltips.


This week’s MVC (Most Valuable Contributor)

Each week we highlight some of the important contributions from our amazing open source community. This week, we’d like to recognize a contributor who did a lot of work to improve Prometheus support.

Thanks to Alin Sinpalean for his Prometheus PR – it aligns the step and interval parameters. Alin got a lot of feedback from the Prometheus community and spent a lot of time and energy explaining, debating and iterating before the PR was ready.
Thank you!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Wow – Excited to be a part of exploring data to find out how Mexico City is evolving.

We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

Tell Me More


What do you think?

That’s a wrap! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 14

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/22/timeshiftgrafanabuzz-1w-issue-14/

Summer is officially in the rear-view mirror, but we at Grafana Labs are excited. Next week, the team will gather in Stockholm, Sweden, where we’ll be discussing Grafana 5.0, GrafanaCon EU, and setting other goals. If you’re attending Percona Live Europe 2017 in Dublin, be sure to catch Grafana developer Daniel Lee on Tuesday, September 26. He’ll be showing off the new MySQL data source and a sneak peek of Grafana 5.0.

And with that – we hope you enjoy this issue of TimeShift!


Latest Release

Grafana 4.5.2 is now available! Various fixes to the Graphite data source, HTTP API, and templating.

To see details on what’s been fixed in the newest version, please see the release notes.

Download Grafana 4.5.2 Now


From the Blogosphere

A Monitoring Solution for Docker Hosts, Containers and Containerized Services: Stefan was searching for an open source, self-hosted monitoring solution. With an ever-growing number of open source TSDBs, Stefan outlines why he chose Prometheus and provides a rundown of how he’s monitoring his Docker hosts, containers and services.

Real-time API Performance Monitoring with ES, Beats, Logstash and Grafana: As APIs become a centerpiece for businesses, monitoring API performance is extremely important. Hiren recently configured real time API response time monitoring for a project and shares his implementation plan and configurations.

Monitoring SSL Certificate Expiry in GCP and Kubernetes: This article discusses how to use Prometheus and Grafana to automatically monitor SSL certificates in use by load balancers across GCP projects.

Node.js Performance Monitoring with Prometheus: This is a good primer for monitoring in general. It discusses what monitoring is, important signals to know, instrumentation, and things to consider when selecting a monitoring tool.

DIY Dashboard with Grafana and MariaDB: Mark was interested in testing out the new beta MySQL support in Grafana, so he wrote a short article on how he is using Grafana with MariaDB.

Collecting Temperature Data with Raspberry Pi Computers: Many of us use monitoring for tracking mission-critical systems, but setting up environment monitoring can be a fun way to improve your programming skills as well.


GrafanaCon EU CFP is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes. The proposals are rolling in, but we are happy to save a speaking slot for you!

I’d Like to Speak at GrafanaCon


Grafana Plugins

There were a lot of plugin updates to highlight this week, many of which were due to changes in Grafana 4.5. It’s important to keep your plugins up to date, since bug fixes and new features are added frequently. We’ve made the process of installing and updating plugins simple. On an on-prem instance, use the Grafana-cli, or on Hosted Grafana, install and update with 1-click.

NEW PLUGIN

Linksmart HDS Data Source – The LinkSmart Historical Data Store is a new Grafana data source plugin. LinkSmart is an open source IoT platform for developing IoT applications. IoT applications need to deal with large amounts of data produced by a growing number of sensors and other devices. The Historical Datastore is for storing, querying, and aggregating (time-series) sensor data.

Install Now

UPDATED PLUGIN

Simple JSON Data Source – This plugin received a bug fix for the query editor.

Update Now

UPDATED PLUGIN

Stagemonitor Elasticsearch App – Numerous small updates, and the plugin version was updated to match the Stagemonitor version number.

Update Now

UPDATED PLUGIN

Discrete Panel – Update to fix breaking change in Grafana 4.5.

Update Now

UPDATED PLUGIN

Status Dot Panel – Minor HTML Update in this version.

Update Now

UPDATED PLUGIN

Alarm Box Panel – This panel was updated to fix breaking changes in Grafana 4.5.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Sven Klemm opened a PR for adding a new Postgres data source and has been very quick at implementing proposed changes. The Postgres data source is on our roadmap for Grafana 5.0 so this PR really helps. Thanks Sven!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Glad you’re finding Grafana useful! Curious about that annotation just before midnight 🙂

We Need Your Help

Last week we announced an experiment we were conducting, and need your help! Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

I Want to Help


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

What would you like to see here? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

SecureLogin For Java Web Applications

Post Syndicated from Bozho original https://techblog.bozho.net/securelogin-java-web-applications/

No, there is not a missing whitespace in the title. It’s not about just any secure login; it’s about the SecureLogin protocol developed by Egor Homakov, a security consultant who became famous for committing to master in the Rails project without having permissions.

The SecureLogin protocol is very interesting, as it does not rely on any central party (e.g. OAuth providers like Facebook and Twitter), thus avoiding all the pitfalls of OAuth (which Homakov has often criticized). It is not a password manager either. It is just client-side software that performs a bit of crypto in order to prove to the server that it is indeed the right user. For that to work, two parts are key:

  • Using a master password to generate a private key. It uses a key-derivation function, which guarantees that the produced private key has sufficient entropy. That way, using the same master password and the same email, you will get the same private key every time, and therefore the same public key. And you are the only one who can prove this public key is yours, by signing a message with your private key (see the sketch after this list).
  • Service providers (websites) identify you by your public key, storing it in the database when you register and then looking it up on each subsequent login
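
As a rough illustration of the first point, deterministic key derivation could look like the sketch below (using BouncyCastle; the scrypt cost parameters and the choice of Ed25519 are illustrative assumptions here, not what the SecureLogin protocol actually specifies):

import java.nio.charset.StandardCharsets;

import org.bouncycastle.crypto.generators.SCrypt;
import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
import org.bouncycastle.crypto.params.Ed25519PublicKeyParameters;

public class DeterministicKeys {

    // Same (masterPassword, email) input always yields the same keypair,
    // so the public key can serve as a stable identifier across devices
    public static Ed25519PublicKeyParameters derivePublicKey(String masterPassword, String email) {
        byte[] seed = SCrypt.generate(
                masterPassword.getBytes(StandardCharsets.UTF_8),
                email.getBytes(StandardCharsets.UTF_8), // the email acts as a salt
                16384, 8, 1,                            // illustrative scrypt cost parameters
                32);                                    // 32-byte seed for the private key
        Ed25519PrivateKeyParameters privateKey = new Ed25519PrivateKeyParameters(seed, 0);
        return privateKey.generatePublicKey();
    }
}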

The client-side part is ideally performed by a native client – a browser plugin (one is available for Chrome) or an OS-specific application (including mobile ones). That may sound tedious, but it’s actually quick and easy, a one-time event, and easier than password managers.

I have to admit – I like it, because I’ve been having a similar idea for a while. In my “biometric identification” presentation (where I discuss the pitfalls of using biometrics-only identification schemes), I proposed (slide 23) an identification scheme that uses biometrics (e.g. scanned with your phone) + a password to produce a private key (using a key-derivation function). And the biometric can easily be added to SecureLogin in the future.

It’s not all roses, of course, as one issue isn’t fully resolved yet – revocation. In case someone steals your master password (or you suspect it might be stolen), you may want to change it and notify all service providers of that change so that they can replace your old public key with a new one. That has two implications – first, you may not have a full list of the sites you registered on, and since you may have changed devices, or used multiple devices, there may be websites that never learn about your password change. There are proposed solutions (points 3 and 4), but they are not intrinsic to the protocol and rely on centralized services. The second issue is – what if the attacker changes your password first? To prevent that, service providers should probably rely on email verification, which is neither part of the protocol nor encouraged by it. But you may have to do it anyway, as a safeguard.

Homakov has not only defined a protocol, but also provided implementations of the native clients, so that anyone can start using it. So I decided to add it to a project I’m currently working on (the login page is here). For that I needed a Java implementation of the server-side verification, and since no such implementation existed (only Ruby and Node.js implementations are provided for now), I implemented it myself. So if you are going to use SecureLogin with a Java web application, you can use that instead of rolling your own. While implementing it, I hit a few minor issues that may lead to protocol changes, so I guess backward compatibility should also somehow be included in the protocol (through versioning).

So, what does the code look like? On the client side you have a button and a little JavaScript:

<!-- get the latest sdk.js from the GitHub repo of securelogin
   or include it from https://securelogin.pw/sdk.js -->
<script src="js/securelogin/sdk.js"></script>
....
<p class="slbutton" id="securelogin">&#9889; SecureLogin</p>
$("#securelogin").click(function() {
  SecureLogin(function(sltoken){
	// TODO: consider adding csrf protection as in the demo applications
        // Note - pass as request body, not as param, as the token relies 
        // on url-encoding which some frameworks mess with
	$.post('/app/user/securelogin', sltoken, function(result) {
            if(result == 'ok') {
		 window.location = "/app/";
            } else {
                 $.notify("Login failed, try again later", "error");
            }
	});
  });
  return false;
});

A single button can be used for both login and signup, or you can have a separate signup form, if it has to include additional details rather than just an email. Since I added SecureLogin in addition to my password-based login, I kept the two forms.

On the server, you simply do the following:

@RequestMapping(value = "/securelogin/register", method = RequestMethod.POST)
@ResponseBody
public String secureloginRegister(@RequestBody String token, HttpServletResponse response) {
    try {
        SecureLogin login = SecureLogin.verify(token, Options.create(websiteRootUrl));
        UserDetails details = userService.getUserDetailsByEmail(login.getEmail());
        if (details == null || !login.getRawPublicKey().equals(details.getSecureLoginPublicKey())) {
            return "failure";
        }
        // sets the proper cookies to the response
        TokenAuthenticationService.addAuthentication(response, login.getEmail(), secure);
        return "ok";
    } catch (SecureLoginVerificationException e) {
        return "failure";
    }
}

This is Spring MVC, but it can be any web framework. You can also incorporate it into a spring-security flow somehow, but I’ve never liked spring-security’s complexity, so I did it manually. Also, instead of strings, you can return proper status codes. Note that I’m doing a lookup by email and only then checking the public key (as if it were a password). You can do it the other way around if you have the proper index on the public key column.

I wouldn’t suggest having a SecureLogin-only system, as the project is still in an early stage and users may not be comfortable with it. But certainly adding it as an option is a good idea.

The post SecureLogin For Java Web Applications appeared first on Bozho's tech blog.

timeShift(GrafanaBuzz, 1w) Issue 11

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/01/timeshiftgrafanabuzz-1w-issue-11/

September is here and summer is officially drawing to a close, but the Grafana team has stayed busy. We’re prepping for an upcoming Grafana 4.5 release, had some new and updated plugins, and would like to thank two contributors for fixing a non-obvious bug. Also – The CFP for GrafanaCon EU is open, and we’d like you to speak!


GrafanaCon EU CFP is Open

Have a big idea to share? Have a shorter talk or a demo you’d like to show off?
We’re looking for 40-minute detailed talks, 20-minute general talks and 10-minute lightning talks. We have a perfect slot for any type of content.

I’d Like to Speak at GrafanaCon

Grafana Labs is Hiring!

Do you believe in open source software? Build the future with us, and ship code.

Check out our open positions

From the Blogosphere

Zabbix, Grafana and Python, a Match Made in Heaven: David’s article, published earlier this year, hits on some great points about open source software and how you don’t have to spend much (or any) money to get valuable monitoring for your infrastructure.

The Business of Democratizing Metrics: Our friends over at Packet stopped by the office recently to sit down and chat with the Grafana Labs co-founders. They discussed how Grafana started, how monitoring has evolved, and democratizing metrics.

Visualizing CloudWatch with Grafana: Yuzo put together an article outlining his first experience adding a CloudWatch data source in Grafana, importing his first dashboard, then comparing the graphs between Grafana and CloudWatch.

Monitoring Linux performance with Grafana: Jim wanted to monitor his CentOS home router to get network traffic and disk usage stats, but wanted to try something different than his previous cacti monitoring. This walkthrough shows how he set things up to collect, store and visualize the data.

Visualizing Jenkins Pipeline Results in Grafana: Piotr provides a walkthrough of his setup and configuration to view Jenkins build results for his continuous delivery environment in Grafana.


Grafana Plugins

This week we’ve added a plugin for the new time series database Sidewinder, and updates to the Carpet Plot graph panel. If you haven’t installed a plugin, it’s easy. For on-premises installations, the Grafana-cli will do the work for you. If you’re using Hosted Grafana, you can install any plugin with one click.

NEW PLUGIN

Sidewinder Data Source – This is a data source plugin for the new Sidewinder database. Sidewinder is an open source, fast time series database designed for real-time analytics. It can be used for a variety of use cases that need storage of metrics data like APM and IoT.

Install Now

UPDATED PLUGIN

Carpet Plot Panel – This plugin received an update, which includes the following features and fixes:

  • New aggregate functions: Min, Max, First, Last
  • Possibility to invert color scheme
  • Possibility to change X axis label format
  • Possibility to hide X and Y axis labels

Update Now


This week’s MVC (Most Valuable Contributor)

This week we want to thank two contributors who worked together to fix a non-obvious bug in the new MySQL data source (a bug with sorting values in the legend).

robinsonjj
Thank you Joe, for tackling this issue and submitting a PR with an initial fix.

pdoan017
pdoan017 took robinsonjj’s contribution and added a new PR to retain the order in which keys are added.

Thank you both for taking the time to both troubleshoot and fix the issue. Much appreciated!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice! Combining different panel types on a dashboard can add more context to your data – Looks like a very functional dashboard.


What do you think?

Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 10

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/08/25/timeshiftgrafanabuzz-1w-issue-10/

This week, in addition to the articles we collected from around the web and a number of new Plugins and updates, we have a special announcement. GrafanaCon EU has been announced! Join us in Amsterdam March 1-2, 2018. The call for papers is officially open! We’ll keep you up to date as we fill in the details.


Grafana <3 Prometheus

Last week we mentioned that our colleague Carl Bergquist spoke at PromCon 2017 in Munich. His presentation is now available online. We will post the video once it’s available.


From the Blogosphere

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare: This is the second article in a series about Making Prometheus Monitoring for InterSystems Caché. Mikhail goes into great detail about setting this up on Docker, configuring the first dashboard, and adding templating.

Installation and Integration of Grafana in Zabbix 3.x: Daniel put together an installation guide to get Grafana to display metrics from Zabbix, which utilizes the Zabbix Plugin developed by Grafana Labs Developer Alex Zobnin.

Visualize with RRDtool x Grafana: Atfujiwara wanted to update his MRTG graphs from RRDtool. This post talks about the components needed and how he connected RRDtool to Grafana.

Huawei OceanStor metrics in Grafana: Dennis is using Grafana to display metrics for his storage devices. In this post he walks you through the setup and provides a comprehensive dashboard for all the metrics.

Grafana on a Raspberry Pi2: Pete discusses how he uses Grafana with his garden sensors, and walks you through how to get it up and running on a Pi2.


Grafana Plugins

This week was pretty active on the plugin front. Today we’re announcing two brand new plugins and updates to three others. Installing plugins in Grafana is easy – if you have Hosted Grafana, simply use the one-click install, if you’re using an on-prem instance you can use the Grafana-cli.

NEW PLUGIN

IBM APM Data Source – This plugin collects metrics from the IBM APM (Application Performance Management) products and allows you to visualize it on Grafana dashboards. The plugin supports:

  • IBM Tivoli Monitoring 6.x
  • IBM SmartCloud Application Performance Management 7.x
  • IBM Performance Management 8.x (only on-premises version)

Install Now

NEW PLUGIN

Skydive Data Source – This data source plugin collects metrics from Skydive, an open source real-time network topology and protocols analyzer. Using the Skydive Gremlin query language, you can fetch metrics for flows in your network.

Install now

UPDATED PLUGIN

Datatable Panel – Lots of changes in the latest update to the Datatable Panel. Here are some highlights from the changelog:

  • NEW: Export options for Clipboard/CSV/PDF/Excel/Print
  • NEW: Column Aliasing – modify the name of a column as sent by the datasource
  • NEW: Added option for a cell or row to link to another page
  • NEW: Supports Clickable links inside table
  • BUGFIX: CSS files now load when Grafana has a subpath
  • NEW: Added multi-column sorting – sort by any number of columns ascending/descending
  • NEW: Column width hints – suggest a width for a named column
  • BUGFIX: Columns from datasources other than JSON can now be aliased

Update Now

UPDATED PLUGIN

D3 Gauge Panel – The D3 Gauge Panel has a new feature – Tick Mapping. Ticks on the gauge can now be mapped to text.

Update Now

UPDATED PLUGIN

PNP4Nagios Data Source – The most recent update to the PNP Data Source adds support for template variables in queries, as well as support for querying warning and critical thresholds.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Brian Gann
Brian is the maintainer of two Grafana plugins, and this week he submitted substantial updates to both of them (the Datatable and D3 Gauge panels) – and he says there’s more to come! Thanks for all your hard work, Brian.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

The Dark Knight popping up in graphs seems to be a recurring theme!
This is the graph Jakub deserves, but not the one he needs right now.



What do you think?

That’s it for the 10th issue of timeShift. Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make this better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 9

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/08/18/timeshiftgrafanabuzz-1w-issue-9/

Matt from Grafana NYC spent the week visiting Stockholm to focus on v5.0 with Torkel. Despite warnings otherwise, the weather has been beautiful, making a nice backdrop for many UX discussions. Very, very excited to soon show what we’ve been working on.


Latest Release

Grafana v4.4.3 is Available for download

To see the full changelog, head over to our community site.


Grafana <3 Prometheus

Our very own Carl Bergquist spoke at PromCon 2017 yesterday in Munich, highlighting recent Grafana features and enhancements.

We also used the opportunity to debut our upcoming Prometheus query editor with a load of new functionality. It seems the community approves – in fact, this is our most popular tweet ever!


From the Blogosphere

  • Wikimedia Metrics: A tweet this week reminded us of the public metrics Wikimedia exposes using Grafana. Exploring the performance stats in real time for the 5th most popular site on the internet is pretty fun.

  • Creating Grafana Annotations with InfluxDB: Nice short article by Max Chadwick showing how to quickly add InfluxDB as a source for Grafana annotations.


This week’s MVC (Most Valuable Contributor)

This week’s MVC highlights what is great about Open Source software.

ericslaw
ericslaw submitted his first PR to a public project this past week. Speaking from personal experience, submitting a PR can feel daunting, and we were lucky that he chose Grafana. Even the smallest contributions, like Eric fixing a bogus link within our templating, have a big impact.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Seems the excitement about Prometheus and Grafana has also caught the attention of a certain superhero.



What do you think?

That wraps up another issue. Hope you’re finding these roundups valuable. Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make this better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 5

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/21/timeshiftgrafanabuzz-1w-issue-5/

We cover a lot of ground in this week’s timeShift. From diving into building your own plugin, finding the right dashboard, configuration options in the alerting feature, to monitoring your local weather, there’s something for everyone. Are you writing an article about Grafana, or have you come across an article you found interesting? Please get in touch, we’ll add it to our roundup.


From the Blogosphere

  • Going open-source in monitoring, part III: 10 most useful Grafana dashboards to monitor Kubernetes and services: We have hundreds of pre-made dashboards ready for you to install into your on-prem or hosted Grafana, but not every one will fit your specific monitoring needs. In part three of the series, Sergey discusses his experiences with finding useful dashboards and shows off ten of the best dashboards you can install for monitoring Kubernetes clusters and the services deployed on them.

  • Using AWS Lambda and API gateway for server-less Grafana adapters: Sometimes you’ll want to visualize metrics from a data source that may not yet be supported in Grafana natively. With the plugin functionality introduced in Grafana 3.0, anyone can create their own data sources. Using the SimpleJson data source, Jonas describes how he used AWS Lambda and AWS API gateway to write data source adapters for Grafana.

  • How to Use Grafana to Monitor JMeter Non-GUI Results – Part 2: A few issues ago we listed an article for using Grafana to monitor JMeter Non-GUI results, which required a number of non-trivial steps to complete. This article shows off an easier way to accomplish this that doesn’t require any additional configuration of InfluxDB.

  • Programming your Personal Weather Chart: It’s always great to see Grafana used outside of the typical DevOps use case. This article runs you through the steps to create your own weather chart and show off your local weather stats in Grafana. BONUS: Rob shows off a magic mirror he created, which can display this data.

  • vSphere Performance data – Part 6 – The Dashboard(s): This 6-part series goes into a ton of detail and walks you through the various methods of retrieving vSphere performance data, storing the data in a TSDB, and creating dashboards for the metrics. Part 6 deals specifically with Grafana, but I highly recommend reading all of the articles, as it chronicles the journey of metrics exploration, storage, and visualization from someone who had no prior experience with time series data.

  • Alerting in Grafana: Alerting in Grafana is a fairly new feature and one that we’re continuing to iterate on. We’re soon adding additional data source support, new notification channels, clustering, silencing rules, and more. This article steps you through all the configuration options to get you to your first alert.


Plugins and Dashboards

It can seem like work slows during July and August, but we’re still seeing a lot of activity in the community. This week we have a new graph panel to show off that gives you some unique looking dashboards, and an update to the Zabbix data source, which adds some really great features. You can install both of the plugins now on your on-prem Grafana via our cli, or with one-click on GrafanaCloud.

NEW PLUGIN

Bubble Chart Panel – This super-cool looking panel groups your tag values into clusters of circles. The size of the circle represents the aggregated value of the time series data. There are also multiple color schemes to make those bubbles POP (pun intended)! Currently it works against OpenTSDB and Bosun, so give it a try!

Install Now

UPDATED PLUGIN

Zabbix – Alex has been hard at work making improvements to the Zabbix App for Grafana. This update adds annotations, template variables, alerting and more. Thanks Alex! If you’d like to try out the app, head over to http://play.grafana-zabbix.org/dashboard/db/zabbix-db-mysql?orgId=2

Install 3.5.1 Now


This week’s MVC (Most Valuable Contributor)

Open source software can’t thrive without the contributions from the community. Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback.

mk-dhia (Dhia)
Thank you so much for your improvements to the Elasticsearch data source!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

This week’s tweet comes from @geek_dave

Great looking dashboard Dave! And thank you for adding new features and keeping it updated. It’s creators like you who make the dashboard repository so awesome!


Upcoming Events

We love when people talk about Grafana at meetups and conferences.

Monday, July 24, 2017 – 7:30pm | Google Campus Warsaw


Ząbkowska 27/31, Warsaw, Poland

Iot & HOME AUTOMATION #3 openHAB, InfluxDB, Grafana:
If you are interested in topics of the internet of things and home automation, this might be a good occasion to meet people similar to you. If you are into it, we will also show you how we can all work together on our common projects.

RSVP


Tell us how we’re doing.

We’d love your feedback on what kind of content you like, length, format, etc – so please keep the comments coming! You can submit a comment on this article below, or post something at our community forum. Help us make this better.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Basic API Rate-Limiting

Post Syndicated from Bozho original https://techblog.bozho.net/basic-api-rate-limiting/

It is likely that you are developing some form of (web/RESTful) API, and in case it is publicly-facing (or even when it’s internal), you normally want to rate-limit it somehow. That is, to limit the number of requests performed over a period of time, in order to save resources and protect from abuse.

This can probably be achieved at the web-server/load-balancer level with some clever configuration, but usually you want the rate limiter to be client-specific (i.e. each client of your API should have a separate rate limit), and the way the client is identified varies. It’s probably still possible to do it on the load balancer, but I think it makes sense to have it at the application level.

I’ll use spring-mvc for the example, but any web framework has a good way to plug an interceptor.

So here’s an example of a spring-mvc interceptor:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import javax.annotation.PreDestroy;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

@Component
public class RateLimitingInterceptor extends HandlerInterceptorAdapter {

    private static final Logger logger = LoggerFactory.getLogger(RateLimitingInterceptor.class);

    @Value("${rate.limit.enabled}")
    private boolean enabled;

    @Value("${rate.limit.hourly.limit}")
    private int hourlyLimit;

    private Map<String, SimpleRateLimiter> limiters = new ConcurrentHashMap<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        if (!enabled) {
            return true;
        }
        String clientId = request.getHeader("Client-Id");
        // let non-API requests pass
        if (clientId == null) {
            return true;
        }
        SimpleRateLimiter rateLimiter = getRateLimiter(clientId);
        boolean allowRequest = rateLimiter.tryAcquire();

        if (!allowRequest) {
            response.setStatus(HttpStatus.TOO_MANY_REQUESTS.value());
        }
        response.addHeader("X-RateLimit-Limit", String.valueOf(hourlyLimit));
        return allowRequest;
    }

    // lazily creates one limiter per client, the first time that client is seen
    private SimpleRateLimiter getRateLimiter(String clientId) {
        return limiters.computeIfAbsent(clientId, id -> {
            logger.info("Creating rate limiter for client {}", id);
            return SimpleRateLimiter.create(hourlyLimit, TimeUnit.HOURS);
        });
    }

    @PreDestroy
    public void destroy() {
        // stop the replenishment schedulers of all limiters
        limiters.values().forEach(SimpleRateLimiter::stop);
    }
}

This initializes rate limiters per client on demand. Alternatively, on startup you could just loop through all registered API clients and create a rate limiter for each. In case the rate limiter doesn’t allow more requests (tryAcquire() returns false), return HTTP 429 “Too Many Requests” and abort the execution of the request (return false from the interceptor).

This sounds simple, but there are a few catches. You may wonder where the SimpleRateLimiter above is defined. We’ll get there, but first let’s see what options we have for rate limiter implementations.

The most recommended one seems to be the Guava RateLimiter. It has a straightforward factory method that gives you a rate limiter for a specified rate (permits per second). However, it doesn’t accommodate web APIs very well, as you can’t initialize the RateLimiter with a pre-existing number of permits. That means a period of time should elapse before the limiter allows any requests. There’s another issue – if you have less than one permit per second (e.g. if your desired rate limit is “200 requests per hour”), you can pass a fraction (hourlyLimit / secondsInHour), but it still won’t work the way you expect it to, as internally there’s a “maxPermits” field that caps the number of permits to much less than you want. Also, the rate limiter doesn’t allow bursts – you have exactly X permits per second, but you cannot spread them over a longer period of time, e.g. have 5 requests in one second, and then no requests for the next few seconds. In fact, all of the above can be solved, but sadly only through hidden fields that you don’t have access to. Multiple feature requests have existed for years now, but Guava just doesn’t update the rate limiter, making it much less applicable to API rate-limiting.
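
For reference, the straightforward Guava usage would look something like the sketch below, with “200 requests per hour” expressed as a fractional per-second rate. As discussed above, the hidden maxPermits cap means it won’t actually behave the way the fraction suggests:

import com.google.common.util.concurrent.RateLimiter;

public class GuavaRateLimiterExample {

    // "200 requests per hour" can only be expressed as a per-second rate
    private final RateLimiter limiter = RateLimiter.create(200.0 / 3600);

    public boolean allowRequest() {
        // tryAcquire() returns immediately, instead of blocking like acquire() does
        return limiter.tryAcquire();
    }
}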

Using reflection, you can tweak the parameters and make the limiter work. However, it’s ugly, and it’s not guaranteed to work as expected. I have shown here how to initialize a Guava rate limiter with X permits per hour, with burstability and full initial permits. When I thought that would do, I saw that tryAcquire() has a synchronized(..) block. Does that mean all requests will wait for each other when simply checking whether they are allowed to make a request? That would be horrible.

So in fact the guava RateLimiter is not meant for (web) API rate-limiting. Maybe keeping it feature-poor is Guava’s way for discouraging people from misusing it?

That’s why I decided to implement something simple myself, based on a Java Semaphore. Here’s the naive implementation:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SimpleRateLimiter {
    private Semaphore semaphore;
    private int maxPermits;
    private TimeUnit timePeriod;
    private ScheduledExecutorService scheduler;

    public static SimpleRateLimiter create(int permits, TimeUnit timePeriod) {
        SimpleRateLimiter limiter = new SimpleRateLimiter(permits, timePeriod);
        limiter.schedulePermitReplenishment();
        return limiter;
    }

    private SimpleRateLimiter(int permits, TimeUnit timePeriod) {
        this.semaphore = new Semaphore(permits);
        this.maxPermits = permits;
        this.timePeriod = timePeriod;
    }

    public boolean tryAcquire() {
        return semaphore.tryAcquire();
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public void schedulePermitReplenishment() {
        scheduler = Executors.newScheduledThreadPool(1);
        // every 1 timePeriod, top the semaphore back up to maxPermits
        scheduler.scheduleAtFixedRate(() -> {
            semaphore.release(maxPermits - semaphore.availablePermits());
        }, 1, 1, timePeriod);
    }
}

It takes a number of permits (the allowed number of requests) and a time period. The time period is “1 X”, where X can be a second/minute/hour/day – depending on how you want your limit configured – per second, per minute, hourly, daily. Every 1 X a scheduler replenishes the acquired permits (in the example above there’s one scheduler per client, which may be inefficient with a large number of clients – you can pass a shared scheduler pool instead). There is no control for bursts (a client can spend all permits in a rapid succession of requests), there is no warm-up functionality, and there is no gradual replenishment. Depending on what you want, this may not be ideal, but it’s just a basic rate limiter that is thread-safe and doesn’t do any blocking. I wrote a unit test to confirm that the limiter behaves properly, and also ran performance tests against a local application to make sure the limit is obeyed. So far it seems to be working.
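
A minimal sanity check along the lines of that unit test might look like this: with 5 permits per minute, the sixth immediate request should be rejected until the scheduler replenishes the permits:

import java.util.concurrent.TimeUnit;

public class SimpleRateLimiterDemo {

    public static void main(String[] args) {
        SimpleRateLimiter limiter = SimpleRateLimiter.create(5, TimeUnit.MINUTES);
        for (int i = 1; i <= 5; i++) {
            System.out.println("request " + i + " allowed: " + limiter.tryAcquire()); // true
        }
        // all permits are spent, so this is rejected until the next replenishment
        System.out.println("request 6 allowed: " + limiter.tryAcquire()); // false
        limiter.stop();
    }
}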

Are there alternatives? Well, yes – there are libraries like RateLimitJ that use Redis to implement rate-limiting. That would mean, however, that you need to set up and run Redis, which seems like overhead for “simply” having rate-limiting. (Note: it seems to also have an in-memory version.)

On the other hand, how would rate-limiting work properly in a cluster of application nodes? Application nodes probably need some database or gossip protocol to share data about the per-client permits (requests) remaining? Not necessarily. A very simple approach to this issue would be to assume that the load balancer distributes the load equally among your nodes. That way you would just have to set the limit on each node to the total limit divided by the number of nodes. It won’t be exact, but you rarely need it to be – allowing 5-10 more requests won’t kill your application, and allowing 5-10 fewer won’t be dramatic for the users.

That, however, would mean that you have to know the number of application nodes. If you employ auto-scaling (e.g. in AWS), the number of nodes may change depending on the load. If that is the case, instead of configuring a hard-coded number of permits, the replenishing scheduled job can calculate the “maxPermits” on the fly, by calling an AWS (or other cloud-provider) API to obtain the number of nodes in the current auto-scaling group. That would still be simpler than supporting a Redis deployment just for that.

Overall, I’m surprised there isn’t a “canonical” way to implement rate-limiting (in Java). Maybe the need for rate-limiting is not as common as it may seem. Or it’s implemented manually – by temporarily banning API clients that use “too much resources”.

Update: someone pointed out the bucket4j project, which seems nice and worth taking a look at.

The post Basic API Rate-Limiting appeared first on Bozho's tech blog.

Launch – .NET Core Support In AWS CodeStar and AWS Codebuild

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-net-core-support-in-aws-codestar-and-aws-codebuild/

A few months ago, I introduced the AWS CodeStar service, which allows you to quickly develop, build, and deploy applications on AWS. AWS CodeStar helps development teams to increase the pace of releasing applications and solutions while reducing some of the challenges of building great software.

When the CodeStar service launched in April, it was released with several project templates for Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. Each template provisions the underlying AWS Code Services and configures an end-to-end continuous delivery pipeline for the targeted application using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

As I have participated in some of the AWS Summits around the world discussing AWS CodeStar, many of you have expressed interest in the availability of .NET templates in CodeStar and in using CodeStar to deploy .NET applications. Therefore, it is with great pleasure and excitement that I announce that you can now develop, build, and deploy cross-platform .NET Core applications with the AWS CodeStar and AWS CodeBuild services.

AWS CodeBuild has added the ability to build and deploy .NET Core application code to both Amazon EC2 and AWS Lambda. This new CodeBuild capability has enabled the addition of two new project templates in AWS CodeStar for .NET Core applications. These new project templates enable you to deploy .NET Core applications to Amazon EC2 Linux instances, and provide everything you need to get started quickly, including .NET Core sample code and a full software development toolchain.

Of course, I can’t wait to try out the new addition to the project templates within CodeStar and the updated .NET application build options with CodeBuild. For my test scenario, I will use CodeStar to create, build, and deploy my .NET Core ASP.NET web application on EC2. Then, I will extend my ASP.NET application by creating a .NET Lambda function to be compiled and deployed with CodeBuild as a part of my application’s pipeline. This Lambda function can then be called and used within my ASP.NET application to extend the functionality of my web application.

So, let’s get started!

First, I’ll log into the CodeStar console and start a new CodeStar project. I am presented with the option to select a project template.


Right now, I would like to focus on building .NET Core projects; therefore, I’ll filter the project templates by selecting C# in the Programming Languages section. Now, CodeStar only shows me the new .NET Core project templates that I can use to build web applications and services with ASP.NET Core.

I think I’ll use the ASP.NET Core web application project template for my first CodeStar .NET Core application. As you can see from the project template information displayed, my web application will be deployed on Amazon EC2, which tells me that my .NET Core code will be compiled and packaged using AWS CodeBuild and deployed to EC2 using the AWS CodeDeploy service.


My hunch about the services is confirmed on the next screen when CodeStar shows the AWS CodePipeline and the AWS services that will be configured for my new project. I’ll name this web application project, ASPNetCore4Tara, and leave the default Project ID that CodeStar generates from the project name. Yes, I know that this is one of the goofiest names I could ever come up with, but, hey, it will do for this test project so I’ll go ahead and click the Next button. I should mention that you have the option to edit your Amazon EC2 configuration for your project on this screen before CodeStar starts configuring and provisioning the services needed to run your application.

Since my ASP.NET Core web application will be deployed to an Amazon EC2 instance, I will need to choose an Amazon EC2 key pair so that I can SSH into the instance. For my ASPNetCore4Tara project, I will use an existing Amazon EC2 key pair I have previously used for launching my other EC2 instances. However, if I were creating this project and did not have an EC2 key pair, or didn’t have access to the .pem file (private key file) for an existing EC2 key pair, I would have to first visit the EC2 console and create a new EC2 key pair to use for my project. This is important because, remember, without the EC2 key pair and its associated .pem file, I would not be able to log in to my EC2 instance.

With my EC2 key pair selected and the confirmation that I have the related private key file checked, I am ready to click the Create Project button.


After CodeStar completes the creation of the project and the provisioning of the project-related AWS services, I am ready to view the CodeStar sample application from the application endpoint displayed in the CodeStar dashboard. This sample application should be familiar to you if you have been working with the CodeStar service or if you had an opportunity to read the blog post about the AWS CodeStar service launch. I’ll click the link underneath Application Endpoints to view the sample ASP.NET Core web application.

Now I’ll go ahead and clone the generated project and connect my Visual Studio IDE to the project repository. I am going to make some changes to the application, and since AWS CodeBuild now supports .NET Core builds and deployments to both Amazon EC2 and AWS Lambda, I will alter my build specification file to reflect the changes to my web application, which will include the use of a Lambda function. Don’t worry if you are not familiar with how to clone the project and connect it to the Visual Studio IDE; CodeStar provides in-console step-by-step instructions to assist you.

First things first, I will open the Visual Studio IDE and connect to the AWS CodeCommit repository provisioned for my ASPNetCore4Tara project. It is important to note that the Visual Studio 2017 IDE is required for .NET Core projects in AWS CodeStar, and the AWS Toolkit for Visual Studio 2017 will need to be installed prior to connecting your project repository to the IDE.

In order to connect to my repo within Visual Studio, I will open Team Explorer and select the Connect link under the AWS CodeCommit option under Hosted Service Providers. I will click OK to keep my default AWS profile toolkit credentials.

I’ll then click Clone under the Manage Connections and AWS CodeCommit hosted provider section.

Once I select my aspnetcore4tara repository in the Clone AWS CodeCommit Repository dialog, I only have to enter my IAM role’s HTTPS Git credentials in the Git Credentials for AWS CodeCommit dialog and the process is complete. If you’re following along and receive a dialog for Git Credential Manager login, don’t worry; just enter the same IAM role’s Git credentials.


My project is now connected to the aspnetcore4tara CodeCommit repository and my web application is loaded for editing. As you will notice in the screenshot below, the sample project is structured as a standard ASP.NET Core MVC web application.

With the project created, I can make changes and updates. Since I want to update this project with a .NET Lambda function, I’ll quickly start a new project in Visual Studio to author a very simple C# Lambda function to be compiled with the CodeStar project. This AWS Lambda function will be included in the CodeStar ASP.NET Core web application project.

The Lambda function I’ve created makes a call to the REST API of NASA’s popular Astronomy Picture of the Day website. The API sends back the latest planetary image and related information in JSON format. You can see the Lambda function code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using System.Net.Http;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace NASAPicOfTheDay
{
    public class SpacePic
    {
        HttpClient httpClient = new HttpClient();
        string nasaRestApi = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY";

        /// <summary>
        /// A simple function that retrieves NASA Planetary Info and 
        /// Picture of the Day
        /// </summary>
        /// <param name="context"></param>
        /// <returns>nasaResponse-JSON String</returns>
        public async Task<string> GetNASAPicInfo(ILambdaContext context)
        {
            string nasaResponse;
            
            //Call NASA Picture of the Day API
            nasaResponse = await httpClient.GetStringAsync(nasaRestApi);
            Console.WriteLine("NASA API Response");
            Console.WriteLine(nasaResponse);
            
            //Return NASA response - JSON format
            return nasaResponse; 
        }
    }
}

I’ll now publish this C# Lambda function and test it by using the Publish to AWS Lambda option provided by the AWS Toolkit for Visual Studio for the NASAPicOfTheDay project. After publishing the function, I can test it and verify that it is working correctly within Visual Studio and/or the AWS Lambda console. You can learn more about building AWS Lambda functions with C# and .NET at: http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html


Now that I have my Lambda function completed and tested, all that is left is to update the CodeBuild buildspec.yml file within my aspnetcore4tara CodeStar project to include publishing and deploying the Lambda function.

To accomplish this, I will create a new folder named functions and copy the folder that contains my Lambda function .NET project to my aspnetcore4tara web application project directory.


To build and publish my AWS Lambda function, I will use commands in the buildspec.yml file from the aws-lambda-dotnet tools library, which helps .NET Core developers build AWS Lambda functions. I add a file, funcprof, to the NASAPicOfTheDay folder; it contains customized profile information for use with the aws-lambda-dotnet tools. All that is left is to update the buildspec.yml file used by CodeBuild for the ASPNetCore4Tara project build to include the packaging and deployment of the NASAPicOfTheDay AWS Lambda function. The updated buildspec.yml is as follows:

version: 0.2
env:
  variables:
    basePath: 'hold'
phases:
  install:
    commands:
      - echo set basePath for project
      - basePath=$(pwd)
      - echo $basePath
      - echo Build restore and package Lambda function using AWS .NET Tools...
      - dotnet restore functions/*/NASAPicOfTheDay.csproj
      - cd functions/NASAPicOfTheDay
      - dotnet lambda package -c Release -f netcoreapp1.0 -o ../lambda_build/nasa-lambda-function.zip
  pre_build:
    commands:
      - echo Deploy Lambda function used in ASPNET application using AWS .NET Tools. Must be in path of Lambda function build 
      - cd $basePath
      - cd functions/NASAPicOfTheDay
      - dotnet lambda deploy-function NASAPicAPI -c Release -pac ../lambda_build/nasa-lambda-function.zip --profile-location funcprof -fd 'NASA API for Picture of the Day' -fn NASAPicAPI -fh NASAPicOfTheDay::NASAPicOfTheDay.SpacePic::GetNASAPicInfo -frun dotnetcore1.0 -frole arn:aws:iam::xxxxxxxxxxxx:role/lambda_exec_role -framework netcoreapp1.0 -fms 256 -ft 30  
      - echo Lambda function is now deployed - Now change directory back to Base path
      - cd $basePath
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
artifacts:
  files:
    - AspNetCoreWebApplication/build_output/**/*
    - scripts/**/*
    - appspec.yml
    

That’s it! All that is left is for me to add and commit all of my file additions and updates to the AWS CodeCommit Git repository provisioned for my ASPNetCore4Tara project. This kicks off the AWS CodePipeline for the project, which will now use AWS CodeBuild’s new support for .NET Core to build and deploy both the ASP.NET Core web application and the .NET AWS Lambda function.


Summary

The support for .NET Core in AWS CodeStar and AWS CodeBuild opens the door for .NET developers to take advantage of the benefits of continuous integration and delivery when building .NET-based solutions on AWS. Read more about .NET Core support in AWS CodeStar and AWS CodeBuild here, or review the product pages for AWS CodeStar and/or AWS CodeBuild for more information on using the services.

Enjoy building .NET projects more efficiently with Amazon Web Services using .NET Core with AWS CodeStar and AWS CodeBuild.

Tara


timeShift(GrafanaBuzz, 1w) Issue 3

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/07/timeshiftgrafanabuzz-1w-issue-3/

Many in the US were on holiday for Independence Day earlier this week, but that didn’t slow us down: team Stockholm even shipped a new Grafana release. This issue of timeShift has plenty of great articles to highlight. If you know of a recent article about Grafana, or are writing one yourself, please get in touch; we’d be happy to feature it here.


Grafana 4.4 Released

Grafana v4.4 is now available for download

Dashboard history and version control is here! A big thanks to Walmart Labs for their massive code contribution.

Check out what’s new in Grafana 4.4 in the release announcement.


From the Blogosphere

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new pluggable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

Zabbix Updated to v3.5.0 CHANGELOG.md

  • rate() function, which calculates per-second rate for growing counters.
  • New template query format: {group}{host}{app}{item}, which allows the use of names containing dots.
  • Improved performance of groupBy() functions (6-10x faster than before).
  • lots of bug fixes and more

In addition to the plugins available for download, there are hundreds of pre-made dashboards ready for you to import into Grafana to get up and running quickly. Check out some of the popular dashboards.

Server Metrics (Collectd) Collectd/Graphite Server metrics dashboard (Load,CPU, Memory, Temp etc).

Data Source: Graphite | Collector: Collectd

Apache Overview System stats for uptime, CPU count, RAM, and free memory %; panels for load, I/O, and network traffic; Apache workers and scoreboard panels; and single stats for uptime and CPU load.

Data Source: InfluxDB | Collector: Telegraf

Node Exporter Server Metrics A simple dashboard configured to be able to view multiple servers side by side.

Data Source: Prometheus | Collector: Node Exporter

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. Many of the fixes and improvements come from our fantastic community!

ryantxu (Ryan McKinley)

Ryan has contributed PR’s to Grafana as well as being the author of 4 well-maintained plugins (Ajax Panel, Discrete Panel, Plotly Panel and Influx Admin plugins). Thank you for all your hard work!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 2

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/06/30/timeshiftgrafanabuzz-1w-issue-2/

A big thank you to everyone for the likes, retweets, comments and questions from last week’s timeShift debut. We were delighted to learn that people found this new resource useful, and are excited to continue to publish weekly issues. If you know of a recent article about Grafana, or are writing one yourself, please get in touch; we’d be happy to feature it here.

From the Blogosphere

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new pluggable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

SimpleJson SimpleJson is a generic backend datasource that has been the foundation of a number of Grafana data source plugins. It’s also a mechanism by which any application can expose metrics over HTTP directly to Grafana. The newest version adds basic auth.

NetXMS Grafana datasource for NetXMS open source monitoring system.

GoogleCalendar This plugin shows the event description as an annotation on your graphs.

Discrete Panel Show discrete values in a horizontal graph. This panel now supports results from the table format.

Alarm Box This panel shows the total count of values across all series. This update adds a new option to customize how the display and color values are calculated.

Status Dot This panel shows a colored dot for each series; useful for monitoring the latest values at a glance.

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. A majority of fixes and improvements come from our fantastic community!

mtanda (Mitsuhiro Tanda)

159 PR’s during the last 2 years and still going strong. Thank you for your contributions mtanda!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 1

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/06/23/timeshiftgrafanabuzz-1w-issue-1/

Introducing timeShift

TimeShift is a new blog series we’ve created to provide a weekly curated list of links and articles centered around Grafana and the growing Grafana community. Each week we come across great articles from people who have written about how they are using Grafana, how to build effective dashboards, and a lot of discussion about the state of open source monitoring. We want to collect this information in one place and post an article every Friday afternoon highlighting some of this great content.

From the Blogosphere

We see a lot of articles covering the devops side of monitoring, but it’s interesting to see how people are using Grafana for different use cases.

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new pluggable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

Carpet plot A variant of the heatmap graph panel with additional display options.

DalmatinerDB No-fluff, purpose-built metric database.

Gnocchi This plugin was renamed. Users should uninstall the old version and install this new version.

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. A majority of fixes and improvements come from our fantastic community!

thuck (Denis Doria)

Thank you for all of your PRs!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Building Enterprise Level Web Applications on AWS Lambda with the DEEP Framework

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/building-enterprise-level-web-applications-on-aws-lambda-with-deep/

This is a guest post by Eugene Istrati, the co-creator of the DEEP Framework, a full-stack web framework that enables developers to build cloud-native applications using microservices architecture.


From the beginning, Mitoc Group has been building web applications for enterprise customers. We are a small group of developers who are helping customers with their entire web development process, from conception through execution and down to maintenance. Being in the business of doing everything is very hard, and it would be impossible without using AWS foundational services, but we incrementally needed more. That is why we became early adopters of the serverless computing approach and developed an ecosystem called Digital Enterprise End-to-end Platform (DEEP) with AWS Lambda at the core.

In this post, we dive deeper into how DEEP is using AWS Lambda to empower developers to build cloud-native applications or platforms using microservices architecture. We will walk you through the process of identifying the front-end, back-end and data tiers required to build web applications with AWS Lambda at the core. We will focus on the structure of the AWS Lambda functions we use, as well as security, performance and benchmarking steps that we take to build enterprise-level web applications.

Enterprise-level web applications

Our approach to web development is full-stack and user-driven, focused on UI (aka the user interface) and UX (aka user eXperience). Before going into the details, we’d like to emphasize the strategic (biased and opinionated) decisions we made early on:

  • We don’t say “no” to customers; every problem is seriously evaluated and sometimes we offer options that involve our direct competitors.
  • We are developers and we focus only on the application level; everything else (platform level and infrastructure level) must be managed by AWS.
  • We spend 20% of the effort to solve 80% of the workload; everything must be automated and pushed to the service side rather than the client side.

To be honest and fair, it doesn’t work all the time as expected, but it does help us to learn fast and move quickly, sustainably and incrementally solving business problems through technical solutions that really matter. However, the definition of “really matter” differs from customer to customer, quite uniquely in some cases.

Nevertheless, what we have learned from our customers is that enterprise-level web applications must meet the following common expectations:

Architecture

This post describes how we transformed a self-managed task management application (aka todo app) in minutes. The original version can be seen on www.todomvc.com and the original code can be downloaded from https://github.com/tastejs/todomvc/tree/master/examples/angularjs.

The architecture of every web application we build or transform, including the one described above, is similar to the reference architecture of the realtime voting application published recently by AWS on GitHub.

The todo app is written in AngularJS and deployed on Amazon S3, behind Amazon CloudFront (front-end). Task management is processed by AWS Lambda, optionally behind Amazon API Gateway (back-end). Task metadata is stored in Amazon DynamoDB (data tier). The transformed todo app, along with instructions on how to install and deploy this web application, is described in the Building Scalable Web Apps with AWS Lambda and Home-Grown Serverless blog post and the todo code is available on GitHub.

Let’s look at AWS Lambda functions and the value proposition they offer to us and our customers.

AWS Lambda functions

The goal of the todo app is to manage tasks in a self-service mode. End users can view tasks, create new tasks, mark or unmark a task as done, and clear completed tasks. From the UI point of view, that leads to four user interactions that require different back-end calls:

  • web service that retrieves tasks
  • web service that creates tasks
  • web service that deletes tasks
  • web service that updates tasks

A simple reordering of the above-identified back-end service calls leads to basic CRUD (create, retrieve, update, delete) operations on the Task data object. These are the simple logical steps that we take to identify the front-end, back-end, and data tiers of (drums beating, trumpets playing) our approach to microservices, which we prefer to call microapplications.

Therefore, coming back to AWS Lambda, we have written four small Node.js functions that are context-bounded and self-sustained (each microservice corresponds to the above identified back-end web service):

Microservice that retrieves tasks
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (taskId) {
      this.retrieveTask(taskId, (task) => {
        return this.createResponse(task).send();
      });
    } else {
      this.retrieveAllTasks((result) => {
        return this.createResponse(result).send();
      });
    }
  }

  /**
   * @param {Function} callback
   */
  retrieveAllTasks(callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.findAll((err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(task.Items);
    });
  }

  /**
   * @param {String} taskId
   * @param {Function} callback
   */
  retrieveTask(taskId, callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.findOneById(taskId, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(task ? task.get() : null);
    });
  }
}
Microservice that creates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.createItem(request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that updates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.updateItem(taskId, request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that deletes a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.deleteById(taskId, (err) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse({}).send();
    });
  }
}

Each of the above files, together with its related dependencies, is compressed into a .zip file and uploaded to AWS Lambda. If you’re new to this process, we strongly recommend following the How to Create, Upload and Invoke an AWS Lambda function tutorial.

Back to the four small Node.js functions, you can see that we have adopted ES6 (aka ES2015) as our coding standard. And we are importing deep-framework in every function. What is this framework anyway and why are we using it everywhere?

Full-stack web framework

Step back for a minute. Building and uploading AWS Lambda functions to the service is simple and straightforward, but now imagine that you need to manage 100–150 web services to access a web page, multiplied by hundreds or thousands of web pages.

We believe that the only way to achieve this kind of flexibility and scale is automation and code reuse. These principles led us to build and open source DEEP Framework — a full-stack web framework that abstracts web services and web applications from specific cloud services — and DEEP CLI (aka deepify) — a development tool-chain that abstracts package management and associated development operations.

Therefore, to make sure that the process of managing AWS Lambda functions is streamlined and automated, we consistently include two more files in each uploaded .zip:

DEEP microservice bootstrap
'use strict';

import DeepFramework from 'deep-framework';
import Handler from './Handler';

export default DeepFramework.LambdaHandler(Handler);
DEEP microservice package metadata (for npm) 
{
  "name": "deep-todo-task-create",
  "version": "0.0.1",
  "description": "Create a new todo task",
  "scripts": {
    "postinstall": "npm run compile",
    "compile": "deepify compile-es6 `pwd`"
  },
  "dependencies": {
    "deep-framework": "^1.8.x"
  },
  "preferGlobal": false,
  "private": true,
  "analyze": true
}

Having these three files (Handler.es6, bootstrap.es6, and package.json) in each Lambda function doesn’t mean that your final .zip file will be that small. Actually, a lot of additional operations happen before the .zip file is created. To name a few:

  • AWS Lambda performs better when the uploaded codebase is smaller. Because we provide both local development capabilities and one-step push to production, our process optimizes resources before deploying to AWS.
  • ES6 is not supported by the Node.js v0.10.x runtime that we use in AWS Lambda (it is, however, available in the Node 4.3 runtime), so we compile .es6 files into ES5-compliant .js files using Babel.
  • Dependencies that are defined in package.json are automatically pulled and fine-tuned for Node.js v0.10.x to provide the best performance possible.

Putting everything together

First, you need the following pre-requisites:

  1. AWS account (Create an Amazon Web Services Account)
  2. AWS CLI (Configure AWS Command Line Interface)
  3. Git v2+ (Get Started — Installing Git)
  4. Java / JRE v6+ (JDK 8 and JRE 8 Installation Start Here)
  5. Node.js v4+ (Install nvm and use the latest Node v4)

Note: Don’t use sudo to install nvm. Otherwise, you’ll have to fix npm permissions.

Second, install the DEEP CLI with the following command:

npm install deepify -g

Next, deploy the todo app using deepify:

deepify install github://MitocGroup/deep-microservices-todo-app ~/deep-todo-app

deepify server ~/deep-todo-app

deepify deploy ~/deep-todo-app

Note: When the deepify server command is finished, you can open http://localhost:8000 in your browser and enjoy the todo app running locally.

Cleaning up

There are at least half a dozen services and several dozen resources created during deepify deploy. If only there were a simple command that would clean up everything when we’re done. We thought of that and created deepify undeploy to address this need. When you are done using the todo app and want to remove the related web app resources, execute the following:

deepify undeploy ~/deep-todo-app

As you can see, we empower developers to build hassle-free, cloud-native applications or platforms using microservices architecture and serverless computing.

And what about security?

Security

One of the biggest value propositions on AWS is out-of-the-box security and compliance. The beauty of the cloud-native approach is that security comes by design (in other words, it won’t work otherwise). We take full advantage of that shared responsibility model and enforce security in every layer.

End users benefit from IAM best practices through streamlined implementations of least privilege access, delegated roles instead of credentials, and integration with logging and monitoring services (e.g., AWS CloudTrail, Amazon CloudWatch, and Amazon Elasticsearch Service + Kibana). For example, developers and end users of the todo app didn’t need to explicitly define any security roles (it was done by deepify deploy), but they can rest assured that only their instance of todo app will be using their infrastructure, platform, and application resources.

The following are two security roles (back-end and front-end) that have been seamlessly generated and enforced in each layer:

IAM role that allows back-end invocation of AWS Lambda function (e.g. DeepProdTodoCreate1234abcd) in web application AWS account (e.g. 123456789000)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": ["arn:aws:lambda:us-east-1:123456789000:function:DeepProdTodoCreate1234abcd*"]
        }
    ]
}
DEEP role that allows front-end resource (e.g. deep.todo:task) to execute action (e.g. deep.todo:task:create)
{
  "Version": "2015-10-07",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["deep.todo:task:create"],
      "Resource": ["deep.todo:task"]
    }
  ]
}

Benchmarking

We have been continuously benchmarking AWS Lambda for various use cases in our microapplications. After a couple of repetitive situations doing similar analysis, we decided to build the benchmarking as another microapplication and re-use the ecosystem to include it automatically wherever we needed it. You can find the open-source code for the benchmarking microapplication on GitHub.

In particular, for the todo app, we performed various benchmarking analyses on AWS Lambda by tweaking different components of a specific function (e.g. function size, memory size, billable cost, etc.). We would like to share the results with you:

Benchmarking for todo app

Req No | Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Start Time | Stop Time | Front-end Call (ms) | Back-end Call (ms) | Billed Time (ms) | Billed Time ($)
1 | 1.1 | 128 | 34 | 20:15.8 | 20:16.2 | 359 | 200.47 | 300 | 0.000000624
2 | 1.1 | 128 | 34 | 20:17.8 | 20:18.2 | 381 | 202.45 | 300 | 0.000000624
3 | 1.1 | 128 | 34 | 20:19.9 | 20:20.3 | 406 | 192.52 | 200 | 0.000000416
4 | 1.1 | 128 | 34 | 20:21.9 | 20:22.2 | 306 | 152.19 | 200 | 0.000000416
5 | 1.1 | 128 | 34 | 20:23.9 | 20:24.2 | 333 | 175.01 | 200 | 0.000000416
6 | 1.1 | 128 | 34 | 20:25.9 | 20:26.3 | 431 | 278.03 | 300 | 0.000000624
7 | 1.1 | 128 | 34 | 20:27.9 | 20:28.2 | 323 | 170.97 | 200 | 0.000000416
8 | 1.1 | 128 | 34 | 20:29.9 | 20:30.2 | 327 | 160.24 | 200 | 0.000000416
9 | 1.1 | 128 | 34 | 20:31.9 | 20:32.4 | 556 | 225.25 | 300 | 0.000000624
10 | 1.1 | 128 | 35 | 20:33.9 | 20:34.2 | 333 | 179.59 | 200 | 0.000000416
Average | | | | | | 375.50 | 193.67 | Total | 0.000004992

Performance

Speaking of performance, we find AWS Lambda mature enough to power large-scale web applications. The key is to build the functions as small as possible, following a simple rule: one function achieves only one task. Over time, these functions might grow in size; therefore, we always keep an eye on them and refactor or split them into the smallest possible logical unit (one task).

Using the benchmarking tool, we ran multiple scenarios against the same function from the todo app:

Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Avg Front-end (ms) | Avg Back-end (ms) | Total Calls (#) | Total Billed (ms) | Total Billed ($/1B)*
1.1 | 128 | 34-35 | 375.50 | 193.67 | 10 | 2,400 | 4,992
1.1 | 256 | 34-37 | 399.40 | 153.25 | 10 | 2,000 | 8,340
1.1 | 512 | 33-35 | 341.60 | 134.32 | 10 | 1,800 | 15,012
1.1 | 128 | 34-49 | 405.57 | 223.82 | 100 | 27,300 | 56,784
1.1 | 256 | 28-48 | 354.75 | 177.91 | 100 | 23,800 | 99,246
1.1 | 512 | 32-47 | 345.92 | 163.17 | 100 | 23,100 | 192,654
55.8 | 128 | 49-50 | 543.00 | 284.03 | 10 | 3,400 | 7,072
55.8 | 256 | 49-50 | 339.80 | 153.13 | 10 | 2,100 | 8,757
55.8 | 512 | 49-50 | 342.60 | 141.02 | 10 | 2,000 | 16,680
55.8 | 128 | 83-87 | 416.10 | 220.91 | 100 | 26,900 | 55,952
55.8 | 256 | 50-71 | 377.69 | 194.22 | 100 | 25,600 | 106,752
55.8 | 512 | 57-81 | 353.46 | 174.65 | 100 | 23,300 | 194,322

Based on performance data, we have learned some pretty cool stuff:

  • The smaller the function is, the better it performs; on the other hand, if more memory is allocated, the size of the function matters less and less.
  • Memory size is not directly proportional to billable costs; developers can decide the memory size based on performance requirements combined with associated costs.
  • The key to better performance is continuous load, thanks to container reuse in AWS Lambda.

Conclusion

In this post, we presented a small web application that is built with AWS Lambda at the core. We walked you through the process of identifying the front-end, back-end, and data tiers required to build the todo app. You can fork the example code repository as a starting point for your own web applications.

If you have questions or suggestions, please leave a comment below.