Tag Archives: mvc

The serverless LAMP stack part 6: From MVC to serverless microservices

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/the-serverless-lamp-stack-part-6-from-mvc-to-serverless-microservices/

In this post, you learn how to build serverless PHP applications using microservices.

I show how to move from using a single Lambda function as a scalable web host with an MVC framework, to a decoupled microservice model. The accompanying code examples for this blog post can be found in this GitHub repository.

The MVC architectural pattern

A traditional LAMP stack often implements the Model-View-Controller (MVC) architecture. This is a well-established way of separating application logic into three parts: the model, the view, and the controller.

  • Model: This part is responsible for managing the data of the application. Its role is to retrieve raw information from the database or receive user input from the controller.
  • View: This component focuses on the display. Data received from the model is presented to the user. Any response from the user is also recognized and sent to the controller component.
  • Controller: This part is responsible for the application logic. It responds to the user input and performs interactions on the data model objects.

The MVC principle of decoupling data, logic, and presentation layers means that changes in one layer have minimal impact on the others. This speeds up the development process and makes it easier to update layouts, change business rules, and add new features. Components are more adaptable for reuse and refactoring, and allow for a degree of simultaneous development.

The serverless LAMP stack

The serverless LAMP stack

The preceding serverless LAMP stack architecture is first discussed in this post. A web application is split in to two components. A single AWS Lambda function contains the application’s MVC framework. Each response is synchronously returned via Amazon API Gateway. This architecture addresses the scalability challenge that is often seen in traditional LAMP stack applications. It scales automatically with a managed infrastructure and a pay-per-use billing model. However, the serverless paradigm makes it possible to apply the MVC principles of decoupling and reusability to an even greater degree.

The “Lambda-lith”

The preceding architecture represents a serverless monolith or “Lambda-lith”. A single Lambda function contains the entire business logic within an MVC framework. This implementation can be used to “lift and shift” from a legacy MVC to a serverless application. Simple applications often start this way too, but as the application grows more complex over time new challenges can occur.

 

Lambda Day 1 to day 100

A Lambda-lith is often maintained in a single repository that contains the entire application logic. This is sometimes referred to as a mono-repo.

Lambda-lith mono-repo

A mono-repo makes it harder to separate responsibility and ownership between development teams. Consequently, projects in a mono-repo tend to depend on each other, creating tight coupling. A tightly coupled code base, with all of its interconnected modules, makes it challenging to maintain a regular release cadence. Any small fix can require updates to other parts of the code base, making maintenance difficult without affecting the whole application. Onboarding can be slow, as new developers take time to learn and understand the code base and all of its interdependencies.

By applying the following principles, Lambda-lith MVC applications can be refactored into decoupled serverless microservices.

Divide into independent Lambda functions with finite business logic

The following example illustrates a Lambda-lith with all business and routing logic stored in a single Lambda function. Every request is routed to this function from API Gateway. The function code base contains a `router.php` file to direct requests to the correct model, view, or controller.

This is similar to a traditional LAMP stack implementation in which a web server such as Apache or NGINX routes all requests to a single index.php file. However, it’s often more practical to split applications into multiple functions or services.

Lambda as a web server

In the following example, this Lambda function is split into multiple functions based on each CRUD operation. The internal routing logic is now decoupled from the business logic. The API Gateway service uses rules to route requests to the correct Lambda function. This allows each function to scale independently and updates can be made to one function without impacting another.

Routing decoupled from business logic

Build micro-perimeters to enforce strict verification of every person or service.

Traditional MVC applications often use a castle-and-moat security model. This provides security by placing a perimeter around the entire application to protect it from malicious actors. This perimeter guards the application or network by verifying requests and user identities at the point of entry or exit.

This is typically achieved with firewalls, proxy servers, honeypots, and other intrusion prevention tools. It assumes that activity inside the perimeter is safe. However, a network vulnerability may provide access to everything inside.

Microservice-based applications allow developers to apply a “zero trust” security model. This enables developers to build micro-perimeters around each resource. This is sometimes referred to as the principle of least privilege. It ensures that each request, service, or user can access only the data or resource that is necessary for its legitimate purpose. Even with a vulnerability, the blast radius is limited only to the service within that micro-perimeter.

Castle-and-moat vs zero trust security model

Use AWS Identity and Access Management (IAM) resource policies and execution roles to decouple business logic from security posture. Lambda resource policies define the events and services that are authorized to invoke the function. Lambda execution roles place constraints on the resources or services the Lambda function has access to. When defining resource policies and execution roles, start with a minimum set of permissions and grant additional permissions as necessary.

Create building blocks based on common functionality

Each component is a single building block that, together with other blocks, makes up an application. These blocks form microservices that deliver a set of capabilities on a specific domain. This makes it easy to change, upgrade, or replace a block with no impact on the remaining microservice components. This creates natural ownership boundaries to help organize repositories.

Development teams can then easily be assigned ownership to individual microservice repositories. Use the AWS Serverless Application Model (AWS SAM) to organize microservices into multiple code repositories, as explained in this blog post.

Use messages to connect and communicate between microservices.

In traditional MVC applications, one part of the application uses method calls to communicate with the other parts. With serverless microservices, the code base is spread across short-lived stateless functions and services. Communication between these services is achieved using asynchronous messages or synchronous HTTP requests.

Synchronous communication

In this method, a service calls an API and waits for a response from the receiving service before proceeding. Use API Gateway to create a front door to your backend microservices. API Gateway is a fully managed service for creating and managing RESTful and WebSocket APIs.

Using API Gateway to transport data offloads common concerns such as authorization, API tokens, access control, and rate limiting from your code, and helps to reduce code complexity. API Gateway can also be used for synchronous internal microservice communications where the services have clear separation, strict authentication requirements, or have been deployed across accounts.
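
To illustrate the synchronous pattern outside of any particular framework or language, here is a minimal Java sketch of a caller that blocks until the downstream API responds. The class name, endpoint URL, and payload handling are placeholders, and a real cross-account call would also need to authenticate (for example, with IAM authorization or an API key).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoyaltyClient {
    private final HttpClient http = HttpClient.newHttpClient();

    // Synchronous call: send the request and block until the loyalty API responds.
    public String redeemPoints(String bookingJson) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.execute-api.us-east-1.amazonaws.com/prod/loyalty"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(bookingJson))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // the caller waits for this result before proceeding
    }
}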

The following architecture demonstrates an application that is deployed across two accounts. The Booking microservice invokes a loyalty booking function, via API Gateway, that exists in the Loyalty points account.

Synchronous internal microservice communications

Asynchronous communication

In this pattern, a service sends a message without waiting for a response, and one or more services process the message asynchronously. Here, the services involved do not directly communicate with each other. Instead, services publish messages to a broker such as Amazon Simple Queue Service (SQS) or Amazon EventBridge. Other services subscribe to the messages or events in the broker that they care about. This enables further decoupling of business logic from data transportation and reduces your code complexity.
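
As a sketch of the asynchronous pattern, the following Java snippet (using the AWS SDK for Java v2) publishes an event to an SQS queue and returns immediately; the class name and queue URL are placeholders, and consumers process the message on their own schedule.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class BookingEventPublisher {
    private final SqsClient sqs = SqsClient.create();

    // Fire-and-forget: publish the event and return without waiting for any consumer.
    public void publishBookingConfirmed(String bookingJson) {
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/booking-events")
                .messageBody(bookingJson)
                .build());
    }
}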

Use services instead of code, where possible

A service-first mindset is an important part of serverless application development. Each line of code you write may limit your project’s responsiveness to change and add cognitive overhead for new developers. Using an appropriate AWS service for each domain (messaging, storage, orchestration) helps you build faster. Embracing this mindset allows developers to focus on solving the unique challenges that add the most value to their customers.

By applying these principles to refactor an MVC Lambda-lith, I build the following CRUD API microservice. This application can be deployed from this GitHub repository. It uses an AWS Serverless Application Model (AWS SAM) template to define an HTTP API, 5 Lambda functions, an Amazon DynamoDB table and all the IAM roles required.

All routing logic and authentication are managed by Amazon API Gateway. Each Lambda function has limited scope and minimal business logic. It uses a lightweight custom-built PHP runtime, explained in this post. Each Lambda function uses the AWS PHP SDK to interact with the DynamoDB table. This architecture is suitable as a serverless microservice for a website backend.

A serverless API microservice with PHP

Conclusion

In this post, I show how to move from using a single Lambda function as a scalable web host with an MVC framework, to a decoupled microservice model. I explain the principles that can be applied to help transition an MVC application into a collection of microservices and show the benefits of doing so. I provide code examples for a serverless PHP CRUD microservice with a deployable AWS SAM template.

PHP development teams can transition from Lambda-lith MVC applications to a decoupled microservice model. This allows them to focus on shipping code to delight their customers without managing infrastructure.

Find more resources for building serverless PHP applications at ServerlessLand.com.

Migrating .NET Classic Applications to Amazon ECS Using Windows Containers

Post Syndicated from Sundar Narasiman original https://aws.amazon.com/blogs/compute/migrating-net-classic-applications-to-amazon-ecs-using-windows-containers/

This post contributed by Sundar Narasiman, Arun Kannan, and Thomas Fuller.

AWS recently announced the general availability of Windows container management for Amazon Elastic Container Service (Amazon ECS). Docker containers and Amazon ECS make it easy to run and scale applications on a virtual machine by abstracting the complex cluster management and setup needed.

Classic .NET applications are developed with .NET Framework 4.7.1 or older and can run only on a Windows platform. These include Windows Communication Foundation (WCF), ASP.NET Web Forms, and ASP.NET MVC web apps or web APIs.

Why classic ASP.NET?

ASP.NET MVC 4.6 and older versions of ASP.NET occupy a significant footprint in the enterprise web application space. As enterprises move towards microservices for new or existing applications, containers are one of the stepping stones for migrating from monolithic to microservices architectures. Additionally, the support for Windows containers in Windows 10, Windows Server 2016, and Visual Studio Tooling support for Docker simplifies the containerization of ASP.NET MVC apps.

Getting started

In this post, you pick an ASP.NET 4.6.2 MVC application and get step-by-step instructions for migrating to ECS using Windows containers. The detailed steps, AWS CloudFormation template, Microsoft Visual Studio solution, ECS service definition, and ECS task definition are available in the aws-ecs-windows-aspnet GitHub repository.

To help you get started running Windows containers, here is the reference architecture for Windows containers on GitHub: ecs-refarch-cloudformation-windows. This reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. The CloudFormation YAML template in this reference architecture is referenced to create a single JSON CloudFormation stack, which is used in the steps for the migration.

Steps for Migration

The code and templates to implement this migration can be found on GitHub: https://github.com/aws-samples/aws-ecs-windows-aspnet.

  1. Your development environment needs to have the latest version and updates for Visual Studio 2017, Windows 10, and Docker for Windows Stable.
  2. Next, containerize the ASP.NET application and test it locally. Windows container application images are generally larger than Linux container images. This is because the Windows container base image itself is large, typically greater than 9 GB.
  3. After the application is containerized, the container image needs to be pushed to Amazon Elastic Container Registry (Amazon ECR). Images stored in ECR are compressed to improve pull times and reduce storage costs. In this case, you can see that ECR compresses the image to around 1 GB, for an optimization factor of 90%.
  4. Create a CloudFormation stack using the template in the ‘CloudFormation template’ folder. This creates an ECS service, task definition (referencing the containerized ASP.NET application), and other related components mentioned in the ECS reference architecture for Windows containers.
  5. After the stack is created, verify the successful creation of the ECS service, ECS instances, running tasks (with the threshold mentioned in the task definition), and the Application Load Balancer’s successful health check against running containers.
  6. Navigate to the Application Load Balancer URL and see the successful rendering of the containerized ASP.NET MVC app in the browser.

Key Notes

  • Generally, Windows container images occupy a large amount of space (on the order of a few GB).
  • Not all of the task definition parameters available for Linux containers are available for Windows containers. For more information, see Windows Task Definitions.
  • An Application Load Balancer can be configured to route requests to one or more ports on each container instance in a cluster. The dynamic port mapping allows you to have multiple tasks from a single service on the same container instance.
  • IAM roles for Windows tasks require extra configuration. For more information, see Windows IAM Roles for Tasks. For this post, configuration was handled by the CloudFormation template.
  • The ECS container agent log file can be accessed for troubleshooting Windows containers: C:\ProgramData\Amazon\ECS\log\ecs-agent.log

Summary

In this post, you migrated an ASP.NET MVC application to ECS using Windows containers.

The logical next step is to automate the activities for migration to ECS and build a fully automated continuous integration/continuous deployment (CI/CD) pipeline for Windows containers. This can be orchestrated by leveraging services such as AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Amazon ECR, and Amazon ECS. You can learn more about how this is done in the Set Up a Continuous Delivery Pipeline for Containers Using AWS CodePipeline and Amazon ECS post.

If you have questions or suggestions, please comment below.

Haas: MVCC and VACUUM

Post Syndicated from corbet original https://lwn.net/Articles/741842/rss

Robert Haas gets into the details of how PostgreSQL concurrency works and why an occasional VACUUM is necessary. “The second approach to providing transactions with atomicity and isolation is multi-version concurrency control (MVCC). The basic idea is simple: instead of locking a row that we want to update, let’s just create a new version of it which, initially, is visible only to the transaction which created it. Once the updating transaction commits, we’ll make the new row visible to all new transactions that start after that point, while existing transactions continue to see the old row.”

Enabling Two-Factor Authentication For Your Web Application

Post Syndicated from Bozho original https://techblog.bozho.net/enabling-two-factor-authentication-web-application/

It’s almost always a good idea to support two-factor authentication (2FA), especially for back-office systems. 2FA comes in many different forms, some of which include SMS, TOTP, or even hardware tokens.

Enabling them requires a similar flow:

  • The user goes to their profile page (skip this if you want to force 2FA upon registration)
  • Clicks “Enable two-factor authentication”
  • Enters some data to enable the particular 2FA method (phone number, TOTP verification code, etc.)
  • The next time they log in, in addition to the username and password, the login form requests the 2nd factor (verification code) and sends that along with the credentials

I will focus on Google Authenticator, which uses a TOTP (time-based one-time password) for generating a sequence of verification codes. The idea is that the server and the client application share a secret key. Based on that key and on the current time, both come up with the same code. Of course, clocks are not perfectly synced, so there’s a window of a few codes that the server accepts as valid.
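
Libraries such as GoogleAuth handle this for you, but a rough sketch of the underlying check (RFC 6238 TOTP over HMAC-SHA1, assuming you already have the raw shared secret bytes) could look like the following in Java; the class name and the one-step tolerance window are illustrative choices.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.time.Instant;

public class TotpSketch {
    private static final int DIGITS = 6;
    private static final long STEP_SECONDS = 30;

    // HOTP value for one time-step counter (RFC 4226 dynamic truncation)
    static int codeAt(byte[] secret, long counter) throws Exception {
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) { msg[i] = (byte) (counter & 0xff); counter >>= 8; }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % (int) Math.pow(10, DIGITS);
    }

    // Accept codes from the previous, current, and next time step to tolerate clock drift
    public static boolean verify(byte[] secret, int submittedCode) throws Exception {
        long currentStep = Instant.now().getEpochSecond() / STEP_SECONDS;
        for (long step = currentStep - 1; step <= currentStep + 1; step++) {
            if (codeAt(secret, step) == submittedCode) {
                return true;
            }
        }
        return false;
    }
}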

How to implement that with Java (on the server)? Using the GoogleAuth library. The flow is as follows:

  • The user goes to their profile page
  • Clicks “Enable two-factor authentication”
  • The server generates a secret key, stores it as part of the user profile and returns a URL to a QR code
  • The user scans the QR code with their Google Authenticator app thus creating a new profile in the app
  • The user enters the verification code shown in the app in a field that has appeared together with the QR code and clicks “confirm”
  • The server marks the 2FA as enabled in the user profile
  • If the user doesn’t scan the code or doesn’t verify the process, the user profile will contain just an orphaned secret key, but 2FA won’t be marked as enabled
  • There should be an option to later disable the 2FA from their user profile page

The most important bit from a theoretical point of view here is the sharing of the secret key. The crypto is symmetric, so both sides (the authenticator app and the server) have the same key. It is shared via a QR code that the user scans. If an attacker has control of the user’s machine at that point, the secret can be leaked and thus the 2FA can be abused by the attacker as well. But that’s not in the threat model – in other words, if the attacker has access to the user’s machine, the damage is already done anyway.

Upon login, the flow is as follows:

  • The user enters username and password and clicks “Login”
  • Using an AJAX request the page asks the server whether this email has 2FA enabled
  • If 2FA is not enabled, just submit the username & password form
  • If 2FA is enabled, the login form is not submitted, but instead an additional field is shown to let the user input the verification code from the authenticator app
  • After the user enters the code and presses login, the form can be submitted. This can use the same login button or a new “verify” button, or the verification input and button can be an entirely new screen (hiding the username/password inputs).
  • The server then checks again if the user has 2FA enabled and, if yes, verifies the verification code. If it matches, login is successful. If not, login fails and the user is allowed to reenter the credentials and the verification code. Note here that you can have different responses depending on whether the username/password are wrong or the code is wrong. You can also attempt to log in prior to even showing the verification code input. That is arguably better, because it doesn’t reveal to a potential attacker that the user uses 2FA.

While I’m speaking of username and password, this can apply to any other authentication method. After you get a success confirmation from an OAuth / OpenID Connect / SAML provider, or after you get a token from SecureLogin, you can request the second factor (code).

In code, the above processes look as follows (using Spring MVC; I’ve merged the controller and service layer for brevity. You can replace the @AuthenticationPrincipal bit with your way of supplying the currently logged in user details to the controllers). Assuming the methods are in a controller mapped to “/user/”:

@RequestMapping(value = "/init2fa", method = RequestMethod.POST)
@ResponseBody
public String initTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    GoogleAuthenticatorKey googleAuthenticatorKey = googleAuthenticator.createCredentials();
    user.setTwoFactorAuthKey(googleAuthenticatorKey.getKey());
    dao.update(user);
    // use the logged-in user's email as the otpauth account name (assuming the User entity exposes it)
    return GoogleAuthenticatorQRGenerator.getOtpAuthURL(GOOGLE_AUTH_ISSUER, user.getEmail(), googleAuthenticatorKey);
}

@RequestMapping(value = "/confirm2fa", method = RequestMethod.POST)
@ResponseBody
public boolean confirmTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token, @RequestParam("code") int code) {
    User user = getLoggedInUser(token);
    boolean result = googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code);
    user.setTwoFactorAuthEnabled(result);
    dao.update(user);
    return result;
}

@RequestMapping(value = "/disable2fa", method = RequestMethod.GET)
@ResponseBody
public void disableTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    user.setTwoFactorAuthKey(null);
    user.setTwoFactorAuthEnabled(false);
    dao.update(user);
}

@RequestMapping(value = "/requires2fa", method = RequestMethod.POST)
@ResponseBody
public boolean login(@RequestParam("email") String email) {
    // TODO consider verifying the password here in order not to reveal that a given user uses 2FA
    return userService.getUserDetailsByEmail(email).isTwoFactorAuthEnabled();
}

On the client side it’s simple AJAX requests to the above methods (sidenote: I kind of feel the term AJAX is no longer trendy, but I don’t know how to call them. Async? Background? Javascript?).

$("#two-fa-init").click(function() {
    $.post("/user/init2fa", function(qrImage) {
	$("#two-fa-verification").show();
	$("#two-fa-qr").prepend($('<img>',{id:'qr',src:qrImage}));
	$("#two-fa-init").hide();
    });
});

$("#two-fa-confirm").click(function() {
    var verificationCode = $("#verificationCode").val().replace(/ /g,'')
    $.post("/user/confirm2fa?code=" + verificationCode, function() {
       $("#two-fa-verification").hide();
       $("#two-fa-qr").hide();
       $.notify("Successfully enabled two-factor authentication", "success");
       $("#two-fa-message").html("Successfully enabled");
    });
});

$("#two-fa-disable").click(function() {
    $.post("/user/disable2fa", function(qrImage) {
       window.location.reload();
    });
});

The login form code depends very much on the existing login form you are using, but the point is to call the /requires2fa with the email (and password) to check if 2FA is enabled and then show a verification code input.

Overall, the implementation of two-factor authentication is simple and I’d recommend it for most systems where security is more important than simplicity of the user experience.

The post Enabling Two-Factor Authentication For Your Web Application appeared first on Bozho's tech blog.

SecureLogin For Java Web Applications

Post Syndicated from Bozho original https://techblog.bozho.net/securelogin-java-web-applications/

No, there is not a missing whitespace in the title. It’s not about any secure login, it’s about the SecureLogin protocol developed by Egor Homakov, a security consultant, who became famous for committing to master in the Rails project without having permissions.

The SecureLogin protocol is very interesting, as it does not rely on any central party (e.g. OAuth providers like Facebook and Twitter), thus avoiding all the pitfalls of OAuth (which Homakov has often criticized). It is not a password manager either. It is just client-side software that performs a bit of crypto in order to prove to the server that it is indeed the right user. For that to work, two parts are key:

  • Using a master password to generate a private key. It uses a key-derivation function, which guarantees that the produced private key has sufficient entropy. That way, using the same master password and the same email, you will get the same private key every time you use the password, and therefore the same public key. And you are the only one who can prove this public key is yours, by signing a message with your private key (see the sketch after this list).
  • Service providers (websites) identify you by your public key by storing it in the database when you register and then looking it up on each subsequent login
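
As a rough illustration of the first point, a deterministic seed can be derived from the master password with a password-based KDF. The sketch below uses PBKDF2 purely as an example; it is not necessarily the KDF or the parameters that the SecureLogin protocol actually specifies, and the derived bytes would still need to be turned into a signing key pair.

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.nio.charset.StandardCharsets;

public class DeterministicSeed {
    // Derive a deterministic 32-byte seed from (email, master password).
    // The same inputs always produce the same seed, which can then be used
    // as the private-key material for a deterministic signature scheme.
    public static byte[] derive(String email, char[] masterPassword) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(
                masterPassword,
                email.getBytes(StandardCharsets.UTF_8), // email acts as the salt
                100_000,                                // iteration count (illustrative)
                256);                                   // 256-bit output
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }
}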

The client-side part is performed ideally by a native client – a browser plugin (one is available for Chrome) or an OS-specific application (including mobile ones). That may sound tedious, but it’s actually quick and easy, and a one-time event (and is easier than password managers).

I have to admit – I like it, because I’ve been having a similar idea for a while. In my “biometric identification” presentation (where I discuss the pitfalls of using biometrics-only identification schemes), I proposed (slide 23) an identification scheme that uses biometrics (e.g. scanned with your phone) + a password to produce a private key (using a key-derivation function). And the biometric can easily be added to SecureLogin in the future.

It’s not all roses, of course, as one issue isn’t fully resolved yet – revocation. In case someone steals your master password (or you suspect it might be stolen), you may want to change it and notify all service providers of that change so that they can replace your old public key with a new one. That has two implications – first, you may not have a full list of sites that you registered on, and since you may have changed devices, or used multiple devices, there may be websites that never get to know about your password change. There are proposed solutions (points 3 and 4), but they are not intrinsic to the protocol and rely on centralized services. The second issue is – what if the attacker changes your password first? To prevent that, service providers should probably rely on email verification, which is neither part of the protocol, nor is encouraged by it. But you may have to do it anyway, as a safeguard.

Homakov has not only defined a protocol, but also provided implementations of the native clients, so that anyone can start using it. So I decided to add it to a project I’m currently working on (the login page is here). For that I needed a java implementation of the server verification, and since no such implementation existed (only ruby and node.js are provided for now), I implemented it myself. So if you are going to use SecureLogin with a Java web application, you can use that instead of rolling out your own. While implementing it, I hit a few minor issues that may lead to protocol changes, so I guess backward compatibility should also be somehow included in the protocol (through versioning).

So, what does the code look like? On the client side you have a button and a little javascript:

<!-- get the latest sdk.js from the GitHub repo of securelogin
   or include it from https://securelogin.pw/sdk.js -->
<script src="js/securelogin/sdk.js"></script>
....
<p class="slbutton" id="securelogin">&#9889; SecureLogin</p>
$("#securelogin").click(function() {
  SecureLogin(function(sltoken){
	// TODO: consider adding csrf protection as in the demo applications
        // Note - pass as request body, not as param, as the token relies 
        // on url-encoding which some frameworks mess with
	$.post('/app/user/securelogin', sltoken, function(result) {
            if(result == 'ok') {
		 window.location = "/app/";
            } else {
                 $.notify("Login failed, try again later", "error");
            }
	});
  });
  return false;
});

A single button can be used for both login and signup, or you can have a separate signup form, if it has to include additional details rather than just an email. Since I added SecureLogin in addition to my password-based login, I kept the two forms.

On the server, you simply do the following:

@RequestMapping(value = "/securelogin/register", method = RequestMethod.POST)
@ResponseBody
public String secureloginRegister(@RequestBody String token, HttpServletResponse response) {
    try {
        SecureLogin login = SecureLogin.verify(token, Options.create(websiteRootUrl));
        UserDetails details = userService.getUserDetailsByEmail(login.getEmail());
        if (details == null || !login.getRawPublicKey().equals(details.getSecureLoginPublicKey())) {
            return "failure";
        }
        // sets the proper cookies to the response ('secure' is a configuration flag for HTTPS-only cookies)
        TokenAuthenticationService.addAuthentication(response, login.getEmail(), secure);
        return "ok";
    } catch (SecureLoginVerificationException e) {
        return "failure";
    }
}

This is spring-mvc, but it can be any web framework. You can also incorporate that into a spring-security flow somehow. I’ve never liked spring-security’s complexity, so I did it manually. Also, instead of strings, you can return proper status codes. Note that I’m doing a lookup by email and only then checking the public key (as if it were a password). You can do it the other way around if you have the proper index on the public key column.

I wouldn’t suggest having a SecureLogin-only system, as the project is still in an early stage and users may not be comfortable with it. But certainly adding it as an option is a good idea.

The post SecureLogin For Java Web Applications appeared first on Bozho's tech blog.

Basic API Rate-Limiting

Post Syndicated from Bozho original https://techblog.bozho.net/basic-api-rate-limiting/

It is likely that you are developing some form of (web/RESTful) API, and in case it is publicly-facing (or even when it’s internal), you normally want to rate-limit it somehow. That is, to limit the number of requests performed over a period of time, in order to save resources and protect from abuse.

This can probably be achieved on the web-server/load-balancer level with some clever configuration, but usually you want the rate limiter to be client-specific (i.e. each client of your API should have a separate rate limit), and the way the client is identified varies. It’s probably still possible to do it on the load balancer, but I think it makes sense to have it on the application level.

I’ll use spring-mvc for the example, but any web framework has a good way to plug an interceptor.

So here’s an example of a spring-mvc interceptor:

@Component
public class RateLimitingInterceptor extends HandlerInterceptorAdapter {

    private static final Logger logger = LoggerFactory.getLogger(RateLimitingInterceptor.class);
    
    @Value("${rate.limit.enabled}")
    private boolean enabled;
    
    @Value("${rate.limit.hourly.limit}")
    private int hourlyLimit;

    private Map<String, SimpleRateLimiter> limiters = new ConcurrentHashMap<>();
    
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        if (!enabled) {
            return true;
        }
        String clientId = request.getHeader("Client-Id");
        // let non-API requests pass
        if (clientId == null) {
            return true;
        }
        SimpleRateLimiter rateLimiter = getRateLimiter(clientId);
        boolean allowRequest = rateLimiter.tryAcquire();
    
        if (!allowRequest) {
            response.setStatus(HttpStatus.TOO_MANY_REQUESTS.value());
        }
        response.addHeader("X-RateLimit-Limit", String.valueOf(hourlyLimit));
        return allowRequest;
    }
    
    private SimpleRateLimiter getRateLimiter(String clientId) {
        // lazily create one limiter per client: hourlyLimit permits, replenished every hour
        return limiters.computeIfAbsent(clientId,
                id -> SimpleRateLimiter.create(hourlyLimit, TimeUnit.HOURS));
    }

	
    @PreDestroy
    public void destroy() {
        // loop and finalize all limiters
    }
}

This initializes rate-limiters per client on demand. Alternatively, on startup you could just loop through all registered API clients and create a rate limiter for each. In case the rate limiter doesn’t allow more requests (tryAcquire() returns false), then return “Too many requests” and abort the execution of the request (return “false” from the interceptor).

This sounds simple. But there are a few catches. You may wonder where the SimpleRateLimiter above is defined. We’ll get there, but first let’s see what options we have for rate limiter implementations.

The most recommended one seems to be the guava RateLimiter. It has a straightforward factory method that gives you a rate limiter for a specified rate (permits per second). However, it doesn’t accommodate web APIs very well, as you can’t initialize the RateLimiter with a pre-existing number of permits. That means a period of time should elapse before the limiter would allow requests. There’s another issue – if you have less than one permit per second (e.g. if your desired rate limit is “200 requests per hour”), you can pass a fraction (hourlyLimit / secondsInHour), but it still won’t work the way you expect it to, as internally there’s a “maxPermits” field that would cap the number of permits to much less than you want it to. Also, the rate limiter doesn’t allow bursts – you have exactly X permits per second, but you cannot spread them over a longer period of time, e.g. have 5 requests in one second, and then no requests for the next few seconds. In fact, all of the above can be solved, but sadly, through hidden fields that you don’t have access to. Multiple feature requests have existed for years now, but Guava just doesn’t update the rate limiter, making it much less applicable to API rate-limiting.
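
For reference, a naive attempt to express an hourly limit with the Guava RateLimiter looks like the sketch below; the fractional rate and the roughly one-second burst window are exactly the limitations described above, so this is shown only to illustrate the problem, not as a recommendation.

import com.google.common.util.concurrent.RateLimiter;

public class GuavaHourlyLimitAttempt {
    // ~0.055 permits per second: permits trickle in at this rate and, by default,
    // only about one second's worth can be stored, so "200 per hour with bursts"
    // is not what you actually get.
    private final RateLimiter limiter = RateLimiter.create(200.0 / 3600);

    public boolean allowRequest() {
        return limiter.tryAcquire(); // non-blocking check, as in the interceptor above
    }
}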

Using reflection, you can tweak the parameters and make the limiter work. However, it’s ugly, and it’s not guaranteed it will work as expected. I have shown here how to initialize a guava rate limiter with X permits per hour, with burstability and full initial permits. When I thought that would do, I saw that tryAcquire() has a synchronized(..) block. Will that mean all requests will wait for each other when simply checking whether allowed to make a request? That would be horrible.

So in fact the guava RateLimiter is not meant for (web) API rate-limiting. Maybe keeping it feature-poor is Guava’s way for discouraging people from misusing it?

That’s why I decided to implement something simple myself, based on a Java Semaphore. Here’s the naive implementation:

public class SimpleRateLimiter {
    private Semaphore semaphore;
    private int maxPermits;
    private TimeUnit timePeriod;
    private ScheduledExecutorService scheduler;

    public static SimpleRateLimiter create(int permits, TimeUnit timePeriod) {
        SimpleRateLimiter limiter = new SimpleRateLimiter(permits, timePeriod);
        limiter.schedulePermitReplenishment();
        return limiter;
    }

    private SimpleRateLimiter(int permits, TimeUnit timePeriod) {
        this.semaphore = new Semaphore(permits);
        this.maxPermits = permits;
        this.timePeriod = timePeriod;
    }

    public boolean tryAcquire() {
        return semaphore.tryAcquire();
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public void schedulePermitReplenishment() {
        scheduler = Executors.newScheduledThreadPool(1);
        // top the semaphore back up to maxPermits once every time period
        scheduler.scheduleAtFixedRate(() -> {
            semaphore.release(maxPermits - semaphore.availablePermits());
        }, 1, 1, timePeriod);
    }
}

It takes a number of permits (the allowed number of requests) and a time period. The time period is “1 X”, where X can be second/minute/hour/daily – depending on how you want your limit to be configured – per second, per minute, hourly, daily. Every 1 X a scheduler replenishes the acquired permits (in the example above there’s one scheduler per client, which may be inefficient with a large number of clients – you can pass a shared scheduler pool instead). There is no control for bursts (a client can spend all permits with a rapid succession of requests), there is no warm-up functionality, there is no gradual replenishment. Depending on what you want, this may not be ideal, but that’s just a basic rate limiter that is thread-safe and doesn’t have any blocking. I wrote a unit test to confirm that the limiter behaves properly, and also ran performance tests against a local application to make sure the limit is obeyed. So far it seems to be working.

Are there alternatives? Well, yes – there are libraries like RateLimitJ that use Redis to implement rate-limiting. That would mean, however, that you need to set up and run Redis, which seems like an overhead for “simply” having rate-limiting. (Note: it seems to also have an in-memory version.)

On the other hand, how would rate-limiting work properly in a cluster of application nodes? Do the application nodes need some database or gossip protocol to share data about the per-client permits (requests) remaining? Not necessarily. A very simple approach to this issue would be to assume that the load balancer distributes the load equally among your nodes. That way you would just have to set the limit on each node to be equal to the total limit divided by the number of nodes. It won’t be exact, but you rarely need it to be – allowing 5-10 more requests won’t kill your application, and allowing 5-10 fewer won’t be dramatic for the users.

That, however, would mean that you have to know the number of application nodes. If you employ auto-scaling (e.g. in AWS), the number of nodes may change depending on the load. If that is the case, instead of configuring a hard-coded number of permits, the replenishing scheduled job can calculate the “maxPermits” on the fly, by calling an AWS (or other cloud-provider) API to obtain the number of nodes in the current auto-scaling group. That would still be simpler than supporting a redis deployment just for that.
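
As a sketch of that idea, the replenishment job could look up the current group size with the AWS SDK for Java v2 and divide the global limit by it; the auto-scaling group name, class name, and rounding policy here are assumptions.

import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.DescribeAutoScalingGroupsRequest;

public class PerNodeLimitCalculator {
    private final AutoScalingClient autoScaling = AutoScalingClient.create();

    // Number of instances currently in the auto-scaling group
    public int currentNodeCount() {
        return autoScaling.describeAutoScalingGroups(
                        DescribeAutoScalingGroupsRequest.builder()
                                .autoScalingGroupNames("api-app-asg")
                                .build())
                .autoScalingGroups().get(0)
                .instances().size();
    }

    // Split the global hourly limit evenly across nodes (at least one permit per node)
    public int permitsPerNode(int globalHourlyLimit) {
        return Math.max(1, globalHourlyLimit / currentNodeCount());
    }
}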

Overall, I’m surprised there isn’t a “canonical” way to implement rate-limiting (in Java). Maybe the need for rate-limiting is not as common as it may seem. Or it’s implemented manually – by temporarily banning API clients that use “too much resources”.

Update: someone pointed out the bucket4j project, which seems nice and worth taking a look at.

The post Basic API Rate-Limiting appeared first on Bozho's tech blog.

Launch – .NET Core Support In AWS CodeStar and AWS CodeBuild

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-net-core-support-in-aws-codestar-and-aws-codebuild/

A few months ago, I introduced the AWS CodeStar service, which allows you to quickly develop, build, and deploy applications on AWS. AWS CodeStar helps development teams to increase the pace of releasing applications and solutions while reducing some of the challenges of building great software.

When the CodeStar service launched in April, it was released with several project templates for Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. Each template provisions the underlying AWS Code Services and configures an end-to-end continuous delivery pipeline for the targeted application using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

As I have participated in some of the AWS Summits around the world discussing AWS CodeStar, many of you have shown curiosity in learning about the availability of .NET templates in CodeStar and utilizing CodeStar to deploy .NET applications. Therefore, it is with great pleasure and excitement that I announce that you can now develop, build, and deploy cross-platform .NET Core applications with the AWS CodeStar and AWS CodeBuild services.

AWS CodeBuild has added the ability to build and deploy .NET Core application code to both Amazon EC2 and AWS Lambda. This new CodeBuild capability has enabled the addition of two new project templates in AWS CodeStar for .NET Core applications. These new project templates enable you to deploy .NET Core applications to Amazon EC2 Linux instances, and provide everything you need to get started quickly, including .NET Core sample code and a full software development toolchain.

Of course, I can’t wait to try out the new addition to the project templates within CodeStar and the updated .NET application build options with CodeBuild. For my test scenario, I will use CodeStar to create, build, and deploy my .NET Core ASP.NET web application on EC2. Then, I will extend my ASP.NET application by creating a .NET Lambda function to be compiled and deployed with CodeBuild as a part of my application’s pipeline. This Lambda function can then be called and used within my ASP.NET application to extend the functionality of my web application.

So, let’s get started!

First, I’ll log into the CodeStar console and start a new CodeStar project. I am presented with the option to select a project template.


Right now, I would like to focus on building .NET Core projects, therefore, I’ll filter the project templates by selecting C# in the Programming Languages section. Now, CodeStar only shows me the new .NET Core project templates that I can use to build web applications and services with ASP.NET Core.

I think I’ll use the ASP.NET Core web application project template for my first CodeStar .NET Core application. As you can see by the project template information display, my web application will be deployed on Amazon EC2, which signifies to me that my .NET Core code will be compiled and packaged using AWS CodeBuild and deployed to EC2 using the AWS CodeDeploy service.


My hunch about the services is confirmed on the next screen when CodeStar shows the AWS CodePipeline and the AWS services that will be configured for my new project. I’ll name this web application project, ASPNetCore4Tara, and leave the default Project ID that CodeStar generates from the project name. Yes, I know that this is one of the goofiest names I could ever come up with, but, hey, it will do for this test project so I’ll go ahead and click the Next button. I should mention that you have the option to edit your Amazon EC2 configuration for your project on this screen before CodeStar starts configuring and provisioning the services needed to run your application.

Since my ASP.Net Core web application will be deployed to an Amazon EC2 instance, I will need to choose an Amazon EC2 Key Pair for encryption of the login used to allow me to SSH into this instance. For my ASPNetCore4Tara project, I will use an existing Amazon EC2 key pair I have previously used for launching my other EC2 instances. However, if I was creating this project and I did not have an EC2 key pair or if I didn’t have access to the .pem file (private key file) for an existing EC2 key pair, I would have to first visit the EC2 console and create a new EC2 key pair to use for my project. This is important because if you remember, without having the EC2 key pair with the associated .pem file, I would not be able to log into my EC2 instance.

With my EC2 key pair selected and confirmation that I have the related private file checked, I am ready to click the Create Project button.


After CodeStar completes the creation of the project and the provisioning of the project-related AWS services, I am ready to view the CodeStar sample application from the application endpoint displayed in the CodeStar dashboard. This sample application should be familiar to you if you have been working with the CodeStar service or if you had an opportunity to read the blog post about the AWS CodeStar service launch. I’ll click the link underneath Application Endpoints to view the sample ASP.NET Core web application.

Now I’ll go ahead and clone the generated project and connect my Visual Studio IDE to the project repository. I am going to make some changes to the application and since AWS CodeBuild now supports .NET Core builds and deployments to both Amazon EC2 and AWS Lambda, I will alter my build specification file appropriately for the changes to my web application that will include the use of the Lambda function.  Don’t worry if you are not familiar with how to clone the project and connect it to the Visual Studio IDE, CodeStar provides in-console step-by-step instructions to assist you.

First things first, I will open up the Visual Studio IDE and connect to AWS CodeCommit repository provisioned for my ASPNetCore4Tara project. It is important to note that the Visual Studio 2017 IDE is required for .NET Core projects in AWS CodeStar and the AWS Toolkit for Visual Studio 2017 will need to be installed prior to connecting your project repository to the IDE.

In order to connect to my repo within Visual Studio, I will open up Team Explorer and select the Connect link under the AWS CodeCommit option under Hosted Service Providers. I will click Ok to keep my default AWS profile toolkit credentials.

I’ll then click Clone under the Manage Connections and AWS CodeCommit hosted provider section.

Once I select my aspnetcore4tara repository in the Clone AWS CodeCommit Repository dialog, I only have to enter my IAM role’s HTTPS Git credentials in the Git Credentials for AWS CodeCommit dialog and my process is complete. If you’re following along and receive a dialog for Git Credential Manager login, don’t worry, just enter the same IAM role’s Git credentials.


My project is now connected to the aspnetcore4tara CodeCommit repository and my web application is loaded for editing. As you will notice in the screenshot below, the sample project is structured as a standard ASP.NET Core MVC web application.

With the project created, I can make changes and updates. Since I want to update this project with a .NET Lambda function, I’ll quickly start a new project in Visual Studio to author a very simple C# Lambda function to be compiled with the CodeStar project. This AWS Lambda function will be included in the CodeStar ASP.NET Core web application project.

The Lambda function I’ve created makes a call to the REST API of NASA’s popular Astronomy Picture of the Day website. The API sends back the latest planetary image and related information in JSON format. You can see the Lambda function code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using System.Net.Http;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace NASAPicOfTheDay
{
    public class SpacePic
    {
        HttpClient httpClient = new HttpClient();
        string nasaRestApi = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY";

        /// <summary>
        /// A simple function that retrieves NASA Planetary Info and 
        /// Picture of the Day
        /// </summary>
        /// <param name="context"></param>
        /// <returns>nasaResponse-JSON String</returns>
        public async Task<string> GetNASAPicInfo(ILambdaContext context)
        {
            string nasaResponse;
            
            //Call NASA Picture of the Day API
            nasaResponse = await httpClient.GetStringAsync(nasaRestApi);
            Console.WriteLine("NASA API Response");
            Console.WriteLine(nasaResponse);
            
            //Return NASA response - JSON format
            return nasaResponse; 
        }
    }
}

I’ll now publish this C# Lambda function and test it by using the Publish to AWS Lambda option provided by the AWS Toolkit for Visual Studio with the NASAPicOfTheDay project. After publishing the function, I can test it and verify that it is working correctly within Visual Studio and/or the AWS Lambda console. You can learn more about building AWS Lambda functions with C# and .NET at: http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html

 

Now that I have my Lambda function completed and tested, all that is left is to update the CodeBuild buildspec.yml file within my aspnetcore4tara CodeStar project to include publishing and deploying of the Lambda function.

To accomplish this, I will create a new folder named functions and copy the folder that contains my Lambda function .NET project to my aspnetcore4tara web application project directory.

 

 

To build and publish my AWS Lambda function, I will use commands in the buildspec.yml file from the aws-lambda-dotnet tools library, which helps .NET Core developers develop AWS Lambda functions. I add a file, funcprof, to the NASAPicOfTheDay folder which contains customized profile information for use with aws-lambda-dotnet tools. All that is left is to update the buildspec.yml file used by CodeBuild for the ASPNetCore4Tara project build to include the packaging and the deployment of the NASAPictureOfDay AWS Lambda function. The updated buildspec.yml is as follows:

version: 0.2
env:
  variables:
    basePath: 'hold'
phases:
  install:
    commands:
      - echo set basePath for project
      - basePath=$(pwd)
      - echo $basePath
      - echo Build restore and package Lambda function using AWS .NET Tools...
      - dotnet restore functions/*/NASAPicOfTheDay.csproj
      - cd functions/NASAPicOfTheDay
      - dotnet lambda package -c Release -f netcoreapp1.0 -o ../lambda_build/nasa-lambda-function.zip
  pre_build:
    commands:
      - echo Deploy Lambda function used in ASPNET application using AWS .NET Tools. Must be in path of Lambda function build 
      - cd $basePath
      - cd functions/NASAPicOfTheDay
      - dotnet lambda deploy-function NASAPicAPI -c Release -pac ../lambda_build/nasa-lambda-function.zip --profile-location funcprof -fd 'NASA API for Picture of the Day' -fn NASAPicAPI -fh NASAPicOfTheDay::NASAPicOfTheDay.SpacePic::GetNASAPicInfo -frun dotnetcore1.0 -frole arn:aws:iam::xxxxxxxxxxxx:role/lambda_exec_role -framework netcoreapp1.0 -fms 256 -ft 30  
      - echo Lambda function is now deployed - Now change directory back to Base path
      - cd $basePath
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
artifacts:
  files:
    - AspNetCoreWebApplication/build_output/**/*
    - scripts/**/*
    - appspec.yml
    

That’s it! All that is left is for me to add and commit all my file additions and updates to the AWS CodeCommit git repository provisioned for my ASPNetCore4Tara project. This kicks off the AWS CodePipeline for the project which will now use AWS CodeBuild new support for .NET Core to build and deploy both the ASP.NET Core web application and the .NET AWS Lambda function.

 

Summary

The support for .NET Core in AWS CodeStar and AWS CodeBuild opens the door for .NET developers to take advantage of the benefits of Continuous Integration and Delivery when building .NET based solutions on AWS.  Read more about .NET Core support in AWS CodeStar and AWS CodeBuild here or review product pages for AWS CodeStar and/or AWS CodeBuild for more information on using the services.

Enjoy building .NET projects more efficiently with Amazon Web Services using .NET Core with AWS CodeStar and AWS CodeBuild.

Tara

 

Building Enterprise Level Web Applications on AWS Lambda with the DEEP Framework

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/building-enterprise-level-web-applications-on-aws-lambda-with-deep/

This is a guest post by Eugene Istrati, the co-creator of the DEEP Framework, a full-stack web framework that enables developers to build cloud-native applications using microservices architecture.

 

From the beginning, Mitoc Group has been building web applications for enterprise customers. We are a small group of developers who are helping customers with their entire web development process, from conception through execution and down to maintenance. Being in the business of doing everything is very hard, and it would be impossible without using AWS foundational services, but we incrementally needed more. That is why we became early adopters of the serverless computing approach and developed an ecosystem called Digital Enterprise End-to-end Platform (DEEP) with AWS Lambda at the core.

In this post, we dive deeper into how DEEP is using AWS Lambda to empower developers to build cloud-native applications or platforms using microservices architecture. We will walk you through the process of identifying the front-end, back-end and data tiers required to build web applications with AWS Lambda at the core. We will focus on the structure of the AWS Lambda functions we use, as well as security, performance and benchmarking steps that we take to build enterprise-level web applications.

Enterprise-level web applications

Our approach to web development is full-stack and user-driven, focused on UI (aka the user interface) and UX (aka user eXperience). Before going into the details, we’d like to emphasize the strategic (biased and opinionated) decisions we made early on:

  • We don’t say “no” to customers; every problem is seriously evaluated and sometimes we offer options that involve our direct competitors.
  • We are developers and we focus only on the application level; everything else (platform level and infrastructure level) must be managed by AWS.
  • We focus 20% of the effort to solve 80% of the workload; everything must be automated and pushed to the service side rather than the client side.

To be honest and fair, it doesn’t work all the time as expected, but it does help us to learn fast and move quickly, sustainably and incrementally solving business problems through technical solutions that really matter. However, the definition of “really matter” differs from customer to customer, quite uniquely in some cases.

Nevertheless, what we have learned from our customers is that enterprise-level web applications must provide the following common expectations:

Architecture

This post describes how we transformed a self-managed task management application (aka todo app) in minutes. The original version can be seen on www.todomvc.com and the original code can be downloaded from https://github.com/tastejs/todomvc/tree/master/examples/angularjs.

The architecture of every web application we build or transform, including the one described above, is similar to the reference architecture of the realtime voting application published recently by AWS on GitHub.

The todo app is written in AngularJS and deployed on Amazon S3, behind Amazon CloudFront (front-end). Task management is processed by AWS Lambda, optionally behind Amazon API Gateway (back-end). Task metadata is stored in Amazon DynamoDB (data tier). The transformed todo app, along with instructions on how to install and deploy this web application, is described in the Building Scalable Web Apps with AWS Lambda and Home-Grown Serverless blog post and the todo code is available on GitHub.

Let’s look at AWS Lambda functions and the value proposition they offer to us and our customers.

AWS Lambda functions

The goal of the todo app is to manage tasks in a self-service mode. End users can view tasks, create new tasks, mark or unmark a task as done, and clear completed tasks. From the UI point of view, that leads to four user interactions that require different back-end calls:

  • web service that retrieves tasks
  • web service that creates tasks
  • web service that deletes tasks
  • web service that updates tasks

A simple reordering of the above identified back-end service calls leads to basic CRUD (create, retrieve, update, delete) operations on the Task data object. These are the simple logical steps that we take to identify the front-end, back-end, and data tiers of (drums beating, trumpets playing) our approach to microservices, which we prefer to call microapplications.

Therefore, coming back to AWS Lambda, we have written four small Node.js functions that are context-bounded and self-sustained (each microservice corresponds to one of the back-end web services identified above):

Microservice that retrieves tasks
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (taskId) {
      this.retrieveTask(taskId, (task) => {
        return this.createResponse(task).send();
      });
    } else {
      this.retrieveAllTasks((result) => {
        return this.createResponse(result).send();
      });
    }
  }

  /**
   * @param {Function} callback
   */
  retrieveAllTasks(callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    // The callback receives the full result set; pass its Items collection along
    TaskModel.findAll((err, result) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(result.Items);
    });
  }

  /**
   * @param {String} taskId
   * @param {Function} callback
   */
  retrieveTask(taskId, callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.findOneById(taskId, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(task ? task.get() : null);
    });
  }
}
Microservice that creates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.createItem(request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that updates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.updateItem(taskId, request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that deletes a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.deleteById(taskId, (err) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse({}).send();
    });
  }
}

Each of the above files, together with its related dependencies, is compressed into a .zip file and uploaded to AWS Lambda. If you’re new to this process, we strongly recommend following the How to Create, Upload and Invoke an AWS Lambda function tutorial.

Back to the four small Node.js functions: you can see that we have adopted ES6 (aka ES2015) as our coding standard, and that we import deep-framework in every function. What is this framework anyway, and why are we using it everywhere?

Full-stack web framework

Step back for a minute. Building and uploading AWS Lambda functions to the service is simple and straightforward, but now imagine that you need to manage 100–150 web services to serve a single web page, multiplied by hundreds or thousands of web pages.

We believe that the only way to achieve this kind of flexibility and scale is automation and code reuse. These principles led us to build and open source DEEP Framework — a full-stack web framework that abstracts web services and web applications from specific cloud services — and DEEP CLI (aka deepify) — a development tool-chain that abstracts package management and associated development operations.

Therefore, to make sure that the process of managing AWS Lambda functions is streamlined and automated, we consistently include two more files in each uploaded .zip:

DEEP microservice bootstrap
'use strict';

import DeepFramework from 'deep-framework';
import Handler from './Handler';

export default DeepFramework.LambdaHandler(Handler);
DEEP microservice package metadata (for npm) 
{
  "name": "deep-todo-task-create",
  "version": "0.0.1",
  "description": "Create a new todo task",
  "scripts": {
    "postinstall": "npm run compile",
    "compile": "deepify compile-es6 `pwd`"
  },
  "dependencies": {
    "deep-framework": "^1.8.x"
  },
  "preferGlobal": false,
  "private": true,
  "analyze": true
}

Having these three files (Handler.es6, bootstrap.es6, and package.json) in each Lambda function doesn’t mean that your final .zip file will be that small. Actually, a lot of additional operations happen before the .zip file is created. To name a few:

  • AWS Lambda performs better when the uploaded codebase is smaller. Because we provide both local development capabilities and one-step push to production, our process optimizes resources before deploying to AWS.
  • ES6 is not supported by the Node.js v0.10.x runtime that we use in AWS Lambda (and the import/export module syntax is not natively available even in the Node.js 4.3 runtime), so we compile the .es6 files into ES5-compliant .js files using Babel (see the sketch after this list).
  • Dependencies that are defined in package.json are automatically pulled and fine-tuned for Node.js v0.10.x to provide the best performance possible.
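
To illustrate just the ES6-to-ES5 step (the real build is handled by deepify compile-es6; the Babel preset and file names below are assumptions), the transformation is roughly equivalent to:

'use strict';

// Rough sketch of the ES6-to-ES5 compile step using the Babel 6 programmatic API;
// the preset and file names are illustrative, not the exact deepify implementation.
const fs = require('fs');
const babel = require('babel-core');

babel.transformFile('Handler.es6', { presets: ['es2015'] }, (err, result) => {
  if (err) {
    throw err;
  }

  // Write the ES5-compliant output that gets packaged into the Lambda .zip
  fs.writeFileSync('Handler.js', result.code);
});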

Putting everything together

First, you need the following prerequisites:

  1. AWS account (Create an Amazon Web Services Account)
  2. AWS CLI (Configure AWS Command Line Interface)
  3. Git v2+ (Get Started — Installing Git)
  4. Java / JRE v6+ (JDK 8 and JRE 8 Installation Start Here)
  5. Node.js v4+ (Install nvm and use the latest Node.js v4)

Note: Don’t use sudo to install nvm. Otherwise, you’ll have to fix npm permissions.

Second, install the DEEP CLI with the following command:

npm install deepify -g

Next, install, run locally, and deploy the todo app using deepify:

deepify install github://MitocGroup/deep-microservices-todo-app ~/deep-todo-app

deepify server ~/deep-todo-app

deepify deploy ~/deep-todo-app

Note: When the deepify server command is finished, you can open http://localhost:8000 in your browser and enjoy the todo app running locally.

Cleaning up

There are at least half a dozen services and several dozen resources created during deepify deploy. If only there were a simple command that could clean up everything when we’re done. We thought of that and created deepify undeploy to address this need. When you are done using the todo app and want to remove the related resources, execute the following:

deepify undeploy ~/deep-todo-app

As you can see, we empower developers to build hassle-free, cloud-native applications or platforms using microservices architecture and serverless computing.

And what about security?

Security

One of the biggest value propositions of AWS is out-of-the-box security and compliance. The beauty of the cloud-native approach is that security comes by design (in other words, it won’t work otherwise). We take full advantage of the shared responsibility model and enforce security in every layer.

End users benefit from IAM best practices through streamlined implementations of least privilege access, delegated roles instead of credentials, and integration with logging and monitoring services (e.g., AWS CloudTrail, Amazon CloudWatch, and Amazon Elasticsearch Service + Kibana). For example, developers and end users of the todo app didn’t need to explicitly define any security roles (it was done by deepify deploy), but they can rest assured that only their instance of todo app will be using their infrastructure, platform, and application resources.
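
As one illustration of “delegated roles instead of credentials”, a front end can obtain temporary, least-privilege credentials from Amazon Cognito with the AWS SDK for JavaScript. The identity pool ID below is a placeholder, and this sketch is not necessarily how deepify wires it up:

// Illustrative only: the browser exchanges an identity for temporary credentials
// scoped to a delegated role, instead of embedding long-lived keys. Pool ID is fake.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000'
});

AWS.config.credentials.get(function (err) {
  if (err) {
    console.error('Could not retrieve temporary credentials', err);
  } else {
    console.log('Temporary credentials ready; API calls now run under the delegated role');
  }
});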

The following are two security roles (back-end and front-end) that have been seamlessly generated and enforced in each layer:

IAM role policy that allows back-end invocation of the AWS Lambda function (e.g., DeepProdTodoCreate1234abcd) in the web application AWS account (e.g., 123456789000)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": ["arn:aws:lambda:us-east-1:123456789000:function:DeepProdTodoCreate1234abcd*"]
        }
    ]
}
DEEP role that allows a front-end resource (e.g., deep.todo:task) to execute an action (e.g., deep.todo:task:create)
{
  "Version": "2015-10-07",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["deep.todo:task:create"],
      "Resource": ["deep.todo:task"]
    }
  ]
}
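
For illustration, the first (IAM) policy is what permits a caller holding that role to invoke the function. With the AWS SDK for JavaScript, such an invocation would look roughly like this (the payload shape is a made-up example, not the todo app’s actual contract):

// Illustrative invocation permitted by the lambda:InvokeFunction statement above;
// the payload shape is hypothetical, not the todo app's real request format.
var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.invoke({
  FunctionName: 'DeepProdTodoCreate1234abcd',
  Payload: JSON.stringify({ Title: 'Write blog post' })
}, function (err, data) {
  if (err) {
    console.error(err);
  } else {
    console.log(JSON.parse(data.Payload));
  }
});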

Benchmarking

We have been continuously benchmarking AWS Lambda for various use cases in our microapplications. After repeating similar analyses a few times, we decided to build the benchmarking itself as another microapplication and reuse the ecosystem to include it automatically wherever we need it. The open-source code for the benchmarking microapplication is available on GitHub.

For the todo app in particular, we performed various benchmarking analyses on AWS Lambda by tweaking different parameters of a specific function (e.g., function size and memory size) and measuring the impact on performance and billable cost. We would like to share the results with you:

Benchmarking for todo app

Req No | Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Start Time | Stop Time | Front-end Call (ms) | Back-end Call (ms) | Billed Time (ms) | Billed Cost ($)
1  | 1.1 | 128 | 34 | 20:15.8 | 20:16.2 | 359 | 200.47 | 300 | 0.000000624
2  | 1.1 | 128 | 34 | 20:17.8 | 20:18.2 | 381 | 202.45 | 300 | 0.000000624
3  | 1.1 | 128 | 34 | 20:19.9 | 20:20.3 | 406 | 192.52 | 200 | 0.000000416
4  | 1.1 | 128 | 34 | 20:21.9 | 20:22.2 | 306 | 152.19 | 200 | 0.000000416
5  | 1.1 | 128 | 34 | 20:23.9 | 20:24.2 | 333 | 175.01 | 200 | 0.000000416
6  | 1.1 | 128 | 34 | 20:25.9 | 20:26.3 | 431 | 278.03 | 300 | 0.000000624
7  | 1.1 | 128 | 34 | 20:27.9 | 20:28.2 | 323 | 170.97 | 200 | 0.000000416
8  | 1.1 | 128 | 34 | 20:29.9 | 20:30.2 | 327 | 160.24 | 200 | 0.000000416
9  | 1.1 | 128 | 34 | 20:31.9 | 20:32.4 | 556 | 225.25 | 300 | 0.000000624
10 | 1.1 | 128 | 35 | 20:33.9 | 20:34.2 | 333 | 179.59 | 200 | 0.000000416
Average | | | | | | 375.50 | 193.67 | Total | 0.000004992
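
As a quick sanity check on the last column: billed cost is billed time (rounded up to the nearest 100 ms) multiplied by the per-100 ms price for the configured memory size. Assuming the 128 MB price tier published at the time ($0.000000208 per 100 ms), a minimal sketch:

// Reproduces the "Billed Cost ($)" column: (billed ms / 100) * price per 100 ms at 128 MB.
// The price below is the tier published at the time of the benchmark; check current pricing.
let PRICE_PER_100MS_AT_128MB = 0.000000208;

function billedCost(billedTimeMs) {
  return (billedTimeMs / 100) * PRICE_PER_100MS_AT_128MB;
}

console.log(billedCost(300)); // ~0.000000624, matching the 300 ms rows above
console.log(billedCost(200)); // ~0.000000416, matching the 200 ms rows above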

Performance

Speaking of performance, we find AWS Lambda mature enough to power large-scale web applications. The key is to keep each function as small as possible, following a simple rule: one function achieves only one task. Over time, these functions might grow in size; therefore, we always keep an eye on them and refactor or split them back down to the smallest logical unit (one task per function).

Using the benchmarking tool, we ran multiple scenarios against the same function from the todo app:

Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Avg Front-end (ms) | Avg Back-end (ms) | Total Calls (#) | Total Billed (ms) | Total Billed ($/1B)*
1.1  | 128 | 34-35 | 375.50 | 193.67 | 10  | 2,400  | 4,992
1.1  | 256 | 34-37 | 399.40 | 153.25 | 10  | 2,000  | 8,340
1.1  | 512 | 33-35 | 341.60 | 134.32 | 10  | 1,800  | 15,012
1.1  | 128 | 34-49 | 405.57 | 223.82 | 100 | 27,300 | 56,784
1.1  | 256 | 28-48 | 354.75 | 177.91 | 100 | 23,800 | 99,246
1.1  | 512 | 32-47 | 345.92 | 163.17 | 100 | 23,100 | 192,654
55.8 | 128 | 49-50 | 543.00 | 284.03 | 10  | 3,400  | 7,072
55.8 | 256 | 49-50 | 339.80 | 153.13 | 10  | 2,100  | 8,757
55.8 | 512 | 49-50 | 342.60 | 141.02 | 10  | 2,000  | 16,680
55.8 | 128 | 83-87 | 416.10 | 220.91 | 100 | 26,900 | 55,952
55.8 | 256 | 50-71 | 377.69 | 194.22 | 100 | 25,600 | 106,752
55.8 | 512 | 57-81 | 353.46 | 174.65 | 100 | 23,300 | 194,322

* Total billed cost expressed in billionths of a dollar (the dollar amount multiplied by 10⁹); for example, 4,992 corresponds to the $0.000004992 total in the preceding table.

Based on performance data, we have learned some pretty cool stuff:

  • The smaller the function, the better it performs; on the other hand, if more memory is allocated, the size of the function matters less and less.
  • Memory size is not directly proportional to billable costs; developers can choose the memory size based on performance requirements combined with the associated costs.
  • The key to better performance is continuous load, thanks to container reuse in AWS Lambda (see the sketch after this list).
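
As a minimal sketch of that last point (not from the original code base, and assuming the Node.js 4.3 runtime’s callback-style handler), anything initialized outside the handler survives across warm invocations, so repeated calls skip the expensive setup:

'use strict';

// Illustrative only: table and variable names are placeholders.
var AWS = require('aws-sdk');

// Created once per container and reused on every warm invocation
var dynamoDb = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context, callback) {
  // Only the per-request work happens inside the handler
  dynamoDb.scan({ TableName: 'Task' }, function (err, data) {
    if (err) {
      return callback(err);
    }

    return callback(null, data.Items);
  });
};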

Conclusion

In this post, we presented a small web application that is built with AWS Lambda at the core. We walked you through the process of identifying the front-end, back-end, and data tiers required to build the todo app. You can fork the example code repository as a starting point for your own web applications.

If you have questions or suggestions, please leave a comment below.

Why MVC?

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/10/why-mvc.html

As you all probably know, the MVC approach has become very popular lately. MVC stands for Model-View-Controller, where your data, your visualization, and your gluing and managing code are expected to be fully separated into separate files (they are not truly separate, as they are still linked to each other within the same program). Coming from the world of systems and embedded programming, it was hard for me to understand the reason behind this. Instinctively, I thought it had to be somehow related to ease of development. Maybe it makes it easier to separate the work of UI designers, back-end communication (and development), and UI execution control; you can easily split the work among people with different skills, I thought. But now I realize it is something entirely different.

It is maintainability, and therefore easier support!

It is best illustrated with HTML. You can easily insert JavaScript code directly within an HTML tag:

<INPUT TYPE=BUTTON onClick="alert('blabla')" VALUE="Click Me!">

If you go for the MVC approach, you should instead have separated code that does something like this:

Separate HTML:

<INPUT ID="myButton" TYPE=BUTTON VALUE="Click Me!">

Separate JavaScript:

document.getElementById("myButton").addEventListener("click", function() { alert('blabla'); });

It is obvious that MVC is more expensive in terms of code, structure, style, and preparation. So why walk this extra mile? Some programmers with my background would usually say that it adds overhead and is therefore inefficient to program.

However, if you have software that has to be reworked constantly (introducing new functionality and new features, fixing bugs), you have a lot of other issues to deal with, and your major problem will be the maintainability and readability of your code. I am sure everyone will agree that having all of your control and execution-flow code collected in one place is much better than having it scattered among a lot of data-processing code and UI visualizations. If you have a huge HTML page with a lot of JavaScript bound directly into the tags (the non-MVC way), it is extremely hard to keep in mind all of the events that happen and the order in which the code executes. MVC makes that much, much easier, even though in the beginning it may be costly, with some extra overhead.
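
To make the separation concrete, here is a minimal sketch (not from the original post) of the same button example split MVC-style in plain JavaScript: the model owns the data, the view renders it, and the controller wires user input to model updates.

'use strict';

// Model: owns the application data
var model = {
  clicks: 0
};

// View: knows how to render the model, nothing else
var view = {
  button: document.getElementById('myButton'),
  render: function (clicks) {
    this.button.value = 'Clicked ' + clicks + ' times';
  }
};

// Controller: reacts to user input, updates the model, and asks the view to re-render
var controller = {
  init: function () {
    view.button.addEventListener('click', function () {
      model.clicks += 1;
      view.render(model.clicks);
    });
  }
};

controller.init();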
