All posts by Eddie Moser

How Pushly Media used AWS to pivot and quickly spin up a StartUp

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/how-pushly-media-used-aws-to-pivot-and-quickly-spin-up-a-startup/

This is a guest post from Pushly. In their own words, “Pushly provides a scalable, easy-to-use platform designed to deliver targeted and timely content via web push notifications across all modern desktop browsers and Android devices.”

Introduction

As a software engineer at Pushly, I’m part of a team of developers responsible for building our SaaS platform.

Our customers are content publishers spanning the news, ecommerce, and food industries, with the primary goal of increasing page views and paid subscriptions, ultimately resulting in increased revenue.

Pushly’s platform is designed to integrate seamlessly into a publisher’s workflow and enables advanced features such as customizable opt-in flow management, behavioral targeting, and real-time reporting and campaign delivery analytics.

As developers, we face various challenges to make all this work seamlessly. That’s why we turned to Amazon Web Services (AWS). In this post, I explain why and how we use AWS to enable the Pushly user experience.

At Pushly, my primary focus areas are developer and platform user experience. On the developer side, I’m responsible for building and maintaining easy-to-use APIs and a web SDK. On the UX side, I’m responsible for building a user-friendly and stable platform interface.

The CI/CD process

We’re a cloud native company and have gone all in with AWS.

AWS CodePipeline lets us automate the software release process and release new features to our users faster. Rapid delivery is key here, and CodePipeline lets us automate our build, test, and release process so we can quickly and easily test each code change and fail fast if needed. CodePipeline is vital to ensuring the quality of our code by running each change through a staging and release process.

One of our use cases is continuous, iterative deployment. We foster an environment where developers can work in the way that suits them best while adhering to our company’s standards and our architecture within AWS.

We deploy code multiple times per day and rely on AWS services to run through all checks and make sure everything is packaged uniformly. We want to fully test in a staging environment before moving to a customer-facing production environment.

The development and staging environments

Our development environment allows developers to securely pull down applications as needed and access the required services in a development AWS account. After an application is tested and is ready for staging, the application is deployed to our staging environment—a smaller reproduction of our production environment—so we can test how the changes work together. This flow allows us to see how the changes run within the entire Pushly ecosystem in a secure environment without pushing to production.

When testing is complete, a pull request is created for stakeholder review and to merge the changes to production branches. We use AWS CodeBuild, CodePipeline, and a suite of in-house tools to ensure that the application has been thoroughly tested to our standards before being deployed to our production AWS account.

Here is a high-level diagram of the environment described above:

Diagram showing the Pushly environment at a high level.

Ease of development

Ease of development was—and is—key. AWS provides the tools that allow us to quickly iterate and adapt to ever-changing customer needs. The infrastructure as code (IaC) approach of AWS CloudFormation allows us to quickly and simply define our infrastructure in an easily reproducible manner and rapidly create and modify environments at scale. This has given us the confidence to take on new challenges without concern over infrastructure builds impacting the final product or causing delays in development.

The Pushly team

Although Pushly’s developers all have the skill set to work on both front-end and back-end projects, primary responsibilities are split between front-end and back-end developers. Developers who primarily focus on the front end concentrate on public-facing projects and internal management systems. The back-end team focuses on the underlying architecture, delivery systems, and the ecosystem as a whole. Together, we create and maintain a product that allows you to segment and target your audiences, which ensures relevant delivery of your content via web push notifications.

Early on, we ran all services entirely on AWS Lambda. This allowed us to develop new features quickly in an elastic, cost-efficient way. As our applications have matured, we’ve identified some services that would benefit from an always-on environment and moved them to AWS Elastic Beanstalk. The ability to iterate quickly and move from service to service is a credit to AWS, because it allows us to customize and tailor our services across multiple AWS offerings.

Elastic Beanstalk has been the fastest and simplest way for us to deploy this suite of services on AWS; its blue/green deployments allow us to maintain minimal downtime during deployments. We can easily configure deployment environments with capacity provisioning, load balancing, autoscaling, and application health monitoring.
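
As a rough, simplified sketch (not our actual template; the application, environment, and solution stack names below are placeholders), an Elastic Beanstalk environment with load balancing, autoscaling, and enhanced health monitoring can be declared in CloudFormation like this:

Resources:
  StagingEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: sample-application            # placeholder; an existing Elastic Beanstalk application
      EnvironmentName: sample-application-staging    # placeholder environment name
      # Pick a current platform from `aws elasticbeanstalk list-available-solution-stacks`
      SolutionStackName: 64bit Amazon Linux 2 v3.1.1 running Corretto 8
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: LoadBalanced
        - Namespace: aws:autoscaling:asg
          OptionName: MinSize
          Value: '2'
        - Namespace: aws:autoscaling:asg
          OptionName: MaxSize
          Value: '6'
        - Namespace: aws:elasticbeanstalk:healthreporting:system
          OptionName: SystemType
          Value: enhanced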

The business side

We had several business drivers behind choosing AWS: we wanted to make it easier to meet customer demands and continually scale as much as needed without worrying about the impact on development or on our customers.

Using AWS services allowed us to build our platform from inception to our initial beta offering in fewer than 2 months! AWS made it happen with tools for infrastructure deployment on top of the software deployment. Specifically, IaC allowed us to tailor our infrastructure to our specific needs and be confident that it’s always going to work.

On the infrastructure side, we knew that we wanted to have a staging environment that truly mirrored the production environment, rather than managing two entirely disparate systems. We could provide different sets of mappings based on accounts and use the templates across multiple environments. This functionality allows us to use the exact same code we use in our current production environment and easily spin up additional environments in 2 hours.
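
As a simplified sketch of that approach (the account IDs, instance type, and AMI below are placeholders, not our real values), a CloudFormation Mappings section keyed by account ID lets the same template drive both staging and production:

Mappings:
  EnvironmentConfig:
    '111111111111':                    # placeholder staging account ID
      EnvironmentName: staging
      InstanceType: t3.small
    '222222222222':                    # placeholder production account ID
      EnvironmentName: production
      InstanceType: m5.large

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      # The account the stack is deployed into determines which settings are used
      InstanceType: !FindInMap [EnvironmentConfig, !Ref 'AWS::AccountId', InstanceType]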

The need for speed

It took a very short time to get our project up and running, which included rewriting different pieces of the infrastructure in some places and completely starting from scratch in others.

One of the new services that we adopted is AWS CodeArtifact. It lets us have fully customized private artifact stores in the cloud. We can keep our in-house libraries within our current AWS accounts instead of relying on third-party services.
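
A minimal CloudFormation sketch of a private CodeArtifact domain and repository (the names and the optional npm external connection are placeholders for illustration) looks something like this:

Resources:
  ArtifactDomain:
    Type: AWS::CodeArtifact::Domain
    Properties:
      DomainName: sample-domain              # placeholder domain name
  InHouseRepository:
    Type: AWS::CodeArtifact::Repository
    Properties:
      RepositoryName: in-house-libraries     # placeholder repository name
      DomainName: !GetAtt ArtifactDomain.Name
      # Optional: proxy public npm packages through the same private repository
      ExternalConnections:
        - public:npmjs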

CodeBuild lets us compile source code, run test suites, and produce software packages that are ready to deploy while only having to pay for the runtime we use. With CodeBuild, you don’t need to provision, manage, and scale your own build servers, which saves us time.

The new tools that AWS is releasing will streamline our processes even further. We’re interested in the impact that CodeArtifact will have on our ability to share libraries within Pushly and with other business units.

Cost savings is key

What are we saving by choosing AWS? A lot. AWS lets us scale while keeping costs at a minimum. This was, and continues to be, a major determining factor when choosing a cloud provider.

By using Lambda and designing applications with horizontal scale in mind, we have scaled from processing millions of requests per day to hundreds of millions, with very little change to the underlying infrastructure. Due to the nature of our offering, our traffic patterns are unpredictable. Lambda allows us to process these requests elastically and avoid over-provisioning. As a result, we can increase our throughput tenfold at any time, pay for the few minutes of extra compute generated by a sudden burst of traffic, and scale back down in seconds.

In addition to helping us process these requests, AWS has been instrumental in helping us manage an ever-growing data warehouse of clickstream data. With Amazon Kinesis Data Firehose, we automatically convert all incoming events to Parquet and store them in Amazon Simple Storage Service (Amazon S3), where we can query them directly using Amazon Athena within minutes of the events being received. This has once again allowed us to scale our near-real-time data reporting to a degree that would have otherwise required a significant investment of time and resources.
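
As a trimmed-down sketch of that pipeline (the bucket ARN, role ARN, and Glue database and table names are placeholders, not our production values), a Firehose delivery stream that converts incoming JSON events to Parquet before landing them in S3 can be declared like this:

Resources:
  ClickstreamDeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamType: DirectPut
      ExtendedS3DestinationConfiguration:
        BucketARN: arn:aws:s3:::sample-clickstream-bucket                 # placeholder bucket
        RoleARN: arn:aws:iam::111111111111:role/sample-firehose-role      # placeholder role
        BufferingHints:
          IntervalInSeconds: 60
          SizeInMBs: 128
        DataFormatConversionConfiguration:
          Enabled: true
          InputFormatConfiguration:
            Deserializer:
              OpenXJsonSerDe: {}          # incoming events arrive as JSON
          OutputFormatConfiguration:
            Serializer:
              ParquetSerDe: {}            # written to S3 as Parquet for Athena
          SchemaConfiguration:
            DatabaseName: clickstream     # placeholder Glue database holding the table schema
            TableName: events             # placeholder Glue table
            RoleARN: arn:aws:iam::111111111111:role/sample-firehose-role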

As we look ahead, one thing we’re interested in is Lambda custom stacks, part of AWS’s Lambda-backed custom resources. Lambda supports many languages, so we can run almost every language we need. If we want to switch to a language that AWS doesn’t support by default, AWS still provides a way for us to build a custom solution. All we have to focus on is the code we’re writing!

The importance of speed for us and our customers is one of our highest priorities. Think of a news publisher in the middle of a briefing who wants to get the story out before any of the competition and is relying on Pushly—our confidence in our ability to deliver on this need comes from AWS services enabling our code to perform to its fullest potential.

Another way AWS has met our needs is the ease of using Amazon ElastiCache, a fully managed in-memory data store and cache service. Although we design for horizontal scale wherever possible, some services just can’t scale with the immediate elasticity we need to handle a sudden burst of requests. With ElastiCache, we avoid duplicate lookups for the same resources, which lets us process requests more quickly and protects our infrastructure from being overwhelmed.

In addition to caching, ElastiCache is a great tool for job locking. By locking messages by their ID as soon as they are received, we can use the near-unlimited throughput of Amazon Simple Queue Service (Amazon SQS) in a massively parallel environment without worrying that messages are processed more than once.

The heart of our offering is the segmentation of subscribers. Our dashboard allows building complex queries that calculate reach in real time and are available to use immediately after creation. These queries are often never-before-seen, may contain custom properties provided by our clients, operate on complex data types, and include geospatial conditions. No matter the size of the audience, we see consistent sub-second query times when calculating reach. We can provide this to our clients using Amazon Elasticsearch Service (Amazon ES) as the backbone of our subscriber store.

Summary

AWS has countless positives, but one key theme that we continue to see is overall ease of use, which enables us to rapidly iterate. That’s why we rely on so many different AWS services—Amazon API Gateway with Lambda integration, Elastic Beanstalk, Amazon Relational Database Service (Amazon RDS), ElastiCache, and many more.

We feel very secure about our future working with AWS and our continued ability to improve, integrate, and provide a quality service. The AWS team has been extremely supportive. If we run into something that we need to adjust outside of the standard parameters, or that requires help from the AWS specialists, we can reach out and get feedback from subject matter experts quickly. The all-around capabilities of AWS and its teams have helped Pushly get where we are, and we’ll continue to rely on them for the foreseeable future.

 

Using AWS CodePipeline and AWS CodeStar Connections to deploy from Bitbucket

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/using-aws-codepipeline-and-aws-codestar-connections-to-deploy-from-bitbucket/

AWS CodeStar Connections is a new feature that allows services like AWS CodePipeline to access third-party source code providers. For example, you can now seamlessly connect your Atlassian Bitbucket Cloud source repository to AWS CodePipeline. This allows you to automate the build, test, and deploy phases of your release process each time a code change occurs.

This new feature is available in the following Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (N. California)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Seoul)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • EU (Frankfurt)
  • EU (Ireland)
  • EU (London)
  • EU (Paris)
  • South America (São Paulo)

The practice of tracking and managing changes to code, or source control, is a foundational element to the development process. Therefore, source control management systems are an essential tool for any developer. In this post, we focus on one specific Git code management product: Atlassian Bitbucket. You can get started for free with Bitbucket Cloud.

Atlassian provides detailed documentation on getting started with Bitbucket Cloud, which includes topics such as setting up a team, creating a repository, working with branches, and more. For more information, see Get started with Bitbucket Cloud.

Prerequisite

For this use case, you use a Bitbucket account, repository, and Amazon Simple Storage Service (Amazon S3) bucket that we have already created. To follow along, you should have the following:

  • A working knowledge of Git and how to fork or clone within your source provider
  • Familiarity with hosting a static website on Amazon S3

To follow along, you need a sample page. Here is some simple HTML code that you can save as index.html and add to your repo.

<html>
    <head>
        <title>Example Header</title>
    </head>
    <body>
        <h1>Example Header</h1>
        <p>Example Body Text</p>
    </body>
</html>
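
If you prefer to set up the static website bucket as code rather than through the console, a minimal CloudFormation sketch (the bucket name is a placeholder and must be globally unique) could look like this:

Resources:
  WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-sample-website-bucket     # placeholder; bucket names must be globally unique
      WebsiteConfiguration:
        IndexDocument: index.html
      PublicAccessBlockConfiguration:
        BlockPublicPolicy: false               # allow the public-read policy below for website hosting
        RestrictPublicBuckets: false
  WebsiteBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref WebsiteBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: PublicReadForWebsite
            Effect: Allow
            Principal: '*'
            Action: s3:GetObject
            Resource: !Sub '${WebsiteBucket.Arn}/*'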

 

Solution overview

For this use case, you deploy a static website from your Bitbucket Cloud repository to your S3 bucket using CodePipeline. To do so, you connect your Bitbucket Cloud account to your AWS account so that CodePipeline can pull and deploy your code natively.

The walkthrough contains the following steps:

  1. Set up CodeStar connections.
  2. Add a deployment stage.
  3. Use CI/CD to update your website.

Setting up CodeStar connections

When connecting CodePipeline to Bitbucket Cloud, it helps if you have already signed in to Bitbucket. After you sign in to Bitbucket Cloud, you perform the rest of the connection steps in the AWS Management Console.

 

  1. On the console, search for CodePipeline.
  2. Choose CodePipeline.
  3. Choose Pipelines.
  4. Choose Create pipeline.
  5. For Pipeline name, enter a name.
  6. For Service role, select New service role.
  7. For Role name, enter a name for the service role.
  8. Choose Next.
  9. For Source provider, choose Bitbucket Cloud.
  10. For Connection, choose Connect to Bitbucket Cloud.
  11. For Connection name, enter a name.
  12. For Bitbucket Cloud apps, choose Install a new app.
    If this isn’t your first time making a connection, you can choose an existing connection.
  13. Choose Connect.
  14. Confirm you’re logged in as the correct user and choose Grant access.
  15. Choose Connect.
  16. For Repository name, choose your repository.
  17. For Branch name, choose your branch.
  18. For Output artifact format, select CodePipeline default.
  19. Choose Next.
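
The console creates and completes the connection for you in the steps above. If you would rather keep the connection in a template, a minimal CloudFormation sketch of the same resource (the name is a placeholder) looks like this; note that a connection created outside the console starts out in a pending state and still has to be authorized with Bitbucket once from the console:

Resources:
  BitbucketConnection:
    Type: AWS::CodeStarConnections::Connection
    Properties:
      ConnectionName: my-bitbucket-connection   # placeholder connection name
      ProviderType: Bitbucket                   # the third-party source provider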

Adding a deployment stage

Now that you have created a source stage, you can add a deployment stage.

  1. On the Add build stage page, choose Skip build stage. For this use case, you skip the build stage, but if you need to build your own code, choose your build provider from the drop-down menu. You are prompted to confirm that you want to skip the build stage.
  2. Choose Skip.
  3. For Deploy provider, choose Amazon S3. If you have a different destination type or are hosting on traditional compute, you can choose other providers.
  4. For Region, choose the Region your S3 bucket is in.
  5. For Bucket, choose the bucket you are deploying to.
  6. Optionally, you can also choose a deploy path if you need to deploy to a sub-folder.
  7. Select Extract file before deploy.
  8. Choose Next.
  9. Review your configuration and choose Create pipeline.

If the settings are correct, you see a green success banner and the initial deployment of your pipeline runs successfully. The following screenshot shows our first deployment.

What a successful pipeline creation looks like.

Now that the pipeline shows that the deployment was successful, you can check the S3 bucket to make sure the site is being hosted. You should see your static webpage, as in the following screenshot.

Our successfully deployed website.
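
If you later want to capture the same pipeline as infrastructure as code instead of console clicks, a rough CloudFormation sketch of the source and deploy stages might look like the following; the connection ARN, repository ID, role ARN, and bucket names are all placeholders, and the pipeline the console creates may differ in detail:

Resources:
  StaticSitePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: my-bitbucket-pipeline                                         # placeholder pipeline name
      RoleArn: arn:aws:iam::111111111111:role/my-pipeline-service-role    # placeholder service role
      ArtifactStore:
        Type: S3
        Location: my-pipeline-artifact-bucket                             # placeholder artifact bucket
      Stages:
        - Name: Source
          Actions:
            - Name: BitbucketSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              Configuration:
                ConnectionArn: arn:aws:codestar-connections:us-east-1:111111111111:connection/example-id   # placeholder
                FullRepositoryId: my-workspace/my-repo        # placeholder Bitbucket workspace/repository
                BranchName: master                            # placeholder branch
                OutputArtifactFormat: CODE_ZIP                # the "CodePipeline default" option
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Deploy
          Actions:
            - Name: DeployToS3
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: S3
                Version: '1'
              Configuration:
                BucketName: my-sample-website-bucket          # placeholder website bucket
                Extract: 'true'                               # unzip the source artifact before deploying
              InputArtifacts:
                - Name: SourceOutput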

 

Using CI/CD to update our website

Now that you have created your pipeline, you can edit your website using your IDE, push the changes, and validate that those changes are automatically deployed to the website. For this step, I have already cloned my repository and have it open in my IDE.

 

  1. Open your code in your preferred IDE.
    Open your files for editing in your favorite IDE.
  2. Make the change to your code and push it to Bitbucket. The following screenshot shows that we updated the message that viewers see on our website and pushed our code.
    make edits and push your changes to bitbucket.
  3. Look at the pipeline and make sure your code is being processed.

The following screenshot shows that the stages were successful and the pipeline processed the correct commit.

checking CodePipeline for our push.

 

After your pipeline is successful, you can check the end result. The following screenshot shows our static webpage.

Our newly updated website.

 

Clean up

If you created any resources during this walkthrough that you do not plan on keeping, make sure you clean them up to avoid incurring costs associated with the services.

Summary

Being able to let your developers use their repository of choice can be important in your transition to the cloud. AWS CodeStar Connections makes it easy for you to set up Bitbucket Cloud as a source provider in the AWS Code Suite.

Get started building your CI/CD pipeline using Bitbucket Cloud and the AWS Code Suite.

Build ARM-based applications using CodeBuild

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/build-arm-based-applications-using-codebuild/

AWS CodeBuild has announced support for ARM-based workloads, which allows you to build and test your software natively, without needing to emulate or cross-compile. ARM is a quickly growing platform for application development, and if you rely on emulation or cross-compilation, the downside is time and reliability. A native approach can be faster and more reliable: enter ARM-based workload support.

In this post, you will learn how to build a sample Java application with an ARM-based Docker image and then upload the artifact to an S3 bucket.

Prerequisite

A new repository in CodeCommit with the code from the sample Java application linked above has already been created. A working knowledge of Git and how to fork or clone within your source provider is a prerequisite.

Configuration Steps

Working with our source code:

  1. Fork or clone the repo and upload/push the code to your source provider of choice. As of this writing, CodeBuild supports the following Source Providers: S3, CodeCommit, BitBucket, GitHub, and GitHub Enterprise.
  2. Go to your IDE of choice and, within your repo, create a new file named buildspec.yml and copy in the following code. (In this post, the AWS Cloud9 IDE is used when discussing edits; see the buildspec.yml reference page.)
    version: 0.2
    phases:
        install:
            runtime-versions:
                java: corretto8
        build:
            commands:
                - echo Starting Java build at `date`
                - mvn package
            finally:
                - echo Finished build of Java Sample at `date`
    artifacts:
        files:
            - 'target/aws-java-sample-1.0.jar'

    For this sample, you specify that your container runs an Amazon Corretto 8 Java environment. An artifact file is also output, which will be sent to an S3 bucket later in the process.

    The two phases are:

    1. Install – Since version 0.2 is being used, the Install phase is required to specify the runtime-version.
    2. Build – This phase is where the commands used to build the software will be passed.
  3. Once the buildspec.yml file has been added and saved, you will commit your changes and push your code to your source.
    We are doing a git push to our repo.

 

Creating your CodeBuild Project:

Now that you have created your source code, it’s time to create your CodeBuild project. For this post, the AWS Management Console will be used, though other tools such as the AWS Cloud Development Kit (CDK), AWS CloudFormation, or the AWS CLI can also be leveraged.

Creating your artifact destination:

The first thing you are going to do is create a place to store your artifact. For this blog, you are going to put your artifact into an S3 bucket.

1) In the console search bar type ‘S3.’

2) Select ‘S3’ to go to the S3 Console

image displaying how to search for the S3 service in the AWS Management Console

3) Select ‘Create bucket’ from the top left of the console.

image showing where to click to create your S3 Bucket

 

4) Type in your bucket name and Region and click ‘Next.’ For this blog, you are going to use the name “mydemobuildbucket.” It is important to note that the name must be all lowercase and globally unique.

image showing the fields in the name and region of the S3 bucket creation.

5) Leave the defaults as is for the configuration page and select ‘Next.’

image displaying to leave the configuration options as they are and click next.

6) Under the permissions tab, choose ‘Block all public access’ for the bucket, then click ‘Next.’ This will help keep your artifact secure.

image displaying to select block all public access and select next. this is important to secure our bucket.

7) The Review pane is where you can verify all of your settings. Once you have confirmed that all settings are correct, click ‘Create bucket’ to finish.

image showing the summary of the bucket creation process and to click create bucket.

You should now see your S3 bucket. With the bucket in place, you can create your build project.

 

Create CodeBuild Project:

If you have worked with CodeBuild in the past, most of this will look familiar. However, as part of the ARM release, a few new options have been added within the Build Environment section that let you build and test your ARM-based applications.

1) Go to the console search and type ‘CodeBuild’ and select the service.

image showing how to search for the CodeBuild service in the AWS Management Console

2) In the top right corner click ‘Create build project.’

Image showing where to click to create a build project

3) Enter a name for your project. (‘myDemoBuild’ will be used as the default name in this post.) Descriptions are optional but can be useful.

image showing entering our project name myDemoBuild and an optional description

4) Select your source provider. I am going to use CodeCommit and select my repo and the branch where my new code is located.

image shows selecting our Source provider, Repository name, reference type, and branch name.

5) This is where the differences I mentioned earlier come in. We are now going to set up our Build Environment. For ARM support, we must select the following options:

  1. Operating System: Amazon Linux 2 (At the time of publishing, ARM builds support only the Amazon Linux 2 operating system.)
  2. Runtime(s): Standard
  3. Image: amazonlinux-aarch64-standard

You can either create a new service role or select an existing service role if you have previously created one. I am going to create a new role called myDemoRole. The system automatically creates the required permissions to allow CodeBuild to access resources based on your project’s settings. In a production environment, I recommend creating a service role that follows the principle of least privilege rather than relying on a broad, automatically generated role.

image showing the environment settings for CodeBuild.

6) Configure buildspec settings. I am going to select ‘Use a buildspec file’ and leave the name blank, as it defaults to buildspec.yml. However, you can specify a name if you have multiple buildspec files for different environments, for example buildspec-prod.yml, buildspec-staging.yml, and buildspec-dev.yml.

Image showing us select to use a buildspec file.

7) Configure your artifact settings. I am going to upload my artifact to the S3 bucket we created earlier. I have named the file artifact.zip for simplicity, but any name can be used. I have chosen to zip the artifact file; however, that is not required.

image showing the artifact configuration for our CodeBuild project.

8) Configure logging. I am going to enable CloudWatch logging so that the build logs are uploaded to the logging service.

image showing configuring logs which is an optional step.

9) Select ‘Create build project.’

image showing the review panel where you can review all of the settings for your CodeBuild Project.
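
For reference, a rough CloudFormation sketch of an equivalent project definition is shown below (the role ARN, repository URL, and bucket name are placeholders); it mainly illustrates where the ARM-specific environment settings live:

Resources:
  MyDemoBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: myDemoBuild
      ServiceRole: arn:aws:iam::111111111111:role/myDemoRole   # placeholder role ARN
      Source:
        Type: CODECOMMIT
        Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-sample-repo   # placeholder repository
      Environment:
        Type: ARM_CONTAINER                                     # ARM-based build environment
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/amazonlinux2-aarch64-standard:2.0  # check the console for the current aarch64 image tag
      Artifacts:
        Type: S3
        Location: mydemobuildbucket                              # the bucket created earlier
        Name: artifact.zip
        Packaging: ZIP
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED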

Run the build:

Now that your application code is in place, your S3 bucket exists, and your build project is configured, it is time to run your build. If everything is successful, your artifact file should be stored in your S3 bucket.

1) After you have created your build project, you should be on the build project page, which allows you to run, edit, or delete your build project. Select ‘Start build’ in the top right part of the page.

image showing how to start the build you just created.

2) You can review or override the build settings that you configured when setting up the project. Once you have verified settings, click ‘Start build’ in the top right.

image showing the verify run options and to kick off the build.

3) Once your build has been started, it will run through the steps of your buildspec file (this can take a while depending on your application). Once complete, the status should show “Succeeded.”

image showing where to check for build status.

If for any reason your build did not succeed, look in the phase details to find the error. Validate that all of your settings are correct and use the documentation to help you troubleshoot any issues.

4) Now that your build has been successful, verify that your artifact is in the S3 bucket you created.

image showing where you will find the Artifact in the S3 Bucket.

 

Clean Up

As a reminder, if you created any resources just for testing purposes, delete them to avoid incurring additional costs.

Make sure to check the following when cleaning up:

  • S3 bucket
  • CodeBuild Project
  • CodeCommit repository
  • Cloud9 Environment

Conclusion

We have walked through the process of using CodeBuild to build a sample Java application in the new ARM environment. Now that you have built your ARM-based artifact, you can download it for any local use or get started developing your own ARM-based applications using AWS Developer Tools.