Tag Archives: developer

Announcing Generative AI CDK Constructs

Post Syndicated from Michael Tran original https://aws.amazon.com/blogs/devops/announcing-generative-ai-cdk-constructs/

Announced by Werner Vogels in his 2023 re:Invent Keynote, Generative AI CDK Constructs, an open-source extension of the AWS Cloud Development Kit (AWS CDK), provides well-architected multi-service patterns to quickly and efficiently create repeatable infrastructure required for generative AI projects on AWS. Our initial release includes five CDK constructs enabling key generative AI capabilities like question answering, summarization, data ingestion for Retrieval Augmented Generation (RAG), and model deployment.

Simplify and Accelerate the Development of Applications with Generative AI

Developing generative AI applications is particularly challenging due to the rapidly evolving nature of the underlying technologies. As a result, developers must keep up with changing paradigms and best practices. This often leads to varied and disjointed patterns as developers try crafting their own solutions from scratch. For instance, there are already hundreds of implementations of patterns like retrieval augmented generation (RAG), summarization, or chatbots.

AWS introduced the Cloud Development Kit (CDK), an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. CDK empowers developers to use high-level construct libraries that encapsulate AWS best practices, allowing for the creation of cloud applications without needing to know every detail of the AWS services.

A ‘construct‘ in CDK terminology is a building block that represents an AWS resource or a combination of AWS resources configured together. These constructs can range from low-level (individual resources) to high-level (complete architectures), enabling developers to compose and share their cloud application models as code.

Our library harnesses the power of CDK, offering pre-built constructs designed to expedite the deployment of generative AI applications. These constructs use LLMs/FMs available in Amazon Bedrock facilitating easier modeling and rapid deployment of cloud architectures tailored for generative AI. With these constructs available in both Python and Typescript, developers can leverage AWS services to build industry-agnostic solutions swiftly, regardless of their expertise level in generative AI. The constructs are designed to integrate smoothly with existing CDK applications, offering a scalable approach to enhancing generative AI capabilities across various business verticals.

Getting Started with the Constructs

Prerequisites

You can install the CDK constructs in your preferred language:

  • Typescript app: npm i @cdklabs/generative-ai-cdk-constructs
  • Python app: pip install cdklabs.generative-ai-cdk-constructs

Within your existing CDK application, import the new library and get access to available constructs:

  • Typescript app: import * as genai from '@cdklabs/generative-ai-cdk-constructs';
  • Python app: import cdklabs.generative-ai-cdk-constructs as genai

Example

aws-qa-appsync-opensearch is one of the constructs available in the generative-ai-cdk-constructs library. This construct provides a question answering workflow using Amazon Bedrock and a provisioned Amazon OpenSearch cluster. Amazon Cognito is required to authenticate calls to the AWS AppSync GraphQL API created by the construct.

Typescript app:


import { Construct } from 'constructs';
import { Stack, StackProps } from 'aws-cdk-lib';
import * as os from 'aws-cdk-lib/aws-opensearchservice';
import * as cognito from 'aws-cdk-lib/aws-cognito';
import { QaAppsyncOpensearch, QaAppsyncOpensearchProps } from '@cdklabs/generative-ai-cdk-constructs';


class MyStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Get an existing OpenSearch provisioned cluster
    const osDomain = os.Domain.fromDomainAttributes(this, 'osdomain', {
      domainArn: 'arn:aws:es:us-east-1:XXXXXX',
      domainEndpoint: 'https://XXXXX.us-east-1.es.amazonaws.com'
    });

    // Get an existing Cognito user pool
    const cognitoPoolId = 'us-east-1_XXXXX';
    const userPoolLoaded = cognito.UserPool.fromUserPoolId(this, 'myuserpool', cognitoPoolId);

    // Create a QA AppSync OpenSearch construct
    const ragSource = new QaAppsyncOpensearch(this, 'QaAppsyncOpensearch', {
      existingOpensearchDomain: osDomain,
      openSearchIndexName: 'demoindex',
      cognitoUserPool: userPoolLoaded
    });
  }
}

Python app:

from aws_cdk import (
    Stack,
    aws_opensearchservice as os,
    aws_cognito as cognito
)
from constructs import Construct
from cdklabs.generative_ai_cdk_constructs import QaAppsyncOpensearch

class MyStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Get an existing OpenSearch provisioned cluster
        os_domain = os.Domain.from_domain_attributes(self, 'osdomain',
            domain_arn='arn:aws:es:us-east-1:XXXXXX',
            domain_endpoint='https://XXXXX.us-east-1.es.amazonaws.com'
        )

        # Get an existing user pool
        cognito_pool_id = 'us-east-1_XXXXX'
        user_pool_loaded = cognito.UserPool.from_user_pool_id(self, 'myuserpool', cognito_pool_id)

        # Create a QA Appsync OpenSearch construct
        rag_source = QaAppsyncOpensearch(
            self, 
            'QaAppsyncOpensearch',
            existing_opensearch_domain=os_domain,
            open_search_index_name='demoindex',
            cognito_user_pool=user_pool_loaded
        )
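
Once the stack is defined in either language, deploying it works like any other CDK application. The following is a minimal sketch using the CDK CLI; the stack name MyStack matches the example above, and it assumes AWS credentials and a bootstrapped environment are already in place.

npm install -g aws-cdk   # install the CDK CLI (used for both TypeScript and Python apps)
cdk bootstrap            # one-time setup of CDK deployment resources in the target account/region
cdk synth                # synthesize the CloudFormation template and surface errors early
cdk deploy MyStack       # deploy the stack containing the QaAppsyncOpensearch construct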

Refer to the documentation for additional guidance on a particular construct: Catalog

Currently, the following constructs are available:

  • Data ingestion pipeline: ingestion pipeline providing a RAG (retrieval augmented generation) source for storing documents in a knowledge base.
  • Question answering: question answering with a large language model (Anthropic Claude V2) using a RAG (retrieval augmented generation) source and/or long context.
  • Summarization: document summarization with a large language model (Anthropic Claude V2).
  • Lambda layer: Python Lambda layer providing dependencies and utilities to develop generative AI applications on AWS.
  • SageMaker model deployment: deploy a foundation model from Amazon SageMaker JumpStart, Hugging Face, or an S3 location to an Amazon SageMaker endpoint.

Explore More Constructs

The library includes constructs for data ingestion pipelines, question answering workflows, document summarization, Lambda layer for generative AI applications, and SageMaker model deployment.

Next Steps

Visit the Generative AI CDK Constructs GitHub repository for a full list of constructs and documentation. For practical examples, check out the AWS samples repository. Your feedback and contributions are welcome on our GitHub repository to enhance the capabilities of Generative AI CDK Constructs.

Alain Krok

Alain Krok is a Senior Solutions Architect with a passion for emerging technologies. His past experience includes designing and implementing IIoT solutions for the oil and gas industry and working on robotics projects. He enjoys pushing the limits and indulging in extreme sports when he is not designing software.

Dinesh Sajwan

Dinesh Sajwan is a Senior Solutions Architect. His passion for emerging technologies allows him to stay on the cutting edge and identify new ways to apply the latest advancements to solve even the most complex business problems. His diverse expertise and enthusiasm for both technology and adventure position him as a uniquely creative problem-solver.

Michael Tran

Michael Tran is a Sr. Solutions Architect with the Prototyping Acceleration team at Amazon Web Services. He provides technical guidance and helps customers innovate by showing the art of the possible on AWS. He specializes in building prototypes in the AI/ML space. You can contact him @Mike_Trann on Twitter.

 

Deploying containers to AWS using ECS and CodePipeline 

Post Syndicated from Jessica Veljanovska original https://www.anchor.com.au/blog/2022/10/deploying-containers-to-aws/


In this post, we will look at deploying containers to AWS using ECS and CodePipeline. Note that this is only an overview of the process and not a complete tutorial on how to set it up. 

Containers 101 

Before we look at how we can deploy Containers onto AWS, it is first worth looking at why we want to use containers. So, what are containers?

Containers provide environments for applications to run in isolation. Unlike virtualisation, containers do not require a full guest operating system for each container instance; instead, they share the kernel and other lower-level components of the host operating system as provided by the underlying containerization platform (the most popular of which is Docker, which we will be using in examples from here onwards). This makes containers more efficient at using the underlying hardware resources to run applications. Since Containers are isolated, they can include all their dependencies separate from other running Containers. Suppose that you have two applications that each require specific, conflicting versions of Python to run; they will not run correctly if this requirement is not met. With containers, you can run both applications with their own environments and dependencies on the same Host Operating system without conflict. 

Containers are also portable, as a developer you can create, run, and test an image on your local workstation and then deploy the same image into your production environment. Of course, deploying into production is never that simple, but that example highlights the flexibility that containers can afford to developers in creating their applications. 

This is achieved by using images, which are read-only templates that provide the containerization platform the instructions required to run the image as a Container. Each image is built from a DockerFile that provides the specification on how to build the image and can also be based on other images. This can be as simple or as complicated as it needs to be to successfully run the application.

# Example DockerFile
FROM ubuntu:22.04          # base image that supplies the initial layers
COPY . /app                # copy the application source into the image
RUN make /app              # build the application (creates a new cached layer)
CMD python /app/app.py     # default command run when a container starts from the image

However, it is important to know that each image is made up of different layers, which are created based on each line of instruction in the DockerFile that is used to build the image. Each layer is cached by Docker, which provides performance benefits to well optimised DockerFiles and the resulting images they create. When Docker runs an image it creates a Container that uses the image's read-only layers and adds a runtime Read-Write layer on top, which is used for logging and any other data the application needs to write that cannot be stored in the read-only image. You can use the same image to run as many Containers (running instances of the image) as you desire. Finally, when a Container is removed, the image is retained and only the Read-Write layer is lost. 
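
As a quick illustration of images, layers, and the Read-Write layer, the following Docker CLI sketch builds an image from a DockerFile and runs two independent Containers from it; the image and container names are placeholders.

docker build -t demo-app:1.0 .            # each instruction in the DockerFile becomes a cached layer
docker history demo-app:1.0               # inspect the layers that make up the image
docker run -d --name app-a demo-app:1.0   # each running container gets its own Read-Write layer
docker run -d --name app-b demo-app:1.0
docker rm -f app-a app-b                  # removes the containers (and their Read-Write layers); the image remains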


Elastic Container Service 

Elastic Container Service or ECS is AWS’ native Container management system. As an orchestration system it makes it easy to deploy and manage containerized applications at scale, with built-in scheduling that can allow you to spin up/down Containers at specific times or even configure auto-scaling. ECS has three primary modes of operation, known as launch types. The first launch type is the AWS EC2 launch type, where you run the ECS agent on EC2 instances that you maintain and manage. The available resource capacity is dependent on the number and type of EC2 instances that you are using. As you are already billed for the AWS resources consumed (EC2, EBS, etc.), there are no additional charges for using ECS with this launch type. If you are using AWS Outposts, you can also utilise that capacity instead of Cloud-based EC2 instances. 

The second launch type is the AWS Fargate launch type. This is similar to the EC2 launch type, however, the major difference is that AWS manages the underlying infrastructure. Because of this there are no inherent capacity constraints, and the billing model is based on the amount of vCPU and memory your Containers consume. 

The last launch type is the External launch type, which is known as ECS Anywhere. This allows you to use the features of ECS with your own on-premises hardware/infrastructure. This is billed at an hourly rate per managed instance. 

ECS operates using a hierarchy that connects together the different aspects of the service and allows flexibility in exactly how the application is run. This hierarchy consists of ECS Clusters, ECS Services, ECS Tasks, and finally the running Containers. We will briefly look at each of these services.

ECS Clusters 

ECS Clusters are a logical grouping of ECS Services and their associated ECS Tasks. A Cluster is where scheduled tasks can be defined, either using fixed intervals or cron expressions. If you are using the AWS EC2 Launch Type, it is also where you can configure Auto-Scaling for the EC2 instances in the form of Capacity Providers. 
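
For reference, creating a cluster is a single CLI call (the cluster name is a placeholder); the scheduling and Capacity Provider options mentioned above are then configured against this cluster.

aws ecs create-cluster --cluster-name demo-cluster
aws ecs describe-clusters --clusters demo-cluster   # confirm the cluster is ACTIVE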

ECS Services 

ECS Services are responsible for running ECS Tasks. They will ensure that the desired number of Tasks are running and will start new Tasks if an existing Task stops or fails for any reason in order to maintain the desired number of running Tasks. Auto-Scaling can be configured here to automatically update the desired number of Tasks based on CPU and Memory. You can also associate the Service with an Elastic Load Balancer Target Group and if you do this you can also use the number of requests as a metric for Auto-Scaling. 
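
As a rough sketch of how a Service ties these pieces together, the following CLI call creates a Service with a desired count of two Tasks and an optional Target Group attachment; the cluster, service, task definition, ARN, and port values are placeholders.

# Create a Service that keeps two copies of the Task running behind a Target Group
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-app:1 \
  --desired-count 2 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/demo/abc123,containerName=demo-app,containerPort=8080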

ECS Tasks 

ECS Tasks are running instances of an ECS Task Definition. The Task Definition describes one or more Containers that make up the Task. This is where the main configuration for the Containers is defined, including the Image/s that are used, the Ports exposed, Environment variables, and if any storage volumes (for example EFS) are required. The Task as a whole can also have Resource sizing configured, which is a hard requirement for Fargate due to its billing model. Task definitions are versioned, so each time you make a change it will create a new revision, allowing you to easily roll back to an older configuration if required.
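
Registering a Task Definition can be done from a JSON file such as the taskdef.json shown later in this post (once its placeholders have been substituted); each registration creates a new revision of the family. The family name below is a placeholder.

aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs list-task-definitions --family-prefix demo-app   # lists all revisions, useful when rolling back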


Elastic Container Registry 

Elastic Container Registry or ECR, is not formally part of ECS but instead supports it. ECR can be used to publicly or privately store Container images and, like all AWS services, has granular permissions provided using IAM. The main features of ECR, besides its ability to integrate with ECS, are the built-in vulnerability scanning for images and the lack of throttling limitations that public container registries might impose. It is not a free service though, so you will need to pay for any usage that falls outside of the included Free-Tier inclusions.

ECR is not strictly required for using ECS; you can continue to use whatever image registry you want. However, if you are using CodePipeline to deploy your Containers, we recommend using ECR purely to avoid throttling issues preventing the CodePipeline from successfully running. 
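
A typical ECR workflow looks like the following sketch; the repository name, account ID, and region are placeholders.

aws ecr create-repository --repository-name demo-app
aws ecr get-login-password --region ap-southeast-2 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com
docker tag demo-app:1.0 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/demo-app:1.0
docker push 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/demo-app:1.0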


CodePipeline 

In software development, a Pipeline is a collection of multiple tools and services working together in order to achieve a common goal, most commonly building and deploying application code. AWS CodePipeline is AWS’ native form of a Pipeline that supports both AWS services as well as external services in order to help automate deployments/releases.

CodePipeline is one part of the complete set of developer tools that AWS provides, which commonly have names starting with “Code”. This is important as by itself CodePipeline can only orchestrate other tooling. For a Pipeline that will deploy a Container to ECS, we will need AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. In addition to running the components that are configured, CodePipeline can also provide and retrieve artifacts stored in S3 for each step. For example, it will store the application code from CodeCommit into S3 and provide this to CodeBuild; CodeBuild will then take this and create its own artifact files that are provided to CodeDeploy. 

AWS CodeCommit 

AWS CodeCommit is a fully managed source control service that hosts private Git repositories. While this is not strictly required for the CodePipeline, we recommend using it to ensure that AWS has a cached copy of the code at all times. External git repositories can use actions or their own pipelines to push code to CodeCommit when it has been committed. CodeCommit is used in the Source stage of the CodePipeline to both provide the code that is used and to trigger the Pipeline to run when a commit is made. Alternatively, you can use CodeStar Connections to directly use GitHub or BitBucket as the Source stage instead.
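
For a local workflow against CodeCommit, git-remote-codecommit lets you clone over HTTPS using your AWS credentials; the region and repository name below are placeholders.

pip install git-remote-codecommit
git clone codecommit::ap-southeast-2://demo-app-repo
cd demo-app-repo
git add Dockerfile appspec.yaml taskdef.json buildspec.yml
git commit -m "Add container build and deployment configuration"
git push   # pushing to the watched branch triggers the Source stage of the pipeline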

AWS CodeBuild 

AWS CodeBuild is a fully managed continuous integration service that can be used to run commands as defined in the BuildSpec that it draws its configuration from, either in CodeBuild itself or from the Source repository. This flexibility allows it to compile source code, run tests, make API calls, or in our case build Docker Images using the DockerFile that is part of a code repository. CodeBuild is used in the Build stage of the CodePipeline to build the Docker Image, push it to ECR, and update any Environment Variables used later in the deployment.

The following is an example of what a typical BuildSpec might look like for our purposes. 

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ACCOUNTID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" .
      - docker tag "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME"
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME"
      - echo Writing image definitions file...
      - printf '{"ImageURI":"%s"}' "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" > imageDetail.json
      - echo $REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME
      - sed -i 's|<APP_NAME>|'$IMAGE_REPO_NAME'|g' appspec.yaml taskdef.json
      - sed -i 's|<SERVICE_PORT>|'$SERVICE_PORT'|g' appspec.yaml taskdef.json
      - sed -i 's|<AWS_ACCOUNT_ID>|'$AWS_ACCOUNT_ID'|g' taskdef.json
      - sed -i 's|<MEMORY_RESV>|'$MEMORY_RESV'|g' taskdef.json
      - sed -i 's|<IMAGE_NAME>|'$REPOSITORY_URI'\:'$IMAGE_TAG-$CODEBUILD_START_TIME'|g' taskdef.json
      - sed -i 's|<IAM_EXEC_ROLE>|'$IAM_EXEC_ROLE'|g' taskdef.json
      - sed -i 's|<REGION>|'$REGION'|g' taskdef.json

artifacts:
  files:
    - appspec.yaml
    - taskdef.json
    - imageDetail.json

Failures in this step are most likely caused by an incorrect, non-functioning DockerFile or hitting throttling from external Docker repositories. 

AWS CodeDeploy 

AWS CodeDeploy is a fully managed deployment service that integrates with several AWS services including ECS. It can also be used to deploy software to on-premises servers. The configuration of the deployment is defined using an appspec file. CodeDeploy offers several different deployment types and configurations. We tend to use the `Blue/Green` deployment type and the `CodeDeployDefault.ECSAllAtOnce` deployment configuration by default. 

The Blue/Green deployment model allows for the deployment to rollback to the previously deployed Tasks if the deployment is not successful. This makes use of Load Balancer Target Groups to determine if the created Tasks in the ECS Service are healthy. If the health checks fail, then the deployment will fail and trigger a rollback.  

In our ECS CodePipeline, CodeDeploy will run based on the following example appspec.yaml file. Note that the placeholder variables in <> are updated with real configuration by the CodeBuild stage.

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "<APP_NAME>"
          ContainerPort: <SERVICE_PORT>

You may have noticed a lack of configuration regarding the actual Container we will be running up to this point. As we noted earlier, ECS Tasks can use a taskdef file to configure the Task and Containers, which is exactly what we are doing here. One of the files the CodeBuild configuration above expects to be present is taskdef.json. Below is an example taskdef.json file; as with the appspec.yaml file, there are placeholder variables as indicated by <>.


{
  "executionRoleArn": "<IAM_EXEC_ROLE>",
  "containerDefinitions": [
    {
      "name": "<APP_NAME>",
      "image": "<IMAGE_NAME>",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "hostPort": 0,
          "protocol": "tcp",
          "containerPort": <SERVICE_PORT>
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "APP_NAME",
          "value": "<APP_NAME>"
        },
        {
          "name": "IMAGE_BUILD_DATE",
          "value": "<IMAGE_BUILD_DATE>"
        }
      ],
      "mountPoints": []
    }
  ],
  "volumes": [],
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "bridge",
  "family": "<APP_NAME>"
}

Failures in this stage of the CodePipeline generally mean that there is something wrong with the Container that is preventing it from starting successfully and passing its health check. Causes of this are varied and rely heavily on having good logging in place. Failures can range from the DockerFile being configured to execute a command that does not exist, to the application itself erroring out on startup for reasons that should be logged, such as pulling incomplete data from RDS or failing to connect to an external service. 
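
When a deployment does fail, the stopped Tasks and the application logs are the first places to look. Below is a sketch of that investigation with the CLI; the cluster, service, and log group names are placeholders, and the log group only exists if the Task Definition is configured with the awslogs driver.

# Find recently stopped tasks for the service and see why they stopped
aws ecs list-tasks --cluster demo-cluster --service-name demo-service --desired-status STOPPED
aws ecs describe-tasks --cluster demo-cluster --tasks <task-id> \
  --query 'tasks[].{reason:stoppedReason,containers:containers[].{name:name,exitCode:exitCode,reason:reason}}'
# Tail the application's log group for startup errors
aws logs tail /ecs/demo-app --since 15m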

Summary 

In summary, we have seen what Containers are, why they are useful, and what options AWS provides with its Elastic Container Service (ECS) for running them. Additionally, we looked at what parts of AWS CodePipeline we would need to use in order to deploy our Containers to ECS using CodePipeline.

For further information on the services used, I would highly recommend looking at the documentation that AWS provides. 

For a more hands-on guided walkthrough on setting up ECS and CodePipeline, AWS provides its own tutorials and workshops, and there is also plenty of third-party material you can find online. 

The post Deploying containers to AWS using ECS and CodePipeline  appeared first on Anchor | Cloud Engineering Services.

Using CodePipeline to Host a Static Website

Post Syndicated from Jessica Veljanovska original https://www.anchor.com.au/blog/2022/10/using-codepipeline-to-host-a-static-website/

There are a variety of ways in which to host a website online. This blog post explores how to simply publish and automate a statically built website. Hugo is one such example of a system which can create static websites and is popularly used for blogs.

The final website itself will consist of HTML, CSS, images, and JavaScript. The AWS services used to achieve our goal include:

  • Cloudfront
  • CodeBuild
  • CodePipeline
  • IAM
  • Amazon S3
  • CodeCommit

This solution should cost well below $5 per month, depending on traffic, as most of the items are within the Always Free Tier; the exception is Amazon S3 storage, which only has a valid free tier for the first 12 months.

In order to keep this simple, the below steps are done via the console, although I have also published a similar simplified project using Terraform which can be found on Github. 

Setup the AWS Components
Setup S3 

To begin with, you will need to setup some storage for the website’s content, as well as the pipeline’s artifacts. 

Create the bucket for your Pipeline Files. Ensure that Block all public access is checked. It’s recommended you also enable Server-side encryption by default with SSE-S3. 

Create a bucket for your Website Files. Ensure that block all public access is NOT checked. This bucket will be open to the world as it will need to be configured to host a website. 

 

When the Website Files bucket is created, go to the Properties tab and edit Static website hosting. Set it to Enable, and select the type Host a static website. Save the changes and note the URL under Bucket website endpoint. Next, go to the Permissions tab and edit the Bucket policy. Paste in a policy such as the sample below, updating ${website-bucket-name} to match the name of the bucket. 

<{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::${website-bucket-name}/*" } ] } >

 

Setup IAM Users

You will need to first create an IAM Role which your CodePipeline and CodeBuild will be able to assume. Some of the below permissions can be drilled down further, as they are fairly generic. In this case we are merging the CodePipeline/CodeBuild permissions into one role. Create a Role in IAM with a unique name based on your project. You will need to set up the below trust policy.  


<{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "codebuild.amazonaws.com", "codepipeline.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } >

Create a Policy. Paste in a policy such as the below sample. Update ${website-bucket-name} and ${pipeline-bucket-name} accordingly. Attach the policy to the role you created in the step prior. 


<{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::${pipeline-bucket-name}/*", "arn:aws:s3:::${pipeline-bucket-name}", "arn:aws:s3:::${website-bucket-name}/*", "arn:aws:s3:::${website-bucket-name}" ] }>,

<{ "Sid": "", "Effect": "Allow", "Action": [ "codebuild:*", "codecommit:*", "cloudfront:CreateInvalidation" ], "Resource": "*" }>,

< { "Sid": "", "Effect": "Allow", "Action": [ "logs:Put*", "logs:FilterLogEvents", "logs:Create*" ], "Resource": "*" } ] } >

Setup CloudFront

  • Access Cloudfront and select Create distribution.
  • Under Origin domain, select the S3 Bucket you created earlier.
  • Under Viewer protocol policy, set your desired actions. If you have a proper SSL you can set it up later and use Redirect HTTP to HTTPS.
  • Under Allowed HTTP methods, select GET, HEAD.
  • You can setup Alternate domain name here, but make sure you have an ACM Certificate to cover it, and setup Customer SSL certificate if you wish to use HTTPS.
  • Set the Default root object to index.html. This will ensure it loads the website if someone visits the root of the domain.
  • Leave everything else as default for the moment and click Create distribution.
  • Note down the Distribution ID, as you will need it for the Pipeline.

Setup the Pipeline

Now that all the components have been created, let’s setup the Pipeline to Tie them all Together.
Setup CodeCommit 
Navigate into CodeCommit, and select Create Repository.
You can use git-remote-codecommit in order to check out the repository locally. You will need to check it out to make additional changes.  

You will need to make a sample commit, so create a directory called public and a file called index.html within it with some sample content, and push it up. 

$ cat public/index.html 
This is an S3-hosted website test 
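
One way to create and push that sample commit (assuming the repository has already been cloned locally):

mkdir -p public
echo "This is an S3-hosted website test" > public/index.html
git add public/index.html
git commit -m "Add placeholder index page"
git push origin HEAD   # creates/updates the branch that the pipeline will watch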

After this is done, you should have a branch called “master” or “main” depending on your local configuration. This will need to be referenced during pipeline creation.
Setup buildspec in Repository 

Add a buildspec.yml file to the CodeCommit Repo in order to automatically upload your files to AWS S3 and invalidate the CloudFront cache. Note that ${bucketname} and ${cloudfront_id} in the examples below need to be replaced with the real values.  


version: 0.2

phases:
  install:
    commands:
      - aws s3 sync ./public/ s3://${bucketname}/
      - aws cloudfront create-invalidation --distribution-id ${cloudfront_id} --paths "/*"

Setup CodePipeline 

In our example, we’re going to use a very simple and straightforward pipeline. It will have a Source and Deployment phase. 

Navigate in to CodePipeline, and select Create Pipeline. Enter your Pipeline name, and select the role you created earlier under Service role. 

Under Advanced Settings, set the Artifact Store to Custom Location and update the field to reference the pipeline bucket you created earlier on. 

Click next, and move to Adding a Source Provider. Select the Source provider, Repository name and Branch name setup previously, and click next leaving everything else as default. 

On the Build section, select Build provider as AWS CodeBuild, and click Create Project under Project name 

This will open a new window. CodeBuild needs to be created through this interface; otherwise the artifacts and source configuration will not be set up correctly. 

Under Environment, select the latest Ubuntu runtime for CodeBuild, and under Service role select the IAM role you created earlier. Once that’s all done, click Continue to CodePipeline and it will close out the window and your project will now be created. 

Click Next, and then Skip deploy stage (we’re doing it during the build stage!). Finally, click Create pipeline and it will start running based on the work you’ve done so far.  

The website so far should now be available in the browser! Any further changes to the CodeCommit repository will result in the website being updated on S3 within minutes! 
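
A quick way to confirm the deployment end to end is to request the page through the CloudFront distribution; the domain name below is a placeholder for your distribution's domain.

curl -I https://d1234example.cloudfront.net/          # expect a 200 response with CloudFront headers
curl https://d1234example.cloudfront.net/index.html   # should return the sample page content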

The post Using CodePipeline to Host a Static Website appeared first on Anchor | Cloud Engineering Services.

Breaking the Bias – Women at AWS Developer Relations

Post Syndicated from Rashmi Nambiar original https://aws.amazon.com/blogs/aws/breaking-the-bias-women-at-aws-developer-relations/

Today for International Women’s Day we’re joined by a special guest writer, Rashmi Nambiar. She’s here to share her conversations with a few other members of the AWS Developer Relations team, talking about their work and experience as women in tech. Enjoy!

– The AWS News Blog Team


When I was contemplating joining AWS, many warned me about boarding the “rocket ship.” But I took the leap of faith. It has been four years since then. Now when I look back, the growth trajectory is something that I am proud of, from starting my AWS journey with a regional role in India to going global and now driving the Worldwide Developer Marketing Strategy. #HereatAWS, I get to choose the direction of my career and prioritize my time between family and work.

At AWS, we believe that the future of technology is accessible, flexible, and inclusive. So we take it very seriously when we say, “All Builders Welcome.” As a woman in tech, I have felt that strong sense of belonging with the team and acceptance for who I am.

Being part of the AWS Developer Relations (DevRel) team, I get to meet and work with awesome builders within and outside of the organization who are changing the status quo and pushing technological boundaries. This International Women’s Day, I took the opportunity to talk to some of the women at AWS DevRel about their role as tech/dev advocates.

Veliswa Boya

Headshot of Veliswa Boya

Veliswa Boya, Senior Developer Advocate

What is it that you like about being a developer advocate at AWS?
“Becoming a developer advocate is something I didn’t even dare to dream about. Some of us go through life and at some point admit that some dreams are just not meant for us. That today I am a developer advocate at AWS working with the builder community of sub-Saharan Africa and beyond is one of the most fulfilling and exciting roles I can recall throughout my entire tech career. I especially enjoy working with those new to AWS and new to tech in general, so my role spans technical content creation and delivery all the way to the mentoring of community members. I enjoy working for an organization that’s at the forefront of innovation, but at the same time not innovating for the sake of innovating, but always being customer obsessed and innovating on behalf of the customer.”

You are an icon of possibilities with many titles. How did the transition from AWS Hero to AWS employee work out for you?
“I became an AWS Hero in May 2020, and with that, I became the first woman out of Africa to ever be named an AWS Hero. I have always enjoyed sharing knowledge. Every little bit I learn, I always make sure to share. I believe that this—and more—led to my nomination. Joining AWS as a developer advocate is awesome. I continue to live the passion that led to me being a Hero, sharing knowledge with others and at the same time learning from both the community and my wonderful peers.”

Antje Barth

Headshot photo of Antje Barth

Antje Barth, Principal Developer Advocate – AI/ML

What do you like about your role as an AI/ML specialist on the AWS Developer Relations Team?
“I’ve always been excited about technology and the speed of innovation in this field. What I like most about my role as a principal developer advocate for AI/ML is that I get to share this passion and enable customers, developers, and students to build amazing things. I recently organized a hackathon asking participants to think about creative ways of using machine learning to improve disaster response. And I was simply blown away by all the ideas the teams came up with.”

You have authored books like Data Science on AWS. What is your guidance for someone planning to get on the publishing path?
“The piece of advice I would give anyone interested in becoming a book author: Find the topic you are really passionate about, dive into the topic, and start developing content—whether it’s blog posts, code samples, or videos. Become a subject matter expert and make yourself visible. Speak at meetups, submit a talk to a conference. Grow your network. Find peers, discuss your ideas, ask for feedback, make sure the topic is relevant for a large audience group. And eventually, reach out to publishers, share your content ideas and collected feedback, and put together a book proposal.”

Lena Hall

Headshot photo of Lena Hall

Lena Hall, Head of Developer Relations – North America

What excites you about AWS Developer Relations?
“I love it because AWS culture empowers anyone at AWS, including developer advocates, to always advocate for the customer and the community. While doing that, no matter how hard it is or how much friction you run into, you can be confident in doing the right thing for our customers and community. This translates to our ability to influence internally across the company, using strong data and logical narratives to support our improvement proposals.”

You have recently joined the team as the DevRel Head for North America. What does it take to lead a team of builders?
“It is important to recognize that people on your team have unique strengths and superpowers. I found it valuable to identify those early on and offer paths to develop them even more. In many cases, it leads to a bigger impact and improved motivation. It is also crucial to listen to your team, be supportive and welcoming of ideas, and protective of their time.”

Rohini Gaonkar

Headshot photo of Rohini Gaonkar

Rohini Gaonkar, Senior Developer Advocate

You have been with AWS for over eight years. What attracted you to developer advocacy?
“As a developer advocate, I love being autonomous, and I have the freedom to pick the tech of my choice. The other fun part is to work closely with the community—my efforts, however small, help someone in their career, and that is the most satisfying part of my work.”

You have worked in customer support, solutions architect, and technical evangelist roles. What’s your tip on developing multiple technical skills?
“Skills are like flowers in your bouquet; you should keep adding beautiful colors to it. Sometimes it takes months to years to develop a skill, so keep an eye on your next thing and start adding the skills for it today. Interestingly, at AWS, the ‘Learn and be curious’ leadership principle encourages us to always find ways to improve ourselves, to explore new possibilities and act on them.”

Jenna Pederson

Headshot photo of Jenna Pederson

Jenna Pederson, Senior Developer Advocate

What is your reason for taking up a developer advocate role at AWS?
“I like being a developer advocate at AWS because it lets me scale my impact. I get to work with and help so many more builders gain knowledge, level up their skills, and bring their ideas to life through technology.”

It is such a delight to watch your presentations and demo at events and other programs. What is your advice to people who want to get into public speaking?
“If you’re a new speaker, talk about what you’re learning, even if you think everyone is talking about the same thing. You will have a fresh perspective on whatever it is.”

Kris Howard

Headshot photo of Kris Howard

Kris Howard, DevRel Manager

Why did you join the Developer Relations Team?
“I joined DevRel because I love being on stage and sharing my creativity and passion for tech with others. The most rewarding part is when someone tells you that you inspired them to learn a new skill, or change their career, or stretch themselves to reach a new goal.”

Since you have worked in different geographies, what would you say to someone who is exploring working in different countries?
“The last two years have really emphasized that if you want to see the world, you should take advantage of every opportunity you get. That’s one of the benefits of Amazon: that there are so many career paths available to you in lots of different places! As a hiring manager, I was always excited to get applications from internal transfers, and in 2020 I got the chance to experience it from the other side when I moved with my partner from Sydney to Munich. It was a challenging time to relocate, but in retrospect, I’m so glad we did.”

Join Us!

Interested in working with DevRel Team? Here are some of the available opportunities.

Death by NoDevOps

Post Syndicated from Gerald Bachlmayr original https://www.anchor.com.au/blog/2022/05/death-by-nodevops/

The CEO of ‘Waterfall & Silo’ walks into the meeting room and asks his three internal advisors: How are we progressing with our enterprise transformation towards DevOps, business agility and simplification? 

The well-prepared advisors, who had read at least a book and a half about organisational transformation and also watched a considerable number of Youtube videos, confidently reply: We are nearly there. We only need to get one more team on board. We have the first CI/CD pipelines established, and the containers are already up and running.

Unfortunately the advisors overlooked some details.

Two weeks later, the CEO asks the same question, and this time the response is: We only need to get two more teams on board, agree on some common tooling, the delivery methodology and relaunch our community of practice.

A month later, an executive decision is made to go back to the previous processes, tooling and perceived ‘customer focus’.

Two years later, the business closes its doors whilst other competitors achieve record revenues.

What has gone wrong, and why does this happen so often?

To answer this question, let’s have a look… 

Why do you need to transform your business?

Without transforming your business, you will run the risk of falling behind because you are potentially: 

  1. Dealing with the drag of outdated processes and ways of working. Therefore your organisation cannot react swiftly to new business opportunities and changing market trends.
  2. Wasting a lot of time and money on Undifferentiated heavy lifting (UHL). These are tasks that don’t differentiate your business from others but can be easily done better, faster and cheaper by someone else, for example, providing cloud infrastructure. Every minute you spend on UHL distracts you from focusing on your customer.
  3. Not focusing enough on what your customers need. If you don’t have sufficient data insights or experiment with new customer features, you will probably mainly focus on your competition. That makes you a follower. Customer-focused organisations will figure out earlier what works for them and what doesn’t. They will take the lead. 

How do you get started?

The biggest enablers for your transformation are the people in your business. If they work together in a collaborative way, they can leverage synergies and coach each other. This will ultimately motivate them. Delivering customer value is like in a team sport: not the team with the best player wins, but the team with the best strategy and overall team performance.  

How do we get there?

Establishing top-performing DevOps teams

Moving towards cross-functional DevOps teams, also called squads, helps to reduce manual hand-offs and waiting times in your delivery. It is also a very scalable model that is used by many modern organisations that have a good customer experience at their forefront. This applies to a variety of industries, from financial services to retail and professional services. Squad members have different skills and work together towards a shared outcome. A top-performing squad that understands the business goals will not only figure out how to deliver effectively but also how to simplify the solution and reduce Undifferentiated Heavy Lifting. A mature DevOps team will always try out new ways to solve problems. The experimental aspect is crucial for continuous improvement, and it keeps the team excited. Regular feedback in the form of metrics and retrospectives will make it easier for the team to know that they are on the right track.

Understand your customer needs and value chain

There are different methodologies to identify customer needs. Amazon has the “working backwards from the customer” methodology to come up with new ideas, and Google has the “design sprint” methodology. Identifying your actual opportunities and understanding the landscape you are operating in are big challenges. It is easy to get lost in detail and head in the wrong direction. Getting the strategy right is only one aspect of the bigger picture. You also need to get the execution right, experiment with new approaches and establish strong feedback loops between execution and strategy. 

This brings us to the next point that describes how we link those two aspects.

A bidirectional governance approach

DevOps teams operate autonomously and figure out how to best work together within their scope. They do not necessarily know what capabilities are required across the business. Hence you will need a governing working group that has complete visibility of this. That way, you can leverage synergies organisation-wide and not just within a squad. It is important that this working group gets feedback from the individual squads who are closer to specific business domains. One size does not fit all, and for some edge cases, you might need different technologies or delivery approaches. A bidirectional feedback loop will make sure you can improve customer focus and execution across the business.

Key takeaways

Establishing a mature DevOps model is a journey, and it may take some time. Each organisation and industry deals with different challenges, and therefore the journey does not always look the same. It is important to continuously tweak the approach and measure progress to make sure the customer focus can improve.

But if you don’t start the DevOps journey, you could turn into another ‘Waterfall & Silo’.

The post Death by NoDevOps appeared first on Anchor | Cloud Engineering Services.

Our Top 4 Favorite Google Chrome DevTools Tips & Tricks

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/our-top-4-favorite-google-chrome-devtools-tips-tricks/

Welcome to the final instalment of our 3-part series on Google Chrome’s DevTools. In part 1 and part 2, we explored an introduction to using DevTools, as well as how you can use it to diagnose SSL and security issues on your site. In the third and final part of our DevTools series, we will be sharing our 4 favourite tips and tricks to help you achieve a variety of useful and efficient tasks with DevTools, without ever leaving your website!

Clearing All Site Data

Perhaps one of the most frustrating things when building your website is the occasional menace that is browser caching. If you’ve put a lot of time into building websites, you probably know the feeling of making a change and then wondering why your site still shows the old page after you refresh. But even further to that, there can be any number of other reasons why you may need to clear all of your site data. Commonly, one might be inclined to just flush all cookies and cache settings in their browser, wiping their history from every website. This can lead to being suddenly logged out of all of your usual haunts – a frustrating inconvenience if you’re ever in a hurry to get something done.

Thankfully, DevTools has a handy little tool that allows you to clear all data related to the site that you’re currently on, without wiping anything else.

  1. Open up DevTools
  2. Click on the “Application” tab. If you can’t see it, just increase the width of your DevTools or click the arrow to view all available tabs.
  3. Click on the “Clear storage” tab under the “Application” heading.
  4. You will see how much local disk usage that specific website is taking up on your computer. To clear it all, just click on the “Clear site data” button.

That’s it! Your cache, cookies, and all other associated data for that website will be wiped out, without losing cached data for any other website.

Testing Device Responsiveness

In today’s world, mobile devices make up more than half of all website traffic. That means it’s more important than ever to ensure your website is fully responsive and looking sharp across all devices, not just desktop computers. Chrome DevTools has an incredibly useful tool to allow you to view your website as if you were viewing it on a mobile device.

  1. Open up DevTools.
  2. Click the “Toggle device toolbar” button on the top left corner. It looks like a tablet and mobile device. Alternatively, press Ctrl + Shift + M on Windows.
  3. At the top of your screen you should now see a dropdown of available devices to pick from, such as iPhone X. Selecting a device will adjust your screen’s ratio to that of the selected device.

Much easier than sneaking down to the Apple store to test out your site on every model of iPhone or iPad, right?

Viewing Console Errors

Sometimes you may experience an error on your site, and not know where to look for more information. This is where DevTools’ Console tab can come in very handy. If you experience any form of error on your site, you can likely follow these steps to find a lead on what to do to solve it:

  1. Open up DevTools
  2. Select the “Console” tab
  3. If your console logged any errors, you can find them here. You may see a 403 error, or a 500 error, etc. The console will generally log extra information too.

If you follow the above steps and you see a 403 error, you then know your action was not completed due to a permissions issue – which can get you started on the right track to troubleshooting the issue. Whatever the error(s) may be, there is usually a plethora of information available on potential solutions by individually researching those error codes or phrases on Google or your search engine of choice.

Edit Any Text On The Page

While you can right-click on text and choose “inspect element”, and then modify text that way, this alternative method allows you to modify any text on a website as if you were editing a regular document or a photoshop file, etc.

  1. Open up DevTools
  2. Go to the “Console” tab
  3. Copy and paste the following into the console and hit enter:
    1. document.designMode = "on"

Once that’s done, you can select any text on your page and edit it. This is actually one of the more fun DevTools features, and it can make testing text changes on your site an absolute breeze.

Conclusion

This concludes our entire DevTools series! We hope you’ve enjoyed it and maybe picked up a few new tools along the way. Our series only really scratches the surface of what DevTools can be used for, but we hope this has offered a useful introduction to some of the types of things you can accomplish. If you want to keep learning, be sure to head over to Google’s Chrome DevTools documentation for so much more!

The post Our Top 4 Favorite Google Chrome DevTools Tips & Tricks appeared first on AWS Managed Services by Anchor.

Diagnosing Security Issues with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/diagnosing-security-issues-with-google-chromes-devtools/

Welcome to part 2 of our 3 part series on delving into some of the most useful features and functions of Google Chrome’s DevTools. In part 1, we went over a brief introduction of DevTools, plus some minor customisations. In this part 2, we’ll be taking a look into the security panel section of DevTools, including some of the different things you can look at when diagnosing a website or application for security and SSL issues.

The Security Panel

One of Chrome’s most helpful features has to be the security panel. To begin, visit any website through Google Chrome and open up DevTools, then select “Security” from the list of tabs at the top. If you can’t see it, you may need to click the two arrows to display more options or increase the width of DevTools.

Inspecting Your SSL Certificate

When we talk about security on websites, one of the first things that we usually would consider is the presence of an SSL certificate. The security tab allows us to inspect the website’s SSL certificate, which can have many practical uses. For example, when you visit your website, you may see a concerning red “Unsafe” warning. If you suspect that that may be something to do with your SSL certificate, it’s very likely that you’re correct. The problem is, the issue with your SSL certificate could be any number of things. It may be expired, revoked, or maybe no SSL certificate exists at all. This is where DevTools can come in handy. With the Security tab open, go ahead and click “View certificate” to inspect your SSL certificate. In doing so, you will be able to see what domain the SSL has been issued to, what certificate authority it was issued by, and its expiration date – among various other details, such as viewing the full certification path.

For insecure or SSL warnings, viewing your SSL certificate is the perfect first step in the troubleshooting process.

Diagnosing Mixed Content

Sometimes your website may show as insecure, and not have a green padlock in your address bar. You may have checked your SSL certificate is valid using the method above, and everything is all well and good there, but your site is still not displaying a padlock. This can be due to what’s called mixed content. Put simply, mixed content means that your website itself is configured to load over HTTPS://, but some resources (scripts, images, etc) on your website are set to HTTP://. For a website to show as fully secure, all resources must be served over HTTPS://, and your website’s URL must also be configured to load as HTTPS://.

Any resources that are not loading securely are vulnerable to man-in-the-middle attacks, whereby a malicious actor can intercept data sent through your website, potentially leaking private information. This is doubly important for eCommerce sites or any sites handling personal information, and why it’s so important to ensure that your website is fully secure, not to mention increasing users’ trust in your website.

To assist in diagnosing mixed content, head back into the security tab again. Once you have that open, go ahead and refresh the website that you’re diagnosing. If there are any non-secure resources on the page, the panel on the left-hand side will list them. Secure resources will be green, and those non-secure will be red. Oftentimes this can be one or two images with an HTTP:// URL. Whatever the case, this is one of the easiest ways to diagnose what’s preventing your site from gaining a green padlock. Once you have a list of which content is insecure, you can go ahead and manually adjust those issues on your website.

There are always sites like “Why No Padlock?” that effectively do the same thing as the steps listed above, but the beauty of DevTools is that it is one tool that can do it all for you, without having to leave your website.

Conclusion

This concludes part 2 of our 3-part DevTools series! As always, be sure to head over to Google’s Chrome DevTools documentation for further information on everything discussed here.

We hope that this has helped you gain some insight into how you might practically use DevTools when troubleshooting security and SSL issues on your own site. Now that you’re familiar with the basics of the security panel, stay tuned for part 3 where we will get stuck into some of the most useful DevTools tips and tricks of all.

The post Diagnosing Security Issues with Google Chrome’s DevTools appeared first on AWS Managed Services by Anchor.

An Introduction To Getting Started with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/an-introduction-to-getting-started-with-google-chromes-devtools/

Whether you’re a cloud administrator or developer, having a strong arsenal of dev tools under your belt will help to make your everyday tasks and website or application maintenance a lot more efficient.

One of the tools our developers use every day to assist our clients is Chrome’s DevTools. Whether you work on websites or applications for your own clients, or you manage your own company’s assets, DevTools is definitely worth spending the time to get to know. From style and design to troubleshooting technical issues, you would be hard-pressed to find such an effective tool for both.

Whether you already use Chrome’s DevTools on a daily basis, or you’re yet to discover its power and functionality, we hope we can show you something new in our 3-part DevTools series! In part 1 of this series, we will be giving you a brief introduction to DevTools. In part 2, we will cover diagnosing security issues using DevTools. Finally, in part 3, we’ll go over some of the more useful tips and tricks that you can use to enhance your workflow.

While in this series, we will be using Chrome’s DevTools, most of this advice also applies to other popular browser’s developer tools, such as Microsoft Edge or Mozilla Firefox. Although the functionality and location of the tools will differ, doing a quick Google search should help you to dig up anything you’re after.

An Introduction to Chrome DevTools

Chrome DevTools, also known as Chrome Developer tools, is a set of tools built into the Chrome browser to assist web/application developers and novice users alike. Some of the things it can be used for include, but are not limited to:

  • Debugging and troubleshooting errors
  • Editing on the fly
  • Adjusting/testing styling (CSS) before making live changes
  • Emulating different network speeds (like 3G) to determine load times on slower networks
  • Testing responsiveness across different devices
  • Performance auditing to ensure your website or application is fast and well optimised

All of the above features can greatly enhance productivity when you’re building or editing, whether you’re a professional developer or a hobbyist looking to build your first site or application.

Chrome DevTools has been around for a long time (since Chrome’s initial release), but it’s a tool that has been continuously worked on and improved since its beginnings. It is now extremely feature-rich, and still being improved every day. Keep in mind; the above features are only a very brief overview of all of the functionality DevTools has to offer. In this series, we’ll get you comfortably acquainted with DevTools, but you can additionally find very in-depth documentation over at Google’s DevTools site here, where they provide breakdowns of every feature.

How to open Chrome DevTools

There are a few different ways that you can access DevTools in your Chrome browser.

  1. Open DevTools by right-clicking on anything within the browser page, and select the “Inspect” button. This will open DevTools and jump to the specific element that you selected.
  2. Another method is via the Chrome browser menu. Simply go to the top right corner and click the three dots > More tools > Developer tools.
  3. If you prefer hotkeys, you can open DevTools by doing either of the following, depending on your operating system:

Windows = F12 or Ctrl + shift + I

Mac = Cmd + Opt + I

Customising your Environment

Now that you know what DevTools is and how to open it, it’s worth spending a little bit of time customising DevTools to your own personal preferences.

To begin with, DevTools has a built-in dark mode. When you’re looking at code or a lot of small text all the time, using a dark theme can greatly help to reduce eye strain. Enabling dark mode can be done by following the instructions below:

  1. Open up DevTools using your preferred method above
  2. Once you’re in, click the settings cog on the top right to open up the DevTools settings panel
  3. Under the ‘Appearance’ heading, adjust the ‘Theme’ to ‘Dark’

You may wish to spend some time exploring the remainder of the preferences section of DevTools, as there are a lot of layout and functionality customisations available.

Conclusion

This concludes part 1 of our 3 part DevTools series. We hope that this has been a useful and informative introduction to getting started using DevTools for your own project! Now that you’re familiar with the basics, stay tuned for part 2 where we will show you how you can diagnose basic security issues – and more!

The post An Introduction To Getting Started with Google Chrome’s DevTools appeared first on AWS Managed Services by Anchor.