Tag Archives: AWS Community Heroes

Introducing the Newest AWS Heroes – September 2019

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/introducing-the-newest-aws-heroes-september-2019/

Leaders within the AWS technical community educate others about the latest AWS services in a variety of ways: some share knowledge in person by speaking at events or running workshops and Meetups; while others prefer to share their insights online via social media, blogs, or open source contributions.

The most prominent AWS community leaders worldwide are recognized as AWS Heroes, and today we are excited to introduce to you the latest members of the AWS Hero program:

Alex Schultz – Fort Wayne, USA

Machine Learning Hero Alex Schultz works in the Innovation Labs at Advanced Solutions where he develops machine learning enabled products and solutions for the biomedical and product distribution industries. After receiving a DeepLens at re:Invent 2017, he dove headfirst into machine learning where he used the device to win the AWS DeepLens challenge by building a project which can read books to children. As an active advocate for AWS, he runs the Fort Wayne AWS User Group and loves to share his knowledge and experience with other developers. He also regularly contributes to the online DeepRacer community where he has helped many people who are new to machine learning get started.

Chase Douglas – Portland, USA

Serverless Hero Chase Douglas is the CTO and co-founder at Stackery, where he steers engineering and technical architecture of development tools that enable individuals and teams of developers to successfully build and manage serverless applications. He is a deeply experienced software architect and a long-time engineering leader focused on building products that increase development efficiency while delighting users. Chase is also a frequent conference speaker on the topics of serverless, instrumentation, and software patterns. Most recently, he discussed the serverless development-to-production pipeline at the Chicago and New York AWS Summits, and provided insight into how the future of serverless may be functionless in his blog series.

Chris Williams – Portsmouth, USA

Community Hero Chris Williams is an Enterprise Cloud Consultant for GreenPages Technology Solutions—a digital transformation and cloud enablement company. There he helps customers design and deploy the next generation of public, private, and hybrid cloud solutions, specializing in AWS and VMware. Chris blogs about virtualization, technology, and design at Mistwire. He is an active community leader, co-organizing the AWS Portsmouth User Group, and both hosts and presents on vBrownBag.

Dave Stauffacher – Milwaukee, USA

Community Hero Dave Stauffacher is a Principal Platform Engineer focused on cloud engineering at Direct Supply where he has helped navigate a 30,000% data growth over the last 15 years. In his current role, Dave is focused on helping drive Direct Supply’s cloud migration, combining his storage background with cloud automation and standardization practices. Dave has published his automation work for deploying AWS Storage Gateway for use with SQL Server data protection. He is a participant in the Milwaukee AWS User Group and the Milwaukee Docker User Group and has showcased his cloud experience in presentations at the AWS Midwest Community Day, AWS re:Invent, HashiConf, the Milwaukee Big Data User Group and other industry events.

Gojko Adzic – London, United Kingdom

Serverless Hero Gojko Adzic is a partner at Neuri Consulting LLP and a co-founder of MindMup, a collaborative mind mapping application that has been running on AWS Lambda since 2016. He is the author of the book Running Serverless and co-author of Serverless Computing: Economic and Architectural Impact, one of the first academic papers about AWS Lambda. He is also a key contributor to Claudia.js, an open-source tool that simplifies Lambda application deployment, and is a minor contributor to the AWS SAM CLI. Gojko frequently blogs about serverless application development on Serverless.Pub and his personal blog, and he has authored numerous other books.

Liz Rice – Enfield, United Kingdom

Container Hero Liz Rice is VP Open Source Engineering with cloud native security specialists Aqua Security, where she and her team look after several container-related open source projects. She is chair of the CNCF’s Technical Oversight Committee, and was Co-Chair of the KubeCon + CloudNativeCon 2018 events in Copenhagen, Shanghai and Seattle. She is a regular speaker at conferences including re:Invent, Velocity, DockerCon and many more. Her talks usually feature live coding or demos, and she is known for making complex technical concepts accessible.

Lyndon Leggate – London, United Kingdom

Machine Learning Hero Lyndon Leggate is a senior technology leader with extensive experience of defining and delivering complex technical solutions on large, business-critical projects for consumer-facing brands. Lyndon is a keen participant in the AWS DeepRacer league. Racing as Etaggel, he has regularly placed in the top 10, has featured in DeepRacer TV, and in May 2019 established the AWS DeepRacer Community. This vibrant and rapidly growing community provides a space for new and experienced racers to seek advice and share tips. The Community has gone on to expand the DeepRacer toolsets, making the platform more accessible and pushing the bounds of the technology. He also organises the AWS DeepRacer London Meetup series.

Maciej Lelusz – Cracow, Poland

Community Hero Maciej Lelusz is Co-Founder of Chaos Gears, a company concentrating on serverless, automation, IaC, and chaos engineering as ways to improve system resiliency. He is focused on community development, blogging, company management, and public/hybrid/private cloud design. He cares about enterprise technology, IT transformation, and the culture and people involved in it. Maciej is Co-Leader of the AWS User Group Poland – Cracow Chapter, and the Founder and Leader of the InfraXstructure conference and the Polish VMware User Group.

Nathan Glover – Perth, Australia

Community Hero Nathan Glover is a DevOps Consultant at Mechanical Rock in Perth, Western Australia. Prior to that, he worked as a Hardware Systems Designer and Embedded Developer in the IoT space. He is passionate about cloud-native architecture and loves sharing his successes and failures on his blog. A key focus for him is breaking down the learning barrier by building practical examples using cloud services. On top of this, he has a number of online courses teaching people how to get started building with Amazon Alexa Skills and AWS IoT. In his spare time, he loves to dabble in all areas of technology: building cloud-connected toasters, doing embedded vehicle tracking, and competing in online capture-the-flag security events.

Prashanth HN – Bengaluru, India

Serverless Hero Prashanth HN is the Chief Technology Officer at WheelsBox and one of the community leaders of the AWS User Group, Bengaluru. He mentors and consults with other startups to help them embrace a serverless approach; blogs frequently about serverless topics for all skill levels, from beginners to advanced users, on his personal blog and about Amplify-related topics on the AWS Amplify Community Blog; and delivers talks about building with microservices and serverless. In a recent talk, he demonstrated how microservices patterns can be implemented using serverless. Prashanth maintains the open-source project Lanyard, a serverless agenda app for event organizers and attendees, which was well received at AWS Community Day India.

Ran Ribenzaft – Tel Aviv, Israel

Serverless Hero Ran Ribenzaft is the Chief Technology Officer at Epsagon, an AWS Advanced Technology Partner that specializes in monitoring and tracing for serverless applications. Ran is a passionate developer that loves sharing open-source tools to make everyone’s lives easier and writing technical blog posts on the topics of serverless, microservices, cloud, and AWS on Medium and the Epsagon blog. Ran is also dedicated to educating and growing the community around serverless, organizing Serverless meetups in SF and TLV, delivering online webinars and workshops, and frequently giving talks at conferences.

Rolf Koski – Tampere, Finland

Community Hero Rolf Koski works at Cybercom, an AWS Premier Partner from the Nordics headquartered in Sweden, where he is the CTO of the Cybercom AWS Business Group. In his role he is both hands-on technical and a thought leader in the cloud. Rolf has been one of the leading figures in the Nordic AWS communities as one of the community leads of the Helsinki and Stockholm user groups, and he founded and organized the first-ever AWS Community Day Nordics. Rolf is professionally certified and additionally works as a Well-Architected Lead, doing Well-Architected Reviews for customer workloads.

Learn more about AWS Heroes and connect with a Hero near you by checking out the Hero website.

Using callback URLs for approval emails with AWS Step Functions

Post Syndicated from Ben Kehoe original https://aws.amazon.com/blogs/aws/using-callback-urls-for-approval-emails-with-aws-step-functions/

Guest post by Cloud Robotics Research Scientist at iRobot and AWS Serverless Hero, Ben Kehoe

AWS Step Functions is a serverless workflow orchestration service that lets you coordinate processes using the declarative Amazon States Language. When you have a Step Functions task that takes more than fifteen minutes, you can’t use an AWS Lambda function—Step Functions provides the callback pattern for use in this situation. Approval emails are a common use case in this category.

In this post, I show you how to create a Step Functions state machine that uses the sfn-callback-urls application for an email approval step. The app is available in the AWS Serverless Application Repository. The state machine sends an email containing approve/reject links, and later a confirmation email. You can easily expand this state machine for your use cases.

Solution overview
An approval email must include URLs that send the appropriate result back to Step Functions when the user clicks on them. The URL should be valid for an extended period of time, longer than presigned URLs—what if the user is on vacation this week? Ideally, this doesn’t involve storage of the token and the maintenance that requires. Luckily, there’s an AWS Serverless Application Repository app for that!

The sfn-callback-urls app allows you to generate one-time-use callback URLs through a call to either an Amazon API Gateway or a Lambda function. Each URL has an associated name, whether it causes success or failure, and what output should be sent back to Step Functions. Sending an HTTP GET or POST to a URL sends its output to Step Functions. sfn-callback-urls is stateless, and it also supports POST callbacks with JSON bodies for use with webhooks.

Deploying the app
First, deploy the sfn-callback-urls serverless app and make note of the ARN for the Lambda function that it exposes. In the AWS Serverless Application Repository console, select Show apps that create custom IAM roles or resource policies, and search for sfn-callback-urls.  You can also access the application.

Under application settings, select the box to acknowledge the creation of IAM resources. By default, this app creates a KMS key. You can disable this by setting the DisableEncryption parameter to true, but first read the Security section in the Readme to the left. Scroll down and choose Deploy.

On the deployment confirmation page, choose CreateUrls, which opens the Lambda console for that function. Make note of the function ARN because you need it later.
Create the application by doing the following:

  1. Create an SNS topic and subscribe your email to it.
  2. Create the Lambda function that handles URL creation and email sending, and add proper permissions.
  3. Create an IAM role for the state machine to invoke the Lambda function.
  4. Create the state machine.
  5. Start the execution and send yourself some emails!

Create the SNS topic
In the SNS console, choose Topics, Create Topic. Name the topic ApprovalEmailsTopic, and choose Create Topic.

Make a note of the topic ARN, for example arn:aws:sns:us-east-2:012345678912:ApprovalEmailsTopic.
Now, set up a subscription to receive emails. Choose Create subscription. For Protocol, choose Email, enter an email address, and choose Create subscription.

Wait for an email to arrive in your inbox with a confirmation link. It confirms the subscription, allowing messages published to the topic to be emailed to you.
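If you prefer to script this step rather than click through the console, a minimal boto3 sketch of the same topic and subscription looks like the following (the email address is a placeholder for your own):

import boto3

sns = boto3.client('sns')

# Create the topic and capture its ARN
topic = sns.create_topic(Name='ApprovalEmailsTopic')
print('Topic ARN:', topic['TopicArn'])

# Subscribe an email address; SNS then sends the confirmation email described above
sns.subscribe(
    TopicArn=topic['TopicArn'],
    Protocol='email',
    Endpoint='you@example.com'
)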

Create the Lambda function
Now create the Lambda function that handles the creation of callback URLs and sending of emails. For this short post, create a single Lambda function that completes two separate steps:

  • Creating callback URLs
  • Sending the approval email, and later sending the confirmation email

There’s an if statement in the code to separate the two, which requires the state machine to tell the Lambda function which state is invoking it. The best practice here would be to use two separate Lambda functions.

To create the Lambda function in the Lambda console, choose Create function, name it ApprovalEmailsFunction, and select the latest Python 3 runtime. Under Permissions, choose Create a new Role with basic permissions, Create.

Add permissions by scrolling down to Configuration. Choose the link to see the role in the IAM console.

Add IAM permissions
In the IAM console, select the new role and choose Add inline policy. Add permissions for sns:Publish to the topic that you created and lambda:InvokeFunction to the sfn-callback-urls CreateUrls function ARN.
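The resulting inline policy looks roughly like the following sketch; both resource ARNs are placeholders for your own topic and for the CreateUrls function ARN you noted earlier:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-2:012345678912:ApprovalEmailsTopic"
    },
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-2:012345678912:function:sfn-callback-urls-CreateUrls-EXAMPLE"
    }
  ]
}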

Back in the Lambda console, use the following code in the function:

import json, os, boto3
def lambda_handler(event, context):
    print('Event:', json.dumps(event))
    # Switch between the two blocks of code to run
    # This is normally in separate functions
    if event['step'] == 'SendApprovalRequest':
        print('Calling sfn-callback-urls app')
        input = {
            # Step Functions gives us this callback token
            # sfn-callback-urls needs it to be able to complete the task
            "token": event['token'],
            "actions": [
                # The approval action that transfers the name to the output
                {
                    "name": "approve",
                    "type": "success",
                    "output": {
                        # watch for re-use of this field below
                        "name_in_output": event['name_in_input']
                    }
                },
                # The rejection action that names the rejecter
                {
                    "name": "reject",
                    "type": "failure",
                    "error": "rejected",
                    "cause": event['name_in_input'] + " rejected it"
                }
            ]
        }
        response = boto3.client('lambda').invoke(
            FunctionName=os.environ['CREATE_URLS_FUNCTION'],
            Payload=json.dumps(input)
        )
        urls = json.loads(response['Payload'].read())['urls']
        print('Got urls:', urls)

        # Compose email
        email_subject = 'Step Functions example approval request'

        email_body = """Hello {name},
        Click below (these could be better in HTML emails):

        Approve:
        {approve}

        Reject:
        {reject}
        """.format(
            name=event['name_in_input'],
            approve=urls['approve'],
            reject=urls['reject']
        )
    elif event['step'] == 'SendConfirmation':
        # Compose email
        email_subject = 'Step Functions example complete'

        if 'Error' in event['output']:
            email_body = """Hello,
            Your task was rejected: {cause}
            """.format(
                cause=event['output']['Cause']
            )
        else:
            email_body = """Hello {name},
            Your task is complete.
            """.format(
                name=event['output']['name_in_output']
            )
    else:
        raise ValueError

    print('Sending email:', email_body)
    boto3.client('sns').publish(
        TopicArn=os.environ['TOPIC_ARN'],
        Subject=email_subject,
        Message=email_body
    )
    print('done')
    return {}

Now, set the TOPIC_ARN and CREATE_URLS_FUNCTION environment variables to the ARNs of your topic and of the sfn-callback-urls CreateUrls function that you noted earlier.

After updating the code and environment variables, choose Save.
 
Create the state machine
You first need a role for the state machine to assume that can invoke the new Lambda function.

In the IAM console, create a role with Step Functions as its trusted entity. This requires the AWSLambdaRole policy, which gives it access to invoke your function. Name the role ApprovalEmailsStateMachineRole.
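If you create the role outside the console wizard, the trust policy that lets Step Functions assume the role is the standard one sketched below; attach the AWSLambdaRole policy to it separately:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "states.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}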

Now you’re ready to create the state machine. In the Step Functions console, choose Create state machine, name it ApprovalEmails, and use the following code:

{
    "Version": "1.0",
    "StartAt": "SendApprovalRequest",
    "States": {
        "SendApprovalRequest": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "ApprovalEmailsFunction",
                "Payload": {
                    "step.$": "$$.State.Name",
                    "name_in_input.$": "$.name",
                    "token.$": "$$.Task.Token"
                }
            },
            "ResultPath": "$.output",
            "Next": "SendConfirmation",
            "Catch": [
                {
                    "ErrorEquals": [ "rejected" ],
                    "ResultPath": "$.output",
                    "Next": "SendConfirmation"
                }
            ]
        },
        "SendConfirmation": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "ApprovalEmailsFunction",
                "Payload": {
                    "step.$": "$$.State.Name",
                    "output.$": "$.output"
                }
            },
            "End": true
        }
    }
}

This state machine has two states. It takes as input a JSON object with one field, “name”. Each state is a Lambda task. To shorten this post, I combined the functionality for both states in a single Lambda function. You pass the state name as the step field to allow the function to choose which block of code to run. If you followed the best practice of using separate functions for different responsibilities, this field would not be necessary.

The first state, SendApprovalRequest, expects an input JSON object with a name field. It packages that name along with the step and the task token (required to complete the callback task), and invokes the Lambda function with it. Whatever output is received as part of the callback, the state machine stores it in the output JSON object under the output field. That output then becomes the input to the second state.

The second state, SendConfirmation, takes that output field along with the step and invokes the function again. The second invocation does not use the callback pattern and doesn’t involve a task token.

Start the execution
To run the example, choose Start execution and set the input to be a JSON object that looks like the following:
{
  "name": "Ben"
}
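You can also start the execution programmatically; here is a minimal boto3 sketch, with the state machine ARN as a placeholder:

import boto3, json

sfn = boto3.client('stepfunctions')

sfn.start_execution(
    stateMachineArn='arn:aws:states:us-east-2:012345678912:stateMachine:ApprovalEmails',
    input=json.dumps({'name': 'Ben'})
)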

You see the execution graph with the SendApprovalRequest state highlighted. This means it has started and is waiting for the task token to be returned. Check your inbox for an email with approve and reject links. Choose a link and get a confirmation page in the browser saying that your response has been accepted. In the State Machines console, you see that the execution has finished, and you also receive a confirmation email for approval or rejection.

Conclusion
In this post, I demonstrated how to use the sfn-callback-urls app from the AWS Serverless Application Repository to create URLs for approval emails. I also showed you how to build a system that can create and send those emails and process the results. This pattern can be used as part of a larger state machine to manage your own workflow.

This example is also available as an AWS CloudFormation template in the sfn-callback-urls GitHub repository.

Ben Kehoe

Building a Modern CI/CD Pipeline in the Serverless Era with GitOps

Post Syndicated from Shimon Tolts original https://aws.amazon.com/blogs/aws/building-a-modern-ci-cd-pipeline-in-the-serverless-era-with-gitops/

Guest post by AWS Community Hero Shimon Tolts, CTO and co-founder at Datree.io. He specializes in developer tools and infrastructure, running a company that is 100% serverless.

In recent years, there was a major transition in the way you build and ship software. This was mainly around microservices, splitting code into small components, using infrastructure as code, and using Git as the single source of truth that glues it all together.

In this post, I discuss the transition and the different steps of modern software development to showcase the possible solutions for the serverless world. In addition, I list useful tools that were designed for this era.

What is serverless?

Before I dive into the wonderful world of serverless development and tooling, here’s what I mean by serverless. The AWS website talks about four main benefits:

  • No server management.
  • Flexible scaling.
  • Pay for value.
  • Automated high availability.

To me, serverless is any infrastructure that you don’t have to manage and scale yourself.
At my company Datree.io, we run 95% of our workload on AWS Fargate and 5% on AWS Lambda. We are a serverless company; we have zero Amazon EC2 instances in our AWS account.

What is GitOps?

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
According to Luis Faceira, a CI/CD consultant, GitOps is a way of working. You might look at it as an approach in which everything starts and ends with Git. Here are some key concepts:

  • Git as the SINGLE source of truth of a system
  • Git as the SINGLE place where we operate (create, change and destroy) ALL environments
  • ALL changes are observable/verifiable.

How you built software before the cloud

Back in the waterfall pre-cloud era, you used to have separate teams for development, testing, security, operations, monitoring, and so on.

Nowadays, in most organizations, there is a transition to full developer autonomy and developers owning the entire production path. The developer is the King – or Queen 🙂

Those teams (Ops/Security/IT/etc) used to be gatekeepers to validate and control every developer change. Now they have become more of a satellite unit that drives policy and sets best practices and standards. They are no longer the production bottleneck, so they provide organization-wide platforms and enablement solutions.

Everything is codified

With the transition into full developer ownership of the entire pipeline, developers automated everything. We have more code than ever, and processes that used to be manual are now described in code.

This is a good transition, in my opinion. Here are some of the benefits:

  • Automation: By storing all things as code, everything can be automated, reused, and re-created in moments.
  • Immutable: If anything goes wrong, create it again from the stored configuration.
  • Versioning: Changes can be applied and reverted, and are tracked to a single user who made the change.

GitOps: Git has become the single source of truth

The second major transition is that now everything is in one place! Git is the place where all of the code is stored and where all operations are initiated. Whether it’s testing, building, packaging, or releasing, nowadays everything is triggered through pull requests.

This is amplified by the codification of everything.

GitOps Graphic

Useful tools in the serverless era

There are many useful tools on the market; here is a list of ones that were designed for serverless.

Code

Always store your code in a source control system. In recent years, more and more functions have been codified, such as BI, ops, security, and AI. For new developers, it is not always obvious that they should use source control for some of this functionality.

Build and test

The most common mistake I see is manually configuring build jobs in the GUI. This might be fine for a small POC, but it is not scalable. Your build jobs should be codified and stored inside your Git repository. Here are some tools to help with building and testing:
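Whichever tool you pick, the job definition itself should be checked in alongside the code. As a hedged illustration, assuming AWS CodeBuild as the build service, a buildspec.yml at the repository root might look like this sketch (the commands are placeholders for your own build and test steps):

# buildspec.yml kept at the repository root (illustrative only)
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
  build:
    commands:
      - npm ci          # install dependencies
      - npm test        # run the test suite
      - npm run build   # produce the deployable artifact

artifacts:
  files:
    - '**/*'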

Security and governance

When working in a serverless way, you end up having many Git repos, and the number of code packages can be overwhelming. The demand for unified code standards remains, but it is now much harder to enforce across your R&D organization. Here are some tools that might help you with the challenge:

Bundle and release

Building a serverless application is connecting microservices into one unit. For example, you might be using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Instead of configuring each one separately, you should use a bundler to hold the configuration in one place. That allows for easy versioning and replication of the app for several environments. Here are a couple of bundlers:
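AWS SAM is one such bundler. A rough sketch of a single template that holds an API endpoint, a Lambda function, and a DynamoDB table together might look like this (names and paths are illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  WidgetsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
      CodeUri: ./src
      Events:
        Api:
          Type: Api            # implicit Amazon API Gateway endpoint
          Properties:
            Path: /widgets
            Method: get
      Environment:
        Variables:
          TABLE_NAME: !Ref WidgetsTable

  WidgetsTable:
    Type: AWS::Serverless::SimpleTable   # backed by Amazon DynamoDB

Because the whole stack lives in one file, replicating it for a test or staging environment is just another deployment of the same template.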

Package

When working with many different serverless components, you should create small packages of tools to be able to import across different Lambda functions. You can use a language-specific store like npm or RubyGems, or use a more holistic solution. Here are several package artifact stores that allow hosting for multiple programming languages:

Monitor

This part is especially tricky when working with serverless applications, as everything is split into small pieces. It’s important to use monitoring tools that support this mode of work. Here are some tools that can handle serverless:

Summary

The serverless era brings many transitions along with it, like the codification of the entire pipeline and Git becoming the single source of truth. This doesn’t mean that the problems we used to have, such as security and logging, have disappeared; you should continue addressing them, leveraging tools that enable you to focus on your business.

Meet the Newest AWS Heroes! June 2019

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-heroes-june-2019/

At the heart of the global AWS community are the builders and influencers whose passion for AWS leads them to actively share their technical know-how with others. The most prominent of these AWS community leaders are recognized as AWS Heroes. Heroes regularly share their extensive AWS knowledge online via blogs, social media, and open source contributions, and in person by speaking at industry conferences and running workshops or AWS-focused User Groups.

Today we are thrilled to introduce the latest cohort of AWS Heroes:

Anton Babenko – Oslo, Norway

Community Hero Anton Babenko is a long time developer and CTO who runs a consulting company Betajob AS in Norway. He helps companies around the globe build solutions using AWS, and specializes in infrastructure as code, DevOps, and reusable infrastructure components. Anton spends a large amount of his time as an open-source contributor on various Terraform & AWS projects, including terraform-aws-modules and modules.tf. He enjoys solving real cloud architecture tasks, figuring out generic solutions by getting to the core, and making them available as open source to let the whole AWS community benefit. Anton also co-founded and co-organizes AWS, DevOps, and HashiCorp User Groups in Norway, DevOpsDays Oslo, and often speaks at various technical Meetups and conferences.

Bhuvaneswari Subramani – Bengaluru, India

Community Hero Bhuvaneswari Subramani is Director Engineering Operations at Infor. She has almost two decades of IT experience, specializing in Cloud Computing, DevOps, and Performance Testing. She holds the AWS Certified Solutions Architect – Professional certification, is a co-organizer of the AWS User Group Bengaluru, and is instrumental in organizing Meetups & AWS Community Day Bengaluru. Bhuvaneswari is also an active speaker at AWS community events and industry conferences, and delivers guest lectures on Cloud Computing for staff & students at engineering colleges affiliated to Anna University. She is a technophile & IT blogger who meticulously and picturesquely depicts the events that inspire & influence her. Her passion for technical writing is exemplified in her tech blog, DevOps and CloudComputing, which she has maintained for over a decade; of late, she constantly writes about AWS conferences and Meetups on the AWS User Group Bengaluru Blog.

Colin Percival – Vancouver, Canada

Community Hero Colin Percival is the founder of Tarsnap, a secure online backup service which combines the flexibility and scriptability of the standard UNIX “tar” utility with strong encryption, deduplication, and the reliability of Amazon S3 storage. Having started work on Tarsnap in 2006, Colin is among the first generation of users of Amazon Web Services, and has written dozens of articles about his experiences with AWS on his blog. Colin has been a member of the FreeBSD project for 15 years and has served in that time as the project Security Officer and a member of the Core team; starting in 2008 he led the efforts to bring FreeBSD to the Amazon EC2 platform, and for the past 7 years he has been maintaining this support, keeping FreeBSD up to date with all of the latest changes and functionality in Amazon EC2.

Francesco Pochetti – Luxembourg

Machine Learning Hero Francesco Pochetti first got in touch with Machine Learning back in 2013, taking Stanford’s ML MOOC by Andrew Ng. Now he leverages the wonders of AWS AI infrastructure, plays around with new services, builds ML solutions, and lets the world know about his projects on his blog, where he regularly posts all his experiments in Machine Learning and Deep Learning. Most notable within the AWS domain are his posts on inferring a movie’s genre from its poster with Amazon SageMaker, analyzing IMDb review sentiment with Amazon Comprehend, and running neural style transfer with Lambda and GPU-powered EC2 instances.

Guy Ernest – Tel Aviv, Israel

Machine Learning Hero Guy Ernest is busy taking machine learning and AI to the masses, focusing on three audiences. The main audience he engages is software developers (SDEs), converting them into machine learning engineers (MLEs) using the popular fastai library, PyTorch, and Amazon AI services. The next audience is business people in large enterprises who are learning the applicability of machine learning to their business and how to conduct the AI transformation of their organization. Finally, Guy works with kids who are starting their AI/ML learning journey, enabling them with Alexa skills, computer vision, and robots in after-school and summer camp activities.

Kesha Williams – Atlanta, USA

Machine Learning Hero Kesha Williams has over 20 years of experience in software development. She successfully transformed her engineering skills from Java Developer to ML Practitioner by getting hands-on with AWS AI solutions like AWS DeepLens, Amazon Rekognition, and Amazon SageMaker. Kesha believes that as we all benefit from integrating technology into our everyday lives, we still struggle to make meaningful relationships with each other. To solve this, Kesha develops ML models on Amazon SageMaker using computer vision and natural language processing to help us better connect with the people around us. Kesha is also very active in the community, helping others find their path to machine learning. She authors courses for learning platforms such as Manning Publications, Packt, LinkedIn Learning, A Cloud Guru, and Cloud Academy.

Manoj Fernando – Sri Lanka

Community Hero Manoj Fernando is a Technical Lead at 99X Technology in Sri Lanka and the CTO of Whatif AS in Norway. He is passionate about designing scalable and cost-effective cloud architectures on the AWS cloud platform. His team was one of the early adopters of AWS Lambda back in 2015, and he is one of the co-organizers of the Serverless Sri Lanka Meetup. During his leisure time, he creates cloud training videos for the community on his YouTube channel, focused on modern application development, cloud certifications, and cutting-edge cloud services. He is also a technical blogger, blogging on Medium as well as on his website, and a public speaker, conducting cloud workshops for university students in Sri Lanka.

Margaret Valtierra – Chicago, USA

Community Hero Margaret Valtierra is a Program Manager for the Cloud Service team at Morningstar. She is responsible for managing the AWS relationship and promoting cloud skills and best practices. She has organized and led the Chicago AWS user group since 2013. She is a member of the Global AWS Community Day Core Team, promoting user groups and organizing the annual Midwest AWS Community Day. Margaret is also a co-organizer for DevOpsDays Chicago and is an AWS Certified Solutions Architect Associate.

Marco Viganò – Milan, Italy

Community Hero Marco Viganò is the CTO of Condé Nast Italy. He has more than 20 years of experience in IT, with a specific focus on the media and publishing sector. He is a frequent speaker at AWS Summits and events, sharing design patterns for developing and operating highly scalable cloud solutions for the media and publishing industry. He is focused on serverless and machine learning, and one of his main interests is finding new technologies to improve systems. He also acts as a Voice First evangelist inside the company, using Alexa and AWS services.

Pavlos Mitsoulis – London, United Kingdom

Machine Learning Hero Pavlos Mitsoulis has 7 years of Machine Learning and Software Engineering experience. Currently, he is a Staff Software Engineer (Machine Learning) at HomeAway (an Expedia Group brand), leading Machine Learning initiatives to support growth marketing. Additionally, he is the creator of Sagify, an open-source library that simplifies training, tuning, evaluating, and deploying ML models to SageMaker. Recently, Pavlos authored a Packt video course, "Hands-On Machine Learning Using Amazon SageMaker".

Vicky Seno – Los Angeles, USA

Container Hero Vicky “Tanya” Seno is a Computer Science Professor at Santa Monica College. At SMC she teaches numerous AWS courses covering computing services, containers, Kubernetes, ECS, serverless, networking, and security. Vicky has helped develop an AWS Cloud Computing college degree and is part of a team that helps train and mentor faculty from nineteen local colleges in AWS, to help expand AWS course offerings in the Los Angeles area. She is also a co-organizer of the AWS Cloud Day Conference at SMC, which includes SA speakers, AWS workshops, and an AWS CTF attended by more than 130 students at each event. In an effort to increase female representation in this field, Vicky has been involved in various speaking and training activities. Vicky hosts a YouTube channel with over 34,000 followers and 100+ beginner tech tutorials. She has also spoken at AWS Summits on containers, Kubernetes, and Amazon EKS.

You can learn about all the AWS Heroes from around the globe by checking out the Hero website.

Building Serverless Pipelines with Amazon CloudWatch Events

Post Syndicated from Forrest Brazeal original https://aws.amazon.com/blogs/aws/building-serverless-pipelines-with-amazon-cloudwatch-events/

Guest post by AWS Serverless Hero Forrest Brazeal. Forrest is a senior cloud architect at Trek10, Inc., host of the Think FaaS serverless podcast at Trek10, and a regular speaker at workshops and events in the serverless community.

Events and serverless go together like baked beans and barbecue. The serverless mindset says to focus on code and configuration that provide business value. It turns out that much of the time, this means working with events: structured data corresponding to things that happen in the outside world. Rather than maintaining long-running server tasks that chew up resources while polling, I can create serverless applications that do work only in response to event triggers.

I have lots of options when working with events in AWS: Amazon Kinesis Data Streams, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and more, depending on my requirements. Lately, I’ve been using a service more often that has the word ‘event’ right in the name: Amazon CloudWatch Events.

CloudWatch Events: The best-kept secret in serverless event processing

I first knew CloudWatch as the service that collects my Lambda logs and lets me run functions on a schedule. But CloudWatch Events also lets me publish my own custom events using the CloudWatch API. It has similar pricing and delivery guarantees to SNS, and supports a whole bunch of AWS services as targets.

Best of all, I don’t even have to provision the event bus—it’s just there in the CloudWatch console. I can publish an event now, using the boto3 AWS SDK for Python:


import boto3
# Custom events are published through the 'events' client, not 'cloudwatch'
events = boto3.client('events')
events.put_events(
    Entries=[
        {
            'Source': 'my.app.event',
            'DetailType': 'MY_EVENT_TYPE',
            'Detail': '{"my_data":"As a JSON string"}'
        }
    ]
)

In short, CloudWatch Events gives me a fully managed event pipe that supports an arbitrary number of consumers, where I can drop any kind of JSON string that I want. And this is super useful for building serverless apps.
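Consuming those custom events is just as simple: a rule matches on the source I published and forwards the event to a target such as a Lambda function. Here is a minimal CloudFormation sketch, with the consumer function as a placeholder:

MyAppEventRule:
  Type: AWS::Events::Rule
  Properties:
    Description: Send my custom application events to a consumer function
    EventPattern:
      source:
        - my.app.event
    Targets:
      - Arn: !GetAtt MyConsumerFunction.Arn   # placeholder consumer
        Id: MyConsumerTarget
# A matching AWS::Lambda::Permission is also needed so the rule may invoke the function.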

Event-driven architectures in action

I build cloud-native solutions for clients at Trek10 daily. I frequently use event-driven architectures as a powerful way to migrate legacy systems to serverless, enable easier downstream integrations, and more. Here are just a couple of my favorite patterns:
• Strangling legacy databases
• Designing event-sourced applications

Strangling legacy databases

The “strangler pattern” hides a legacy system behind a wrapper API, while gradually migrating users to the new interface. Trek10 has written about this before. Streaming changes to the cloud as events is a great way to open up reporting and analytics use cases while taking load off a legacy database. The following diagram shows changes from a legacy database being streamed out as events.

This pattern can also work the other way: I can write new data to CloudWatch Events, consume it into a modern data source, and create a second consumer that syncs the data back to my legacy system.

Designing event-sourced applications

Event sourcing simply means treating changes in the system state as events, publishing them on a ledger or bus where they can be consumed by different downstream applications.

Using CloudWatch Events as a centralized bus, I can make a sanitized record of events available as shown in the following event validation flow diagram.

The validation function ensures that only events that match my application’s standards get tagged as “valid” and are made available to downstream consumers. The default bus handles lots of events (remember, service logs go here!), so it’s important to set up rules that only match the events that I care about.
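The validation function itself can be a very small Lambda. The following sketch shows the general idea, with the actual validation check left as a placeholder:

import json
import boto3

events = boto3.client('events')

def handler(event, context):
    # CloudWatch Events delivers 'source', 'detail-type', and 'detail' fields
    detail = event.get('detail', {})
    if 'widget_id' not in detail:   # placeholder for your real validation rules
        print('Dropping invalid event:', json.dumps(event))
        return

    # Republish under a "valid." source so downstream rules can
    # subscribe to sanitized events only
    events.put_events(
        Entries=[{
            'Source': 'valid.' + event['source'],
            'DetailType': event['detail-type'],
            'Detail': json.dumps(detail)
        }]
    )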

CloudWatch Events simplifies these patterns by providing a single bus where I can apply filters and subscribe consumers, all without having to provision or maintain any infrastructure. And that’s just the beginning.

Use case: Multi-account event processing with CloudWatch Events

CloudWatch Events gets most interesting when I start connecting multiple AWS accounts. It’s easy to set up a trust relationship between CloudWatch Event buses in different accounts, using filtering rules to choose which events get forwarded.

As an example, imagine a widget processing system for a large enterprise, AnyCompany. AnyCompany has many different development teams, each using their own AWS account. Some services are producing information about widgets as they check into warehouses or travel across the country. Others need that data to run reports or innovate new products.

Suppose that Service A produces information about new widgets, Service B wants to view aggregates about widgets in real time, and Service C needs historical data about widgets for reporting. The full event flow might look like the following diagram.

1. Service A publishes the new widget event to CloudWatch Events in their AWS account with the following event body:

{
            'Source': 'cwi.servicea',
            'DetailType': 'NEW_WIDGET',
            'Detail': '{"widget_id":"abc123"}'
}

2. A filtering rule forwards events tagged with cwi.servicea to the central event processing account. Using CloudFormation, they could define the rule as follows:

CentralForwardingRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Rule for sending events to central account
      EventPattern:
        source:
          - cwi.servicea
      Targets:
        - Arn: !Sub arn:aws:events:${CENTRAL_ACCOUNT_REGION}:${CENTRAL_ACCOUNT_ID}:event-bus/default
          Id: CentralTarget
          RoleArn: <IAM ROLE WITH ACCESS TO PUT CW EVENTS> 

3. The event is validated according to their standards.

4. The valid event is republished on the central event bus with a new source, valid.cwi.servicea. This is important because, to avoid infinite loops, an individual event can only be forwarded one time.

5. A filtering rule forwards the valid event to Service B’s AWS account, where it updates a DynamoDB table connected to an AWS AppSync API.

6. A second rule forwards the same event to the Service C account, where it flows through Kinesis Data Firehose to an Amazon S3 bucket for analysis using Amazon Athena.

What CloudWatch Events provides here is a decoupled system that uses mostly “plug-and-play” services, and yet opens up flexibility for future innovation.

Taking full advantage of the cloud

The biggest reason I love CloudWatch Events? It’s a fantastically cloud-native service. There’s little code required, and no operational responsibilities beyond watching AWS service limits. I don’t even have to deploy anything to start using it. And yet, under the hood, I’m using a powerful piece of AWS infrastructure that can support complex distributed applications without breaking a sweat.

That’s pretty close to the platonic ideal of serverless apps. Anytime I’m cooking up an event-driven application, I make sure to consider what CloudWatch Events can bring to the table.

Safely validating usernames with Amazon Cognito

Post Syndicated from Larry Ogrodnek original https://aws.amazon.com/blogs/aws/safely-validating-usernames-with-amazon-cognito/

Guest post by AWS Community Hero Larry Ogrodnek. Larry is an independent consultant focused on cloud architecture, DevOps, serverless, and general software development on AWS. He’s always ready to talk about AWS over coffee, and enjoys development and helping other developers.

How are users identified in your system? Username? Email? Is it important that they’re unique?

You may be surprised to learn, as I was, that in Amazon Cognito both usernames and emails are treated as case-sensitive. So “JohnSmith” is different than “johnsmith,” who is different than “jOhNSmiTh.” The same goes for email addresses—it’s even specified in the SMTP RFC that users “smith” and “Smith” may have different mailboxes. That is crazy!

I recently added custom signup validation for Amazon Cognito. In this post, let me walk you through the implementation.

The problem with uniqueness

Determining uniqueness is even more difficult than just dealing with case insensitivity. Like many of you, I’ve received emails based on Internationalized Domain Name homograph attacks. A site is registered for “example.com” but with Cyrillic character “a,” attempting to impersonate a legitimate site and collect information. This same type of attack is possible for user name registration. If I don’t check for this, someone may be able to impersonate another user on my site.

Do you have reservations?

Does my application have user-generated messages or content? Besides dealing with uniqueness, I may want to reserve certain names. For example, if I have user-editable information at user.myapp.com or myapp.com/user, what if someone registers “signup” or “faq” or “support”? What about “billing”?

It’s possible that a malicious user could impersonate my site and use it as part of an attack. Similar attacks are also possible if users have any kind of inbox or messaging. In addition to reserving usernames, I should also separate out user content to its own domain to avoid confusion. I remember GitHub reacting to something similar when it moved user pages from github.com to github.io in 2013.

James Bennett wrote about these issues in great detail in his excellent post, Let’s talk about usernames. He describes the types of validation performed in his django-registration application.

Integrating with Amazon Cognito

Okay, so now that you know a little bit more about this issue, how do I handle this with Amazon Cognito?

Well, I’m in luck, because Amazon Cognito lets me customize much of my authentication workflow with AWS Lambda triggers.

To add username or email validation, I can implement a pre-sign-up Lambda trigger, which lets me perform custom validation and accept or deny the registration request.

It’s important to note that I can’t modify the request. To perform any kind of case or name standardization (for example, forcing lower case), I have to do that on the client. I can only validate that it was done in my Lambda function. It would be handy if this was something available in the future.

To declare a sign-up as invalid, all I have to do is return an error from the Lambda function. In Python, this is as simple as raising an exception. If my validation passes, I just return the event, which already includes the fields that I need for a generic success response. Optionally, I can auto-verify some fields.
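As an example, the trigger’s response object is where auto-verification happens. A minimal sketch, assuming email is collected at sign-up, looks like this:

def run(event, context):
    # Accept the sign-up without a confirmation code
    event['response']['autoConfirmUser'] = True

    # Mark the supplied email address as verified
    event['response']['autoVerifyEmail'] = True

    return event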

To enforce that my frontend is sending usernames standardized as lowercase, all I need is the following code:

def run(event, context):
  user = event['userName']
  if not user.islower():
    raise Exception("Username must be lowercase")
  return event

Adding unique constraints and reservations

I’ve extracted the validation checks from django-registration into a Python module named username-validator to make it easier to perform these types of uniqueness checks in Lambda:

pip install username-validator

In addition to detecting confusing homoglyphs, it also includes a standard set of reserved names like “www”, “admin”, “root”, “security”, “robots.txt”, and so on. You can provide your own additions for application-specific reservations, as well as perform individual checks.

To add this additional validation and some custom reservations, I update the function as follows:

from username_validator import UsernameValidator

MY_RESERVED = [
  "larry",
  "aws",
  "reinvent"
]

validator = UsernameValidator(additional_names=MY_RESERVED)

def run(event, context):
  user = event['userName']

  if not user.islower():
    raise Exception("Username must be lowercase")

  validator.validate_all(user)

  return event

Now, if I attach that Lambda function to the Amazon Cognito user pool as a pre-sign-up trigger and try to sign up for “aws”, I get a 400 error. I also get some text that I could include in the signup form. Other attributes, including email (if used), are available under event['request']['userAttributes']. For example:

{ "request": {
    "userAttributes": {"name": "larry", "email": "[email protected]" }
  }
}

What’s next?

I can validate other attributes in the same way. Or, I can add other custom validation by adding additional checks and raising an exception, with a custom message if it fails.
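For instance, a sketch that applies the same lowercase rule to the email attribute (when one is supplied) could look like this:

def run(event, context):
    attributes = event['request'].get('userAttributes', {})
    email = attributes.get('email')

    # Apply the same standardization rule to the email as to the username
    if email and not email.islower():
        raise Exception("Email must be lowercase")

    return event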

In this post, I covered why it’s important to think about identity and uniqueness, and demonstrated how to add additional validations to user signups in Amazon Cognito.

Now you know more about controlling signup validation with a custom Lambda function. I encourage you to check out the other user pool workflow customizations that are possible with Lambda triggers.

AWS Heroes: Putting AWS security services to work for you

Post Syndicated from Mark Nunnikhoven original https://aws.amazon.com/blogs/aws/aws-heroes-putting-aws-security-services-to-work-for-you/

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with.

Nothing could be further from the truth.

At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility

It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. For each type of service, more and more of my responsibilities shift to AWS.

My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model.

But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services

The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and helping teach these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility.

A few of these categories are already well understood.

  • Authentication services help me identify my users.
  • Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions.
  • Protected stores allow me to encrypt sensitive data and regulate access to it.

Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement

Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections.

By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me.

AWS CloudTrail logs nearly every API action in an account but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.

Amazon EC2 instances have the biggest potential for security challenges as they are running a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications that include recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven or web attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to help highlight areas of concern for taking action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services along with data generated by key APN Partner solutions.

The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized in the Amazon Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses, but as with anything in the AWS Cloud, automation is the key to success.

Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security. This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top
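As a sketch of that top layer, a CloudWatch Events rule can match imported Security Hub findings and route them to a response function; the function referenced here is a placeholder for your own automation:

SecurityHubFindingsRule:
  Type: AWS::Events::Rule
  Properties:
    Description: Send new Security Hub findings to a response function
    EventPattern:
      source:
        - aws.securityhub
      detail-type:
        - "Security Hub Findings - Imported"
    Targets:
      - Arn: !GetAtt FindingsResponderFunction.Arn   # placeholder responder
        Id: FindingsResponder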

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.

Building serverless apps with components from the AWS Serverless Application Repository

Post Syndicated from Aleksandar Simovic original https://aws.amazon.com/blogs/aws/building-serverless-apps-with-components-from-the-aws-serverless-application-repository/

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code YAML language used for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time.

Broadly speaking, I built:

  • A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
  • A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page.

Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that create custom IAM roles or resource policies. The search returned a short list of matching applications.

I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page.

Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

  1. Check component permissions.
  2. Inspect the component implementation.
  3. Deploy and run the component in a restricted environment.
  4. Monitor the component’s behavior and cost before using in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company.

Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private. Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t give unnecessary permissions that could potentially compromise security.

In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

  CreateStripeCharge:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Policies:
        - SNSCrudPolicy:
            TopicName: !GetAtt SNSTopic.TopicName
        - Statement:
            Effect: Allow
            Action:
              - ssm:GetParameters
            Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component uses a predefined AWS SAM policy template and one custom policy. The predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team, and using them to specify resource permissions is one of the recommended practices for serverless components in the AWS Serverless Application Repository. The custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which are a best practice for storing secure values such as the Stripe secret key.
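
To see why that permission matters, here is a rough sketch of how a function can read such a parameter at run time. This is only an illustration of the pattern in Python, not code from the component (which is written in Node.js); the environment variable name is an assumption, and the parameter path mirrors the component's documented default prefix.

import os

import boto3

ssm = boto3.client('ssm')

def get_stripe_secret_key():
    # 'lambda-stripe-charge' is the component's documented default prefix.
    prefix = os.environ.get('SSM_PARAMETER_PREFIX', 'lambda-stripe-charge')
    response = ssm.get_parameters(
        Names=['%s/stripe-secret-key' % prefix],
        WithDecryption=True  # required to decrypt SecureString parameters
    )
    if response['InvalidParameters']:
        raise RuntimeError('Missing parameter(s): %s' % response['InvalidParameters'])
    return response['Parameters'][0]['Value']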

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
    .then(chargeResponse => {
      createdCharge = chargeResponse;
      return pubsub.publish(createdCharge, TOPIC_ARN);
    })
    .then(() => createdCharge)
    .catch((err) => {
      console.log(err);
      throw err;
    });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place.

I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long.

I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.
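
A billing alarm like that takes only one API call to set up. The following sketch shows the idea; the alarm name, threshold, and SNS topic are placeholders, billing metrics are published only in us-east-1, and the account must have billing alerts enabled.

import boto3

# Billing metrics live in CloudWatch in the us-east-1 Region only.
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='stripe-component-test-cost',  # placeholder name
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,              # check every six hours
    EvaluationPeriods=1,
    Threshold=10.0,            # placeholder: alarm when charges exceed 10 USD
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts']  # placeholder topic ARN
)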

After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/:productId
            Method: GET
  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
       # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters.

  • I set CorsOrigin to “*” for demonstration purposes.
  • I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value YOUR_STRIPE_SECRET_KEY --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME

aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed.

My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value.

In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component. That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too.

Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

Get to know the newest AWS Heroes – Winter 2019

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/get-to-know-the-newest-aws-heroes-winter-2019/

AWS Heroes are superusers who possess advanced technical skills and are early adopters of emerging technologies. Heroes are passionate about sharing their extensive AWS knowledge with others. Some get involved in-person by running meetups, workshops, and speaking at conferences, while others share with online AWS communities via social media, blog posts, and open source contributions.

2019 is off to a roaring start and we’re thrilled to introduce you to the latest AWS Heroes:

Aileen Gemma Smith
Ant Stanley
Gaurav Kamboj
Jeremy Daly
Kurt Lee
Matt Weagle
Shingo Yoshida

Aileen Gemma Smith – Sydney, Australia

Community Hero Aileen Gemma Smith is the founder and CEO of Vizalytics Technology. The team at Vizalytics serves public and private sector clients worldwide in transportation, tourism, and economic development. She shared their story in the Building Complex Workloads in the Cloud session, at AWS Canberra Summit 2017. Aileen has a keen interest in diversity and inclusion initiatives and is constantly working to elevate the work and voices of underestimated engineers and founders. At AWS Public Sector Summit Canberra in 2018, she was a panelist for We Power Tech, Inclusive Conversations with Women in Technology. She has supported and encouraged the creation of internships and mentoring programs for high school and university students with a focus on building out STEAM initiatives.

Ant Stanley – London, United Kingdom

Serverless Hero Ant Stanley is a consultant and community organizer. He founded and currently runs the Serverless London user group, and he is part of the ServerlessDays London organizing team and the global ServerlessDays leadership team. Previously, Ant was a co-founder of A Cloud Guru, and responsible for organizing the first Serverlessconf event in New York in May 2016. Living in London since 2009, Ant’s background before serverless is primarily as a solutions architect at various organizations, from managed service providers to Tier 1 telecommunications providers. His current focus is serverless, GraphQL, and Node.js.


Gaurav Kamboj – Mumbai, India

Community Hero Gaurav Kamboj is a cloud architect at Hotstar, India’s leading OTT provider with a global concurrency record for live streaming to 11Mn+ viewers. At Hotstar, he loves building cost-efficient infrastructure that can scale to millions in minutes. He is also passionate about chaos engineering and cloud security. Gaurav holds the original “all-five” AWS certifications, is co-founder of AWS User Group Mumbai, and speaks at local tech conferences. He also conducts guest lectures and workshops on cloud computing for students at engineering colleges affiliated with the University of Mumbai.


Jeremy Daly – Boston, USA

Serverless Hero Jeremy Daly is the CTO of AlertMe, a startup based in NYC that uses machine learning and natural language processing to help publishers better connect with their readers. He began building cloud-based applications with AWS in 2009. After discovering Lambda, he became a passionate advocate for FaaS and managed services. He now writes extensively about serverless on his blog, jeremydaly.com, and publishes Off-by-none, a weekly newsletter that focuses on all things serverless. As an active member of the serverless community, Jeremy contributes to a number of open-source serverless projects, and has created several others, including Lambda API, Serverless MySQL, and Lambda Warmer.


Kurt Lee – Seoul, South Korea

Serverless Hero Kurt Lee works at Vingle Inc. as their tech lead. As one of the original team members, he has been involved in nearly all backend applications there. Most recently, he led Vingle’s full migration to serverless, cutting 40% of the server cost. He’s known for sharing his experience of adopting serverless, along with its technical and organizational value, through Medium. He and his team maintain multiple open-source projects, which they developed during the migration. Kurt hosts [email protected] regularly, and often presents at AWSKRUG about various aspects of serverless and pushing more things to serverless.


Matt Weagle – Seattle, USA

Serverless Hero Matt Weagle leverages machine learning, serverless techniques, and a servicefull mindset at Lyft, to create innovative transportation experiences in an operationally sustainable and secure manner. Matt looks to serverless as a way to increase collaboration across development, operational, security, and financial concerns and support rapid business-value creation. He has been involved in the serverless community for several years. Currently, he is the organizer of Serverless – Seattle and co-organizer of the serverlessDays Seattle event. He writes about serverless topics on Medium and Twitter.


Shingo Yoshida – Tokyo, Japan

Serverless Hero Shingo Yoshida is the CEO of Section-9, CTO of CYDAS, as well as a founder of Serverless Community(JP) and a member of JAWS-UG (AWS User Group – Japan). Since 2012, Shingo has not only built systems with just AWS, but has also embraced cloud-native architecture to make his customers happy. Serverless Community(JP) was established in 2016, and meetups have been held 20 times in Tokyo, Osaka, Fukuoka, and Sapporo, including three full-day conferences. Through this community, thousands of participants have discovered the value of serverless. Shingo has contributed to the serverless scene with many blog posts and books about serverless, including Serverless Architectures on AWS.


There are now 80 AWS Heroes worldwide. Learn about all of them and connect with an AWS Hero.

Just-in-time VPN access with an AWS IoT button

Post Syndicated from Teri Radichel original https://aws.amazon.com/blogs/aws/just-in-time-vpn-access-with-an-aws-iot-button/

Guest post by AWS Community Hero Teri Radichel. Teri Radichel provides cyber security assessments, pen testing, and research services through her company, 2nd Sight Lab. She is also the founder of the AWS Architects Seattle Meetup.

While traveling to deliver cloud security training, I connect to Wi-Fi networks, both in my hotel room and in the classroom, using a VPN. Most companies expose a remote VPN endpoint to the entire internet. I came up with a hypothesis that I could use an AWS IoT button to allow network access only from required locations. What if a VPN user could click to get access, which would trigger opening a network rule, and double-click to disallow network traffic again? I tested out this idea, and you can see the results below.

You might be wondering why you might want to use a VPN for remote cloud administration. Why an AWS IoT button instead of a laptop or mobile application? More on that in my cloud security blog.

Initially, I wanted to use the AWS IoT Enterprise Button because it allows an organization to have control over the certificates used on the devices. It also uses Wi-Fi, and I was hoping to capture the button IP address to grant network access. To do that, I had to be able to prove that the button received the same IP address from the Wi-Fi network as my laptop. Unfortunately, due to captive portals used by some wireless networks, I had problems connecting the button at some locations.

Next, I tried the AT&T LTE-M Button. I was able to get this button to work for my use case, but with a few less than user-friendly requirements. Because this button is on a cellular network rather than the Wi-Fi I use to connect to my VPN in a hotel room, I can’t auto-magically determine the IP address. I must manually set it using the AWS IoT mobile application.

The other issue I had is that some networks change the public IP addresses of the Wi-Fi client after the VPN connection. The before and after IP addresses are always in the same network block but are not consistent. Instead of using a single IP address, the user has to understand how to figure out what IP range to pass into the button. This proof of concept implementation works well but would not be an ideal solution for non-network-savvy users.

The good news is that I don’t have to log into my AWS account with administrative privileges to change the network settings and allow access to the VPN endpoint from my location. The AWS IoT button user has limited permissions, as does the role for the AWS Lambda function that grants access. The AWS IoT button is a form of multi-factor authentication.

Configure the button with a Lambda function

Caveat: This is not a fully tested or production-ready solution, but it is a starting point to give you some ideas for the implementation of on-demand network access to administrative endpoints. The roles for the button and the Lambda function can be much more restrictive than the ones I used in my proof-of-concept implementation.

1. Set up your VPN endpoint (the instructions are beyond the scope of this post). You can use something like OpenVPN or any of the AWS Marketplace options that allow you to create VPNs for remote access.

2. Jeff Barr has already written a most excellent post on how to set up your AWS IoT button with a Lambda function. The process is straightforward.

3. To allow your button to change the network, add the ability for your Lambda role to replace network ACL entries. This role enables the assigned resource to update any rule in the account—not recommended! Limit this further to specific network ACLs. Also, make sure that users can only edit the placement attributes of their own assigned buttons.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:ReplaceNetworkAclEntry",
      "Resource": "*"
    }
  ]
}

4. Write the code.

For my test, I edited the default code that comes with the button to prove that what I was trying to do would work. I left in the lines that send a text message, but moved them to run after the network change. That way, if there’s an error, the user doesn’t get the SNS message.

Additionally, you always, always, always want to validate any inputs sent into any application from a client. I added log lines to show where you can add that. Change these variables to match your environment: vpcid, nacl, rule. The rule parameter is the rule in your network ACL that is updated to use the IP address provided with the button application.

from __future__ import print_function

import boto3
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

sns = boto3.client('sns')

def lambda_handler(event, context):

    logger.info('Received event: ' + json.dumps(event))

    attributes = event['placementInfo']['attributes']

    phone_number = attributes['phoneNumber']
    message = attributes['message']
    ip = attributes['ip']

    logger.info("Need code here to validate this is a valid IP address")
    logger.info("Need code here to validate the message")
    logger.info("Need code here to validate the phone number")

    for key in attributes.keys():
        message = message.replace('{{%s}}' % (key), attributes[key])
    message = message.replace('{{*}}', json.dumps(attributes))

    dsn = event['deviceInfo']['deviceId']
    click_type = event['deviceEvent']['buttonClicked']['clickType']

    vpcid = 'vpc-xxxxxxxxxxxxxxxx'
    nacl = 'acl-xxxxxxxxxxxxxxx'
    rule = 200

    cidr = ip + '/32'

    message = message + " " + cidr

    client = boto3.client('ec2')

    response = client.replace_network_acl_entry(
        NetworkAclId=nacl,
        CidrBlock=cidr,
        DryRun=False,
        Egress=False,
        PortRange={
            'From': 500,
            'To': 500
        },
        Protocol="17",
        RuleAction="allow",
        RuleNumber=rule
    )

    sns.publish(PhoneNumber=phone_number, Message=message)

    logger.info('SMS has been sent to ' + phone_number)

5. Write the double-click code to remove network access. I left this code as an exercise for the reader. If you understand how to edit the network ACL in the previous step, you should be able to write the double-click function to disallow traffic by changing the line RuleAction=”allow” to RuleAction=”deny”. You have now blocked access to the network port that allows remote users to connect to the VPN.
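
If you want a head start on that exercise, the double-click handler might look roughly like the following sketch. It is untested and uses the same placeholder network ACL ID and rule number as the function above; denying 0.0.0.0/0 on the VPN port closes access for every source address.

import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

ec2 = boto3.client('ec2')

# Same placeholders as in the single-click function; change them to match your environment.
NACL = 'acl-xxxxxxxxxxxxxxx'
RULE = 200

def lambda_handler(event, context):
    click_type = event['deviceEvent']['buttonClicked']['clickType']

    # Single clicks are handled by the allow function; only react to double clicks here.
    if click_type != 'DOUBLE':
        return

    # Replace the allow rule with a deny rule on the VPN port.
    ec2.replace_network_acl_entry(
        NetworkAclId=NACL,
        CidrBlock='0.0.0.0/0',
        Egress=False,
        PortRange={'From': 500, 'To': 500},
        Protocol='17',
        RuleAction='deny',
        RuleNumber=RULE
    )
    logger.info('VPN access rule set to deny')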

Test the button

1. Get the public IP address for your current network by going to a site like https://whatismyip.com.

For this post, assume that you only need a single IP address. However, you could easily change the code above to allow for an IP range instead (or in other words, a CIDR block).

2. Log in to the phone app for the button.

3. Choose Projects and select your button project. Mine is named vpn2.

4. The project associated with the Lambda function that you assigned to the button requires the following placement attributes:
message: Network updated!
phoneNumber: The phone number to receive the text.
ip: The IP address from whatismyip.com.

5. Select an existing attribute to change it or choose + Add Placement Attribute to add a new one.

6. Press your AWS IoT button to trigger the Lambda function. If it runs without error, you get the text message with the IP address that you entered.

7. Check your VPC network ACL rule to verify the change to the correct IP address.

8. Verify that you can connect to the VPN.

9. Assuming you implemented the double-click function to disable access, double-click the button to change the network ACL rule to “deny” instead of “allow”.

Now you know how to use an AWS IoT button to change a network rule on demand. Hopefully, this post sparks some ideas for adding additional layers of security to your AWS VPN and administrative endpoints!

How to become an AWS expert

Post Syndicated from Michael Wittig original https://aws.amazon.com/blogs/aws/how-to-become-an-aws-expert/

This guest post is by AWS Community Hero Michael Wittig. Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. In close collaboration with his brother Andreas Wittig, Michael co-authored Amazon Web Services in Action and maintains the blog cloudonaut.io where they share their knowledge about AWS with the community.

If you are just starting to use AWS today, you might think it’s going to be hard to catch up. How can you become an AWS expert? How can you know everything about AWS? I asked myself the same questions some time ago. Let me share my answer on how to become an AWS expert.

My story

I have used AWS for more than five years. Working with my clients of all sizes and industries challenges my AWS knowledge every day. I also maintain several AWS open source projects that I try to keep up-to-date with the ever-improving AWS platform.

Let me show you my tips on staying up-to-date with AWS and learning new things. Here are three examples of the most exciting and surprising things I learned about AWS this year:

  • Network Load Balancers
  • Amazon Linux 2
  • Amazon Cloud Directory

Network Load Balancers

When I started using AWS, there was one option to load balance HTTP and raw TCP traffic: what is now called a Classic Load Balancer. Since then, the portfolio of load balancers has expanded. You can now also choose the Application Load Balancer to distribute HTTP(S) traffic (including HTTP2 and WebSockets) or the Network Load Balancer, which operates on layer 4 to load balance TCP traffic.

When reading the Network Load Balancer announcement, I found myself interested in this shiny new thing. And that’s the first important part of learning something new: If you are interested in the topic, it’s much easier to learn.

Tip #1: Pick the topics that are interesting.

When I’m interested in a topic, I dive into the documentation and read it from top to bottom. It can take a few hours to finish reading before you can start using the new service or feature. However, you then know about all the concepts, best practices, and pitfalls, which saves you time in the long run.

Tip #2: Reading the documentation is a good investment.

Can I remember everything that I read? No. For example, there is one documented limitation to keep in mind when using the Network Load Balancer: Internal load balancers do not support hairpinning or loopback. I read about it and still ran into it. Sometimes, I have to learn the hard way as well.

Amazon Linux 2

Amazon Linux 2 is the successor of Amazon Linux. Both distributions come with a superb AWS integration, a secure default configuration, and regular security updates. You can open AWS Support tickets if you run into any problems.

So, what’s new with Amazon Linux 2? You get long-term support for five years and you can now run a copy of Amazon Linux 2 on your local machine or on premises. The most significant changes are the replacement of SysVinit with systemd and a new way to install additional software, also known as the extras library.

The systemd init system was all new to me. I decided that it was time to change that and I remembered a session from the local AWS User Group in my city about systemd that I had missed. Luckily, I knew the speaker well. I asked Thorsten a few questions to get an idea about the topics I should learn about to understand how systemd works.

There is always someone who knows what you want to learn. You have to find that person. I encourage you to connect with your local AWS community.

Tip #3: It’s easier to learn if you have a network of people to ask questions of and get inspired by.

Amazon Cloud Directory

One of my projects this year was all about hierarchical data. I was looking for a way to store this kind of data in AWS, and I discovered Amazon Cloud Directory. Cloud Directory was all new to me and seemed difficult to learn about. I read all of the documentation. Still, it was painful and I wanted to give up a few times. That’s normal. That’s why I reward myself from time to time (for example, read one more hour of docs and then go for a walk).

Tip #4: Learning a new thing is hard at first. Keep going.

Cloud Directory is a fully managed, hierarchical data store on AWS. Hierarchical data is connected using parent-child relationships. Let me give you an example. Imagine a chat system with teams and channels. The following figure shows a Cloud Directory data model for the imaginary chat system.

When you have a good understanding of a topic, it’s time to master it by using it. You also learn so much while explaining a concept to someone else. That’s why I wrote a blog post, A neglected serverless data store: Cloud Directory.

Tip #5: Apply your knowledge. Share your knowledge. Teach others.

Summary

Becoming an AWS expert is a journey without a final destination. There is always something more to learn. AWS is a huge platform with 100+ services and countless capabilities. The offerings are constantly changing.

I want to encourage you to become an AWS expert. Why not start with one of the new services released at re:Invent this year? Pick the one that is most interesting for you. Read the documentation. Ask questions of others. Be inspired by others. Apply your knowledge. Share with a blog post or a talk to your local AWS user group. Isn’t this what an expert does?

Serverless and startups, the beginning of a beautiful friendship

Post Syndicated from Slobodan Stojanović original https://aws.amazon.com/blogs/aws/serverless-and-startups/

Guest post by AWS Serverless Hero Slobodan Stojanović. Slobodan is the co-author of the book Serverless Applications with Node.js; CTO of Cloud Horizon, a software development studio; and CTO of Vacation Tracker, a Slack-based, leave management app. Slobodan is excited with serverless because it allows him to build software faster and cheaper. He often writes about serverless and talks about it at conferences.

Serverless seems to be perfect for startups. The pay-per-use pricing model and infrastructure that costs you nothing if no one is using your app makes it cheap for early-stage startups.

On the other side, it’s fully managed and scales automatically, so you don’t have to be afraid of large marketing campaigns or unexpected traffic. That’s why we decided to use serverless when we started working on our first product: Vacation Tracker.

Vacation Tracker is a Slack-based app that helps you to track and manage your team’s vacations and days off. Both our Slack app and web-based dashboard needed an API, so an AWS Lambda function with an Amazon API Gateway trigger was a logical starting point. API Gateway provides a public API. Each time that the API receives the request, the Lambda function is triggered to answer that request.

Our app is focused on small and medium teams and is the kind of app that you use only a few times per day. Periodic usage makes the pay-per-use, serverless pricing model a big win for us because both API Gateway and Lambda cost $0 initially.

Start small, grow tall

We decided to start small, with a simple prototype. Our prototype was a real Slack app with a few hardcoded actions and a little calendar. We used Claudia.js and Bot Builder to build it. Claudia.js is a simple tool for the deployment of Node.js serverless functions to Lambda and API Gateway.

After we finished our prototype, we published a landing page. But we continued building our product even as users signed up for the closed beta access.

Just a few months later, we had a bunch of serverless functions in production: chatbot, dashboard API, Slack notifications, a few tasks for Stripe billing, etc. Each of these functions had its own triggers and roles. While our app was working without issues, it was getting harder and harder to deploy a new stage.

It was clear that we had to organize our app better. So, our Vacation Tracker koala met the AWS SAM squirrel.

Herding Lambda functions

We started by mapping all our services. Then, we tried to group them into flows. We decided to migrate piece by piece to AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless apps on AWS. As AWS SAM is language-agnostic, it doesn’t know how to handle Node.js dependencies on its own. We used Claudia’s pack command as a build step.

Grouping services in serverless apps brought back easier deployments. With just a single command, we had a new environment ready for our tester.

Soon after, we had AWS CloudFormation templates for a Slack chatbot, an API, a Stripe-billing based payment flow, notifications flow, and a few other flows for our app.

Like other startups, Vacation Tracker’s goal is to be able to evolve fast and adapt to user needs. To do so, you often need to run experiments and change things on the fly. With that in mind, our goal was to extract some common functionalities from the app flow to reusable components.

For example, as we used multiple Slack slash commands in some of the experiments, we extracted it from the Slack chatbot flow into a reusable component.

Our Slack slash command component consists of a few elements:

  • An API Gateway API with routes for the Slack slash command and message action webhooks
  • A Lambda function that handles slash commands
  • A Lambda function that handles message actions
  • An Amazon SNS topic for parsed Slack data

With this component, we can run our slash command experiments faster. Adding a new Slack slash command to our app now requires only deploying the slash command component. We also wrote a few Lambda functions that are triggered by the SNS topic and handle the business logic.

While working on Vacation Tracker, we realized the potential value of reusable components. Imagine how fast you would be able to assemble your MVP if you could use standard components that someone else built. Building an app would then mostly involve writing the glue between reused parts and focusing on the business logic that makes your app unique.

This dream can become a reality with AWS Serverless Application Repository, a repository for open source serverless components.

Instead of dreaming, we decided to publish a few reusable components to the Serverless Application Repository, starting with the Slack slash command app. But to do so, we had to have a well-tested app, which led us to the next challenge: how to architect and test a reusable serverless app?

Hexagonal architecture to the rescue

Our answer was simple: Hexagonal architecture, or ports and adapters. It is a pattern that allows an app to be equally driven by users, programs, automated tests, or batch scripts. The app can be developed and tested in isolation from its eventual runtime devices and databases. This makes hexagonal architecture a perfect fit for microservices and serverless apps.

Applying this to Vacation Tracker, we ended up with a setup similar to the following diagram. It consists of the following:

  • lambda.js and main.js files. lambda.js has no tests, as it simply wires the dependencies, such as sns-notification-repository.js, and invokes main.js.
  • main.js has its own unit and integration tests. Integration tests are using local integrations.
  • Each repository has its own unit and integration tests. In their integration tests, repositories connect to AWS services. For example, sns-notification-repository.js integration tests connect to Amazon SNS.

Each of our functions has at least two files: lambda.js and main.js. The first file is small and just invokes main.js with all the dependencies (adapters). This file doesn’t have automated tests, and it looks similar to the following code snippet:

const {
 httpResponse,
 SnsNotificationRepository
} = require('@serverless-slack-command/common')
const main = require('./main')
async function handler(event) {
  const notification = new SnsNotificationRepository(process.env.notificationTopic)
  await main(event.body, event.headers, event.requestContext, notification)
  return httpResponse()
}

exports.handler = handler

The second, and more critical file of each function is main.js. This file contains the function’s business logic, and it must be well tested. In our case, this file has its own unit and integration tests. But the business logic often relies on external integrations, for example sending an SNS notification. Instead of testing all external notifications, we test this file with other adapters, such as a local notification repository.

This file looks similar to the following code snippet:

const qs = require('querystring')

async function slashCommand(slackEvent, headers, requestContext, notification) {
  const eventData = qs.parse(slackEvent);
  return await notification.send({
    type: 'SLASH_COMMAND',
    payload: eventData,
    metadata: {
      headers,
      requestContext
    }
  })
}

module.exports = slashCommand

Adapters for external integrations have their own unit and integration tests, including tests that check the integration with the AWS service. This way we minimized the number of tests that rely on AWS services but still kept our app covered with all necessary tests.

And they lived happily ever after…

Migration to AWS SAM simplified and improved our deployment process. Setting up a new environment now takes minutes, and that time can be reduced further in the future by nesting AWS CloudFormation stacks. Development and testing for our components are easy using hexagonal architecture. Reusable components and Serverless Application Repository put the cherry on top of our serverless cake.

This could be the start of a beautiful friendship between serverless and startups. With serverless, your startup infrastructure is fully managed, and you pay for it only if someone is using your app. The serverless pricing model allows you to start cheap. With Serverless Application Repository, you can build your MVPs faster, as you can reuse existing components. These combined benefits give you superpowers and enough velocity to compete with products backed by larger teams and budgets.

We are happy to see what startups can build (and outsource) using Serverless Application Repository.

In the meantime, you can see the source of our first open source serverless component on GitHub: https://github.com/vacationtracker/serverless-slack-slash-command-app.

And if you want to try Vacation Tracker, visit https://vacationtracker.io, and you can double your free trial period using the AWS_IS_AWESOME promo code.

Boost your infrastructure with the AWS CDK

Post Syndicated from Philipp Garbe original https://aws.amazon.com/blogs/aws/boost-your-infrastructure-with-cdk/

This guest post is by AWS Container Hero Philipp Garbe. Philipp works as Lead Platform Engineer at Scout24 in Germany. He is driven by technologies and tools that allow him to release faster and more often. He expects that every commit automatically goes into production. You can find him on Twitter at @pgarbe.

Infrastructure as code (IaC) has been adopted by many teams in the last few years. It makes provisioning of your infrastructure easy and helps to keep your environments consistent.

But by using declarative templates, you might still miss many of the practices that you are used to for “normal” code. You’ve probably already felt the pain of each AWS CloudFormation template being just a copy and paste from your last project or from Stack Overflow. But can you trust these snippets? How can you roll out improvements or even security fixes across your code base? How can you share best practices within your company or the community?

Fortunately for everyone, AWS published the beta for an important addition to AWS CloudFormation: the AWS Cloud Development Kit (AWS CDK).

What’s the big deal about the AWS CDK?

All your best practices about how to write good AWS CloudFormation templates can now easily be shared within your company or the developer community. At the same time, you can also benefit from others doing the same thing.

For example, think about Amazon DynamoDB. Should be easy to set up in AWS CloudFormation, right? Just some lines in your template. But wait. When you’re already in production, you realize that you’ve got to set up automatic scaling, regular backups, and most importantly, alarms for all relevant metrics. This can amount to several hundred lines.

Think ahead: Maybe you’ve got to create another application that also needs a DynamoDB database. Do you copy and paste all that YAML code? What happens later, when you find some bugs in your template? Do you apply the fix to both code bases?

With the AWS CDK, you’re able to write a “construct” for your best practice, production-ready DynamoDB database. Share it as an npm package with your company or anyone!

What is the AWS CDK?

Back up a step and see what the AWS CDK looks like. Compared to the declarative approach with YAML (or JSON), the CDK allows you to declare your infrastructure imperatively. The main language is TypeScript, but several other languages are also supported.

This is what the Hello World example from Hello, AWS CDK! looks like:

import cdk = require('@aws-cdk/cdk');
import s3 = require('@aws-cdk/aws-s3');

class MyStack extends cdk.Stack {
  constructor(parent: cdk.App, id: string, props?: cdk.StackProps) {
    super(parent, id, props);

    new s3.Bucket(this, 'MyFirstBucket', {
      versioned: true
    });
  }
}

class MyApp extends cdk.App {
  constructor(argv: string[]) {
    super(argv);

    new MyStack(this, 'hello-cdk');
  }
}

new MyApp().run();

Apps are the root constructs and can be used directly by the CDK CLI to render and deploy the AWS CloudFormation template.

Apps consist of one or more stacks that are deployable units and contain information about the Region and account. It’s possible to have an app that deploys different stacks to multiple Regions at the same time.

Stacks include constructs that are representations of AWS resources like a DynamoDB table or AWS Lambda function.

A lib is a construct that typically encapsulates further constructs. With that, higher-level constructs can be built and reused. As a construct is just TypeScript (or any other supported language), a package can be built and shared through any package manager.

Constructs

As the CDK is all about constructs, it’s important to understand them. It’s a hierarchical structure called a construct tree. You can think of constructs in three levels:

Level 1: AWS CloudFormation resources

This is a one-to-one mapping of existing resources and is automatically generated. It’s the same as the resources that you use currently in YAML. Ideally, you don’t have to deal with these constructs directly.

Level 2: The AWS Construct Library

These constructs are on an AWS service level. They’re opinionated, well-architected, and handwritten by AWS. They come with proper defaults and should make it easy to create AWS resources without worrying too much about the details.

As an example, this is how to create a complete VPC with private and public subnets in all available Availability Zones:

import ec2 = require('@aws-cdk/aws-ec2');

const vpc = new ec2.VpcNetwork(this, 'VPC');

The AWS Construct Library has some nice concepts about least privilege IAM policies, event-driven API actions, security groups, and metrics. For example, IAM policies are automatically created based on your intent. When a Lambda function subscribes to an SNS topic, a policy is created that allows the topic to invoke the function.

AWS services that offer Amazon CloudWatch metrics have functions like metricXxx() and return metric objects that can easily be used to create alarms.

new Alarm(this, 'Alarm', {
  metric: fn.metricErrors(),
  threshold: 100,
  evaluationPeriods: 2,
});

For more information, see AWS Construct Library.

Level 3: Your awesome stuff

Here’s where it gets interesting. As mentioned earlier, constructs are hierarchical. They can be higher-level abstractions based on other constructs. For example, on this level, you can write your own Amazon ECS cluster construct that contains automatic node draining, automatic scaling, and all the right alarms. Or you can write a construct for all necessary alarms that an Amazon RDS database should monitor. It’s up to you to create and share your constructs.

Conclusion

It’s good that AWS went public at an early stage. The docs are already good, but not everything is covered yet. Not all AWS services have an AWS Construct Library module defined (level 2). Many have only the pure AWS CloudFormation constructs (level 1).

Personally, I think the AWS CDK is a huge step forward, as it allows you to re-use AWS CloudFormation code and share it with others. It makes it easy to apply company standards and allows people to work on awesome features and spend less time on writing “boring” code.

Pick the Right Tool for your IT Challenge

Post Syndicated from Markus Ostertag original https://aws.amazon.com/blogs/aws/pick-the-right-tool-for-your-it-challenge/

This guest post is by AWS Community Hero Markus Ostertag. As CEO of the Munich-based ad-tech company Team Internet AG, Markus is always trying to find the best ways to leverage the cloud, loves to work with cutting-edge technologies, and is a frequent speaker at AWS events and the AWS user group Munich that he co-founded in 2014.

Picking the right tools or services for a job is a huge challenge in IT—every day and in every kind of business. With this post, I want to share some strategies and examples that we at Team Internet used to leverage the huge “tool box” of AWS to build better solutions and solve problems more efficiently.

Use existing resources or build something new? A hard decision

The usual day-to-day work of an IT engineer, architect, or developer is building a solution for a problem or transferring a business process into software. To achieve this, we usually tend to use already existing architectures or resources and build an “add-on” to it.

With the rise of microservices, we all learned that modularization and decoupling are important for being scalable and extendable. This brought us to a different type of software architecture. In reality, we still tend to use already existing resources, like the same database or existing (maybe not fully utilized) Amazon EC2 instances, because it seems easier than building something new.

Stacks as “next level microservices”?

We at Team Internet are not using the vocabulary of microservices but tend to speak about stacks and building blocks for the different use cases. Our approach is matching the idea of microservices to everything, including the database and other resources that are necessary for the specific problem we need to address.

It’s not about “just” dividing the software and code into different modules. The whole infrastructure is separated based on different needs. Each of those parts of the full architecture is our stack, which is as independent as possible from everything else in the whole system. It only communicates loosely with the other stacks or parts of the infrastructure.

Benefits of this mindset = independence and flexibility

  • Choosing the right parts. For every use case, we can choose the components or services that are best suited for the specific challenges and don’t need to work around limitations. This is especially true for databases, as we can choose from the whole palette instead of trying to squeeze requirements into a DBMS that isn’t built for that. We can differentiate the different needs of workloads like write-heavy vs. read-heavy or structured vs. unstructured data.
  • Rebuilding at will. We’re flexible in rebuilding whole stacks as they’re only loosely coupled. Because of this, a team can build a proof-of-concept with new ideas or services and run them in parallel on production workload without interfering or harming the production system.
  • Lowering costs. Because the operational overhead of running multiple resources is handled by AWS (“no undifferentiated heavy lifting”), we just need to look at the service pricing. Most AWS pricing schemes support this stack approach. For databases, you either pay for throughput (Amazon DynamoDB) or per instance (Amazon RDS, etc.). On the throughput level, it’s simple: you just split the throughput you had on one table across several tables without any overhead. On the instance level, the pricing is linear, so an r4.xlarge is half the price of an r4.2xlarge. So why not run two r4.xlarge instances and split the workload?
  • Designing for resilience. This approach also helps your architecture to be more reliable and resilient by default. As the different stacks are independent from each other, the scaling is much more granular. Scaling on larger systems is often provided with a higher “security buffer,” and failures (hardware, software, fat fingers, etc.) only happen on a small part of the whole system.
  • Taking ownership. A nice side effect we’re seeing now as we use this methodology is the positive effect on ownership and responsibility for our teams. Because of those stacks, it is easier to pinpoint and fix issues but also to be transparent and clear on who is responsible for which stack.

Benefits demand efforts, even with the right tool for the job

Every approach has its downsides. Here, it is obviously the additional development and architecture effort that needs to be taken to build such systems.

Therefore, we decided to always have the goal of a perfect system with independent stacks and reliable and loosely coupled processes between them in our mind. In reality, we sometimes break our own rules and cheat here and there. Even if we do, to have this approach helps us to build better systems and at least know exactly at what point we take a risk of losing the benefits. I hope the explanation and insights here help you to pick the right tool for the job.

Using AWS AI and Amazon Sumerian in IT Education

Post Syndicated from Cyrus Wong original https://aws.amazon.com/blogs/aws/using-aws-ai-and-amazon-sumerian-in-it-education/

This guest post is by AWS Machine Learning Hero, Cyrus Wong. Cyrus is a Data Scientist at the Hong Kong Institute of Vocational Education (Lee Wai Lee) Cloud Innovation Centre. He has achieved all nine AWS Certifications and enjoys sharing his AWS knowledge with others through open-source projects, blog posts, and events.

Our institution (IVE) provides IT training to several thousand students every year, and one of our courses successfully applied for AWS Promotional Credits. We recently built an open-source project called “Lab Monitor,” which uses AWS AI, serverless, and AR/VR services to enhance our learning experience and gather data to understand what students are doing during labs.

Problem

One of the common problems with lab activities is that students are often doing things that have nothing to do with the course (such as watching videos or playing games). Students can also easily copy answers from their classmates because the lab answers are in soft copy. Teachers struggle to challenge students because there is generally only one correct answer. No one knows which students are actually working on the lab and which are copying from one another!

Solution

Lab Monitor changes the assessment model from just the final result to the entire development process. We can support and monitor students using AWS AI services.

The system consists of the following parts:

  • A lab monitor agent
  • A lab monitor collector
  • An AR lab assistant

Lab monitor agent

The Lab monitor agent is a Python application that runs on a student’s computer and records activity. All information is periodically sent to AWS. To identify students and protect the API gateway, each student has a unique API key with a usage limit. Its functions include:

  • Capturing all keyboard and pointer events. This can ensure that students are really working on the exercise, as it is impossible to complete a coding task without using the keyboard and pointer! Also, we encourage students to use shortcuts, and we need that information as an indicator.
  • Monitoring and controlling PC processes. Teachers can stop students from running programs that are irrelevant to the lab. For computer-based tests, we can kill all browsers and communication software. Detailed process information also helps us decide whether to upgrade hardware.
  • Capturing screens. Amazon Rekognition can detect videos or inappropriate content on screen. Extracted text content can trigger an Amazon Sumerian host to talk to a student automatically. It is impossible for a teacher to monitor all student screens! We use a presigned URL with S3 Transfer Acceleration to speed up the image upload (see the sketch after this list).
  • Uploading source code to AWS when students save their code. It is good to know when students complete tasks and to give support to those students who are slower!
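
As a rough illustration of the presigned upload mentioned in the screen-capture item, the flow could look like the sketch below. The bucket name, key layout, expiry, and use of the requests library are assumptions for this example rather than details taken from the Lab Monitor source, and Transfer Acceleration must be enabled on the bucket.

import boto3
import requests  # the agent uploads over plain HTTPS
from botocore.config import Config

# Generate presigned URLs against the S3 Transfer Acceleration endpoint.
s3 = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))

def upload_screenshot(png_bytes, student_id):
    # In the real design, URL generation would sit behind the collector API;
    # it is inlined here for brevity. Bucket and key names are hypothetical.
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'lab-monitor-screens', 'Key': 'screens/%s.png' % student_id},
        ExpiresIn=300  # the URL stays valid for five minutes
    )
    # The agent then PUTs the screenshot directly to S3 through the accelerated endpoint.
    requests.put(url, data=png_bytes)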

Lab monitor collector

The Lab monitor collector is an AWS Serverless Application Model (AWS SAM) application that collects data and provides an API to the AR lab assistant. Optionally, a teacher can grade students immediately every time they save code by running the unit tests inside AWS Lambda. The collector constantly saves all data into an Amazon S3 data lake, and teachers can use Amazon Athena to analyze the data.
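
Querying that data lake from Python is a single Athena call. The sketch below is illustrative only; the database, table, and output location are placeholders.

import boto3

athena = boto3.client('athena')

response = athena.start_query_execution(
    QueryString="""
        SELECT student_id, COUNT(*) AS keystrokes
        FROM lab_events                      -- placeholder table
        WHERE event_type = 'KEYBOARD'
        GROUP BY student_id
        ORDER BY keystrokes DESC
    """,
    QueryExecutionContext={'Database': 'lab_monitor'},  # placeholder database
    ResultConfiguration={'OutputLocation': 's3://lab-monitor-athena-results/'}  # placeholder bucket
)
print('Started query:', response['QueryExecutionId'])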

To save costs, a scheduled Lambda function checks the teacher’s class calendar every 15 minutes. When there is an upcoming class, it creates a Kinesis stream and Kinesis data analytics application automatically. Teachers can have a nearly real-time view of all student activity.
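
The scheduled part of that workflow can be sketched as a small Lambda function. The calendar lookup is stubbed out, and the stream name and shard count are assumptions.

import boto3

kinesis = boto3.client('kinesis')

def class_starts_within(minutes):
    # Placeholder for the real lookup against the teacher's class calendar.
    return True

def lambda_handler(event, context):
    # Runs every 15 minutes from a CloudWatch Events schedule.
    if not class_starts_within(minutes=15):
        return

    # Create a small stream only for the duration of the class.
    # create_stream raises ResourceInUseException if the stream already exists.
    kinesis.create_stream(StreamName='lab-monitor-live', ShardCount=1)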

AR Lab Assistant

The AR lab assistant is an Amazon Sumerian application that reminds students to work on their lab exercise. It sends a camera image to Amazon Rekognition and gets back a student ID.

A Sumerian host, Christine, uses Amazon Polly to speak to students when something happens:

  • When students pass a unit test, she says congratulations.
  • When students watch movies, she scolds them with the movie actor’s name, such as Tom Cruise.
  • When students watch porn, she scolds them.
  • When students do something wrong, such as forgetting to set up the Python interpreter, she reminds them to set it up.

Students can also ask her questions, for example, to check their overall progress. The host connects to an Amazon Lex chatbot. Students’ conversations are saved in DynamoDB along with the sentiment analysis results provided by Amazon Comprehend.
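
Logging a conversation turn with its sentiment takes just two service calls. This sketch is illustrative only; the table name and item layout are assumptions.

import time

import boto3

comprehend = boto3.client('comprehend')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('LabAssistantConversations')  # hypothetical table

def log_utterance(student_id, text):
    # Ask Comprehend how the student seems to be feeling.
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode='en')

    table.put_item(Item={
        'student_id': student_id,
        'timestamp': int(time.time()),
        'utterance': text,
        'sentiment': sentiment['Sentiment']  # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
    })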

The student screen is like a projector inside the Sumerian application.

Christine: “Stop, watching dirty thing during Lab! Tom Cruise should not be able to help you writing Python code!”

Simplified Architectural Diagrams

Demo video

AR Lab Assistant reaction: https://youtu.be/YZCR2aROBp4

Conclusion

With the combined power of various AWS services, students can now concentrate on only their lab exercise and stop thinking about copying answers from each other! We built the project in about four months and it is still evolving. In a future version, we plan to build a machine learning model to predict the students’ final grade based on their class behavior. Students also say that the class is much more fun with Christine.

Lastly, we would like to say thank you to AWS Educate, who provided us with AWS credit, and my AWS Academy student developer team: Mike, Long, Mandy, Tung, Jacqueline, and Hin from IVE Higher Diploma in Cloud and Data Centre Administration. They submitted this application to the AWS Artificial Intelligence (AI) Hackathon and just learned that they received a 3rd place prize!

And Now a Word from Our AWS Heroes…

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/and-now-a-word-from-our-aws-heroes/

Whew! Now that AWS re:Invent 2018 has wrapped up, the AWS Blog Team is taking some time to relax, recharge, and to prepare for 2019.

In order to wrap up the year in style, we have asked several of the AWS Heroes to write guest blog posts on an AWS-related topic of their choice. You will get to hear from Machine Learning Hero Cyrus Wong (pictured at right), Community Hero Markus Ostertag, Container Hero Philipp Garbe, and several others.

Each of these Heroes brings a fresh and unique perspective to the AWS Blog and I know that you will enjoy hearing from them. We’ll have the first post up in a day or two, so stay tuned!

Jeff;

Announcing AWS Machine Learning Heroes (plus new AWS Community Heroes)

Post Syndicated from Cameron Peron original https://aws.amazon.com/blogs/aws/announcing-aws-machine-learning-heroes-plus-new-aws-community-heroes/

The AWS Heroes program helps developers find inspiration and build skills from community leaders who have extensive AWS knowledge and a passion for sharing their expertise with others. The program continues to evolve to align with technology trends and recognize community leaders who focus on specific technical disciplines.

Today we are excited to launch a new category of AWS Heroes: AWS Machine Learning Heroes.


Introducing AWS Machine Learning Heroes
AWS Machine Learning Heroes are developers and academics who are passionate enthusiasts of emerging AI/ML technologies. Proficient with deep learning frameworks such as MXNet, PyTorch, and TensorFlow, they are early adopters of Amazon ML technologies and enjoy teaching others how to use machine learning APIs such as Amazon Rekognition (computer vision) and Amazon Comprehend (natural language processing).

Developers from beginner to advanced ML proficiency can learn and apply ML at speed and scale through Hero blog posts, videos, and sessions, as well as direct engagement. Our initial cohort of Machine Learning Heroes includes:

Agustinus Nalwan – Melbourne, Australia

Agustinus (aka Gus) is the Head of AI at Carsales. He has extensive experience in deep learning, including setting up distributed training clusters on EC2, and is an advocate of Amazon SageMaker for simplifying the machine learning pipeline.

Cyrus Wong – Hong Kong

Cyrus Wong is a Data Scientist in the IT Department of the Hong Kong Institute of Vocational Education. He has achieved all 9 AWS Certifications and builds AI/ML projects with his students using Amazon Rekognition, Amazon Lex, Amazon Polly, and Amazon Comprehend.

Gillian McCann – Belfast, United Kingdom

Gillian is Head of Cloud Engineering & AI at Workgrid Software. A passionate advocate of cloud-native architecture, Gillian leads a team that explores how AWS conversational AI can be leveraged to improve the employee experience.

Matthew Fryer – London, United Kingdom

Matt leads a team that develops new data science and algorithm functions at Hotels.com and the Expedia Affiliate Network. He has spoken at AWS Summits and other conferences on why machine learning is important to Hotels.com.

Sung Kim – Seoul, South Korea

Sung is an Associate Professor of Computer Science at the Hong Kong University of Science and Technology. His online deep learning course, which includes how to use AWS ML services, has more than 4M views and 27K subscribers.

Please meet our latest AWS Community Heroes
Also this month, we are excited to introduce you to four new AWS Community Heroes:

John Varghese – Mountain View, USA

John is a Cloud Steward at Intuit, responsible for the AWS infrastructure of Intuit’s Futures Group. He runs the AWS Bay Area meetup on the San Francisco Peninsula and has organized multiple AWS Community Day events in the Bay Area.

Serhat Can – Istanbul, Turkey

Serhat is a Technical Evangelist at Atlassian. He is a community organizer as well as a speaker. As a Devopsdays core team member, he helps local DevOps communities organize events in 70+ countries and counting.

Bryan Chasko – Las Cruces, USA

Bryan is Chief Technology Officer at Electronic Caregiver. A Solutions Architect and Big Data specialist, Bryan uses Amazon Sumerian to apply augmented- and virtual-reality-based solutions to real-world business challenges.

Sathyajith Bhat – Bangalore, India

Sathyajith Bhat is a DevOps Engineer for Adobe I/O. He is the author of Practical Docker with Python and an organizer of the AWS Bangalore Users Group Meetup, AWS Community Day Bangalore, and Barcamp Bangalore.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

Meet the Newest AWS Heroes (September 2018 Edition)

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-heroes-september-2018-edition/

AWS Heroes are passionate AWS enthusiasts who use their extensive knowledge to teach others about all things AWS across a range of mediums. Many Heroes eagerly share knowledge online via forums, social media, or blogs; while others lead AWS User Groups or organize AWS Community Day events. Their extensive efforts to spread AWS knowledge have a significant impact within their local communities. Today we are excited to introduce the newest AWS Heroes:

Jaroslaw Zielinski – Poznan, Poland

AWS Community Hero Jaroslaw Zielinski is a Solutions Architect at Vernity in Poznan (Poland), where he supports customers on their road to the cloud using cloud adoption patterns. Jaroslaw is a leader of AWS User Group Poland, which operates in 7 different cities around Poland. Additionally, he connects the community with the biggest IT conferences in the region – PLNOG, DevOpsDay, and [email protected], to name just a few.

He supports numerous evangelism projects, such as Zombie Apocalypse Workshops and Cloud Builder’s Day. Bringing together various IT communities, he hosts the Cloud & Datacenter Day conference – the biggest community conference in Poland. In addition, he channels his passion for IT into his own blog, Popołudnie w Sieci, and also publishes in various professional papers.

 

Jerry Hargrove – Kalama, USA

AWS Community Hero Jerry Hargrove is a cloud architect, developer, and evangelist who guides companies on their journey to the cloud, helping them build smart, secure, and scalable applications. Currently with Lucidchart, a leading visual productivity platform, Jerry is a thought leader in the cloud industry and specializes in AWS product and service breakdowns, visualizations, and implementation. He brings over 20 years of experience as a developer, architect, and manager for companies like Rackspace, AWS, and Intel.

You can find Jerry on Twitter compiling his famous sketch notes and creating Lucidchart templates that pinpoint practical tips for working in the cloud and helping developers increase efficiency. Jerry is the founder of the AWS Meetup Group in Salt Lake City, often contributes to meetups in the Pacific Northwest and San Francisco Bay area, and speaks at developer conferences worldwide. Jerry holds several professional AWS certifications.

 

Martin Buberl – Copenhagen, Denmark

AWS Community Hero Martin Buberl brings the New York hustle to Scandinavia. As VP of Engineering at Trustpilot, he is on a mission to build the best engineering teams in the Nordics and Baltics. With a person-centered approach, he focuses on high-leverage activities that maximize impact, customer value, and iteration speed, and utilizing cloud technologies checks all of those boxes.

His cloud obsession made him an early adopter and evangelist of all types of AWS services throughout his career. Nowadays, he is especially passionate about serverless, big data, and machine learning, and is excited to leverage the cloud to transform those areas.

Martin is an AWS User Group Leader, organizer of AWS Community Day Nordics, and founder of the AWS Community Nordics Slack. He has spoken at multiple international AWS events (AWS User Groups, AWS Community Days, and AWS Global Summits) and looks forward to continuing to share his passion for software engineering and cloud technologies with the community.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

AWS Heroes – New Categories Launch

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/aws-heroes-new-categories-launch/

As you may know, in 2014 we launched the AWS Community Heroes program to recognize a vibrant group of AWS experts. These standout individuals use their extensive knowledge to teach customers and fellow techies about AWS products and services across a range of mediums. As AWS grows, new groups of Heroes emerge.

Today, we’re excited to recognize prominent community leaders by expanding the AWS Heroes program. Unlike Community Heroes (who tend to focus on advocating a wide range of AWS services within their community), these new Heroes are specialists who focus their efforts and advocacy on a specific technology. Our first new Heroes are the AWS Serverless Heroes and AWS Container Heroes. Please join us in welcoming them as the passion and enthusiasm for AWS knowledge-sharing continues to grow in technical communities.

AWS Serverless Heroes

Serverless Heroes are early adopters and spirited pioneers of the AWS serverless ecosystem. They evangelize AWS serverless technologies online and in person, and through open source contributions to GitHub and the AWS Serverless Application Repository, these Serverless Heroes help evolve the way developers, companies, and the community at large build modern applications. Our initial cohort of Serverless Heroes includes:

Yan Cui

Aleksandar Simovic

Forrest Brazeal

Marcia Villalba

Erica Windisch

Peter Sbarski

Slobodan Stojanović

Rob Gruhl

Michael Hart

Ben Kehoe

Austen Collins

Announcing AWS Container Heroes
AWS Container Heroes are prominent trendsetters and experts with AWS container services. As technical thought leaders, they have a deep understanding of our container services, are passionate about learning the latest trends and developments, and are excited to share what they learn with the AWS developer community. Here is the initial group of AWS Container Heroes:

Casey Lee

Tung Nguyen

Philipp Garbe

Yusuke Kuoka

Mike Fielder

The trends within the AWS community are ever-changing. We look forward to recognizing a wide variety of Heroes in the future. Stay tuned for additional updates to the Hero program in the coming months, and be sure to visit the Heroes website to learn more.

Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk about, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.

Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot that helps DevOps teams detect and resolve incidents on AWS.

Together with his brother Andreas Wittig, Michael actively creates AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io, where they share their knowledge about AWS with the community. The Wittig brothers have also published a number of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe, and both brothers co-organize the AWS user group in Stuttgart.

Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT architecture and management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Hispanic speakers worldwide.

Fernando founded a LinkedIn group, a Slack community, and a YouTube channel, all of them named “AWS en Español”, and started running a monthly webinar via YouTube streaming where community leaders discuss aspects of and challenges around the AWS Cloud.

Over the last 18 months he has been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups have been founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.

Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010 and was essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and the cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and takes any opportunity to engage in events related to AWS and the cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.