Tag Archives: AWS Community Heroes

Building Serverless Pipelines with Amazon CloudWatch Events

Post Syndicated from Forrest Brazeal original https://aws.amazon.com/blogs/aws/building-serverless-pipelines-with-amazon-cloudwatch-events/

Guest post by AWS Serverless Hero Forrest Brazeal. Forrest is a senior cloud architect at Trek10, Inc., host of the Think FaaS serverless podcast at Trek10, and a regular speaker at workshops and events in the serverless community.

Events and serverless go together like baked beans and barbecue. The serverless mindset says to focus on code and configuration that provide business value. It turns out that much of the time, this means working with events: structured data corresponding to things that happen in the outside world. Rather than maintaining long-running server tasks that chew up resources while polling, I can create serverless applications that do work only in response to event triggers.

I have lots of options when working with events in AWS: Amazon Kinesis Data Streams, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and more, depending on my requirements. Lately, I’ve been using a service more often that has the word ‘event’ right in the name: Amazon CloudWatch Events.

CloudWatch Events: The best-kept secret in serverless event processing

I first knew CloudWatch as the service that collects my Lambda logs and lets me run functions on a schedule. But CloudWatch Events also lets me publish my own custom events using the CloudWatch API. It has similar pricing and delivery guarantees to SNS, and supports a whole bunch of AWS services as targets.

Best of all, I don’t even have to provision the event bus—it’s just there in the CloudWatch console. I can publish an event now, using the boto3 AWS SDK for Python:


import boto3

# put_events lives on the CloudWatch Events client ('events'), not 'cloudwatch'
events = boto3.client('events')
events.put_events(
    Entries=[
        {
            'Source': 'my.app.event',
            'DetailType': 'MY_EVENT_TYPE',
            'Detail': '{"my_data":"As a JSON string"}'
        }
    ]
)

In short, CloudWatch Events gives me a fully managed event pipe that supports an arbitrary number of consumers, where I can drop any kind of JSON string that I want. And this is super useful for building serverless apps.

Event-driven architectures in action

I build cloud-native solutions for clients at Trek10 daily. I frequently use event-driven architectures as a powerful way to migrate legacy systems to serverless, enable easier downstream integrations, and more. Here are just a couple of my favorite patterns:
• Strangling legacy databases
• Designing event-sourced applications

Strangling legacy databases

The “strangler pattern” hides a legacy system behind a wrapper API, while gradually migrating users to the new interface. Trek10 has written about this before. Streaming changes to the cloud as events is a great way to open up reporting and analytics use cases while taking load off a legacy database. The following diagram shows changes from a legacy database being streamed as events.

This pattern can also work the other way: I can write new data to CloudWatch Events, consume it into a modern data source, and create a second consumer that syncs the data back to my legacy system.

Designing event-sourced applications

Event sourcing simply means treating changes in the system state as events and publishing them on a ledger or bus, where they can be consumed by different downstream applications.

Using CloudWatch Events as a centralized bus, I can make a sanitized record of events available as shown in the following event validation flow diagram.

The validation function ensures that only events that match my application’s standards get tagged as “valid” and are made available to downstream consumers. The default bus handles lots of events (remember, service logs go here!), so it’s important to set up rules that only match the events that I care about.
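To give an idea of what such a rule looks like, here is a minimal sketch in Python with boto3 (the rule name and target ARN are placeholders):

import boto3

events = boto3.client('events')

# Create a rule that matches only my application's custom events
events.put_rule(
    Name='my-app-events',
    EventPattern='{"source": ["my.app.event"]}'
)

# Subscribe a consumer (the Lambda function also needs a resource policy
# that allows CloudWatch Events to invoke it)
events.put_targets(
    Rule='my-app-events',
    Targets=[{
        'Id': 'MyConsumer',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-consumer'
    }]
)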

CloudWatch Events simplifies these patterns by providing a single bus where I can apply filters and subscribe consumers, all without having to provision or maintain any infrastructure. And that’s just the beginning.

Use case: Multi-account event processing with CloudWatch Events

CloudWatch Events gets most interesting when I start connecting multiple AWS accounts. It’s easy to set up a trust relationship between CloudWatch Event buses in different accounts, using filtering rules to choose which events get forwarded.
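As a quick illustration of that trust relationship, the receiving account grants a sender account permission to put events onto its default bus. A minimal boto3 sketch (the account ID is a placeholder):

import boto3

events = boto3.client('events')

# Run in the receiving account: allow the sender account to publish
# events onto this account's default event bus
events.put_permission(
    Action='events:PutEvents',
    Principal='111111111111',  # placeholder sender account ID
    StatementId='AllowSenderAccount'
)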

As an example, imagine a widget processing system for a large enterprise, AnyCompany. AnyCompany has many different development teams, each using their own AWS account. Some services are producing information about widgets as they check into warehouses or travel across the country. Others need that data to run reports or innovate new products.

Suppose that Service A produces information about new widgets, Service B wants to view aggregates about widgets in real time, and Service C needs historical data about widgets for reporting. The full event flow might look like the following diagram.

1. Service A publishes the new widget event to CloudWatch Events in their AWS account with the following event body:

{
            'Source': 'cwi.servicea',
            'DetailType': 'NEW_WIDGET',
            'Detail': '{"widget_id":"abc123"}'
}

2. A filtering rule forwards events tagged with cwi.servicea to the central event processing account. Using CloudFormation, they could define the rule as follows:

CentralForwardingRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Rule for sending events to central account
      EventPattern:
        source:
          - cwi.servicea
      Targets:
        - Arn: !Sub arn:aws:events:${CENTRAL_ACCOUNT_REGION}:${CENTRAL_ACCOUNT_ID}:event-bus/default
          Id: CentralTarget
          RoleArn: <IAM ROLE WITH ACCESS TO PUT CW EVENTS> 

3. The event is validated according to their standards.

4. The valid event is republished on the central event bus with a new source, valid.cw.servicea. This is important because, to avoid infinite loops, an individual event can only be forwarded one time.

5. A filtering rule forwards the valid event to Service B’s AWS account, where it updates a DynamoDB table connected to an AWS AppSync API.

6. A second rule forwards the same event to the Service C account, where it flows through Kinesis Data Firehose to an Amazon S3 bucket for analysis using Amazon Athena.
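To make steps 3 and 4 concrete, here is a minimal sketch of what the central account’s validation function might look like (the validation check itself is a placeholder for AnyCompany’s real standards):

import json
import boto3

events = boto3.client('events')

def handler(event, context):
    # The forwarded event arrives with its original source and detail
    detail = event.get('detail', {})

    # Placeholder validation: enforce whatever standards apply
    if 'widget_id' not in detail:
        raise ValueError('Invalid event: missing widget_id')

    # Republish under a new source; the source must change because an
    # individual event can only be forwarded one time
    events.put_events(
        Entries=[{
            'Source': 'valid.cw.servicea',
            'DetailType': event.get('detail-type', 'NEW_WIDGET'),
            'Detail': json.dumps(detail)
        }]
    )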

What CloudWatch Events provides here is a decoupled system that uses mostly “plug-and-play” services, and yet opens up flexibility for future innovation.

Taking full advantage of the cloud

The biggest reason I love CloudWatch Events? It’s a fantastically cloud-native service. There’s little code required, and no operational responsibilities beyond watching AWS service limits. I don’t even have to deploy anything to start using it. And yet, under the hood, I’m using a powerful piece of AWS infrastructure that can support complex distributed applications without breaking a sweat.

That’s pretty close to the platonic ideal of serverless apps. Anytime I’m cooking up an event-driven application, I make sure to consider what CloudWatch Events can bring to the table.

Safely validating usernames with Amazon Cognito

Post Syndicated from Larry Ogrodnek original https://aws.amazon.com/blogs/aws/safely-validating-usernames-with-amazon-cognito/

Guest post by AWS Community Hero Larry Ogrodnek. Larry is an independent consultant focused on cloud architecture, DevOps, serverless, and general software development on AWS. He’s always ready to talk about AWS over coffee, and enjoys development and helping other developers.

How are users identified in your system? Username? Email? Is it important that they’re unique?

You may be surprised to learn, as I was, that in Amazon Cognito both usernames and emails are treated as case-sensitive. So “JohnSmith” is different than “johnsmith,” who is different than “jOhNSmiTh.” The same goes for email addresses—it’s even specified in the SMTP RFC that users “smith” and “Smith” may have different mailboxes. That is crazy!

I recently added custom signup validation for Amazon Cognito. In this post, let me walk you through the implementation.

The problem with uniqueness

Determining uniqueness is even more difficult than just dealing with case insensitivity. Like many of you, I’ve received emails based on Internationalized Domain Name homograph attacks. A site is registered for “example.com” but with Cyrillic character “a,” attempting to impersonate a legitimate site and collect information. This same type of attack is possible for user name registration. If I don’t check for this, someone may be able to impersonate another user on my site.

Do you have reservations?

Does my application have user-generated messages or content? Besides dealing with uniqueness, I may want to reserve certain names. For example, if I have user-editable information at user.myapp.com or myapp.com/user, what if someone registers “signup” or “faq” or “support”? What about “billing”?

It’s possible that a malicious user could impersonate my site and use it as part of an attack. Similar attacks are also possible if users have any kind of inbox or messaging. In addition to reserving usernames, I should also separate out user content to its own domain to avoid confusion. I remember GitHub reacting to something similar when it moved user pages from github.com to github.io in 2013.

James Bennett wrote about these issues in great detail in his excellent post, Let’s talk about usernames. He describes the types of validation performed in his django-registration application.

Integrating with Amazon Cognito

Okay, so now that you know a little bit more about this issue, how do I handle this with Amazon Cognito?

Well, I’m in luck, because Amazon Cognito lets me customize much of my authentication workflow with AWS Lambda triggers.

To add username or email validation, I can implement a pre-sign-up Lambda trigger, which lets me perform custom validation and accept or deny the registration request.

It’s important to note that I can’t modify the request. To perform any kind of case or name standardization (for example, forcing lowercase), I have to do that on the client. I can only validate that it was done in my Lambda function. It would be handy if this capability were available in the future.

To declare a sign-up as invalid, all I have to do is return an error from the Lambda function. In Python, this is as simple as raising an exception. If my validation passes, I just return the event, which already includes the fields that I need for a generic success response. Optionally, I can auto-verify some fields.
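For example, a minimal sketch of a trigger that auto-confirms the user and auto-verifies their email (these response fields are part of the Cognito pre-sign-up event) might look like this:

def run(event, context):
  # Auto-confirm the user and mark their email as verified,
  # skipping the usual confirmation step
  event['response']['autoConfirmUser'] = True
  event['response']['autoVerifyEmail'] = True
  return event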

To enforce that my frontend is sending usernames standardized as lowercase, all I need is the following code:

def run(event, context):
  user = event['userName']
  if not user.islower():
    raise Exception("Username must be lowercase")
  return event

Adding unique constraints and reservations

I’ve extracted the validation checks from django-registration into a Python module named username-validator to make it easier to perform these types of uniqueness checks in Lambda:

pip install username-validator

In addition to detecting confusing homoglyphs, it also includes a standard set of reserved names like “www”, “admin”, “root”, “security”, “robots.txt”, and so on. You can provide your own additions for application-specific reservations, as well as perform individual checks.

To add this additional validation and some custom reservations, I update the function as follows:

from username_validator import UsernameValidator

MY_RESERVED = [
  "larry",
  "aws",
  "reinvent"
]

validator = UsernameValidator(additional_names=MY_RESERVED)

def run(event, context):
  user = event['userName']

  if not user.islower():
    raise Exception("Username must be lowercase")

  validator.validate_all(user)

  return event

Now, if I attach that Lambda function to the Amazon Cognito user pool as a pre–sign-up trigger and try to sign up for “aws”, I get a 400 error. I also get some text that I could include in the signup form. Other attributes, including email (if used), are available under event['request']['userAttributes']. For example:

{ "request": {
    "userAttributes": {"name": "larry", "email": "[email protected]" }
  }
}

What’s next?

I can validate other attributes in the same way. Or, I can add other custom validation by adding additional checks and raising an exception, with a custom message if it fails.
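As an illustration, here is a sketch of validating the email attribute in the same trigger (the lowercase rule is my own convention here, not a Cognito requirement):

def run(event, context):
  attributes = event['request']['userAttributes']
  email = attributes.get('email', '')

  if email and not email.islower():
    raise Exception("Email must be lowercase")

  return event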

In this post, I covered why it’s important to think about identity and uniqueness, and demonstrated how to add additional validations to user signups in Amazon Cognito.

Now you know more about controlling signup validation with a custom Lambda function. I encourage you to check out the other user pool workflow customizations that are possible with Lambda triggers.

AWS Heroes: Putting AWS security services to work for you

Post Syndicated from Mark Nunnikhoven original https://aws.amazon.com/blogs/aws/aws-heroes-putting-aws-security-services-to-work-for-you/

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with.

Nothing could be further from the truth.

At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility

It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. The more managed the type of service, the more of my responsibilities shift to AWS.

My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model.

But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services

The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and helping teach these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility.

A few of these categories are already well understood.

  • Authentication services help me identify my users.
  • Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions.
  • Protected stores allow me to encrypt sensitive data and regulate access to it.

Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement

Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections.

By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me.

AWS CloudTrail logs nearly every API action in an account but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.

Amazon EC2 instances have the biggest potential for security challenges as they are running a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications that include recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven or web attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to highlight areas of concern so that teams can take action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services along with data generated by key APN Partner solutions.

The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized in the Amazon Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses, but as with anything in the AWS Cloud, automation is the key to success.

Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security. This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top
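As an illustration of that top layer, a CloudWatch Events rule matching Security Hub findings can invoke a Lambda function like the following minimal sketch (the severity threshold and the response itself are placeholders for whatever fits your environment):

def handler(event, context):
    # Security Hub findings are delivered via CloudWatch Events
    # under event['detail']['findings']
    for finding in event['detail']['findings']:
        severity = finding['Severity']['Normalized']
        if severity >= 70:  # placeholder threshold
            # Placeholder response: notify on-call, open a ticket,
            # or trigger an automated remediation
            print('High-severity finding:', finding['Title'])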

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.

Building serverless apps with components from the AWS Serverless Application Repository

Post Syndicated from Aleksandar Simovic original https://aws.amazon.com/blogs/aws/building-serverless-apps-with-components-from-the-aws-serverless-application-repository/

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code, YAML-based language for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time.

Broadly speaking, I built:

  • A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
  • A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page.

Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies. I saw the following results:

I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page.

Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

  1. Check component permissions.
  2. Inspect the component implementation.
  3. Deploy and run the component in a restricted environment.
  4. Monitor the component’s behavior and cost before using in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company.

Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private. Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t give unnecessary permissions that could potentially compromise security.

In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

  CreateStripeCharge:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Policies:
        - SNSCrudPolicy:
            TopicName: !GetAtt SNSTopic.TopicName
        - Statement:
            Effect: Allow
            Action:
              - ssm:GetParameters
            Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component was using a predefined AWS SAM policy template and a custom one. These predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team. Using these policies to specify resource permissions is one of the recommended practices for serverless components on AWS Serverless Application Repository. The other, custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which is the best practice for storing secure values, such as the Stripe secret key.

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
    .then(chargeResponse => {
      createdCharge = chargeResponse;
      return pubsub.publish(createdCharge, TOPIC_ARN);
    })
    .then(() => createdCharge)
    .catch((err) => {
      console.log(err);
      throw err;
    });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place.

I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long.

I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.
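For reference, a minimal sketch of such a billing alarm using boto3 (the threshold, alarm name, and topic ARN are placeholders; note that billing metrics are only published in us-east-1):

import boto3

# Billing metrics live in us-east-1 regardless of workload region
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='stripe-component-cost',  # placeholder name
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,  # six hours
    EvaluationPeriods=1,
    Threshold=10.0,  # placeholder dollar amount
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts']
)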

After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/{productId}
            Method: GET
  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
        # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters.

  • I set CorsOrigin to “*” for demonstration purposes.
  • I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value YOUR_STRIPE_SECRET_KEY --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME

aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed.

My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value.

In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component. That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too.

Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

Get to know the newest AWS Heroes – Winter 2019

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/get-to-know-the-newest-aws-heroes-winter-2019/

AWS Heroes are superusers who possess advanced technical skills and are early adopters of emerging technologies. Heroes are passionate about sharing their extensive AWS knowledge with others. Some get involved in-person by running meetups, workshops, and speaking at conferences, while others share with online AWS communities via social media, blog posts, and open source contributions.

2019 is off to a roaring start and we’re thrilled to introduce you to the latest AWS Heroes:

Aileen Gemma Smith
Ant Stanley
Gaurav Kamboj
Jeremy Daly
Kurt Lee
Matt Weagle
Shingo Yoshida

Aileen Gemma Smith – Sydney, Australia

Community Hero Aileen Gemma Smith is the founder and CEO of Vizalytics Technology. The team at Vizalytics serves public and private sector clients worldwide in transportation, tourism, and economic development. She shared their story in the Building Complex Workloads in the Cloud session, at AWS Canberra Summit 2017. Aileen has a keen interest in diversity and inclusion initiatives and is constantly working to elevate the work and voices of underestimated engineers and founders. At AWS Public Sector Summit Canberra in 2018, she was a panelist for We Power Tech, Inclusive Conversations with Women in Technology. She has supported and encouraged the creation of internships and mentoring programs for high school and university students with a focus on building out STEAM initiatives.

Ant Stanley – London, United Kingdom

Serverless Hero Ant Stanley is a consultant and community organizer. He founded and currently runs the Serverless London user group, and he is part of the ServerlessDays London organizing team and the global ServerlessDays leadership team. Previously, Ant was a co-founder of A Cloud Guru, and responsible for organizing the first Serverlessconf event in New York in May 2016. Living in London since 2009, Ant’s background before serverless is primarily as a solutions architect at various organizations, from managed service providers to Tier 1 telecommunications providers. His current focus is serverless, GraphQL, and Node.js.

Gaurav Kamboj – Mumbai, India

Community Hero Gaurav Kamboj is a cloud architect at Hotstar, India’s leading OTT provider, which holds a global concurrency record for live-streaming to more than 11 million viewers. At Hotstar, he loves building cost-efficient infrastructure that can scale to millions in minutes. He is also passionate about chaos engineering and cloud security. Gaurav holds the original “all-five” AWS certifications, is co-founder of AWS User Group Mumbai, and speaks at local tech conferences. He also conducts guest lectures and workshops on cloud computing for students at engineering colleges affiliated with the University of Mumbai.

Jeremy Daly – Boston, USA

Serverless Hero Jeremy Daly is the CTO of AlertMe, a startup based in NYC that uses machine learning and natural language processing to help publishers better connect with their readers. He began building cloud-based applications with AWS in 2009. After discovering Lambda, he became a passionate advocate for FaaS and managed services. He now writes extensively about serverless on his blog, jeremydaly.com, and publishes Off-by-none, a weekly newsletter that focuses on all things serverless. As an active member of the serverless community, Jeremy contributes to a number of open-source serverless projects, and has created several others, including Lambda API, Serverless MySQL, and Lambda Warmer.

Kurt Lee – Seoul, South Korea

Serverless Hero Kurt Lee works at Vingle Inc. as their tech lead. As one of the original team members, he has been involved in nearly all backend applications there. Most recently, he led Vingle’s full migration to serverless, cutting 40% of the server cost. He’s known for sharing his experience of adapting serverless, along with its technical and organizational value, through Medium. He and his team maintain multiple open-source projects, which they developed during the migration. Kurt hosts [email protected] regularly, and often presents at AWSKRUG about various aspects of serverless and pushing more things to serverless.

Matt Weagle – Seattle, USA

Serverless Hero Matt Weagle leverages machine learning, serverless techniques, and a servicefull mindset at Lyft, to create innovative transportation experiences in an operationally sustainable and secure manner. Matt looks to serverless as a way to increase collaboration across development, operational, security, and financial concerns and support rapid business-value creation. He has been involved in the serverless community for several years. Currently, he is the organizer of Serverless – Seattle and co-organizer of the serverlessDays Seattle event. He writes about serverless topics on Medium and Twitter.

Shingo Yoshida – Tokyo, Japan

Serverless Hero Shingo Yoshida is the CEO of Section-9, CTO of CYDAS, as well as a founder of Serverless Community (JP) and a member of JAWS-UG (AWS User Group – Japan). Since 2012, Shingo has built not only systems using just AWS, but also systems with a cloud-native architecture to make his customers happy. Serverless Community (JP) was established in 2016, and meetups have been held 20 times in Tokyo, Osaka, Fukuoka, and Sapporo, including three full-day conferences. Through this community, thousands of participants have discovered the value of serverless. Shingo has contributed to the serverless scene with many blog posts and books about serverless, including Serverless Architectures on AWS.

There are now 80 AWS Heroes worldwide. Learn about all of them and connect with an AWS Hero.

Just-in-time VPN access with an AWS IoT button

Post Syndicated from Teri Radichel original https://aws.amazon.com/blogs/aws/just-in-time-vpn-access-with-an-aws-iot-button/

Guest post by AWS Community Hero Teri Radichel. Teri Radichel provides cyber security assessments, pen testing, and research services through her company, 2nd Sight Lab. She is also the founder of the AWS Architects Seattle Meetup.

While traveling to deliver cloud security training, I connect to Wi-Fi networks, both in my hotel room and in the classroom, using a VPN. Most companies expose a remote VPN endpoint to the entire internet. I came up with a hypothesis that I could use an AWS IoT button to allow network access only from required locations. What if a VPN user could click to get access, which would trigger opening a network rule, and double-click to disallow network traffic again? I tested out this idea, and you can see the results below.

You might be wondering why you might want to use a VPN for remote cloud administration. Why an AWS IoT button instead of a laptop or mobile application? More on that in my cloud security blog.

Initially, I wanted to use the AWS IoT Enterprise Button because it allows an organization to have control over the certificates used on the devices. It also uses Wi-Fi, and I was hoping to capture the button IP address to grant network access. To do that, I had to be able to prove that the button received the same IP address from the Wi-Fi network as my laptop. Unfortunately, due to captive portals used by some wireless networks, I had problems connecting the button at some locations.

Next, I tried the AT&T LTE-M Button. I was able to get this button to work for my use case, but with a few less than user-friendly requirements. Because this button is on a cellular network rather than the Wi-Fi I use to connect to my VPN in a hotel room, I can’t auto-magically determine the IP address. I must manually set it using the AWS IoT mobile application.

The other issue I had is that some networks change the public IP addresses of the Wi-Fi client after the VPN connection. The before and after IP addresses are always in the same network block but are not consistent. Instead of using a single IP address, the user has to understand how to figure out what IP range to pass into the button. This proof of concept implementation works well but would not be an ideal solution for non-network-savvy users.

The good news is that I don’t have to log into my AWS account with administrative privileges to change the network settings and allow access to the VPN endpoint from my location. The AWS IoT button user has limited permissions, as does the role for the AWS Lambda function that grants access. The AWS IoT button is a form of multi-factor authentication.

Configure the button with a Lambda function

Caveat: This is not a fully tested or production-ready solution, but it is a starting point to give you some ideas for the implementation of on-demand network access to administrative endpoints. The roles for the button and the Lambda function can be much more restrictive than the ones I used in my proof-of-concept implementation.

1. Set up your VPN endpoint (the instructions are beyond the scope of this post). You can use something like OpenVPN or any of the AWS Marketplace options that allow you to create VPNs for remote access.

2. Jeff Barr has already written a most excellent post on how to set up your AWS IoT button with a Lambda function. The process is straightforward.

3. To allow your button to change the network, add the ability for your Lambda role to replace network ACL entries. This role enables the assigned resource to update any rule in the account—not recommended! Limit this further to specific network ACLs. Also, make sure that users can only edit the placement attributes of their own assigned buttons.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:ReplaceNetworkAclEntry",
      "Resource": "*"
    }
  ]
}

4. Write the code.

For my test, I edited the default code that comes with the button to prove that what I was trying to do would work. I left in the lines that send a text message, but moved them so that they run after the network change. That way, if there’s an error, the user doesn’t get the SNS message.

Additionally, you always, always, always want to validate any inputs sent into any application from a client. I added log lines to show where you can add that. Change these variables to match your environment: vpcid, nacl, rule. The rule parameter is the rule in your network ACL that is updated to use the IP address provided with the button application.

from __future__ import print_function

import boto3
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

sns = boto3.client('sns')

def lambda_handler(event, context):

    logger.info('Received event: ' + json.dumps(event))

    attributes = event['placementInfo']['attributes']

    phone_number = attributes['phoneNumber']
    message = attributes['message']
    ip = attributes['ip']

    logger.info("Need code here to validate this is a valid IP address")
    logger.info("Need code here to validate the message")
    logger.info("Need code here to validate the phone number")

    for key in attributes.keys():
        message = message.replace('{{%s}}' % (key), attributes[key])
    message = message.replace('{{*}}', json.dumps(attributes))

    dsn = event['deviceInfo']['deviceId']
    click_type = event['deviceEvent']['buttonClicked']['clickType']

    vpcid = 'vpc-xxxxxxxxxxxxxxxx'
    nacl = 'acl-xxxxxxxxxxxxxxx'
    rule = 200

    cidr = ip + '/32'

    message = message + " " + cidr

    client = boto3.client('ec2')

    response = client.replace_network_acl_entry(
        NetworkAclId=nacl,
        CidrBlock=cidr,
        DryRun=False,
        Egress=False,
        PortRange={
            'From': 500,
            'To': 500
        },
        Protocol="17",
        RuleAction="allow",
        RuleNumber=rule
    )

    sns.publish(PhoneNumber=phone_number, Message=message)

    logger.info('SMS has been sent to ' + phone_number)
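As a starting point for the validation placeholders above, Python’s standard ipaddress module can check the IP attribute; a minimal sketch:

import ipaddress

def validate_ip(ip):
    # Raises an exception if the button sent something that isn't
    # a valid IPv4 or IPv6 address
    try:
        ipaddress.ip_address(ip)
    except ValueError:
        raise Exception('Invalid IP address supplied by button: ' + ip)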

5. Write the double-click code to remove network access. I left this code as an exercise for the reader. If you understand how to edit the network ACL in the previous step, you should be able to write the double-click function to disallow traffic by changing the line RuleAction="allow" to RuleAction="deny". You have now blocked access to the network port that allows remote users to connect to the VPN.
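For a head start, a minimal sketch of that double-click branch might look like the following (reusing the nacl, rule, and cidr values from the function above):

import boto3

def deny_access(nacl, rule, cidr):
    # Flip the network ACL rule back to deny, closing the VPN port again
    client = boto3.client('ec2')
    client.replace_network_acl_entry(
        NetworkAclId=nacl,
        CidrBlock=cidr,
        Egress=False,
        PortRange={'From': 500, 'To': 500},
        Protocol="17",
        RuleAction="deny",
        RuleNumber=rule
    )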

Test the button

1. Get the public IP address for your current network by going to a site like https://whatismyip.com.

For this post, assume that you only need a single IP address. However, you could easily change the code above to allow for an IP range instead (or in other words, a CIDR block).

2. Log in to the phone app for the button.

3. Choose Projects and select your button project. Mine is named vpn2.

4. The project associated with the Lambda function that you assigned to the button requires the following placement attributes:
  • message: Network updated!
  • phoneNumber: The phone number to receive the text.
  • ip: The IP address from whatismyip.com.

5. Select an existing attribute to change it or choose + Add Placement Attribute to add a new one.

6. Press your AWS IoT button to trigger the Lambda function. If it runs without error, you get the text message with the IP address that you entered.

7. Check your VPC network ACL rule to verify the change to the correct IP address.

8. Verify that you can connect to the VPN.

9. Assuming you implemented the double-click function to disable access, double-click the button to change the network ACL rule to "deny" instead of "allow".

Now you know how to use an AWS IoT button to change a network rule on demand. Hopefully, this post sparks some ideas for adding additional layers of security to your AWS VPN and administrative endpoints!

How to become an AWS expert

Post Syndicated from Michael Wittig original https://aws.amazon.com/blogs/aws/how-to-become-an-aws-expert/

This guest post is by AWS Community Hero Michael Wittig. Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. In close collaboration with his brother Andreas Wittig, Michael co-authored Amazon Web Services in Action and maintains the blog cloudonaut.io where they share their knowledge about AWS with the community.

If you are just starting to use AWS today, you might think it’s going to be hard to catch up. How can you become an AWS expert? How can you know everything about AWS? I asked myself the same questions some time ago. Let me share my answer on how to become an AWS expert.

My story

I have used AWS for more than five years. Working with my clients of all sizes and industries challenges my AWS knowledge every day. I also maintain several AWS open source projects that I try to keep up-to-date with the ever-improving AWS platform.

Let me show you my tips on staying up-to-date with AWS and learning new things. Here are three examples of the most exciting and surprising things I learned about AWS this year:

  • Network Load Balancers
  • Amazon Linux 2
  • Amazon Cloud Directory

Network Load Balancers

When I started using AWS, there was one option to load balance HTTP and raw TCP traffic: what is now called a Classic Load Balancer. Since then, the portfolio of load balancers has expanded. You can now also choose the Application Load Balancer to distribute HTTP(S) traffic (including HTTP2 and WebSockets) or the Network Load Balancer, which operates on layer 4 to load balance TCP traffic.

When reading the Network Load Balancer announcement, I found myself interested in this shiny new thing. And that’s the first important part of learning something new: If you are interested in the topic, it’s much easier to learn.

Tip #1: Pick the topics that are interesting.

When I’m interested in a topic, I dive into the documentation and read it from top to bottom. It can take a few hours to finish reading before you can start using the new service or feature. However, you then know about all the concepts, best practices, and pitfalls, which saves you time in the long run.

Tip #2: Reading the documentation is a good investment.

Can I remember everything that I read? No. For example, there is one documented limitation to keep in mind when using the Network Load Balancer: Internal load balancers do not support hairpinning or loopback. I read about it and still ran into it. Sometimes, I have to learn the hard way as well.

Amazon Linux 2

Amazon Linux 2 is the successor of Amazon Linux. Both distributions come with a superb AWS integration, a secure default configuration, and regular security updates. You can open AWS Support tickets if you run into any problems.

So, what’s new with Amazon Linux 2? You get long-term support for five years and you can now run a copy of Amazon Linux 2 on your local machine or on premises. The most significant changes are the replacement of SysVinit with systemd and a new way to install additional software, also known as the extras library.

The systemd init system was all new to me. I decided that it was time to change that and I remembered a session from the local AWS User Group in my city about systemd that I had missed. Luckily, I knew the speaker well. I asked Thorsten a few questions to get an idea about the topics I should learn about to understand how systemd works.

There is always someone who knows what you want to learn. You have to find that person. I encourage you to connect with your local AWS community.

Tip #3: It’s easier to learn if you have a network of people to ask questions of and get inspired by.

Amazon Cloud Directory

One of my projects this year was all about hierarchical data. I was looking for a way to store this kind of data in AWS, and I discovered Amazon Cloud Directory. Cloud Directory was all new to me and seemed difficult to learn about. I read all of the documentation. Still, it was painful and I wanted to give up a few times. That’s normal. That’s why I reward myself from time to time (for example, read one more hour of docs and then go for a walk).

Tip #4: Learning a new thing is hard at first. Keep going.

Cloud Directory is a fully managed, hierarchical data store on AWS. Hierarchical data is connected using parent-child relationships. Let me give you an example. Imagine a chat system with teams and channels. The following figure shows a Cloud Directory data model for the imaginary chat system.

When you have a good understanding of a topic, it’s time to master it by using it. You also learn so much while explaining a concept to someone else. That’s why I wrote a blog post, A neglected serverless data store: Cloud Directory.

Tip #5: Apply your knowledge. Share your knowledge. Teach others.

Summary

Becoming an AWS expert is a journey without a final destination. There is always something more to learn. AWS is a huge platform with 100+ services and countless capabilities. The offerings are constantly changing.

I want to encourage you to become an AWS expert. Why not start with one of the new services released at re:Invent this year? Pick the one that is most interesting for you. Read the documentation. Ask questions of others. Be inspired by others. Apply your knowledge. Share with a blog post or a talk to your local AWS user group. Isn’t this what an expert does?

Serverless and startups, the beginning of a beautiful friendship

Post Syndicated from Slobodan Stojanović original https://aws.amazon.com/blogs/aws/serverless-and-startups/

Guest post by AWS Serverless Hero Slobodan Stojanović. Slobodan is the co-author of the book Serverless Applications with Node.js; CTO of Cloud Horizon, a software development studio; and CTO of Vacation Tracker, a Slack-based, leave management app. Slobodan is excited with serverless because it allows him to build software faster and cheaper. He often writes about serverless and talks about it at conferences.

Serverless seems to be perfect for startups. The pay-per-use pricing model and infrastructure that costs you nothing if no one is using your app makes it cheap for early-stage startups.

On the other side, it’s fully managed and scales automatically, so you don’t have to be afraid of large marketing campaigns or unexpected traffic. That’s why we decided to use serverless when we started working on our first product: Vacation Tracker.

Vacation Tracker is a Slack-based app that helps you to track and manage your team’s vacations and days off. Both our Slack app and web-based dashboard needed an API, so an AWS Lambda function with an Amazon API Gateway trigger was a logical starting point. API Gateway provides a public API. Each time that the API receives the request, the Lambda function is triggered to answer that request.

Our app is focused on small and medium teams and is the kind of app that you use only a few times per day. This periodic usage makes the pay-per-use, serverless pricing model a big win for us because both API Gateway and Lambda cost $0 initially.

Start small, grow tall

We decided to start small, with a simple prototype. Our prototype was a real Slack app with a few hardcoded actions and a little calendar. We used Claudia.js and Bot Builder to build it. Claudia.js is a simple tool for the deployment of Node.js serverless functions to Lambda and API Gateway.

After we finished our prototype, we published a landing page. But we continued building our product even as users signed up for the closed beta access.

Just a few months later, we had a bunch of serverless functions in production: chatbot, dashboard API, Slack notifications, a few tasks for Stripe billing, etc. Each of these functions had its own triggers and roles. While our app was working without issues, it was harder and harder to deploy a new stage.

It was clear that we had to organize our app better. So, our Vacation Tracker koala met the AWS SAM squirrel.

Herding Lambda functions

We started by mapping all our services. Then, we tried to group them into flows. We decided to migrate piece by piece to the AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless apps on AWS. Because AWS SAM is language-agnostic, it doesn’t know how to handle Node.js dependencies, so we used Claudia’s pack command as a build step.

Grouping services in serverless apps brought back easier deployments. With just a single command, we had a new environment ready for our tester.

Soon after, we had AWS CloudFormation templates for a Slack chatbot, an API, a Stripe-billing based payment flow, notifications flow, and a few other flows for our app.

Like other startups, Vacation Tracker’s goal is to be able to evolve fast and adapt to user needs. To do so, you often need to run experiments and change things on the fly. With that in mind, our goal was to extract some common functionalities from the app flow to reusable components.

For example, as we used multiple Slack slash commands in some of the experiments, we extracted it from the Slack chatbot flow into a reusable component.

Our Slack slash command component consists of a few elements:

  • An API Gateway API with routes for the Slack slash command and message action webhooks
  • A Lambda function that handles slash commands
  • A Lambda function that handles message actions
  • An Amazon SNS topic for parsed Slack data

With this component, we can run our slash command experiments faster. Adding a new Slack slash command to our app now requires only deploying this component. We also wrote a few Lambda functions that are triggered by the SNS topic or handle business logic.

While working on Vacation Tracker, we realized the potential value of reusable components. Imagine how fast you would be able to assemble your MVP if you could use standard components that someone else built. Building an app would then require only writing the glue between the reused parts and focusing on the business logic that makes your app unique.

This dream can become a reality with AWS Serverless Application Repository, a repository for open source serverless components.

Instead of dreaming, we decided to publish a few reusable components to the Serverless Application Repository, starting with the Slack slash command app. But to do so, we had to have a well-tested app, which led us to the next challenge: how to architect and test a reusable serverless app?

Hexagonal architecture to the rescue

Our answer was simple: Hexagonal architecture, or ports and adapters. It is a pattern that allows an app to be equally driven by users, programs, automated tests, or batch scripts. The app can be developed and tested in isolation from its eventual runtime devices and databases. This makes hexagonal architecture a perfect fit for microservices and serverless apps.

Applying this to Vacation Tracker, we ended up with a setup similar to the following diagram. It consists of the following:

  • lambda.js and main.js files. lambda.js has no tests, as it simply wires the dependencies, such as sns-notification-repository.js, and invokes main.js.
  • main.js has its own unit and integration tests. Integration tests are using local integrations.
  • Each repository has its own unit and integration tests. In their integration tests, repositories connect to AWS services. For example, sns-notification-repository.js integration tests connect to Amazon SNS.

Each of our functions has at least two files: lambda.js and main.js. The first file is small and just invokes main.js with all the dependencies (adapters). This file doesn’t have automated tests, and it looks similar to the following code snippet:

const {
 httpResponse,
 SnsNotificationRepository
} = require('@serverless-slack-command/common')
const main = require('./main')
async function handler(event) {
  const notification = new SnsNotificationRepository(process.env.notificationTopic)
  await main(event.body, event.headers, event.requestContext, notification)
  return httpResponse()
}

exports.handler = handler

The second, and more critical file of each function is main.js. This file contains the function’s business logic, and it must be well tested. In our case, this file has its own unit and integration tests. But the business logic often relies on external integrations, for example sending an SNS notification. Instead of testing all external notifications, we test this file with other adapters, such as a local notification repository.

This file looks similar to the following code snippet:

const qs = require('querystring')

// Parse the Slack payload and hand it to whichever notification
// repository (adapter) was injected by lambda.js or by a test
async function slashCommand(slackEvent, headers, requestContext, notification) {
  const eventData = qs.parse(slackEvent)
  return notification.send({
    type: 'SLASH_COMMAND',
    payload: eventData,
    metadata: {
      headers,
      requestContext
    }
  })
}

module.exports = slashCommand

Adapters for external integrations have their own unit and integration tests, including tests that check the integration with the AWS service. This way, we minimize the number of tests that rely on AWS services while still keeping the app covered by all the necessary tests.
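
To make this concrete, here is a minimal sketch of what a local adapter and a unit test for the slashCommand function above could look like. The LocalNotificationRepository class, the file name, and the test values are illustrative assumptions, not part of the published component:

// main.test.ts – a unit test for main.js that never touches AWS
import assert = require('assert')
import slashCommand = require('./main')

// Local adapter: mirrors the send() contract of SnsNotificationRepository,
// but only records notifications in memory instead of publishing to SNS
class LocalNotificationRepository {
  sent: any[] = []
  async send(notification: any) {
    this.sent.push(notification)
    return notification
  }
}

async function run() {
  const notification = new LocalNotificationRepository()
  await slashCommand('command=%2Fvacation&text=sick%20today', { some: 'header' }, { requestId: '123' }, notification)
  assert.strictEqual(notification.sent.length, 1)
  assert.strictEqual(notification.sent[0].type, 'SLASH_COMMAND')
  assert.strictEqual(notification.sent[0].payload.command, '/vacation')
}

run().then(() => console.log('main.js unit tests passed'))

Because the adapter only records notifications in memory, the test exercises the full business logic without relying on SNS or any other AWS service.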

And they lived happily ever after…

Migration to AWS SAM simplified and improved our deployment process. Setting up a new environment now takes minutes, and that time can be reduced further in the future by nesting AWS CloudFormation stacks. Development and testing of our components is easy using hexagonal architecture. Reusable components and the Serverless Application Repository put the cherry on top of our serverless cake.

This could be the start of a beautiful friendship between serverless and startups. With serverless, your startup infrastructure is fully managed, and you pay only when someone uses your app. The serverless pricing model allows you to start cheap. With the Serverless Application Repository, you can build your MVPs faster, as you can reuse existing components. These combined benefits give you superpowers and enough velocity to compete with products backed by larger teams and budgets.

We are happy to see what startups can build (and outsource) using Serverless Application Repository.

In the meantime, you can see the source of our first open source serverless component on GitHub: https://github.com/vacationtracker/serverless-slack-slash-command-app.

And if you want to try Vacation Tracker, visit https://vacationtracker.io, and you can double your free trial period using the AWS_IS_AWESOME promo code.

Boost your infrastructure with the AWS CDK

Post Syndicated from Philipp Garbe original https://aws.amazon.com/blogs/aws/boost-your-infrastructure-with-cdk/

This guest post is by AWS Container Hero Philipp Garbe. Philipp works as Lead Platform Engineer at Scout24 in Germany. He is driven by technologies and tools that allow him to release faster and more often. He expects that every commit automatically goes into production. You can find him on Twitter at @pgarbe.

Infrastructure as code (IaC) has been adopted by many teams in the last few years. It makes provisioning of your infrastructure easy and helps to keep your environments consistent.

But by using declarative templates, you might still miss many practices that you are used to with “normal” code. You’ve probably already felt the pain of each AWS CloudFormation template being a copy and paste from your last project or from Stack Overflow. But can you trust those snippets? How do you roll out improvements or even security fixes across your code base? How do you share best practices within your company or the community?

Fortunately for everyone, AWS published the beta for an important addition to AWS CloudFormation: the AWS Cloud Development Kit (AWS CDK).

What’s the big deal about the AWS CDK?

All your best practices about how to write good AWS CloudFormation templates can now easily be shared within your company or the developer community. At the same time, you can also benefit from others doing the same thing.

For example, think about Amazon DynamoDB. It should be easy to set up in AWS CloudFormation, right? Just a few lines in your template. But wait. Once you’re in production, you realize that you’ve got to set up automatic scaling, regular backups, and most importantly, alarms for all relevant metrics. This can amount to several hundred lines.

Think ahead: Maybe you’ve got to create another application that also needs a DynamoDB database. Do you copy and paste all that YAML code? What happens later, when you find some bugs in your template? Do you apply the fix to both code bases?

With the AWS CDK, you’re able to write a “construct” for your best practice, production-ready DynamoDB database. Share it as an npm package with your company or anyone!

What is the AWS CDK?

Back up a step and see what the AWS CDK looks like. Compared to the declarative approach with YAML (or JSON), the CDK lets you define your infrastructure imperatively. The main language is TypeScript, but several other languages are also supported.

This is what the Hello World example from Hello, AWS CDK! looks like:

import cdk = require('@aws-cdk/cdk');
import s3 = require('@aws-cdk/aws-s3');

class MyStack extends cdk.Stack {
  constructor(parent: cdk.App, id: string, props?: cdk.StackProps) {
    super(parent, id, props);

    new s3.Bucket(this, 'MyFirstBucket', {
      versioned: true
    });
  }
}

class MyApp extends cdk.App {
  constructor(argv: string[]) {
    super(argv);

    new MyStack(this, 'hello-cdk');
  }
}

new MyApp().run();

Apps are the root constructs and can be used directly by the CDK CLI to render and deploy the AWS CloudFormation template.

Apps consist of one or more stacks, which are deployable units that contain information about the Region and account. It’s possible to have an app that deploys different stacks to multiple Regions at the same time.
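
As a sketch of that multi-Region idea, reusing MyStack from the Hello World example above (the env property on the stack props is an assumption about the beta-era API and may differ between CDK versions):

class MyMultiRegionApp extends cdk.App {
  constructor(argv: string[]) {
    super(argv);

    // The same stack definition, deployed to two Regions by a single app
    new MyStack(this, 'hello-cdk-us', { env: { region: 'us-east-1' } });
    new MyStack(this, 'hello-cdk-eu', { env: { region: 'eu-west-1' } });
  }
}

new MyMultiRegionApp().run();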

Stacks include constructs that are representations of AWS resources like a DynamoDB table or AWS Lambda function.

A lib is a construct that typically encapsulates further constructs. With libs, higher-level constructs can be built and reused. As a construct is just TypeScript (or any other supported language), it can be packaged and shared through any package manager.

Constructs

As the CDK is all about constructs, it’s important to understand them. Constructs form a hierarchical structure called the construct tree. You can think of constructs in three levels:

Level 1: AWS CloudFormation resources

This is a one-to-one mapping of existing resources and is automatically generated. It’s the same as the resources that you use currently in YAML. Ideally, you don’t have to deal with these constructs directly.

Level 2: The AWS Construct Library

These constructs are on an AWS service level. They’re opinionated, well-architected, and handwritten by AWS. They come with proper defaults and should make it easy to create AWS resources without worrying too much about the details.

As an example, this is how to create a complete VPC with private and public subnets in all available Availability Zones:

import ec2 = require('@aws-cdk/aws-ec2');

const vpc = new ec2.VpcNetwork(this, 'VPC');

The AWS Construct Library has some nice concepts about least privilege IAM policies, event-driven API actions, security groups, and metrics. For example, IAM policies are automatically created based on your intent. When a Lambda function subscribes to an SNS topic, a policy is created that allows the topic to invoke the function.

AWS services that offer Amazon CloudWatch metrics have functions like metricXxx() and return metric objects that can easily be used to create alarms.

new Alarm(this, 'Alarm', {
  metric: fn.metricErrors(),
  threshold: 100,
  evaluationPeriods: 2,
});

For more information, see AWS Construct Library.

Level 3: Your awesome stuff

Here’s where it gets interesting. As mentioned earlier, constructs are hierarchical. They can be higher-level abstractions based on other constructs. For example, on this level, you can write your own Amazon ECS cluster construct that contains automatic node draining, automatic scaling, and all the right alarms. Or you can write a construct for all necessary alarms that an Amazon RDS database should monitor. It’s up to you to create and share your constructs.
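
To make that concrete, here is a minimal sketch of a shareable level-3 construct, built only from the classes already shown above. The StandardBucket name and its chosen defaults are illustrative assumptions; a real construct would encode your own organization’s standards:

import cdk = require('@aws-cdk/cdk');
import s3 = require('@aws-cdk/aws-s3');

// A hypothetical "company standard" bucket: versioning is always on,
// so every stack that uses this construct inherits the same defaults
export class StandardBucket extends cdk.Construct {
  public readonly bucket: s3.Bucket;

  constructor(parent: cdk.Construct, id: string) {
    super(parent, id);

    this.bucket = new s3.Bucket(this, 'Bucket', {
      versioned: true
    });
  }
}

Published as an npm package, any stack in your company could then create a compliant bucket with new StandardBucket(this, 'Uploads'), and a fix to the construct propagates to every consumer on the next dependency update.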

Conclusion

It’s good that AWS released the CDK publicly at an early stage. The docs are already good, but not everything is covered yet. Not all AWS services have an AWS Construct Library module defined (level 2). Many have only the plain AWS CloudFormation constructs (level 1).

Personally, I think the AWS CDK is a huge step forward, as it allows you to reuse AWS CloudFormation code and share it with others. It makes it easy to apply company standards and lets people work on awesome features and spend less time writing “boring” code.

Pick the Right Tool for your IT Challenge

Post Syndicated from Markus Ostertag original https://aws.amazon.com/blogs/aws/pick-the-right-tool-for-your-it-challenge/

This guest post is by AWS Community Hero Markus Ostertag. As CEO of the Munich-based ad-tech company Team Internet AG, Markus is always trying to find the best ways to leverage the cloud, loves to work with cutting-edge technologies, and is a frequent speaker at AWS events and the AWS user group Munich that he co-founded in 2014.

Picking the right tools or services for a job is a huge challenge in IT—every day and in every kind of business. With this post, I want to share some strategies and examples that we at Team Internet used to leverage the huge “tool box” of AWS to build better solutions and solve problems more efficiently.

Use existing resources or build something new? A hard decision

The usual day-to-day work of an IT engineer, architect, or developer is building a solution for a problem or transferring a business process into software. To achieve this, we usually tend to use already existing architectures or resources and build an “add-on” to them.

With the rise of microservices, we all learned that modularization and decoupling are important for being scalable and extendable. This brought us to a different type of software architecture. In reality, though, we still tend to use already existing resources, like the same database or existing (maybe not fully utilized) Amazon EC2 instances, because it seems easier than building something new.

Stacks as “next level microservices”?

We at Team Internet don’t use the vocabulary of microservices, but tend to speak about stacks and building blocks for the different use cases. Our approach applies the idea of microservices to everything, including the database and other resources that are necessary for the specific problem we need to address.

It’s not about “just” dividing the software and code into different modules. The whole infrastructure is separated based on different needs. Each of those parts of the full architecture is our stack, which is as independent as possible from everything else in the whole system. It only communicates loosely with the other stacks or parts of the infrastructure.

Benefits of this mindset = independence and flexibility

  • Choosing the right parts. For every use case, we can choose the components or services that are best suited for the specific challenges and don’t need to work around limitations. This is especially true for databases, as we can choose from the whole palette instead of trying to squeeze requirements into a DBMS that isn’t built for that. We can address the differing needs of workloads, like write-heavy vs. read-heavy or structured vs. unstructured data.
  • Rebuilding at will. We’re flexible in rebuilding whole stacks as they’re only loosely coupled. Because of this, a team can build a proof-of-concept with new ideas or services and run them in parallel on production workload without interfering or harming the production system.
  • Lowering costs. Because AWS takes on the operational overhead of running multiple resources (“no undifferentiated heavy lifting”), we just need to look at the service pricing. Most AWS pricing schemes support this stack approach. For databases, you pay either for throughput (Amazon DynamoDB) or per instance (Amazon RDS, etc.). At the throughput level it’s simple: you split the throughput you provisioned for one table across several tables without any overhead. At the instance level, the pricing is linear, so an r4.xlarge is half the price of an r4.2xlarge. So why not run two r4.xlarge instances and split the workload?
  • Designing for resilience. This approach also helps your architecture to be more reliable and resilient by default. As the different stacks are independent from each other, the scaling is much more granular. Scaling on larger systems is often provided with a higher “security buffer,” and failures (hardware, software, fat fingers, etc.) only happen on a small part of the whole system.
  • Taking ownership. A nice side effect we’re seeing now as we use this methodology is the positive effect on ownership and responsibility for our teams. Because of those stacks, it is easier to pinpoint and fix issues but also to be transparent and clear on who is responsible for which stack.

Benefits demand efforts, even with the right tool for the job

Every approach has its downsides. Here, it is obviously the additional development and architecture effort that must be invested to build such systems.

Therefore, we decided to always keep the goal of a perfect system in mind: independent stacks with reliable, loosely coupled processes between them. In reality, we sometimes break our own rules and cheat here and there. Even when we do, having this goal helps us build better systems, and at least we know exactly at what point we risk losing the benefits. I hope the explanation and insights here help you to pick the right tool for the job.

Using AWS AI and Amazon Sumerian in IT Education

Post Syndicated from Cyrus Wong original https://aws.amazon.com/blogs/aws/using-aws-ai-and-amazon-sumerian-in-it-education/

This guest post is by AWS Machine Learning Hero, Cyrus Wong. Cyrus is a Data Scientist at the Hong Kong Institute of Vocational Education (Lee Wai Lee) Cloud Innovation Centre. He has achieved all nine AWS Certifications and enjoys sharing his AWS knowledge with others through open-source projects, blog posts, and events.

Our institution (IVE) provides IT training to several thousand students every year, and one of our courses successfully applied for AWS Promotional Credits. We recently built an open-source project called “Lab Monitor,” which uses AWS AI, serverless, and AR/VR services to enhance our learning experience and gather data to understand what students are doing during labs.

Problem

One of the common problems with lab activities is that students are often doing things that have nothing to do with the course (such as watching videos or playing games). And students can easily copy answers from their classmates because the lab answers are in softcopy. Teachers struggle to challenge students, as there is generally only one correct answer. No one knows which students are working on the lab and which are copying from one another!

Solution

Lab Monitor changes the assessment model from just the final result to the entire development process. We can support and monitor students using AWS AI services.

The system consists of the following parts:

  • A lab monitor agent
  • A lab monitor collector
  • An AR lab assistant

Lab monitor agent

The Lab Monitor agent is a Python application that runs on a student’s computer and monitors the student’s activities. All information is periodically sent to AWS. To identify students and protect the API gateway, each student has a unique API key with a usage limit. Its functions include:

  • Capturing all keyboard and pointer events. This ensures that students are really working on the exercise, as it is impossible to complete a coding task without using the keyboard and pointer! Also, we encourage students to use shortcuts, so we need that information as an indicator.
  • Monitoring and controlling PC processes. Teachers can stop students from running programs that are irrelevant to the lab. For computer-based tests, we can kill all browsers and communication software. Detailed process information is also important for deciding whether or not to upgrade hardware!
  • Capturing screens. Amazon Rekognition can detect videos or inappropriate content, and extracted text content can trigger an Amazon Sumerian host to talk to a student automatically. It is impossible for a teacher to monitor all student screens! We use a presigned URL with S3 Transfer Acceleration to speed up the image upload (see the sketch after this list).
  • Uploading source code to AWS when students save their code. It is good to know when students complete tasks and to give support to those students who are slower!
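
The post doesn’t include the upload code itself, but the presigned-URL idea can be sketched as follows. This is an illustrative example using the AWS SDK for JavaScript; the bucket name, key scheme, and expiry are assumptions, and the bucket must have Transfer Acceleration enabled:

import AWS = require('aws-sdk');

// Sign against the Transfer Acceleration endpoint so uploads go
// through the nearest edge location
const s3 = new AWS.S3({ useAccelerateEndpoint: true, signatureVersion: 'v4' });

// Return a short-lived URL the agent can PUT a screenshot to directly,
// without storing long-term AWS credentials on the student's machine
export function screenshotUploadUrl(studentId: string): string {
  return s3.getSignedUrl('putObject', {
    Bucket: 'lab-monitor-screens',                 // hypothetical bucket
    Key: `screens/${studentId}/${Date.now()}.png`, // illustrative key scheme
    ContentType: 'image/png',
    Expires: 60                                    // URL validity in seconds
  });
}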

Lab monitor collector

The Lab Monitor collector is an AWS Serverless Application Model (AWS SAM) application that collects data and provides an API for the AR lab assistant. Optionally, a teacher can grade students immediately every time they save code by running the unit tests inside AWS Lambda. The collector constantly saves all data into an Amazon S3 data lake, and teachers can use Amazon Athena to analyze the data.

To save costs, a scheduled Lambda function checks the teacher’s class calendar every 15 minutes. When there is an upcoming class, it automatically creates a Kinesis data stream and a Kinesis Data Analytics application, giving teachers a near real-time view of all student activity. A sketch of the stream-creation step follows.
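
The post doesn’t show the scheduling code, so this is only a rough sketch of the create-on-demand step, again using the AWS SDK for JavaScript; the stream naming scheme and shard count are assumptions:

import AWS = require('aws-sdk');

const kinesis = new AWS.Kinesis();

// Create a per-class stream shortly before the lesson starts ...
export async function openClassStream(classId: string): Promise<void> {
  await kinesis.createStream({
    StreamName: `lab-monitor-${classId}`,  // hypothetical naming scheme
    ShardCount: 1
  }).promise();
}

// ... and delete it afterwards, so the stream only costs money
// while a class is actually running
export async function closeClassStream(classId: string): Promise<void> {
  await kinesis.deleteStream({
    StreamName: `lab-monitor-${classId}`
  }).promise();
}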

AR Lab Assistant

The AR lab assistant is an Amazon Sumerian application that reminds students to work on their lab exercises. It sends a camera image to Amazon Rekognition and gets back a student ID.

A Sumerian host, Christine, uses Amazon Polly to speak to students when something happens:

  • When students pass a unit test, she says congratulations.
  • When students watch movies, she scolds them with the movie actor’s name, such as Tom Cruise.
  • When students watch porn, she scolds them.
  • When students do something wrong, such as forgetting to set up the Python interpreter, she reminds them to set it up.

Students can also ask her questions, for example, to check their overall progress. The host can connect to a Lex chatbot. Students’ conversations are saved in DynamoDB with the sentiment analysis results provided by Amazon Comprehend.

The student screen is like a projector inside the Sumerian application.

Christine: “Stop watching dirty things during the lab! Tom Cruise should not be able to help you write Python code!”

Simplified Architectural Diagrams

Demo video

AR Lab Assistant reaction: https://youtu.be/YZCR2aROBp4

Conclusion

With the combined power of various AWS services, students can now concentrate on their lab exercises and stop thinking about copying answers from each other! They feel that the class is much more fun with Christine. We built the project in about four months, and it is still evolving. In a future version, we plan to build a machine learning model to predict students’ final grades based on their class behavior.

Lastly, we would like to say thank you to AWS Educate, which provided us with AWS credits, and to my AWS Academy student developer team: Mike, Long, Mandy, Tung, Jacqueline, and Hin from the IVE Higher Diploma in Cloud and Data Centre Administration. They submitted this application to the AWS Artificial Intelligence (AI) Hackathon and just learned that they received the 3rd place prize!

And Now a Word from Our AWS Heroes…

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/and-now-a-word-from-our-aws-heroes/

Whew! Now that AWS re:Invent 2018 has wrapped up, the AWS Blog Team is taking some time to relax, recharge, and prepare for 2019.

In order to wrap up the year in style, we have asked several of the AWS Heroes to write guest blog posts on an AWS-related topic of their choice. You will get to hear from Machine Learning Hero Cyrus Wong, Community Hero Markus Ostertag, Container Hero Philipp Garbe, and several others.

Each of these Heroes brings a fresh and unique perspective to the AWS Blog and I know that you will enjoy hearing from them. We’ll have the first post up in a day or two, so stay tuned!

Jeff;

Announcing AWS Machine Learning Heroes (plus new AWS Community Heroes)

Post Syndicated from Cameron Peron original https://aws.amazon.com/blogs/aws/announcing-aws-machine-learning-heroes-plus-new-aws-community-heroes/

The AWS Heroes program helps developers find inspiration and build skills from community leaders who have extensive AWS knowledge and a passion for sharing their expertise with others. The program continues to evolve to align with technology trends and recognize community leaders who focus on specific technical disciplines.

Today we are excited to launch a new category of AWS Heroes: AWS Machine Learning Heroes.

Introducing AWS Machine Learning Heroes
AWS Machine Learning Heroes are developers and academics who are passionate enthusiasts of emerging AI/ML technologies. Proficient with deep learning frameworks such as MXNet, PyTorch, and TensorFlow, they are early adopters of Amazon ML technologies and enjoy teaching others how to use machine learning APIs such as Amazon Rekognition (computer vision) and Amazon Comprehend (natural language processing).

Developers at every level of ML proficiency, from beginner to advanced, can learn and apply ML at speed and scale through Hero blog posts, videos, and sessions, as well as direct engagement. Our initial cohort of Machine Learning Heroes includes:

Agustinus Nalwan – Melbourne, Australia

Agustinus (aka Gus) is the Head of AI at Carsales. He has extensive experience in deep learning and in setting up distributed training EC2 clusters for deep learning on AWS, and is an advocate of Amazon SageMaker to simplify the machine learning pipeline.

Cyrus Wong – Hong Kong

Cyrus Wong is a Data Scientist in the IT Department of the Hong Kong Institute of Vocational Education. He has achieved all nine AWS Certifications and builds AI/ML projects with his students using Amazon Rekognition, Amazon Lex, Amazon Polly, and Amazon Comprehend.

Gillian McCann – Belfast, United Kingdom

Gillian is Head of Cloud Engineering & AI at Workgrid Software. A passionate advocate of cloud-native architecture, Gillian leads a team that explores how AWS conversational AI can be leveraged to improve the employee experience.

Matthew Fryer – London, United Kingdom

Matt leads a team that develops new data science and algorithm functions at Hotels.com and the Expedia Affiliate Network. He has spoken at AWS Summits and other conferences on why machine learning is important to Hotels.com.

Sung Kim – Seoul, South Korea

Sung is an Associate Professor of Computer Science at the Hong Kong University of Science and Technology. His online deep learning course, which includes how to use AWS ML services, has more than 4M views and 27K subscribers.

Please meet our latest AWS Community Heroes
Also this month we are excited to introduce you to four new AWS Community Heroes:

John Varghese – Mountain View, USA

John is a Cloud Steward at Intuit, responsible for the AWS infrastructure of Intuit’s Futures Group. He runs the AWS Bay Area meetup in the San Francisco Peninsula and has organized multiple AWS Community Day events in the Bay Area.

Serhat Can – Istanbul, Turkey

Serhat is a Technical Evangelist at Atlassian. He is a community organizer as well as a speaker. As a DevOpsDays core team member, he helps local DevOps communities organize events in 70+ countries and counting.

Bryan Chasko – Las Cruces, USA

Bryan is Chief Technology Officer at Electronic Caregiver. A Solutions Architect and Big Data specialist, Bryan uses Amazon Sumerian to apply augmented- and virtual-reality-based solutions to real-world business challenges.

Sathyajith Bhat – Bangalore, India

Sathyajith Bhat is a DevOps Engineer for Adobe I/O. He is the author of Practical Docker with Python and an organizer of the AWS Bangalore Users Group Meetup, AWS Community Day Bangalore, and Barcamp Bangalore.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

Meet the Newest AWS Heroes (September 2018 Edition)

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-heroes-september-2018-edition/

AWS Heroes are passionate AWS enthusiasts who use their extensive knowledge to teach others about all things AWS across a range of mediums. Many Heroes eagerly share knowledge online via forums, social media, or blogs; while others lead AWS User Groups or organize AWS Community Day events. Their extensive efforts to spread AWS knowledge have a significant impact within their local communities. Today we are excited to introduce the newest AWS Heroes:

Jaroslaw Zielinski – Poznan, Poland

AWS Community Hero Jaroslaw Zielinski is a Solutions Architect at Vernity in Poznan, Poland, where he supports customers on their road to the cloud using cloud adoption patterns. Jaroslaw is a leader of AWS User Group Poland, which operates in seven different cities around Poland. Additionally, he connects the community with the biggest IT conferences in the region – PLNOG and DevOpsDay, to name just a few.

He supports numerous projects connected with evangelism, like Zombie Apocalypse Workshops or Cloud Builder’s Day. Bringing together various IT communities, he hosts the conference Cloud & Datacenter Day – the biggest community conference in Poland. In addition, his passion for IT flows into his own blog, Popołudnie w Sieci, and he also publishes in various professional papers.

Jerry Hargrove – Kalama, USA

AWS Community Hero Jerry Hargrove is a cloud architect, developer and evangelist who guides companies on their journey to the cloud, helping them to build smart, secure and scalable applications. Currently with Lucidchart, a leading visual productivity platform, Jerry is a thought leader in the cloud industry and specializes in AWS product and services breakdowns, visualizations and implementation. He brings with him over 20 years of experience as a developer, architect & manager for companies like Rackspace, AWS and Intel.

You can find Jerry on Twitter compiling his famous sketch notes and creating Lucidchart templates that pinpoint practical tips for working in the cloud and helping developers increase efficiency. Jerry is the founder of the AWS Meetup Group in Salt Lake City, often contributes to meetups in the Pacific Northwest and San Francisco Bay area, and speaks at developer conferences worldwide. Jerry holds several professional AWS certifications.

Martin Buberl – Copenhagen, Denmark

AWS Community Hero Martin Buberl brings the New York hustle to Scandinavia. As VP Engineering at Trustpilot he is on a mission to build the best engineering teams in the Nordics and Baltics. With a person-centered approach, his focus is on high-leverage activities to maximize impact, customer value and iteration speed — and utilizing cloud technologies checks all those boxes.

His cloud-obsession made him an early adopter and evangelist of all types of AWS services throughout his career. Nowadays, he is especially passionate about Serverless, Big Data and Machine Learning and excited to leverage the cloud to transform those areas.

Martin is an AWS User Group Leader, organizer of the AWS Community Day Nordics, and founder of the AWS Community Nordics Slack. He has spoken at multiple international AWS events — AWS User Groups, AWS Community Days, and AWS Global Summits — and is looking forward to continuing to share his passion for software engineering and cloud technologies with the community.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

AWS Heroes – New Categories Launch

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/aws-heroes-new-categories-launch/

As you may know, in 2014 we launched the AWS Community Heroes program to recognize a vibrant group of AWS experts. These standout individuals use their extensive knowledge to teach customers and fellow techies about AWS products and services across a range of mediums. As AWS grows, new groups of Heroes emerge.

Today, we’re excited to recognize prominent community leaders by expanding the AWS Heroes program. Unlike Community Heroes (who tend to focus on advocating a wide range of AWS services within their community), these new Heroes are specialists who focus their efforts and advocacy on a specific technology. Our first new Heroes are the AWS Serverless Heroes and AWS Container Heroes. Please join us in welcoming them as the passion and enthusiasm for AWS knowledge-sharing continues to grow in technical communities.

AWS Serverless Heroes

Serverless Heroes are early adopters and spirited pioneers of the AWS serverless ecosystem. By evangelizing AWS serverless technologies online and in person, as well as through open source contributions to GitHub and the AWS Serverless Application Repository, these Serverless Heroes help evolve the way developers, companies, and the community at large build modern applications. Our initial cohort of Serverless Heroes includes:

Yan Cui

Aleksandar Simovic

Forrest Brazeal

Marcia Villalba

Erica Windisch

Peter Sbarski

Slobodan Stojanović

Rob Gruhl

Michael Hart

Ben Kehoe

Austen Collins

Announcing AWS Container Heroes
AWS Container Heroes are prominent trendsetters and experts with AWS container services. As technical thought leaders, they have a deep understanding of our container services, are passionate about learning the latest trends and developments, and are excited to share what they learn with the AWS developer community. Here is the initial group of AWS Container Heroes:

Casey Lee

Tung Nguyen

Philipp Garbe

Yusuke Kuoka

Mike Fielder

The trends within the AWS community are ever-changing. We look forward to recognizing a wide variety of Heroes in the future. Stay tuned for additional updates to the Hero program in the coming months, and be sure to visit the Heroes website to learn more.

Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk about, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.

Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot supporting your DevOps team to detect and solve incidents on AWS.

In close collaboration with his brother Andreas Wittig, the Wittig brothers are actively creating AWS related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io where they share their knowledge about AWS with the community. The Wittig brothers also published a bunch of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers are co-organizing the AWS user group in Stuttgart.

Fernando Hönig

Fernando is an experienced infrastructure solutions leader, holding five AWS Certifications, with extensive IT architecture and management experience in a variety of market sectors. Working as a cloud architect consultant in the United Kingdom since 2014, Fernando has built an online community for Hispanic speakers worldwide.

Fernando founded a LinkedIn group, a Slack community, and a YouTube channel, all of them named “AWS en Español,” and started to run a monthly webinar via YouTube streaming where different leaders discuss aspects of and challenges around the AWS Cloud.

Over the last 18 months, he has been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new user groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.

Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010 and has been essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before being self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then he organizes the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten is maintaining or contributing to several projects on Github, like test frameworks for AWS Lambda, Amazon Alexa, or developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high-concurrency web architecture. Before helping run BootDev, she worked at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events, including AWS Tech Meetups and happy hours, and gathering AWS talent together to discuss the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and started full support of AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014 and as of now 12 Meetups have been conducted, focusing on various AWS technologies. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is a Red Hat Linux Technologies and AWS Cloud Technologies trainer as well.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Introducing Our NEW AWS Community Heroes (Summer 2017 Edition)

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-our-new-aws-community-heroes-summer-2017-edition/

The AWS Community Heroes program seeks to recognize and honor the most engaged Amazon Web Services developers who have had a positive impact in the global community. If you are interested in learning more about the AWS Community Heroes program, or are curious about ways to get involved with your local AWS community, you can hear the AWS Heroes talk directly about the program.

Now that you know more about the AWS Community Hero program, I am elated to introduce to you all the latest AWS Heroes to join the fold:

These guys and gals share their passion for AWS and cloud technologies with the technical community by contributing their time and knowledge across social media and via in-person events.

Ben Kehoe

Ben Kehoe works in the field of Cloud Robotics—using the internet to enable robots to do more and better things—an area of IoT involving computation in the cloud and at the edge, Big Data, and machine learning. Approaching cloud computing from this angle, Ben focuses on developing business value rapidly through serverless (and service full) applications.

At iRobot, Ben guided the transition to a serverless architecture on AWS based on AWS Lambda and AWS IoT to support iRobot’s connected robot fleet. This architecture enables iRobot to focus on its core mission of building amazing robots with a minimum of development and operations effort.

Ben seeks to amplify voices from dev, operations, and security to help the community shape the evolution of serverless and event-driven designs for IoT and cloud computing more broadly.

Marcia Villalba

Marcia is a Senior Full-stack Developer at Rovio, the creators of Angry Birds. She is originally from Uruguay but has been living in Finland for almost a decade.

She has been designing and developing software professionally for over 10 years. For more than four years she has been working with AWS, including the past year, during which she has worked mostly with serverless technologies.

Marcia runs her own YouTube channel, in which she publishes at least one new video every week. In her channel, she focuses on teaching how to use AWS serverless technologies and managed services. In addition to her professional work, she is the Tech Lead in “Girls in Tech” Helsinki, helping to inspire more women to enter into technology and programming.

Joshua Levy

Joshua Levy is an entrepreneur, engineer, writer, and serial startup technologist and advisor in cloud, AI, search, and startup scaling.

He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web. The collaborative project welcomes new contributors or editors, and anyone who wishes to ask or answer questions.

Josh has years of experience in hands-on software engineering and leadership at fast-growing consumer and enterprise startups, including Viv Labs (acquired by Samsung) and BloomReach (where he led engineering and AWS infrastructure), and a background in AI and systems research at SRI and mathematics at Berkeley. He has a passion for improving how we share knowledge on complex engineering, product, or business topics. If you share any of these interests, reach out on Twitter or find his contact details on GitHub.

Michael Ezzell

Michael Ezzell is a frequent contributor of detailed, in-depth solutions to questions spanning a wide variety of AWS services on Stack Overflow and other sites on the Stack Exchange Network.

Michael is the resident DBA and systems administrator for Online Rewards, a leading provider of web-based employee recognition, channel incentive, and customer loyalty programs, where he was a key player in the company’s full transition to the AWS platform.

Based in Cincinnati, and known to coworkers and associates as “sqlbot,” he also provides design, development, and support services to freelance consulting clients for AWS services and MySQL, as well as broadcast & cable television and telecommunications technologies.

Thanos Baskous

Thanos Baskous is a San Francisco-based software engineer and entrepreneur who is passionate about designing and building scalable and robust systems.

He co-founded the Open Guide to AWS, which is one of the most popular AWS resources and communities on the web.

At Twitter, he built infrastructure that allows engineers to seamlessly deploy and run their applications across private data centers and public cloud environments. He previously led a team at TellApart (acquired by Twitter) that built an internal platform-as-a-service (Docker, Apache Aurora, Mesos on AWS) in support of a migration from a monolithic application architecture to a microservice-based architecture. Before TellApart, he co-founded AWS-hosted AdStack (acquired by TellApart) in order to automatically personalize and improve the quality of content in marketing emails and email newsletters.

Rob Gruhl

Rob is a senior engineering manager located in Seattle, WA. He supports a team of talented engineers at Nordstrom Technology exploring and deploying a variety of serverless systems to production.

From the beginning of the serverless era, Rob has been exclusively using serverless architectures to allow a small team of engineers to deliver incredible solutions that scale effortlessly and wake them in the middle of the night rarely. In addition to a number of production services, together with his team Rob has created and released two major open source projects and accompanying open source workshops using a 100% serverless approach. He’d love to talk with you about serverless, event-sourcing, and/or occasionally-connected distributed data layers.

Feel free to follow these great AWS Heroes on Twitter and check out their blogs. It is exciting to have them all join the AWS Community Heroes program.

–  Tara

Event: AWS Community Day in San Francisco

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/event-aws-community-day-in-san-francisco/

Spring is in the air, new technologies are budding, and the community is buzzing.

Which means it’s also springtime in the city by the bay and AWS is thrilled to announce an exciting event, AWS Community Day.

AWS Community Day is a community-led event in San Francisco where AWS Community Heroes, user group leaders, and other AWS enthusiasts will come together to deliver a full day of technical sessions on the latest in cloud computing.

During the event, you will have the opportunity to learn about the latest cloud computing trends, optimization best practices, and practical insights into securing your infrastructure. Additionally, you will have the chance to discuss approaches to building healthy AWS meetups and community knowledge-sharing sessions.

To learn more about AWS Community Day in San Francisco, take a look at the blog post written by Community Hero Eric Hammond: https://alestic.com/2017/05/aws-community-day-san-francisco/.

Don’t miss this great event; register today to take part in AWS Community Day.

Tara

Welcome to the Newest AWS Community Heroes (Spring 2017)

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/welcome-to-the-newest-aws-community-heroes-spring-2017/

We would like to extend a very warm welcome to the newest AWS Community Heroes:

AWS Community Heroes share their knowledge and demonstrate their enthusiasm for AWS in a plethora of ways. They go above and beyond to share AWS insights via social media, blog posts, open source projects, and through in-person events, user groups, and workshops.


Mark Nunnikhoven
Mark Nunnikhoven explores the impact of technology on individuals, organizations, and communities through the lens of privacy and security. Asking the question, “How can we better protect our information?” Mark studies the world of cybercrime to better understand the risks and threats to our digital world.

As the Vice President of Cloud Research at Trend Micro, a long time Amazon Web Services Advanced Technology Partner and provider of security tools for the AWS Cloud, Mark uses that knowledge to help organizations around the world modernize their security practices by taking advantage of the power of the AWS Cloud.

With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through his writing, speaking, teaching, and by engaging with the AWS community.

SangUk Park
SangUk Park is a Chief Solutions Architect at Megazone, which became Korea’s first AWS Partner in 2012 and is the only AWS Premier Consulting Partner to provide AWS support in Korean.

He served as a System Architect for KT’s public cloud and VDI design, and led the system operation of YDOnline and Nexon Japan, one of the leading online gaming companies. Certified both as an AWS Solutions Architect – Professional and AWS DevOps Engineer – Professional, SangUk has authored AWS books, including DevOps and AWS Cloud Design Patterns, and translated four books related to the AWS Cloud.

He’s been making efforts to revitalize the local AWS Korea User Group community as co-leader by presenting at AWS Korea User Group meetings and AWS Summits, and helping to establish small group gatherings such as the AWSKRUG System Engineers in Gangnam. Also, he has done many hands-on labs and has been running a booth as a leader of the user groups at AWS events to cultivate developers and system engineers.

SangUk maintains a close relationship with the Japanese AWS User Group (JAWS UG), using his excellent Japanese communication skills and experiences in Japan. He makes every effort to participate in events held between Japanese and Korean user groups as a facilitator and translator, and will promote cross-regional communications beyond APAC going forward.

James Hall
James Hall has been working in the digital sector for over a decade. He is the author of the popular jsPDF library and a founder/director of Parallax, a digital agency in the UK. He’s worked as a software developer on a wide variety of projects, from LED billboards and car-unlocking apps to large web applications and tools.

Parallax built an online recording studio for David Guetta and UEFA using serverless technology shortly after API Gateway was released. Since then, they have consulted on various serverless projects and technologies. They run the AWS Meetup in Leeds and help companies around the world build their businesses online. James has contributed to and promotes the Serverless Framework, which allows you to elegantly build web applications on top of Lambda and related services.

Drew Firment
Drew Firment works with business leaders and technology teams from organizations that seek to accelerate cloud adoption. He has over twenty years of experience leading large-scale technology programs, enterprise platforms, and cultural transformations in a fast-paced agile environment.

After migrating Capital One’s early adopters of AWS into production, his focus shifted toward accelerating a scalable and sustainable transition to cloud computing. Drew pioneered the intersection of strategy, governance, engineering, agile, and education to drive an enterprise-wide talent transformation. He founded Capital One’s cloud engineering college and implemented an innovative outcome-based curriculum oriented towards learning communities. Several thousand employees have enrolled in his cloud-fluency program, enabling well over 1,000 AWS certifications since its inception.

Drew has earned all three of the AWS associate-level certifications, enjoys developing custom Amazon Alexa skills using AWS Lambda, and believes serverless is the future of cloud computing. He also serves as an advisory partner to A Cloud Guru and is editor-in-chief of their community-sourced publication.

Welcome
Please join me in welcoming our newest AWS Community Heroes!

-Ana