Tag Archives: cron

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As a best practice to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon EC2 instance running Amazon Linux, launched from an Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window so that Systems Manager can patch your instances. To do this, the IAM user you are using for this post must have the iam:PassRole permission, which allows that user to pass an IAM role to an AWS service. In this example, when you assign a task to a maintenance window, the role is passed to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will use the AWS CLI for most of the steps in this blog post; our documentation shows you how to get started with the AWS CLI. Make sure the AWS CLI is installed and configured with an access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.
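If you still need to attach those two managed policies to your IAM user, one way to do it from the AWS CLI is sketched below; YourIamUser is a placeholder you would replace with the name of the IAM user you are using for this post.

$ aws iam attach-user-policy --user-name YourIamUser --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-user-policy --user-name YourIamUser --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess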

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured managed policy for this purpose, called AmazonEC2RoleforSSM, that you can attach to the new role.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM that uses the trust policy you just created. If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM
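If you want to verify this wiring before moving on, the following optional checks should show the EC2SSM role inside the EC2SSM-IP instance profile and the AmazonEC2RoleforSSM policy attached to the role.

$ aws iam get-instance-profile --instance-profile-name EC2SSM-IP
$ aws iam list-attached-role-policies --role-name EC2SSM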

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Use one of the following options, depending on whether you are launching a new Amazon EC2 instance or using an existing one:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId
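If you would rather have the AWS CLI wait for you instead of polling that command, the following optional waiter blocks until the instance passes its status checks.

$ aws ec2 wait instance-status-ok --instance-ids YourInstanceId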

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.
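If you want a quick way to confirm the agent is installed and healthy, checks along the following lines (run on the instance itself) usually suffice; the exact service command depends on the distribution and init system, so treat this as a sketch.

$ sudo status amazon-ssm-agent            # upstart-based Amazon Linux AMIs
$ sudo systemctl status amazon-ssm-agent  # systemd-based distributions
$ tail -n 50 /var/log/amazon/ssm/amazon-ssm-agent.log
$ tail -n 50 /var/log/amazon/ssm/errors.log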

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole that uses the trust policy you just created. If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonSSMMaintenanceWindowRole) to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole
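As an optional sanity check, you can confirm that the policy is attached before you reference the role in a maintenance window.

$ aws iam list-attached-role-policies --role-name MaintenanceWindowRole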

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you can determine the approved patches that will be included in your patch baseline. In this example, you add all Critical severity patches to the patch baseline as soon as they are released, by setting the Auto approval delay to 0 days. By setting the Auto approval delay to 2 days, you add to this patch baseline the Important, Medium, and Low severity patches two days after they are released.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }
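If you later need to confirm which baseline is registered to which patch group, the following optional command lists all patch group registrations in the current Region.

$ aws ssm describe-patch-groups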

C.  Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.
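Maintenance windows also accept rate expressions and other cron schedules. As a sketch (the window names and timings here are only examples), a weekly window or a Sunday 4:00 A.M. UTC window could be created like this:

$ aws ssm create-maintenance-window --name WeeklyPatching --schedule "rate(7 days)" --duration 4 --cutoff 1 --allow-unassociated-targets
$ aws ssm create-maintenance-window --name SundayMorning --schedule "cron(0 0 4 ? * SUN *)" --duration 4 --cutoff 1 --allow-unassociated-targets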

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The following command includes these options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }
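To double-check what you registered, these optional commands list the targets and tasks currently attached to your maintenance window.

$ aws ssm describe-maintenance-window-targets --window-id YourMaintenanceWindowId
$ aws ssm describe-maintenance-window-tasks --window-id YourMaintenanceWindowId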

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. After the maintenance window has run, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}
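If an execution does not show SUCCESS, or you want more detail than this summary, you can drill into the individual task executions; the execution ID comes from the previous output.

$ aws ssm describe-maintenance-window-execution-tasks --window-execution-id YourWindowExecutionId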

D.  Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of compliant and noncompliant instances for each compliance category.
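For a per-instance view of patch state, including installed, missing, and failed patch counts, the following optional command can help.

$ aws ssm describe-instance-patch-states --instance-ids YourInstanceId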

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

Cloud Babble: The Jargon of Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/what-is-cloud-computing/

Cloud Babble

One of the things we in the technology business are good at is coming up with names, phrases, euphemisms, and acronyms for the stuff that we create. The Cloud Storage market is no different, and we’d like to help by illuminating some of the cloud storage related terms that you might come across. We know this is just a start, so please feel free to add in your favorites in the comments section below and we’ll update this post accordingly.

Clouds

The cloud is really just a collection of purpose-built servers. In a public cloud the servers are shared between multiple unrelated tenants. In a private cloud, the servers are dedicated to a single tenant or sometimes a group of related tenants. A public cloud is off-site, while a private cloud can be on-site or off-site – or on-prem or off-prem, if you prefer.

Both Sides Now: Hybrid Clouds

Speaking of on-prem and off-prem, there are Hybrid Clouds or Hybrid Data Clouds depending on what you need. Both are based on the idea that you extend your local resources (typically on-prem) to the cloud (typically off-prem) as needed. This extension is controlled by software that decides, based on rules you define, what needs to be done where.

A Hybrid Data Cloud is specific to data. For example, you can set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.

A Hybrid Cloud is similar to a Hybrid Data Cloud except it also extends compute. For example, at the end of the quarter, you can spin up order processing application instances off-prem as needed to add to your on-prem capacity. Of course, determining where the transactional data used and created by these applications resides can be an interesting systems design challenge.

Clouds in my Coffee: Fog

Typically, public and private clouds live in large buildings called data centers. Full of servers, networking equipment, and clean air, data centers need lots of power, lots of networking bandwidth, and lots of space. This often limits where data centers are located. The further away you are from a data center, the longer it generally takes to get your data to and from there. This is known as latency. That’s where “Fog” comes in.

Fog is often referred to as clouds close to the ground. Fog, in our cloud world, is basically having a “little” data center near you. This can make data storage and even cloud-based processing faster for everyone nearby. Data, and less so processing, can be transferred to/from the Fog to the Cloud when time is less of a factor. Data could also be aggregated in the Fog and sent to the Cloud. For example, your electric meter could report its minute-by-minute status to the Fog for diagnostic purposes. Then once a day the aggregated data could be sent to the power company’s Cloud for billing purposes.

Another term used in place of Fog is Edge, as in computing at the Edge. In either case, a given cloud (data center) usually has multiple Edges (little data centers) connected to it. The connection between the Edge and the Cloud is sometimes known as the middle-mile. The network in the middle-mile can be less robust than that required to support a stand-alone data center. For example, the middle-mile can use 1 Gbps lines, versus a data center, which would require multiple 10 Gbps lines.

Heavy Clouds No Rain: Data

We’re all aware that we are creating, processing, and storing data faster than ever before. All of this data is stored in either a structured or more likely an unstructured way. Databases and data warehouses are structured ways to store data, but a vast amount of data is unstructured – meaning the schema and data access requirements are not known until the data is queried. A large pool of unstructured data in a flat architecture can be referred to as a Data Lake.

A Data Lake is often created so we can perform some type of “big data” analysis. In an oversimplified example, let’s extend the lake metaphor a bit and ask the question: “How many fish are in our lake?” To get an answer, we take a sufficient sample of our lake’s water (data), count the number of fish we find, and extrapolate based on the size of the lake to get an answer within a given confidence interval.

A Data Lake is usually found in the cloud, an excellent place to store large amounts of non-transactional data. Watch out as this can lead to our data having too much Data Gravity or being locked in the Hotel California. This could also create a Data Silo, thereby making a potential data Lift-and-Shift impossible. Let me explain:

  • Data Gravity — Generally, the more data you collect in one spot, the harder it is to move. When you store data in a public cloud, you have to pay egress and/or network charges to download the data to another public cloud or even to your own on-premise systems. Some public cloud vendors charge a lot more than others, meaning that depending on your public cloud provider, your data could financially have a lot more gravity than you expected.
  • Hotel California — This is like Data Gravity but to a lesser scale. Your data is in the Hotel California if, to paraphrase, “your data can check out any time you want, but it can never leave.” If the cost of downloading your data is limiting the things you want to do with that data, then your data is in the Hotel California. Data is generally most valuable when used, and with cloud storage that can include archived data. This assumes of course that the archived data is readily available, and affordable, to download. When considering a cloud storage project always figure in the cost of using your own data.
  • Data Silo — Over the years, businesses have suffered from organizational silos as information is not shared between different groups, but instead needs to travel up to the top of the silo before it can be transferred to another silo. If your data is “trapped” in a given cloud by the cost it takes to share such data, then you may have a Data Silo, and that’s exactly opposite of what the cloud should do.
  • Lift-and-Shift — This term is used to define the movement of data or applications from one data center to another or from on-prem to off-prem systems. The move generally occurs all at once and once everything is moved, systems are operational and data is available at the new location with few, if any, changes. If your data has too much gravity or is locked in a hotel, a data lift-and-shift may break the bank.

I Can See Clearly Now

Hopefully, the cloudy terms we’ve covered are well, less cloudy. As we mentioned in the beginning, our compilation is just a start, so please feel free to add in your favorite cloud term in the comments section below and we’ll update this post with your contributions. Keep your entries “clean,” and please no words or phrases that are really adverts for your company. Thanks.

The post Cloud Babble: The Jargon of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today; diverse technologies that customers are using day in and out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • How to create an automatically “resetting” demo environment for the sales team.
  • How to set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • The basics of OAUTH and web single sign-on (SSO).
  • How to archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, effective google searching to fix problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language. Have a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:

There are literally hundreds of previous experiences you can have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?

    The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we”. We are truly a team.

    We are honest to each other and our customers and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

    If this all sounds like you:

    1. Send an email to [email protected] with the position in the subject line.
    2. Tell us a bit about your Sales Engineering experience.
    3. Include your resume.

    The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    US Government Teaches Anti-Piracy Skills Around The Globe

    Post Syndicated from Ernesto original https://torrentfreak.com/us-government-teaches-anti-piracy-skills-around-the-globe-171217/

    Online piracy is a global issue. Pirate sites and services tend to operate in multiple jurisdictions and are purposefully set up to evade law enforcement.

    This makes it hard for police from one country to effectively crack down on a site in another. International cooperation is often required, and the US Government is one of the leaders on this front.

    The US Department of Justice (DoJ) has quite a bit of experience in tracking down pirates and they are actively sharing this knowledge with countries that can use some help. This goes far beyond the occasional seminar.

    A diplomatic cable obtained through a Freedom of Information request provides a relatively recent example of these efforts. The document gives an overview of anti-piracy training, provided and funded by the US Government, during the fall of 2015.

    “On November 24 and 25, prosecutors and investigators from Romania, Moldova, Bulgaria, and Turkey participated in a two-day, US Department of Justice (USDOJ)-sponsored training program on combatting online piracy.”

    “The program updated participants on legal issues, including data retention legislation, surrounding the investigation and prosecution of online piracy,” the cable adds.

    According to the cable, piracy has become a very significant problem in Eastern Europe, costing rightsholders and governments millions of dollars in revenues. After the training, local law enforcement officers in these countries should be better equipped to deal with the problem.

    Pirates Beware

    The event was put together with help from various embassies and among the presenters were law enforcement professionals from around the world.

    The Director of the DoJ’s CCIPS Cybercrime Laboratory was among the speakers. He gave training on computer forensics and participants were provided with various tools to put this to use.

    “Participants were given copies of forensic tools at the conclusion of the program so that they could put to use some of what they saw demonstrated during the training,” the cable reads.

    While catching pirates can be quite hard already, getting them convicted is a challenge as well. Increasingly we’ve seen criminal complaints using non-copyright claims to have site owners prosecuted.

    By using money laundering and tax offenses, pirates can receive tougher penalties. This was one of the talking points during the training as well.

    “Participants were encouraged to consider the use of statutes such as money laundering and tax evasion, in addition to those protecting copyrights and trademarks, since these offenses are often punished more severely than standalone intellectual property crimes.”

    The cable, written by the US Embassy in Bucharest, provides a lot of detail about the two-day training session. It’s also clear on the overall objective. The US wants to increase the likelihood that pirate sites are brought to justice. Not only in the homeland, but around the globe.

    “By focusing approximately forty investigators and prosecutors from four countries on how they can more effectively attack rogue sites, and by connecting rights holders and their investigators with law enforcement, the chances of pirates being caught and held accountable have increased.”

    While it’s hard to link the training to any concrete successes, Romanian law enforcement did shut down the country’s leading pirate site a few months later. As with a previous case in Romania, which involved the FBI, money laundering and tax evasion allegations were expected.

    While it’s not out of the ordinary for international law enforcers to work together, it’s notable how coordinated the US efforts are. Earlier this week we wrote about the US pressure on Sweden to raid The Pirate Bay. And these are not isolated incidents.

    While the US Department of Justice doesn’t reveal all details of its operations, it is very open about its global efforts to protect Intellectual Property.

    Around the world..

    The DoJ’s Computer Crime and Intellectual Property Section (CCIPS) has relationships with law enforcement worldwide and regularly provides training to foreign officers.

    A crucial part of the Department’s international enforcement activities is the Intellectual Property Law Enforcement Coordinator (IPLEC) program, which started in 2006.

    Through IPLECs, the department now has Attorneys stationed in Thailand, Hong Kong, Romania, Brazil, and Nigeria. These Attorneys keep an eye on local law enforcement and provide assistance and training, to protect US copyright holders.

    “Our strategically placed coordinators draw upon their subject matter expertise to help ensure that property holders’ rights are enforced across the globe, and that the American people are protected from harmful products entering the marketplace,” Attorney General John Cronan of the Criminal Division said just last Friday.

    Or to end with the title of the Romanian cable: ‘Pirates beware!’

    The cable cited here was made available in response to a Freedom of Information request, which was submitted by Rachael Tackett and shared with TorrentFreak. It starts at page 47 of document 2.

    Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

    Implementing Dynamic ETL Pipelines Using AWS Step Functions

    Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

    This post contributed by:
    Wangechi Dole, AWS Solutions Architect
    Milan Krasnansky, ING, Digital Solutions Developer, SGK
    Rian Mookencherry, Director – Product Innovation, SGK

    Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

    Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

    In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

    SGK: Building dynamic ETL pipelines

    SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

    We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

    The data management system served two main functions:

    1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
    2. Processing the data through normalization and applying complex algorithms and data transformations. The system’s goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

    We were faced with several challenges:

    • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
    • The complexity of reporting business rules that needed to be updated on a constant basis.
    • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combination of dates), which often resulted in up to three overlapping seasons at any given time.
    • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

    These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

    Solution overview

    Our solution included the following AWS services:

    • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
    • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
    • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
    • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
    • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
    • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
    • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

    Solution architecture

    This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

    Here are more details for the above diagram:

    1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
    2. The state machine invokes the first Lambda function.
    3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
    4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
    5. The state machine executes the second Lambda function using the Keys from DynamoDB.
    6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
    7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
    8. Finally, the Lambda function uploads the data into Amazon Redshift.

    To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

    We walk you through the state machine in more detail in the following sections.

    Walkthrough

    To get started, you need to:

    • Create a schedule in CloudWatch Events
    • Specify conditions for RDS data extracts
    • Create Amazon Redshift input files
    • Load data into Amazon Redshift

    Step 1: Create a schedule in CloudWatch Events
    Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
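    One way to express that schedule with the AWS CLI is sketched here; the rule name is a placeholder, and you would still attach the state machine as a target of the rule with put-targets.

    $ aws events put-rule --name trigger-etl-state-machine --schedule-expression "cron(0 3,14 * * ? *)"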

    In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

    Step 2: Specify conditions for RDS data extracts
    We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

    A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
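    As a rough sketch (the table name, attribute names, and all campaign codes other than MCS2017 are hypothetical, not taken from this post), such an item could be written with the AWS CLI like this:

    $ aws dynamodb put-item --table-name CampaignKeys --item '{"Date": {"S": "2017-06-05"}, "key1": {"S": "MCS2017"}, "key2": {"S": "MCF2016"}, "key3": {"S": "MCW2017"}}'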

    The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

    Step 3: Create Amazon Redshift input files
    When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

    "Comment": ” AWS Amazon States Language.", 
      "StartAt": "FirstState",
     
"States": { 
      "FirstState": {
       
"Type": "Task",
       
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
        "Next": "ChoiceState" 
      } 

    As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

    public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
        Map<String, String> keys = new HashMap<>();

        public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
            Properties config = getConfig();
            // 1. Clean the Amazon Redshift table
            new RedshiftDataService(config).cleaningTable();
            // 2. Read the current keys from DynamoDB
            List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
            for (int i = 0; i < keyList.size(); i++) {
                keys.put("key" + (i + 1), keyList.get(i));
            }
            keys.put("keyT", String.valueOf(keyList.size()));
            // 3. Return the key values and the key count collected in the "for" loop
            return keys;
        }
    }

    The following JSON represents ChoiceState.

    "ChoiceState": {
       "Type" : "Choice",
       "Choices": [ 
       {

          "Variable": "$.keyT",
         "StringEquals": "3",
         "Next": "CurrentThreeKeys" 
       }, 
       {

         "Variable": "$.keyT",
        "StringEquals": "2",
        "Next": "CurrentTwooKeys" 
       } 
     ], 
     "Default": "DefaultState"
    }
    

    The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

    For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys, $key1, $key2 and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals two, the second Lambda function is executed twice in parallel.  The following JSON represents this parallel execution:

    "CurrentThreeKeys": { 
      "Type": "Parallel",
      "Next": "NextState",
      "Branches": [ 
      {

         "StartAt": “key31",
        "States": { 
           “key31": {

              "Type": "Task",
            "InputPath": "$.key1",
            "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
            "End": true 
           } 
        } 
      }, 
      {

         "StartAt": “key32",
        "States": { 
         “key32": {

            "Type": "Task",
           "InputPath": "$.key2",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
          } 
         } 
       }, 
       {

          "StartAt": “key33",
           "States": { 
              “key33": {

                    "Type": "Task",
                 "InputPath": "$.key3",
                 "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
               "End": true 
           } 
         } 
        } 
      ] 
    } 

    Step 4: Load data into Amazon Redshift
    The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data and then loads it into an Amazon Redshift table. The following is a code snippet for the second Lambda function in Java 8.

    public class LambdaFunctionHandler implements RequestHandler<String, String> {
        public static String key = null;

        public String handleRequest(String input, Context context) {
            key = input;
            // 1. Get the basic configuration for the following classes and the S3 client
            Properties config = getConfig();
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            // 2. Export query results from RDS into the S3 bucket
            new RdsDataService(config).exportDataToS3(s3, key);
            // 3. Import query results from the S3 bucket into Amazon Redshift
            new RedshiftDataService(config).importDataFromS3(s3, key);
            System.out.println(input);
            return "SUCCESS";
        }
    }

    After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

    Lessons learned

    • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
    • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
    • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

    Conclusion

    With Step Functions, we were able to fully automate the frequent configuration updates to our dataset resulting in significant cost savings, reduced risk to data errors due to system downtime, and more time for us to focus on new product development rather than support related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

    For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

    If you have questions or suggestions, please comment below.

    How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 2

    Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-2/

    Yesterday in Part 1 of this blog post, I showed you how to:

    1. Launch an Amazon EC2 instance with an AWS Identity and Access Management (IAM) role, an Amazon Elastic Block Store (Amazon EBS) volume, and tags that Amazon EC2 Systems Manager (Systems Manager) and Amazon Inspector use.
    2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.

    Today in Steps 3 and 4, I show you how to:

    1. Take Amazon EBS snapshots using Amazon EBS Snapshot Scheduler to automate snapshots based on instance tags.
    2. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    To catch up on Steps 1 and 2, see yesterday’s blog post.

    Step 3: Take EBS snapshots using EBS Snapshot Scheduler

    In this section, I show you how to use EBS Snapshot Scheduler to take snapshots of your instances at specific intervals. To do this, I will show you how to:

    • Determine the schedule for EBS Snapshot Scheduler by providing you with best practices.
    • Deploy EBS Snapshot Scheduler by using AWS CloudFormation.
    • Tag your EC2 instances so that EBS Snapshot Scheduler backs up your instances when you want them backed up.

    In addition to making sure your EC2 instances have all the available operating system patches applied on a regular schedule, you should take snapshots of the EBS storage volumes attached to your EC2 instances. Taking regular snapshots allows you to restore your data to a previous state quickly and cost effectively. With Amazon EBS snapshots, you pay only for the actual data you store, and snapshots save only the data that has changed since the previous snapshot, which minimizes your cost. You will use EBS Snapshot Scheduler to make regular snapshots of your EC2 instance. EBS Snapshot Scheduler takes advantage of other AWS services including CloudFormation, Amazon DynamoDB, and AWS Lambda to make backing up your EBS volumes simple.

    Determine the schedule

    As a best practice, you should back up your data frequently during the hours when your data changes the most. This reduces the amount of data you lose if you have to restore from a snapshot. For the purposes of this blog post, the data for my instances changes the most between the business hours of 9:00 A.M. to 5:00 P.M. Pacific Time. During these hours, I will make snapshots hourly to minimize data loss.

    In addition to backing up frequently, another best practice is to establish a strategy for retention. This will vary based on how you need to use the snapshots. If you have compliance requirements to be able to restore for auditing, your needs may be different than if you are able to detect data corruption within three hours and simply need to restore to something that limits data loss to five hours. EBS Snapshot Scheduler enables you to specify the retention period for your snapshots. For this post, I only need to keep snapshots for recent business days. To account for weekends, I will set my retention period to three days, which is down from the default of 15 days when deploying EBS Snapshot Scheduler.

    Deploy EBS Snapshot Scheduler

    In Step 1 of Part 1 of this post, I showed how to configure an EC2 for Windows Server 2012 R2 instance with an EBS volume. You will use EBS Snapshot Scheduler to take eight snapshots each weekday of your EC2 instance’s EBS volumes:

    1. Navigate to the EBS Snapshot Scheduler deployment page and choose Launch Solution. This takes you to the CloudFormation console in your account. The Specify an Amazon S3 template URL option is already selected and prefilled. Choose Next on the Select Template page.
    2. On the Specify Details page, retain all default parameters except for AutoSnapshotDeletion. Set AutoSnapshotDeletion to Yes to ensure that old snapshots are periodically deleted. The default retention period is 15 days (you will specify a shorter value on your instance in the next subsection).
    3. Choose Next twice to move to the Review step, and start deployment by choosing the I acknowledge that AWS CloudFormation might create IAM resources check box and then choosing Create.

    Tag your EC2 instances

    EBS Snapshot Scheduler takes a few minutes to deploy. While waiting for its deployment, you can start to tag your instance to define its schedule. EBS Snapshot Scheduler reads tag values and looks for four possible custom parameters in the following order:

    • <snapshot time> – Time in 24-hour format with no colon.
    • <retention days> – The number of days (a positive integer) to retain the snapshot before deletion, if set to automatically delete snapshots.
    • <time zone> – The time zone of the times specified in <snapshot time>.
    • <active day(s)> – all, weekdays, or mon, tue, wed, thu, fri, sat, and/or sun.

    Because you want hourly backups on weekdays between 9:00 A.M. and 5:00 P.M. Pacific Time, you need to configure eight tags—one for each hour of the day. You will add the eight tags shown in the following table to your EC2 instance.

    Tag Value
    scheduler:ebs-snapshot:0900 0900;3;utc;weekdays
    scheduler:ebs-snapshot:1000 1000;3;utc;weekdays
    scheduler:ebs-snapshot:1100 1100;3;utc;weekdays
    scheduler:ebs-snapshot:1200 1200;3;utc;weekdays
    scheduler:ebs-snapshot:1300 1300;3;utc;weekdays
    scheduler:ebs-snapshot:1400 1400;3;utc;weekdays
    scheduler:ebs-snapshot:1500 1500;3;utc;weekdays
    scheduler:ebs-snapshot:1600 1600;3;utc;weekdays

    Next, you will add these tags to your instance. If you want to tag multiple instances at once, you can use Tag Editor instead. To add the tags in the preceding table to your EC2 instance:

    1. Navigate to your EC2 instance in the EC2 console and choose Tags in the navigation pane.
    2. Choose Add/Edit Tags and then choose Create Tag to add all the tags specified in the preceding table.
    3. Confirm you have added the tags by choosing Save. After adding these tags, navigate to your EC2 instance in the EC2 console. Your EC2 instance should look similar to the following screenshot.
      Screenshot of how your EC2 instance should look in the console
    4. After waiting a couple of hours, you can see snapshots beginning to populate on the Snapshots page of the EC2 console.
      Screenshot of snapshots beginning to populate on the Snapshots page of the EC2 console
    5. To check if EBS Snapshot Scheduler is active, you can check the CloudWatch rule that runs the Lambda function. If the clock icon shown in the following screenshot is green, the scheduler is active. If the clock icon is gray, the rule is disabled and does not run. You can enable or disable the rule by selecting it, choosing Actions, and choosing Enable or Disable. This also allows you to temporarily disable EBS Snapshot Scheduler.
      Screenshot of checking to see if EBS Snapshot Scheduler is active
    6. You can also monitor when EBS Snapshot Scheduler has run by choosing the name of the CloudWatch rule as shown in the previous screenshot and choosing Show metrics for the rule.
      Screenshot of monitoring when EBS Snapshot Scheduler has run by choosing the name of the CloudWatch rule
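
    If you would rather script the tagging step, or need to tag many instances at once, you could also apply the same tags with the EC2 CreateTags API. The following is a rough sketch using the AWS SDK for Java; the instance ID is a placeholder, and the tag keys and values match the preceding table.

    import java.util.ArrayList;
    import java.util.List;

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.CreateTagsRequest;
    import com.amazonaws.services.ec2.model.Tag;

    public class TagInstanceForSnapshots {
        public static void main(String[] args) {
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
            List<Tag> tags = new ArrayList<>();
            // One tag per hourly snapshot between 0900 and 1600 UTC, retained for three days, on weekdays
            for (int hour = 9; hour <= 16; hour++) {
                String time = String.format("%02d00", hour);
                tags.add(new Tag("scheduler:ebs-snapshot:" + time, time + ";3;utc;weekdays"));
            }
            ec2.createTags(new CreateTagsRequest()
                .withResources("i-0123456789abcdef0") // placeholder instance ID
                .withTags(tags));
        }
    }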

    If you want to restore and attach an EBS volume, see Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

    Step 4: Use Amazon Inspector

    In this section, I show you how to use Amazon Inspector to scan your EC2 instance for common vulnerabilities and exposures (CVEs) and how to set up Amazon SNS notifications. To do this, I will show you how to:

    • Install the Amazon Inspector agent by using EC2 Run Command.
    • Set up notifications using Amazon SNS to notify you of any findings.
    • Define an Amazon Inspector target and template to define what assessment to perform on your EC2 instance.
    • Schedule Amazon Inspector assessment runs to assess your EC2 instance on a regular interval.

    Amazon Inspector can help you scan your EC2 instance using prebuilt rules packages, which are built and maintained by AWS. These prebuilt rules packages tell Amazon Inspector what to scan for on the EC2 instances you select. Amazon Inspector provides the following prebuilt packages for Microsoft Windows Server 2012 R2:

    • Common Vulnerabilities and Exposures
    • Center for Internet Security Benchmarks
    • Runtime Behavior Analysis

    In this post, I’m focused on how to make sure you keep your EC2 instances patched, backed up, and inspected for common vulnerabilities and exposures (CVEs). As a result, I will focus on how to use the CVE rules package and use your instance tags to identify the instances on which to run the CVE rules. If your EC2 instance is fully patched using Systems Manager, as described earlier, you should not have any findings with the CVE rules package. Regardless, as a best practice I recommend that you use Amazon Inspector as an additional layer for identifying any unexpected failures. This involves using Amazon CloudWatch to set up weekly Amazon Inspector scans, and configuring Amazon Inspector to notify you of any findings through SNS topics. By acting on the notifications you receive, you can respond quickly to any CVEs on any of your EC2 instances to help ensure that malware using known CVEs does not affect your EC2 instances. In a previous blog post, Eric Fitzgerald showed how to remediate Amazon Inspector security findings automatically.

    Install the Amazon Inspector agent

    To install the Amazon Inspector agent, you will use EC2 Run Command, which allows you to run any command on any of your EC2 instances that have the Systems Manager agent with an attached IAM role that allows access to Systems Manager.

    1. Choose Run Command under Systems Manager Services in the navigation pane of the EC2 console. Then choose Run a command.
      Screenshot of choosing "Run a command"
    2. To install the Amazon Inspector agent, you will use an AWS managed and provided command document that downloads and installs the agent for you on the selected EC2 instance. Choose AmazonInspector-ManageAWSAgent. To choose the target EC2 instance where this command will be run, use the tag you previously assigned to your EC2 instance, Patch Group, with a value of Windows Servers. For this example, set the concurrent installations to 1 and tell Systems Manager to stop after 5 errors.
      Screenshot of installing the Amazon Inspector agent
    3. Retain the default values for all other settings on the Run a command page and choose Run. Back on the Run Command page, you can see if the command that installed the Amazon Inspector agent executed successfully on all selected EC2 instances.
      Screenshot showing that the command that installed the Amazon Inspector agent executed successfully on all selected EC2 instances
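
    If you prefer to automate this step, you can also send the same Run Command invocation through the Systems Manager API. The following is a hedged sketch using the AWS SDK for Java; it targets instances by the same Patch Group tag, and it assumes the AmazonInspector-ManageAWSAgent document accepts an Operation parameter with a value of Install, so verify the document's parameters before relying on it.

    import java.util.Collections;

    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
    import com.amazonaws.services.simplesystemsmanagement.model.SendCommandRequest;
    import com.amazonaws.services.simplesystemsmanagement.model.Target;

    public class InstallInspectorAgent {
        public static void main(String[] args) {
            AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();
            ssm.sendCommand(new SendCommandRequest()
                .withDocumentName("AmazonInspector-ManageAWSAgent")
                // Target instances by the Patch Group tag used earlier
                .withTargets(new Target().withKey("tag:Patch Group").withValues("Windows Servers"))
                // Assumed document parameter; check the document's parameter list in your account
                .withParameters(Collections.singletonMap("Operation", Collections.singletonList("Install")))
                .withMaxConcurrency("1")
                .withMaxErrors("5"));
        }
    }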

    Set up notifications using Amazon SNS

    Now that you have installed the Amazon Inspector agent, you will set up an SNS topic that will notify you of any findings after an Amazon Inspector run.

    To set up an SNS topic:

    1. In the AWS Management Console, choose Simple Notification Service under Messaging in the Services menu.
    2. Choose Create topic, name your topic (only alphanumeric characters, hyphens, and underscores are allowed) and give it a display name to ensure you know what this topic does (I’ve named mine Inspector). Choose Create topic.
      "Create new topic" page
    3. To allow Amazon Inspector to publish messages to your new topic, choose Other topic actions and choose Edit topic policy.
    4. For Allow these users to publish messages to this topic and Allow these users to subscribe to this topic, choose Only these AWS users. Type the following ARN for the US East (N. Virginia) Region in which you are deploying the solution in this post: arn:aws:iam::316112463485:root. This is the ARN of Amazon Inspector itself. For the ARNs of Amazon Inspector in other AWS Regions, see Setting Up an SNS Topic for Amazon Inspector Notifications (Console). Amazon Resource Names (ARNs) uniquely identify AWS resources across all of AWS.
      Screenshot of editing the topic policy
    5. To receive notifications from Amazon Inspector, subscribe to your new topic by choosing Create subscription and adding your email address. After confirming your subscription by clicking the link in the email, the topic should display your email address as a subscriber. Later, you will configure the Amazon Inspector template to publish to this topic.
      Screenshot of subscribing to the new topic
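
    You could also create the topic, set its policy, and subscribe your email address programmatically. The following sketch uses the AWS SDK for Java; the email address is a placeholder, and the topic policy is a simplified version of the console settings above that allows the Amazon Inspector account for US East (N. Virginia) to publish and subscribe.

    import com.amazonaws.services.sns.AmazonSNS;
    import com.amazonaws.services.sns.AmazonSNSClientBuilder;

    public class CreateInspectorTopic {
        public static void main(String[] args) {
            AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
            String topicArn = sns.createTopic("Inspector").getTopicArn();
            sns.setTopicAttributes(topicArn, "DisplayName", "Inspector");

            // Allow the Amazon Inspector account for US East (N. Virginia) to publish to and subscribe to the topic
            String policy = "{"
                + "\"Version\":\"2012-10-17\","
                + "\"Statement\":[{"
                + "\"Effect\":\"Allow\","
                + "\"Principal\":{\"AWS\":\"arn:aws:iam::316112463485:root\"},"
                + "\"Action\":[\"sns:Publish\",\"sns:Subscribe\"],"
                + "\"Resource\":\"" + topicArn + "\"}]}";
            sns.setTopicAttributes(topicArn, "Policy", policy);

            // Placeholder email address; confirm the subscription from the email you receive
            sns.subscribe(topicArn, "email", "you@example.com");
        }
    }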

    Define an Amazon Inspector target and template

    Now that you have set up the notification topic by which Amazon Inspector can notify you of findings, you can create an Amazon Inspector target and template. A target defines which EC2 instances are in scope for Amazon Inspector. A template defines which packages to run, for how long, and on which target.

    To create an Amazon Inspector target:

    1. Navigate to the Amazon Inspector console and choose Get started. At the time of writing this blog post, Amazon Inspector is available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.
    2. For Amazon Inspector to be able to collect the necessary data from your EC2 instance, you must create an IAM service role for Amazon Inspector. Amazon Inspector can create this role for you if you choose Choose or create role and confirm the role creation by choosing Allow.
      Screenshot of creating an IAM service role for Amazon Inspector
    3. Amazon Inspector also asks you to tag your EC2 instance and install the Amazon Inspector agent. You already performed these steps in Part 1 of this post, so you can proceed by choosing Next. To define the Amazon Inspector target, choose the previously used Patch Group tag with a Value of Windows Servers. This is the same tag that you used to define the targets for patching. Then choose Next.
      Screenshot of defining the Amazon Inspector target
    4. Now, define your Amazon Inspector template, and choose a name and the package you want to run. For this post, use the Common Vulnerabilities and Exposures package and choose the default duration of 1 hour. As you can see, the package has a version number, so always select the latest version of the rules package if multiple versions are available.
      Screenshot of defining an assessment template
    5. Configure Amazon Inspector to publish to your SNS topic when findings are reported. You can also choose to receive a notification of a started run, a finished run, or changes in the state of a run. For this blog post, you want to receive notifications if there are any findings. To start, choose Assessment Templates from the Amazon Inspector console and choose your newly created Amazon Inspector assessment template. Choose the icon below SNS topics (see the following screenshot).
      Screenshot of choosing an assessment template
    6. A pop-up appears in which you can choose the previously created topic and the events about which you want SNS to notify you (choose Finding reported).
      Screenshot of choosing the previously created topic and the events about which you want SNS to notify you
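
    For completeness, the same target, template, and notification subscription can be created through the Amazon Inspector API. The following is a rough sketch with the AWS SDK for Java; the names are arbitrary, and the rules package ARN and SNS topic ARN are placeholders that you would look up for your own Region and account.

    import com.amazonaws.services.inspector.AmazonInspector;
    import com.amazonaws.services.inspector.AmazonInspectorClientBuilder;
    import com.amazonaws.services.inspector.model.CreateAssessmentTargetRequest;
    import com.amazonaws.services.inspector.model.CreateAssessmentTemplateRequest;
    import com.amazonaws.services.inspector.model.CreateResourceGroupRequest;
    import com.amazonaws.services.inspector.model.ResourceGroupTag;
    import com.amazonaws.services.inspector.model.SubscribeToEventRequest;

    public class CreateInspectorAssessment {
        public static void main(String[] args) {
            AmazonInspector inspector = AmazonInspectorClientBuilder.defaultClient();

            // Resource group that matches the Patch Group tag used throughout this post
            String resourceGroupArn = inspector.createResourceGroup(new CreateResourceGroupRequest()
                .withResourceGroupTags(new ResourceGroupTag().withKey("Patch Group").withValue("Windows Servers")))
                .getResourceGroupArn();

            String targetArn = inspector.createAssessmentTarget(new CreateAssessmentTargetRequest()
                .withAssessmentTargetName("WindowsServers")
                .withResourceGroupArn(resourceGroupArn))
                .getAssessmentTargetArn();

            String templateArn = inspector.createAssessmentTemplate(new CreateAssessmentTemplateRequest()
                .withAssessmentTargetArn(targetArn)
                .withAssessmentTemplateName("CVE-Weekly")
                .withDurationInSeconds(3600)
                .withRulesPackageArns("arn:aws:inspector:us-east-1:316112463485:rulespackage/0-XXXXXXXX")) // placeholder CVE rules package ARN
                .getAssessmentTemplateArn();

            // Publish a notification to the SNS topic whenever a finding is reported
            inspector.subscribeToEvent(new SubscribeToEventRequest()
                .withResourceArn(templateArn)
                .withEvent("FINDING_REPORTED")
                .withTopicArn("arn:aws:sns:us-east-1:123456789012:Inspector")); // placeholder topic ARN
        }
    }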

    Schedule Amazon Inspector assessment runs

    The last step in using Amazon Inspector to assess for CVEs is to schedule the Amazon Inspector template to run using Amazon CloudWatch Events. This will make sure that Amazon Inspector assesses your EC2 instance on a regular basis. To do this, you need the Amazon Inspector template ARN, which you can find under Assessment templates in the Amazon Inspector console. CloudWatch Events can run your Amazon Inspector assessment at an interval you define using a Cron-based schedule. Cron is a well-known scheduling agent that is widely used on UNIX-like operating systems and uses the following syntax for CloudWatch Events.

    Image of Cron schedule

    All scheduled events use a UTC time zone, and the minimum precision for schedules is one minute. For more information about scheduling CloudWatch Events, see Schedule Expressions for Rules.
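
    For example, a rule that runs every Sunday at 10:00 P.M. UTC can use the expression cron(0 22 ? * SUN *). The six fields are Minutes, Hours, Day-of-month, Month, Day-of-week, and Year; the ? in the Day-of-month field indicates that the value does not matter because Day-of-week is already specified.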

    To create the CloudWatch Events rule:

    1. Navigate to the CloudWatch console, choose Events, and choose Create rule.
      Screenshot of starting to create a rule in the CloudWatch Events console
    2. On the next page, specify if you want to invoke your rule based on an event pattern or a schedule. For this blog post, you will select a schedule based on a Cron expression.
    3. You can schedule the Amazon Inspector assessment any time you want using the Cron expression, or you can use the Cron expression I used in the following screenshot, which will run the Amazon Inspector assessment every Sunday at 10:00 P.M. GMT.
      Screenshot of scheduling an Amazon Inspector assessment with a Cron expression
    4. Choose Add target and choose Inspector assessment template from the drop-down menu. Paste the ARN of the Amazon Inspector template you previously created in the Amazon Inspector console in the Assessment template box and choose Create a new role for this specific resource. This new role is necessary so that CloudWatch Events has the necessary permissions to start the Amazon Inspector assessment. CloudWatch Events will automatically create the new role and grant the minimum set of permissions needed to run the Amazon Inspector assessment. To proceed, choose Configure details.
      Screenshot of adding a target
    5. Next, give your rule a name and a description. I suggest using a name that describes what the rule does, as shown in the following screenshot.
    6. Finish the wizard by choosing Create rule. The rule should appear in the Events – Rules section of the CloudWatch console.
      Screenshot of completing the creation of the rule
    7. To confirm your CloudWatch Events rule works, wait for the next time your CloudWatch Events rule is scheduled to run. For testing purposes, you can choose your CloudWatch Events rule and choose Edit to change the schedule to run it sooner than scheduled.
      Screenshot of confirming the CloudWatch Events rule works
    8. Now navigate to the Amazon Inspector console to confirm the launch of your first assessment run. The Start time column shows you the time each assessment started and the Status column the status of your assessment. In the following screenshot, you can see Amazon Inspector is busy Collecting data from the selected assessment targets.
      Screenshot of confirming the launch of the first assessment run
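
    If you script your infrastructure, the equivalent rule and target can also be created with the CloudWatch Events API. A hedged sketch with the AWS SDK for Java follows; the assessment template ARN and the role ARN are placeholders for the values created earlier in this post.

    import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEvents;
    import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEventsClientBuilder;
    import com.amazonaws.services.cloudwatchevents.model.PutRuleRequest;
    import com.amazonaws.services.cloudwatchevents.model.PutTargetsRequest;
    import com.amazonaws.services.cloudwatchevents.model.Target;

    public class ScheduleInspectorAssessment {
        public static void main(String[] args) {
            AmazonCloudWatchEvents events = AmazonCloudWatchEventsClientBuilder.defaultClient();

            // Run every Sunday at 10:00 P.M. UTC
            events.putRule(new PutRuleRequest()
                .withName("WeeklyInspectorAssessment")
                .withScheduleExpression("cron(0 22 ? * SUN *)"));

            events.putTargets(new PutTargetsRequest()
                .withRule("WeeklyInspectorAssessment")
                .withTargets(new Target()
                    .withId("InspectorAssessmentTemplate")
                    .withArn("arn:aws:inspector:us-east-1:123456789012:target/0-XXXXXXXX/template/0-XXXXXXXX") // placeholder template ARN
                    .withRoleArn("arn:aws:iam::123456789012:role/CloudWatchEventsInspectorRole")))); // placeholder role ARN
        }
    }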

    You have concluded the last step of this blog post by setting up a regular scan of your EC2 instance with Amazon Inspector and a notification that will let you know if your EC2 instance is vulnerable to any known CVEs. In a previous Security Blog post, Eric Fitzgerald explained How to Remediate Amazon Inspector Security Findings Automatically. Although that blog post is for Linux-based EC2 instances, the post shows that you can learn about Amazon Inspector findings in other ways than email alerts.

    Conclusion

    In this two-part blog post, I showed how to make sure you keep your EC2 instances up to date with patching, how to back up your instances with snapshots, and how to monitor your instances for CVEs. Collectively these measures help to protect your instances against common attack vectors that attempt to exploit known vulnerabilities. In Part 1, I showed how to configure your EC2 instances to make it easy to use Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also showed how to use Systems Manager to schedule automatic patches to keep your instances current in a timely fashion. In Part 2, I showed you how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    If you have comments about today’s or yesterday’s post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or the Amazon Inspector forum, or contact AWS Support.

    – Koen

    How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 1

    Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-1/

    Most malware tries to compromise your systems by using a known vulnerability that the maker of the operating system has already patched. To help prevent malware from affecting your systems, two security best practices are to apply all operating system patches to your systems and actively monitor your systems for missing patches. In case you do need to recover from a malware attack, you should make regular backups of your data.

    In today’s blog post (Part 1 of a two-part post), I show how to keep your Amazon EC2 instances that run Microsoft Windows up to date with the latest security patches by using Amazon EC2 Systems Manager. Tomorrow in Part 2, I show how to take regular snapshots of your data by using Amazon EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    What you should know first

    To follow along with the solution in this post, you need one or more EC2 instances. You may use existing instances or create new instances. For the blog post, I assume this is an EC2 for Microsoft Windows Server 2012 R2 instance installed from the Amazon Machine Images (AMIs). If you are not familiar with how to launch an EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. A private subnet is not directly accessible via the internet, and access to it requires either a VPN connection to your on-premises network or a jump host in a public subnet (a subnet with access to the internet). You must make sure that the EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager and Amazon Inspector. The following diagram shows how you should structure your Amazon Virtual Private Cloud (VPC). You should also be familiar with Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

    Later on, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the AWS Identity and Access Management (IAM) user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user, when assigning tasks, to pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. This safeguard ensures that the user cannot use the creation of tasks to elevate their IAM privileges, because their own IAM privileges limit which tasks they can run against an EC2 instance. You should also authorize your IAM user to use EC2, Amazon Inspector, Amazon CloudWatch, and Systems Manager. You can achieve this by attaching the following AWS managed policies to the IAM user you are using for this example: AmazonInspectorFullAccess, AmazonEC2FullAccess, and AmazonSSMFullAccess.

    Architectural overview

    The following diagram illustrates the components of this solution’s architecture.

    Diagram showing the components of this solution's architecture

    For this blog post, Microsoft Windows EC2 is Amazon EC2 for Microsoft Windows Server 2012 R2 instances with attached Amazon Elastic Block Store (Amazon EBS) volumes, which are running in your VPC. These instances may be standalone Windows instances running your Windows workloads, or you may have joined them to an Active Directory domain controller. For instances joined to a domain, you can be using Active Directory running on an EC2 for Windows instance, or you can use AWS Directory Service for Microsoft Active Directory.

    Amazon EC2 Systems Manager is a scalable tool for remote management of your EC2 instances. You will use the Systems Manager Run Command to install the Amazon Inspector agent. The agent enables EC2 instances to communicate with the Amazon Inspector service and run assessments, which I explain in detail later in this blog post. You also will create a Systems Manager association to keep your EC2 instances up to date with the latest security patches.

    You can use the EBS Snapshot Scheduler to schedule automated snapshots at regular intervals. You will use it to set up regular snapshots of your Amazon EBS volumes. EBS Snapshot Scheduler is a prebuilt solution by AWS that you will deploy in your AWS account. With Amazon EBS snapshots, you pay only for the actual data you store. Snapshots save only the data that has changed since the previous snapshot, which minimizes your cost.

    You will use Amazon Inspector to run security assessments on your EC2 for Windows Server instance. In this post, I show how to assess if your EC2 for Windows Server instance is vulnerable to any of the more than 50,000 CVEs registered with Amazon Inspector.

    In today’s and tomorrow’s posts, I show you how to:

    1. Launch an EC2 instance with an IAM role, Amazon EBS volume, and tags that Systems Manager and Amazon Inspector will use.
    2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.
    3. Take EBS snapshots by using EBS Snapshot Scheduler to automate snapshots based on instance tags.
    4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

    Step 1: Launch an EC2 instance

    In this section, I show you how to launch your EC2 instances so that you can use Systems Manager with the instances and use instance tags with EBS Snapshot Scheduler to automate snapshots. This requires three things:

    • Create an IAM role for Systems Manager before launching your EC2 instance.
    • Launch your EC2 instance with Amazon EBS and the IAM role for Systems Manager.
    • Add tags to instances so that you can automate policies for which instances you take snapshots of and when.

    Create an IAM role for Systems Manager

    Before launching your EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the EC2 instance you will launch. AWS already provides a preconfigured policy that you can use for your new role, and it is called AmazonEC2RoleforSSM.

    1. Sign in to the IAM console and choose Roles in the navigation pane. Choose Create new role.
      Screenshot of choosing "Create role"
    2. In the role-creation workflow, choose AWS service > EC2 > EC2 to create a role for an EC2 instance.
      Screenshot of creating a role for an EC2 instance
    3. Choose the AmazonEC2RoleforSSM policy to attach it to the new role you are creating.
      Screenshot of attaching the AmazonEC2RoleforSSM policy to the new role you are creating
    4. Give the role a meaningful name (I chose EC2SSM) and description, and choose Create role.
      Screenshot of giving the role a name and description
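
    As an alternative to the console, the role and its instance profile can be created with the IAM API. The following is a rough sketch with the AWS SDK for Java; it assumes the AmazonEC2RoleforSSM managed policy ARN shown in the code, so verify that ARN in your account before using it. Note that the console creates the instance profile for you, whereas the API requires you to create it explicitly.

    import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
    import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
    import com.amazonaws.services.identitymanagement.model.AddRoleToInstanceProfileRequest;
    import com.amazonaws.services.identitymanagement.model.AttachRolePolicyRequest;
    import com.amazonaws.services.identitymanagement.model.CreateInstanceProfileRequest;
    import com.amazonaws.services.identitymanagement.model.CreateRoleRequest;

    public class CreateEc2SsmRole {
        public static void main(String[] args) {
            AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();

            // Trust policy that lets EC2 instances assume the role
            String trustPolicy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\","
                + "\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Action\":\"sts:AssumeRole\"}]}";

            iam.createRole(new CreateRoleRequest()
                .withRoleName("EC2SSM")
                .withAssumeRolePolicyDocument(trustPolicy));

            // Assumed managed policy ARN for AmazonEC2RoleforSSM
            iam.attachRolePolicy(new AttachRolePolicyRequest()
                .withRoleName("EC2SSM")
                .withPolicyArn("arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"));

            // EC2 attaches the role to an instance through an instance profile of the same name
            iam.createInstanceProfile(new CreateInstanceProfileRequest().withInstanceProfileName("EC2SSM"));
            iam.addRoleToInstanceProfile(new AddRoleToInstanceProfileRequest()
                .withInstanceProfileName("EC2SSM")
                .withRoleName("EC2SSM"));
        }
    }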

    Launch your EC2 instance

    To follow along, you need an EC2 instance that is running Microsoft Windows Server 2012 R2 and that has an Amazon EBS volume attached. You can use any existing instance you may have or create a new instance.

    When launching your new EC2 instance, be sure that:

    • The operating system is Microsoft Windows Server 2012 R2.
    • You attach at least one Amazon EBS volume to the EC2 instance.
    • You attach the newly created IAM role (EC2SSM).
    • The EC2 instance can connect to the internet through a network address translation (NAT) gateway or a NAT instance.
    • You create the tags shown in the following screenshot (you will use them later).

    If you are using an already launched EC2 instance, you can attach the newly created role as described in Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.

    Add tags

    The final step of configuring your EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this blog post and to configure Amazon Inspector in Part 2. For this example, I add a tag key, Patch Group, and set the value to Windows Servers. I could have other groups of EC2 instances that I treat differently by having the same tag key but a different tag value. For example, I might have a collection of other servers with the Patch Group tag key with a value of IAS Servers.

    Screenshot of adding tags

    Note: You must wait a few minutes until the EC2 instance becomes available before you can proceed to the next section.

    At this point, you now have at least one EC2 instance you can use to configure Systems Manager, use EBS Snapshot Scheduler, and use Amazon Inspector.

    Note: If you have a large number of EC2 instances to tag, you may want to use the EC2 CreateTags API rather than manually apply tags to each instance.

    Step 2: Configure Systems Manager

    In this section, I show you how to use Systems Manager to apply operating system patches to your EC2 instances, and how to manage patch compliance.

    To start, I will provide some background information about Systems Manager. Then, I will cover how to:

    • Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
    • Associate a Systems Manager patch baseline with your instance to define which patches Systems Manager should apply.
    • Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
    • Monitor patch compliance to verify the patch state of your instances.

    Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is Amazon EC2 Systems Manager?

    Patch management is an important measure to prevent malware from infecting your systems. Most malware attacks look for vulnerabilities that are publicly known and in most cases are already patched by the maker of the operating system. These publicly known vulnerabilities are well documented and therefore easier for an attacker to exploit than having to discover a new vulnerability.

    Patches for these new vulnerabilities are available through Systems Manager within hours after Microsoft releases them. There are two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your EC2 instance. Second, you must install the Systems Manager agent on your EC2 instance. If you have used a recent Microsoft Windows Server 2012 R2 AMI published by AWS, Amazon has already installed the Systems Manager agent on your EC2 instance. You can confirm this by logging in to an EC2 instance and looking for Amazon SSM Agent under Programs and Features in Windows. To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see the documentation about installing the Systems Manager agent. If you forgot to attach the newly created role when launching your EC2 instance or if you want to attach the role to already running EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

    To make sure your EC2 instance receives operating system patches from Systems Manager, you will use the default patch baseline provided and maintained by AWS, and you will define a maintenance window so that you control when your EC2 instances should receive patches. For the maintenance window to be able to run any tasks, you also must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: Systems Manager will use this role instead of EC2. Earlier we created the EC2SSM role with the AmazonEC2RoleforSSM policy, which allowed the Systems Manager agent on our instance to communicate with the Systems Manager service. Here we need a new role with the policy AmazonSSMMaintenanceWindowRole to make sure the Systems Manager service is able to execute commands on our instance.

    Create the Systems Manager IAM role

    To create the new IAM role for Systems Manager, follow the same procedure as in the previous section, but in Step 3, choose the AmazonSSMMaintenanceWindowRole policy instead of the previously selected AmazonEC2RoleforSSM policy.

    Screenshot of creating the new IAM role for Systems Manager

    Finish the wizard and give your new role a recognizable name. For example, I named my role MaintenanceWindowRole.

    Screenshot of finishing the wizard and giving your new role a recognizable name

    By default, only EC2 instances can assume this new role. You must update the trust policy to enable Systems Manager to assume this role.

    To update the trust policy associated with this new role:

    1. Navigate to the IAM console and choose Roles in the navigation pane.
    2. Choose MaintenanceWindowRole and choose the Trust relationships tab. Then choose Edit trust relationship.
    3. Update the policy document by copying the following policy and pasting it in the Policy Document box. As you can see, I have added the ssm.amazonaws.com service to the list of allowed Principals that can assume this role. Choose Update Trust Policy.
      {
         "Version":"2012-10-17",
         "Statement":[
            {
               "Sid":"",
               "Effect":"Allow",
               "Principal":{
                  "Service":[
                     "ec2.amazonaws.com",
                     "ssm.amazonaws.com"
                 ]
               },
               "Action":"sts:AssumeRole"
            }
         ]
      }

    Associate a Systems Manager patch baseline with your instance

    Next, you are going to associate a Systems Manager patch baseline with your EC2 instance. A patch baseline defines which patches Systems Manager should apply. You will use the default patch baseline that AWS manages and maintains. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your EC2 instance.

    Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Managed Instances. Your new EC2 instance should be available there. If your instance is missing from the list, verify the following:

    1. Go to the EC2 console and verify your instance is running.
    2. Select your instance and confirm you attached the Systems Manager IAM role, EC2SSM.
    3. Make sure that you deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram at the start of this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
    4. Check the Systems Manager Agent logs for any errors.

    Now that you have confirmed that Systems Manager can manage your EC2 instance, it is time to associate the AWS maintained patch baseline with your EC2 instance:

    1. Choose Patch Baselines under Systems Manager Services in the navigation pane of the EC2 console.
    2. Choose the default patch baseline as highlighted in the following screenshot, and choose Modify Patch Groups in the Actions drop-down.
      Screenshot of choosing Modify Patch Groups in the Actions drop-down
    3. In the Patch group box, enter the same value you entered under the Patch Group tag of your EC2 instance in “Step 1: Configure your EC2 instance.” In this example, the value I enter is Windows Servers. Choose the check mark icon next to the patch group and choose Close.
      Screenshot of modifying the patch group

    Define a maintenance window

    Now that you have successfully set up a role and have associated a patch baseline with your EC2 instance, you will define a maintenance window so that you can control when your EC2 instances should receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your EC2 instances do not all reboot at the same time. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs.

    To define a maintenance window:

    1. Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Maintenance Windows. Choose Create a Maintenance Window.
      Screenshot of starting to create a maintenance window in the Systems Manager console
    2. Select the Cron schedule builder to define the schedule for the maintenance window. In the example in the following screenshot, the maintenance window will start every Saturday at 10:00 P.M. UTC.
    3. To specify when your maintenance window will end, specify the duration. In this example, the four-hour maintenance window will end on the following Sunday morning at 2:00 A.M. UTC (in other words, four hours after it started).
    4. Systems Manager completes all tasks that are in process, even if the maintenance window ends. In my example, I am choosing to prevent new tasks from starting within one hour of the end of my maintenance window because I estimated my patch operations might take longer than one hour to complete. Confirm the creation of the maintenance window by choosing Create maintenance window.
      Screenshot of completing all boxes in the maintenance window creation process
    5. After creating the maintenance window, you must register the EC2 instance to the maintenance window so that Systems Manager knows which EC2 instance it should patch in this maintenance window. To do so, choose Register new targets on the Targets tab of your newly created maintenance window. You can register your targets by using the same Patch Group tag you used before to associate the EC2 instance with the AWS-provided patch baseline.
      Screenshot of registering new targets
    6. Assign a task to the maintenance window that will install the operating system patches on your EC2 instance:
      1. Open Maintenance Windows in the EC2 console, select your previously created maintenance window, choose the Tasks tab, and choose Register run command task from the Register new task drop-down.
      2. Choose the AWS-RunPatchBaseline document from the list of available documents.
      3. For Parameters:
        1. For Role, choose the role you created previously (called MaintenanceWindowRole).
        2. For Execute on, specify how many EC2 instances Systems Manager should patch at the same time. If you have a large number of EC2 instances and want to patch all EC2 instances within the defined time, make sure this number is not too low. For example, if you have 1,000 EC2 instances, a maintenance window of 4 hours, and 2 hours’ time for patching, make this number at least 500.
        3. For Stop after, specify after how many errors Systems Manager should stop.
        4. For Operation, choose Install to make sure to install the patches.
          Screenshot of stipulating maintenance window parameters
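
    If you manage your infrastructure as code, steps 1 through 5 above can also be performed with the Systems Manager API. The following is a rough sketch with the AWS SDK for Java that creates the window and registers the Patch Group targets; registering the AWS-RunPatchBaseline task takes more parameters than shown here, so treat this only as a starting point.

    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
    import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
    import com.amazonaws.services.simplesystemsmanagement.model.CreateMaintenanceWindowRequest;
    import com.amazonaws.services.simplesystemsmanagement.model.RegisterTargetWithMaintenanceWindowRequest;
    import com.amazonaws.services.simplesystemsmanagement.model.Target;

    public class CreatePatchMaintenanceWindow {
        public static void main(String[] args) {
            AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();

            // Four-hour window every Saturday at 10:00 P.M. UTC, with no new tasks starting in the last hour
            String windowId = ssm.createMaintenanceWindow(new CreateMaintenanceWindowRequest()
                .withName("WindowsPatching")
                .withSchedule("cron(0 22 ? * SAT *)")
                .withDuration(4)
                .withCutoff(1)
                .withAllowUnassociatedTargets(false))
                .getWindowId();

            // Register every instance that carries the Patch Group tag used earlier in this post
            ssm.registerTargetWithMaintenanceWindow(new RegisterTargetWithMaintenanceWindowRequest()
                .withWindowId(windowId)
                .withResourceType("INSTANCE")
                .withTargets(new Target().withKey("tag:Patch Group").withValues("Windows Servers")));
        }
    }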

    Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. Note that if you don’t want to wait, you can adjust the schedule to run sooner by choosing Edit maintenance window on the Maintenance Windows page of Systems Manager. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed on the Maintenance Windows page of Systems Manager and select your maintenance window.

    Screenshot of the maintenance window successfully created

    Monitor patch compliance

    You also can see the overall patch compliance of all EC2 instances that are part of defined patch groups by choosing Patch Compliance under Systems Manager Services in the navigation pane of the EC2 console. You can filter by Patch Group to see how many EC2 instances within the selected patch group are up to date, how many EC2 instances are missing updates, and how many EC2 instances are in an error state.

    Screenshot of monitoring patch compliance

    In this section, you have set everything up for patch management on your instance. Now you know how to patch your EC2 instance in a controlled manner and how to check if your EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all EC2 instances you manage.

    Summary

    In Part 1 of this blog post, I have shown how to configure EC2 instances for use with Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also have shown how to use Systems Manager to keep your Microsoft Windows–based EC2 instances up to date. In Part 2 of this blog post tomorrow, I will show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any CVEs.

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the EC2 forum or the Amazon Inspector forum, or contact AWS Support.

    – Koen

    Capturing Custom, High-Resolution Metrics from Containers Using AWS Step Functions and AWS Lambda

    Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/capturing-custom-high-resolution-metrics-from-containers-using-aws-step-functions-and-aws-lambda/

    Contributed by Trevor Sullivan, AWS Solutions Architect

    When you deploy containers with Amazon ECS, are you gathering all of the key metrics so that you can correctly monitor the overall health of your ECS cluster?

    By default, ECS writes metrics to Amazon CloudWatch in 5-minute increments. For complex or large services, this may not be sufficient to make scaling decisions quickly. You may want to respond immediately to changes in workload or to identify application performance problems. Last July, CloudWatch announced support for high-resolution metrics, down to a per-second basis.

    These high-resolution metrics can be used to give you a clearer picture of the load and performance for your applications, containers, clusters, and hosts. In this post, I discuss how you can use AWS Step Functions, along with AWS Lambda, to cost effectively record high-resolution metrics into CloudWatch. You implement this solution using a serverless architecture, which keeps your costs low and makes it easier to troubleshoot the solution.
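
    Before walking through the setup, it may help to see what publishing a high-resolution metric looks like in code. The following is a hedged sketch in Java (the post's own Lambda function is not shown in this section); it reads the running task count from an ECS cluster and publishes it as a custom metric with a one-second storage resolution. The cluster name, namespace, and metric name are placeholders.

    import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
    import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
    import com.amazonaws.services.cloudwatch.model.MetricDatum;
    import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
    import com.amazonaws.services.cloudwatch.model.StandardUnit;
    import com.amazonaws.services.ecs.AmazonECS;
    import com.amazonaws.services.ecs.AmazonECSClientBuilder;
    import com.amazonaws.services.ecs.model.Cluster;
    import com.amazonaws.services.ecs.model.DescribeClustersRequest;

    public class PublishEcsHighResolutionMetric {
        public static void main(String[] args) {
            AmazonECS ecs = AmazonECSClientBuilder.defaultClient();
            AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

            // Placeholder cluster name
            Cluster cluster = ecs.describeClusters(new DescribeClustersRequest()
                .withClusters("example-cluster")).getClusters().get(0);

            // A storage resolution of 1 marks this datum as a high-resolution (per-second) metric
            cloudWatch.putMetricData(new PutMetricDataRequest()
                .withNamespace("Custom/ECS") // placeholder namespace
                .withMetricData(new MetricDatum()
                    .withMetricName("RunningTaskCount") // placeholder metric name
                    .withUnit(StandardUnit.Count)
                    .withValue(cluster.getRunningTasksCount().doubleValue())
                    .withStorageResolution(1)));
        }
    }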

    To show how this works, you retrieve some useful metric data from an ECS cluster running in the same AWS account and region (Oregon, us-west-2) as the Step Functions state machine and Lambda function. However, you can use this architecture to retrieve any custom application metrics from any resource in any AWS account and region.

    Why Step Functions?

    Step Functions enables you to orchestrate multi-step tasks in the AWS Cloud that run for any period of time, up to a year. Effectively, you’re building a blueprint for an end-to-end process. After it’s built, you can execute the process as many times as you want.

    For this architecture, you gather metrics from an ECS cluster, every five seconds, and then write the metric data to CloudWatch. After your ECS cluster metrics are stored in CloudWatch, you can create CloudWatch alarms to notify you. An alarm can also trigger an automated remediation activity such as scaling ECS services, when a metric exceeds a threshold defined by you.

    When you build a Step Functions state machine, you define the different states inside it as JSON objects. The bulk of the work in Step Functions is handled by the common task state, which invokes Lambda functions or Step Functions activities. There is also a built-in library of other useful states that allow you to control the execution flow of your program.

    One of the most useful state types in Step Functions is the parallel state. Each parallel state in your state machine can have one or more branches, each of which is executed in parallel. Another useful state type is the wait state, which waits for a period of time before moving to the next state.

    In this walkthrough, you combine these three states (parallel, wait, and task) to create a state machine that triggers a Lambda function, which then gathers metrics from your ECS cluster.

    Step Functions pricing

    This state machine is executed every minute, resulting in 60 executions per hour and 1,440 executions per day. Step Functions is billed per state transition, including the Start and End state transitions, which gives you approximately 37,440 state transitions per day. To reach this number, I'm using this estimated math:

    26 state transitions per execution x 60 executions per hour x 24 hours = 37,440 state transitions per day

    Based on current pricing, at $0.000025 per state transition, the daily cost of this metric gathering state machine would be $0.936.

    Step Functions offers an indefinite free tier of 4,000 state transitions every month. This benefit is available to all customers, not just customers who are still under the 12-month AWS Free Tier. For more information and cost example scenarios, see Step Functions pricing.

    Why Lambda?

    The goal is to capture metrics from an ECS cluster, and write the metric data to CloudWatch. This is a straightforward, short-running process that makes Lambda the perfect place to run your code. Lambda is one of the key services that makes up “Serverless” application architectures. It enables you to consume compute capacity only when your code is actually executing.

    The process of gathering metric data from ECS and writing it to CloudWatch takes a short period of time. In fact, my Lambda function execution time, while developing this post, was only about 250 milliseconds on average. For every five-second interval that occurs, I'm only using 1/20th of the compute time that I'd otherwise be paying for.

    Lambda pricing

    For billing purposes, Lambda execution time is rounded up to the nearest 100-ms interval. In general, based on the metrics that I observed during development, a 250-ms runtime would be billed at 300 ms. Here, I calculate the cost of this Lambda function executing on a daily basis.

    Assuming 31 days in each month, there would be 535,680 five-second intervals (31 days x 24 hours x 60 minutes x 12 five-second intervals = 535,680). The Lambda function is invoked every five-second interval, by the Step Functions state machine, and runs for a 300-ms period. At current Lambda pricing, for a 128-MB function, you would be paying approximately the following:

    Total compute

    Total executions = 535,680
    Total compute = total executions x (3 x $0.000000208 per 100 ms) = $0.334 per day

    Total requests

    Total requests = (535,680 / 1,000,000) x $0.20 per million requests = $0.11 per day

    Total Lambda Cost

    $0.11 requests + $0.334 compute time = $0.444 per day

    Similar to Step Functions, Lambda offers an indefinite free tier. For more information, see Lambda Pricing.

    Walkthrough

    In the following sections, I step through the process of configuring the solution just discussed. If you follow along, at a high level, you will:

    • Configure an IAM role and policy
    • Create a Step Functions state machine to control metric gathering execution
    • Create a metric-gathering Lambda function
    • Configure a CloudWatch Events rule to trigger the state machine
    • Validate the solution

    Prerequisites

    You should already have an AWS account with a running ECS cluster. If you don’t have one running, you can easily deploy a Docker container on an ECS cluster using the AWS Management Console. In the example produced for this post, I use an ECS cluster running Windows Server (currently in beta), but either a Linux or Windows Server cluster works.

    Create an IAM role and policy

    First, create an IAM role and policy that enables Step Functions, Lambda, and CloudWatch to communicate with each other.

    • The CloudWatch Events rule needs permissions to trigger the Step Functions state machine.
    • The Step Functions state machine needs permissions to trigger the Lambda function.
    • The Lambda function needs permissions to query ECS and then write to CloudWatch Logs and metrics.

    When you create the state machine, Lambda function, and CloudWatch Events rule, you assign this role to each of those resources. Upon execution, each of these resources assumes the specified role and executes using the role’s permissions.

    1. Open the IAM console.
    2. Choose Roles, and then choose Create New Role.
    3. For Role Name, enter WriteMetricFromStepFunction.
    4. Choose Save.

    Create the IAM role trust relationship

    The trust relationship (also known as the assume role policy document) for your IAM role looks like the following JSON document. As you can see from the document, your IAM role needs to trust the Lambda, CloudWatch Events, and Step Functions services. By configuring your role to trust these services, they can assume this role and inherit the role permissions.

    1. Open the IAM console.
    2. Choose Roles and select the IAM role previously created.
    3. Choose the Trust Relationships tab, and then choose Edit Trust Relationship.
    4. Enter the following trust policy text and choose Save.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        },
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "events.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        },
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "states.us-west-2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

    Create an IAM policy

    After you’ve finished configuring your role’s trust relationship, grant the role access to the other AWS resources that make up the solution.

    The IAM policy is what gives your IAM role permissions to access various resources. You must explicitly whitelist the specific resources to which your role has access, because the default IAM behavior is to deny access to any AWS resources.

    I’ve tried to keep this policy document as generic as possible, without allowing permissions to be too open. If the name of your ECS cluster is different than the one in the example policy below, make sure that you update the policy document before attaching it to your IAM role. You can attach this policy as an inline policy, instead of creating the policy separately first. However, either approach is valid.

    1. Open the IAM console.
    2. Select the IAM role, and choose Permissions.
    3. Choose Add in-line policy.
    4. Choose Custom Policy and then enter the following policy. The inline policy name does not matter.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [ "logs:*" ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [ "cloudwatch:PutMetricData" ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [ "states:StartExecution" ],
                "Resource": [
                    "arn:aws:states:*:*:stateMachine:WriteMetricFromStepFunction"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [ "lambda:InvokeFunction" ],
                "Resource": "arn:aws:lambda:*:*:function:WriteMetricFromStepFunction"
            },
            {
                "Effect": "Allow",
                "Action": [ "ecs:Describe*" ],
                "Resource": "arn:aws:ecs:*:*:cluster/ECSEsgaroth"
            }
        ]
    }

    Create a Step Functions state machine

    In this section, you create a Step Functions state machine that invokes the metric-gathering Lambda function every five seconds, for a one-minute period. If you divide a minute (60 seconds) into five-second intervals, you get 12. Based on this math, you create 12 branches, in a single parallel state, in the state machine. Each branch triggers the metric-gathering Lambda function at a different five-second marker throughout the one-minute period. After all of the parallel branches finish executing, the Step Functions execution completes and another begins.

    Follow these steps to create your Step Functions state machine:

    1. Open the Step Functions console.
    2. Choose Dashboard, and then choose Create State Machine.
    3. For State Machine Name, enter WriteMetricFromStepFunction.
    4. Enter the state machine code below into the editor. Make sure that you insert your own AWS account ID for every instance of “676655494xxx”.
    5. Choose Create State Machine.
    6. Select the WriteMetricFromStepFunction IAM role that you previously created.
    {
        "Comment": "Writes ECS metrics to CloudWatch every five seconds, for a one-minute period.",
        "StartAt": "ParallelMetric",
        "States": {
          "ParallelMetric": {
            "Type": "Parallel",
            "Branches": [
              {
                "StartAt": "WriteMetricLambda",
                "States": {
                 	"WriteMetricLambda": {
                      "Type": "Task",
    				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
        	  {
                "StartAt": "WaitFive",
                "States": {
                	"WaitFive": {
                		"Type": "Wait",
                		"Seconds": 5,
                		"Next": "WriteMetricLambdaFive"
              		},
                 	"WriteMetricLambdaFive": {
                      "Type": "Task",
    				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
        	  {
                "StartAt": "WaitTen",
                "States": {
                	"WaitTen": {
                		"Type": "Wait",
                		"Seconds": 10,
                		"Next": "WriteMetricLambda10"
              		},
                 	"WriteMetricLambda10": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
        	  {
                "StartAt": "WaitFifteen",
                "States": {
                	"WaitFifteen": {
                		"Type": "Wait",
                		"Seconds": 15,
                		"Next": "WriteMetricLambda15"
              		},
                 	"WriteMetricLambda15": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait20",
                "States": {
                	"Wait20": {
                		"Type": "Wait",
                		"Seconds": 20,
                		"Next": "WriteMetricLambda20"
              		},
                 	"WriteMetricLambda20": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait25",
                "States": {
                	"Wait25": {
                		"Type": "Wait",
                		"Seconds": 25,
                		"Next": "WriteMetricLambda25"
              		},
                 	"WriteMetricLambda25": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait30",
                "States": {
                	"Wait30": {
                		"Type": "Wait",
                		"Seconds": 30,
                		"Next": "WriteMetricLambda30"
              		},
                 	"WriteMetricLambda30": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait35",
                "States": {
                	"Wait35": {
                		"Type": "Wait",
                		"Seconds": 35,
                		"Next": "WriteMetricLambda35"
              		},
                 	"WriteMetricLambda35": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait40",
                "States": {
                	"Wait40": {
                		"Type": "Wait",
                		"Seconds": 40,
                		"Next": "WriteMetricLambda40"
              		},
                 	"WriteMetricLambda40": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait45",
                "States": {
                	"Wait45": {
                		"Type": "Wait",
                		"Seconds": 45,
                		"Next": "WriteMetricLambda45"
              		},
                 	"WriteMetricLambda45": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait50",
                "States": {
                	"Wait50": {
                		"Type": "Wait",
                		"Seconds": 50,
                		"Next": "WriteMetricLambda50"
              		},
                 	"WriteMetricLambda50": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              },
              {
                "StartAt": "Wait55",
                "States": {
                	"Wait55": {
                		"Type": "Wait",
                		"Seconds": 55,
                		"Next": "WriteMetricLambda55"
              		},
                 	"WriteMetricLambda55": {
                      "Type": "Task",
                      "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                      "End": true
                    } 
                }
              }
            ],
            "End": true
          }
      }
    }

    Now you’ve got a shiny new Step Functions state machine! However, you might ask yourself, “After the state machine has been created, how does it get executed?” Before I answer that question, create the Lambda function that writes the custom metric, and then we’ll get the end-to-end process moving.

    Create a Lambda function

    The meaty part of the solution is a Lambda function, written for the Python 3.6 runtime, that retrieves metric values from ECS and then writes them to CloudWatch. This Lambda function is what the Step Functions state machine triggers every five seconds via its Task states. Key points to remember:

    The Lambda function needs permission to:

    • Write CloudWatch metrics (PutMetricData API).
    • Retrieve metrics from ECS clusters (DescribeClusters API).
    • Write StdOut to CloudWatch Logs.

    Boto3, the AWS SDK for Python, is included in the Lambda execution environment for Python 2.x and 3.x.

    Because Lambda includes the AWS SDK, you don’t have to worry about packaging it up and uploading it to Lambda. You can focus on writing code and automatically take a dependency on boto3.

    As for permissions, you’ve already created the IAM role and attached a policy to it that enables your Lambda function to access the necessary API actions. When you create your Lambda function, make sure that you select the correct IAM role, to ensure it is invoked with the correct permissions.

    The following Lambda function code is generic. So how does the Lambda function know which ECS cluster to gather metrics for? Your Step Functions state machine automatically passes in its state to the Lambda function. When you create your CloudWatch Events rule, you specify a simple JSON object that passes the desired ECS cluster name into your Step Functions state machine, which then passes it to the Lambda function.

    Use the following property values as you create your Lambda function:

    Function Name: WriteMetricFromStepFunction
    Description: This Lambda function retrieves metric values from an ECS cluster and writes them to Amazon CloudWatch.
    Runtime: Python 3.6
    Memory: 128 MB
    IAM Role: WriteMetricFromStepFunction

    import boto3
    
    def handler(event, context):
        cw = boto3.client('cloudwatch')
        ecs = boto3.client('ecs')
        print('Got boto3 client objects')
        
        Dimension = {
            'Name': 'ClusterName',
            'Value': event['ECSClusterName']
        }
    
        cluster = get_ecs_cluster(ecs, Dimension['Value'])
        
        cw_args = {
           'Namespace': 'ECS',
           'MetricData': [
               {
                   'MetricName': 'RunningTask',
                   'Dimensions': [ Dimension ],
                   'Value': cluster['runningTasksCount'],
                   'Unit': 'Count',
                   'StorageResolution': 1
               },
               {
                   'MetricName': 'PendingTask',
                   'Dimensions': [ Dimension ],
                   'Value': cluster['pendingTasksCount'],
                   'Unit': 'Count',
                   'StorageResolution': 1
               },
               {
                   'MetricName': 'ActiveServices',
                   'Dimensions': [ Dimension ],
                   'Value': cluster['activeServicesCount'],
                   'Unit': 'Count',
                   'StorageResolution': 1
               },
               {
                   'MetricName': 'RegisteredContainerInstances',
                   'Dimensions': [ Dimension ],
                   'Value': cluster['registeredContainerInstancesCount'],
                   'Unit': 'Count',
                   'StorageResolution': 1
               }
            ]
        }
        cw.put_metric_data(**cw_args)
        print('Finished writing metric data')
        
    def get_ecs_cluster(client, cluster_name):
        cluster = client.describe_clusters(clusters = [ cluster_name ])
        print('Retrieved cluster details from ECS')
        return cluster['clusters'][0]

    Create the CloudWatch Events rule

    Now you’ve created an IAM role and policy, Step Functions state machine, and Lambda function. How do these components actually start communicating with each other? The final step in this process is to set up a CloudWatch Events rule that triggers your metric-gathering Step Functions state machine every minute. You have two choices for your CloudWatch Events rule expression: rate or cron. In this example, use the cron expression.

    A couple key learning points from creating the CloudWatch Events rule:

    • You can specify one or more targets, of different types (for example, Lambda function, Step Functions state machine, SNS topic, and so on).
    • You’re required to specify an IAM role with permissions to trigger your target.
      NOTE: This applies only to certain types of targets, including Step Functions state machines.
    • Each target that supports IAM roles can be triggered using a different IAM role, in the same CloudWatch Events rule.
    • Optional: You can provide custom JSON that is passed to your target Step Functions state machine as input.

    Follow these steps to create the CloudWatch Events rule:

    1. Open the CloudWatch console.
    2. Choose Events, Rules, Create Rule.
    3. Select Schedule, Cron Expression, and then enter the following rule:
      0/1 * * * ? *
    4. Choose Add Target, Step Functions State Machine, WriteMetricFromStepFunction.
    5. For Configure Input, select Constant (JSON Text).
    6. Enter the following JSON input, which is passed to Step Functions, while changing the cluster name accordingly:
      { "ECSClusterName": "ECSEsgaroth" }
    7. Choose Use Existing Role, WriteMetricFromStepFunction (the IAM role that you previously created).

    After you’ve completed these steps, your screen should look similar to this:
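
    If you’d rather script the rule instead of clicking through the console, a minimal boto3 sketch of the same configuration might look like the following. The rule name and the state machine and role ARNs are placeholders:

    import json
    import boto3

    events = boto3.client('events', region_name='us-west-2')

    # Create (or update) a rule that fires once per minute.
    events.put_rule(
        Name='WriteMetricEveryMinute',
        ScheduleExpression='cron(0/1 * * * ? *)',
        State='ENABLED')

    # Point the rule at the state machine and pass in the cluster name.
    events.put_targets(
        Rule='WriteMetricEveryMinute',
        Targets=[{
            'Id': '1',
            'Arn': 'arn:aws:states:us-west-2:123456789012:stateMachine:WriteMetricFromStepFunction',
            'RoleArn': 'arn:aws:iam::123456789012:role/WriteMetricFromStepFunction',
            'Input': json.dumps({'ECSClusterName': 'ECSEsgaroth'})
        }])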

    Validate the solution

    Now that you have finished implementing the solution to gather high-resolution metrics from ECS, validate that it’s working properly.

    1. Open the CloudWatch console.
    2. Choose Metrics.
    3. Choose custom and select the ECS namespace.
    4. Choose the ClusterName metric dimension.

    You should see your metrics listed below.
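
    You can also confirm that the metrics are arriving from code; the following small boto3 sketch lists the custom ECS metrics by dimension:

    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # List the custom metrics written by the Lambda function under the ECS namespace.
    response = cloudwatch.list_metrics(
        Namespace='ECS',
        Dimensions=[{'Name': 'ClusterName'}])
    for metric in response['Metrics']:
        print(metric['MetricName'], metric['Dimensions'])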

    Troubleshoot configuration issues

    If you aren’t receiving the expected ECS cluster metrics in CloudWatch, check for the following common configuration issues. Review the earlier procedures to make sure that the resources were properly configured.

    • The IAM role’s trust relationship is incorrectly configured.
      Make sure that the IAM role trusts Lambda, CloudWatch Events, and Step Functions in the correct region.
    • The IAM role does not have the correct policies attached to it.
      Make sure that you have copied the IAM policy correctly as an inline policy on the IAM role.
    • The CloudWatch Events rule is not triggering new Step Functions executions.
      Make sure that the target configuration on the rule has the correct Step Functions state machine and IAM role selected.
    • The Step Functions state machine is being executed, but failing part way through.
      Examine the detailed error message on the failed state within the failed Step Functions execution. It’s possible that the IAM role does not have permissions to trigger the target Lambda function, that the target Lambda function may not exist, or that the Lambda function failed to complete successfully due to invalid permissions.

    Although the above list covers several potential configuration issues, it is not comprehensive. Make sure that you understand how the services are connected to each other, how permissions are granted through IAM policies, and how IAM trust relationships work.

    Conclusion

    In this post, you implemented a serverless solution to gather and record high-resolution application metrics from containers running on Amazon ECS into CloudWatch. The solution consists of a Step Functions state machine, a Lambda function, a CloudWatch Events rule, and an IAM role and policy. The data that you gather from this solution helps you rapidly identify issues with an ECS cluster.

    To gather high-resolution metrics from any service, modify your Lambda function to gather the correct metrics from your target. If you prefer not to use Python, you can implement a Lambda function using one of the other supported runtimes, including Node.js, Java, or .NET Core. However, this post should give you the fundamentals of capturing high-resolution metrics in CloudWatch.

    If you found this post useful, or have questions, please comment below.

    Automating Security Group Updates with AWS Lambda

    Post Syndicated from Ian Scofield original https://aws.amazon.com/blogs/compute/automating-security-group-updates-with-aws-lambda/

    Customers often use public endpoints to perform cross-region replication or other application layer communication to remote regions. But a common problem is: how do you protect these endpoints? It can be tempting to open up the security groups to the world due to the complexity of keeping security groups in sync across regions with dynamically changing infrastructure.

    Consider a situation where you are running large clusters of instances in different regions that all require internode connectivity. One approach would be to use a VPN tunnel between regions to provide a secure tunnel over which to send your traffic. A good example of this is the Transit VPC Solution, which is a published AWS solution to help customers quickly get up and running. However, this adds additional cost and complexity to your solution due to the newly required additional infrastructure.

    Another approach, which I’ll explore in this post, is to restrict access to the nodes by whitelisting the public IP addresses of your hosts in the opposite region. Today, I’ll outline a solution that allows for cross-region security group updates, can handle remote region failures, and supports external actions such as manually terminating instances or adding instances to an existing Auto Scaling group.

    Solution overview

    The overview of this solution is diagrammed below. Although this post covers limiting access to your instances, you should still implement encryption to protect your data in transit.

    If your entire infrastructure is running in a single region, you can reference a security group as the source, allowing your IP addresses to change without any updates required. However, if you’re going across the public internet between regions to perform things like application-level traffic or cross-region replication, this is no longer an option. Security groups are regional. When you go across regions it can be tempting to drop security to enable this communication.

    Although using an Elastic IP address can provide you with a static IP address that you can define as a source for your security groups, this may not always be feasible, especially when automatic scaling is desired.

    In this example scenario, you have a distributed database that requires full internode communication for replication. If you place a cluster in us-east-1 and us-west-2, you must provide a secure method of communication between the two. Because the database uses cloud best practices, you can add or remove nodes as the load varies.

    To start the process of updating your security groups, you must know when an instance has come online to trigger your workflow. Auto Scaling groups have the concept of lifecycle hooks that enable you to perform custom actions as the group launches or terminates instances.

    When Auto Scaling begins to launch or terminate an instance, it puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance remains in this state while you perform your actions, until you tell Auto Scaling to Continue or Abandon, or until the timeout period ends. A lifecycle hook can trigger a CloudWatch event, publish to an Amazon SNS topic, or send to an Amazon SQS queue. For this example, you use CloudWatch Events to trigger an AWS Lambda function that updates an Amazon DynamoDB table.
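
    In this solution the lifecycle hooks are created for you by the template described later, but to make the mechanics concrete, here is a minimal boto3 sketch of registering a launching hook and signaling CONTINUE once the work is done. The group name, hook name, and timeout are illustrative assumptions:

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Register a hook that pauses newly launching instances in Pending:Wait.
    autoscaling.put_lifecycle_hook(
        LifecycleHookName='hook-launching',
        AutoScalingGroupName='my-cluster-asg',
        LifecycleTransition='autoscaling:EC2_INSTANCE_LAUNCHING',
        HeartbeatTimeout=300,      # how long the instance waits for a response
        DefaultResult='ABANDON')   # what happens if nothing completes the action

    def release_instance(event):
        # Called by the Lambda function after the security groups are updated;
        # tells Auto Scaling to let the instance continue launching.
        detail = event['detail']
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail['LifecycleHookName'],
            AutoScalingGroupName=detail['AutoScalingGroupName'],
            LifecycleActionToken=detail['LifecycleActionToken'],
            LifecycleActionResult='CONTINUE')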

    Component breakdown

    Here’s a quick breakdown of the components involved in this solution:

    • Lambda function
    • CloudWatch event
    • DynamoDB table

    Lambda function

    The Lambda function automatically updates your security groups in the following way (a minimal code sketch follows this list):

    1. Determines whether the change was triggered by your Auto Scaling group lifecycle hook or whether the function was manually invoked for the “true up” functionality, which I discuss later in this post.
    2. Describes the instances in the Auto Scaling group and obtains the public IP address of each instance.
    3. Updates both the local and remote DynamoDB tables.
    4. Compares the list of public IP addresses for both the local and remote clusters with what’s already in the local region security group, and updates that security group.
    5. Compares the list of public IP addresses for both the local and remote clusters with what’s already in the remote region security group, and updates that security group.
    6. Signals CONTINUE back to the lifecycle hook.
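
    The following is a minimal boto3 sketch of steps 2, 4, and 5 for a single region and a single TCP port. The group names, port, and overall structure are illustrative assumptions, not the actual function shipped with the template described later (which also merges in the remote region’s addresses from DynamoDB):

    import boto3

    autoscaling = boto3.client('autoscaling')
    ec2 = boto3.client('ec2')

    def get_public_ips(asg_name):
        # Step 2: look up the instances currently in the Auto Scaling group
        # and collect their public IP addresses.
        groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
        instance_ids = [i['InstanceId'] for i in groups['AutoScalingGroups'][0]['Instances']]
        if not instance_ids:
            return set()
        reservations = ec2.describe_instances(InstanceIds=instance_ids)['Reservations']
        return {inst['PublicIpAddress']
                for r in reservations
                for inst in r['Instances']
                if 'PublicIpAddress' in inst}

    def sync_security_group(group_id, desired_ips, port):
        # Steps 4 and 5: diff the desired /32 CIDRs against what the security
        # group currently allows, then add and remove rules accordingly.
        group = ec2.describe_security_groups(GroupIds=[group_id])['SecurityGroups'][0]
        current = {r['CidrIp']
                   for perm in group['IpPermissions']
                   for r in perm.get('IpRanges', [])}
        desired = {ip + '/32' for ip in desired_ips}

        to_add = [{'CidrIp': cidr} for cidr in sorted(desired - current)]
        to_remove = [{'CidrIp': cidr} for cidr in sorted(current - desired)]

        if to_add:
            ec2.authorize_security_group_ingress(
                GroupId=group_id,
                IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': port,
                                'ToPort': port, 'IpRanges': to_add}])
        if to_remove:
            ec2.revoke_security_group_ingress(
                GroupId=group_id,
                IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': port,
                                'ToPort': port, 'IpRanges': to_remove}])

    # Example usage: keep the local security group in sync on a placeholder port.
    sync_security_group('sg-0123456789abcdef0', get_public_ips('my-cluster-asg'), 9042)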

    CloudWatch event

    The CloudWatch event triggers when an instance passes through either the launching or terminating states. When the Lambda function gets invoked, it receives an event that looks like the following:

    {
    	"account": "123456789012",
    	"region": "us-east-1",
    	"detail": {
    		"LifecycleHookName": "hook-launching",
    		"AutoScalingGroupName": "",
    		"LifecycleActionToken": "33965228-086a-4aeb-8c26-f82ed3bef495",
    		"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    		"EC2InstanceId": "i-017425ec54f22f994"
    	},
    	"detail-type": "EC2 Instance-launch Lifecycle Action",
    	"source": "aws.autoscaling",
    	"version": "0",
    	"time": "2017-05-03T02:20:59Z",
    	"id": "cb930cf8-ce8b-4b6c-8011-af17966eb7e2",
    	"resources": [
    		"arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:d3fe9d96-34d0-4c62-b9bb-293a41ba3765:autoScalingGroupName/"
    	]
    }

    DynamoDB table

    You use DynamoDB to store lists of remote IP addresses in a local table that is updated by the opposite region as a failsafe source of truth. Although you can describe your Auto Scaling group for the local region, you must maintain a list of IP addresses for the remote region.

    To minimize the number of describe calls and prevent an issue in the remote region from blocking your local scaling actions, we keep a list of the remote IP addresses in a local DynamoDB table. Each Lambda function in each region is responsible for updating the public IP addresses of its Auto Scaling group for both the local and remote tables.

    As with all the infrastructure in this solution, there is a DynamoDB table in each region, and the two tables mirror each other. For example, the following screenshot shows a sample DynamoDB table. The Lambda function in us-east-1 would update the us-east-1 entry in the tables in both regions.

    By updating a DynamoDB table in both regions, it allows the local region to gracefully handle issues with the remote region, which would otherwise prevent your ability to scale locally. If the remote region becomes inaccessible, you have a copy of the latest configuration from the table that you can use to continue to sync with your security groups. When the remote region comes back online, it pushes its updated public IP addresses to the DynamoDB table. The security group is updated to reflect the current status by the remote Lambda function.
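
    A minimal sketch of that table update might look like the following, assuming a table in each region named ClusterIpAddresses with a Region partition key (both names are illustrative, not the template’s actual schema):

    import boto3

    def publish_ips(local_region, remote_region, table_name, ip_addresses):
        # Write the local Auto Scaling group's current public IP addresses
        # into both the local and the remote DynamoDB tables.
        for region in (local_region, remote_region):
            table = boto3.resource('dynamodb', region_name=region).Table(table_name)
            table.put_item(Item={
                'Region': local_region,          # which cluster these addresses belong to
                'IpAddresses': sorted(ip_addresses)
            })

    publish_ips('us-east-1', 'us-west-2', 'ClusterIpAddresses',
                {'203.0.113.10', '203.0.113.11'})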

     

    Walkthrough

    Note: All of the following steps are performed in both regions. The Launch Stack buttons will default to the us-east-1 region.

    Here’s a quick overview of the steps involved in this process:

    1. An instance is launched or terminated, which triggers an Auto Scaling group lifecycle hook, triggering the Lambda function via CloudWatch Events.
    2. The Lambda function retrieves the list of public IP addresses for all instances in the local region Auto Scaling group.
    3. The Lambda function updates the local and remote region DynamoDB tables with the public IP addresses just received for the local Auto Scaling group.
    4. The Lambda function updates the local region security group with the public IP addresses, removing and adding to ensure that it mirrors what is present for the local and remote Auto Scaling groups.
    5. The Lambda function updates the remote region security group with the public IP addresses, removing and adding to ensure that it mirrors what is present for the local and remote Auto Scaling groups.

    Prerequisites

    To deploy this solution, you need to have Auto Scaling groups, launch configurations, and a base security group in both regions. To expedite this process, this CloudFormation template can be launched in both regions.

    Step 1: Launch the AWS SAM template in the first region

    To make the deployment process easy, I’ve created an AWS Serverless Application Model (AWS SAM) template, which is a new specification that makes it easier to manage and deploy serverless applications on AWS. This template creates the following resources:

    • A Lambda function, to perform the various security group actions
    • A DynamoDB table, to track the state of the local and remote Auto Scaling groups
    • Auto Scaling group lifecycle hooks for instance launching and terminating
    • A CloudWatch event, to track the EC2 Instance-Launch Lifecycle-Action and EC2 Instance-terminate Lifecycle-Action events
    • A pointer from the CloudWatch event to the Lambda function, and the necessary permissions

    Download the template from here or click to launch.

    Upon launching the template, you’ll be presented with a list of parameters, which includes the local and remote names for your Auto Scaling groups, the AWS Regions, the security group IDs, the DynamoDB table names, and the location of the Lambda function code. Because this is the first region you’re launching the stack in, fill out all the parameters except for the RemoteTable parameter, as that table hasn’t been created yet (you fill this in later).

    Step 2: Test the local region

    After the stack has finished launching, you can test the local region. Open the EC2 console and find the Auto Scaling group that was created when launching the prerequisite stack. Change the desired number of instances from 0 to 1.

    For both regions, check your security group to verify that the public IP address of the instance created is now in the security group.

    Local region:

    Remote region:

    Now, change the desired number of instances for your group back to 0 and verify that the rules are properly removed.

    Local region:

    Remote region:

    Step 3: Launch in the remote region

    When you deploy a Lambda function using CloudFormation, the Lambda zip file needs to reside in the same region you are launching the template. Once you choose your remote region, create an Amazon S3 bucket and upload the Lambda zip file there. Next, go to the remote region and launch the same SAM template as before, but make sure you update the CodeBucket and CodeKey parameters. Also, because this is the second launch, you now have all the values and can fill out all the parameters, specifically the RemoteTable value.

     

    Step 4: Update the local region Lambda environment variable

    When you originally launched the template in the local region, you didn’t have the name of the DynamoDB table for the remote region, because you hadn’t created it yet. Now that you have launched the remote template, you can perform a CloudFormation stack update on the initial SAM template. This populates the remote DynamoDB table name into the initial Lambda function’s environment variables.

    In the CloudFormation console in the initial region, select the stack. Under Actions, choose Update Stack, and select the SAM template used for both regions. Under Parameters, populate the remote DynamoDB table name, as shown below. Choose Next and let the stack update complete. This updates your Lambda function and completes the setup process.

     

    Step 5: Final testing

    You now have everything fully configured and in place to trigger security group changes based on instances being added or removed to your Auto Scaling groups in both regions. Test this by changing the desired capacity of your group in both regions.

    True up functionality
    If an instance is manually added or removed from the Auto Scaling group, the lifecycle hooks don’t get triggered. To account for this, the Lambda function supports a “true up” functionality in which the function can be manually invoked. If you paste in the following JSON text for your test event, it kicks off the entire workflow. For added peace of mind, you can also have this function fire via a CloudWatch event with a cron expression for nearly continuous checking.

    {
    	"detail": {
    		"AutoScalingGroupName": "<your ASG name>"
    	},
    	"trueup":true
    }
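
    You can also perform the same manual invocation from code. The following is a small boto3 sketch; the function name and Auto Scaling group name are placeholders for whatever the template created in your account:

    import json
    import boto3

    lambda_client = boto3.client('lambda', region_name='us-east-1')

    # Kick off the "true up" workflow outside of any lifecycle hook.
    response = lambda_client.invoke(
        FunctionName='UpdateSecurityGroups',
        Payload=json.dumps({
            'detail': {'AutoScalingGroupName': 'my-cluster-asg'},
            'trueup': True
        })
    )
    print(response['StatusCode'])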

    Extra credit

    Now that all the resources are created in both regions, go back and tighten the policy to use resource-level permissions for specific security groups, Auto Scaling groups, and the DynamoDB tables.

    Although this post is centered around using public IP addresses for your instances, you could instead use a VPN between regions. In this case, you would still be able to use this solution to scope down the security groups to the cluster instances. However, the code would need to be modified to support private IP addresses.

     

    Conclusion

    At this point, you now have a mechanism in place that captures when a new instance is added to or removed from your cluster and updates the security groups in both regions. This ensures that you are locking down your infrastructure securely by allowing access only to other cluster members.

    Keep in mind that this architecture (lifecycle hooks, CloudWatch event, Lambda function, and DynamoDB table) requires the infrastructure to be deployed in both regions so that synchronization goes both ways.

    Because this Lambda function is modifying security group rules, it’s important to have an audit log of what has been modified and who is modifying them. The out-of-the-box function provides logs in CloudWatch for what IP addresses are being added and removed for which ports. As these are all API calls being made, they are logged in CloudTrail and can be traced back to the IAM role that you created for your lifecycle hooks. This can provide historical data that can be used for troubleshooting or auditing purposes.

    Security is paramount at AWS. We want to ensure that customers are protecting access to their resources. This solution helps you keep your security groups in both regions automatically in sync with your Auto Scaling group resources. Let us know if you have any questions or other solutions you’ve come up with!

    Backing Up Linux to Backblaze B2 with Duplicity and Restic

    Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

    Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

    If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

    In this post, we’ll discuss two popular command line and open-source programs: one older, Duplicity, and a new player, Restic.

    Old School vs. New School

    We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

    Old School (Duplicity)

    In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

    This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.

    It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup then contains what’s changed since the previous backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

    This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

    With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

    Duplicity is available on many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions, including Ubuntu, Debian, and Fedora. It can also be used on Windows under Cygwin.

    We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.

    New School (Restic)

    With the arrival of non-sequential storage medium, such as disk drives, and new ideas such as deduplication, comes the new school approach, which is used by Restic. Data can be written and changed anywhere on the storage medium. This efficiency comes largely through the use of deduplication. Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.

    Restic is a recently available multi-platform command line backup software program that is designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP Rest server, and a number of cloud storage providers, including Backblaze B2.

    Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.

    Restic assumes that the storage location for the repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security provided by the storage provider.

    Restic is free, open-source software, licensed under the BSD 2-Clause License and actively developed on GitHub.

    There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

    Coincidentally with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.

    Which is Right for You?

    While Duplicity is a popular, widely-available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of modern storage media used by cloud storage providers.

    Tell us how you’re backing up Linux

    Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

    The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Creating a Cost-Efficient Amazon ECS Cluster for Scheduled Tasks

    Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/creating-a-cost-efficient-amazon-ecs-cluster-for-scheduled-tasks/

    Madhuri Peri
    Sr. DevOps Consultant

    When you use Amazon Relational Database Service (Amazon RDS), depending on the logging levels on the RDS instances and the volume of transactions, you could generate a lot of log data. To ensure that everything is running smoothly, many customers search for log error patterns using different log aggregation and visualization systems, such as Amazon Elasticsearch Service, Splunk, or another tool of their choice. A module needs to periodically retrieve the RDS logs using the SDK, and then send them to Amazon S3. From there, you can stream them to your log aggregation tool.

    One option is writing an AWS Lambda function to retrieve the log files. However, because of the time that this function needs to execute, depending on the volume of log files retrieved and transferred, it is possible that Lambda could time out on many instances.  Another approach is launching an Amazon EC2 instance that runs this job periodically. However, this would require you to run an EC2 instance continuously, not an optimal use of time or money.

    Using the new Amazon CloudWatch integration with Amazon EC2 Container Service (Amazon ECS), you can trigger this job to run in a container on an existing Amazon ECS cluster. Additionally, this approach allows you to reduce costs by running containers on a fleet of Spot Instances.

    In this post, I will show you how to use the new scheduled tasks (cron) feature in Amazon ECS and launch tasks using CloudWatch events, while leveraging Spot Fleet to maximize availability and cost optimization for containerized workloads.

    Architecture

    The following diagram shows how the various components described here schedule a task that retrieves log files from Amazon RDS database instances and deposits the logs into an S3 bucket.

    Amazon ECS cluster container instances use Spot Fleet, which is a perfect match for a workload that can run whenever capacity is available. This reduces cluster costs.

    The task definition defines which Docker image to retrieve from the Amazon EC2 Container Registry (Amazon ECR) repository and run on the Amazon ECS cluster.

    The container image has Python code functions that make AWS API calls using boto3. It iterates over the RDS database instances, retrieves the logs, and deposits them in the S3 bucket. Many customers then have these logs delivered to their centralized log store. CloudWatch Events defines the schedule for when the container task has to be launched.
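
    To give a rough idea of what that container code does, here is a simplified boto3 sketch. It is illustrative only, not the repository’s actual rdslogsshipper.py, and in the real task definition the bucket name arrives as an environment variable:

    import boto3

    rds = boto3.client('rds')
    s3 = boto3.client('s3')

    def ship_logs(bucket):
        # Iterate over every RDS instance and copy each of its log files to S3.
        for db in rds.describe_db_instances()['DBInstances']:
            instance_id = db['DBInstanceIdentifier']
            log_files = rds.describe_db_log_files(DBInstanceIdentifier=instance_id)
            for log in log_files['DescribeDBLogFiles']:
                body, marker = '', '0'
                while True:
                    # Log files are downloaded in portions; loop until complete.
                    portion = rds.download_db_log_file_portion(
                        DBInstanceIdentifier=instance_id,
                        LogFileName=log['LogFileName'],
                        Marker=marker)
                    body += portion.get('LogFileData') or ''
                    if not portion['AdditionalDataPending']:
                        break
                    marker = portion['Marker']
                s3.put_object(Bucket=bucket,
                              Key='{}/{}'.format(instance_id, log['LogFileName']),
                              Body=body.encode('utf-8'))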

    Walkthrough

    To provide the basic framework, we have built an AWS CloudFormation template that creates the following resources:

    • Amazon ECR repository for storing the Docker image to be used in the task definition
    • S3 bucket that holds the transferred logs
    • Task definition, with image name and S3 bucket as environment variables provided via input parameter
    • CloudWatch Events rule
    • Amazon ECS cluster
    • Amazon ECS container instances using Spot Fleet
    • IAM roles required for the container instance profiles

    Before you begin

    Ensure that Git, Docker, and the AWS CLI are installed on your computer.

    In your AWS account, instantiate one Amazon Aurora instance using the console. For more information, see Creating an Amazon Aurora DB Cluster.

    Implementation Steps

    1. Clone the code from GitHub that performs RDS API calls to retrieve the log files.
      git clone https://github.com/awslabs/aws-ecs-scheduled-tasks.git
    2. Build and tag the image.
      cd aws-ecs-scheduled-tasks/container-code/src && ls

      Dockerfile		rdslogsshipper.py	requirements.txt

      docker build -t rdslogsshipper .

      Sending build context to Docker daemon 9.728 kB
      Step 1 : FROM python:3
       ---> 41397f4f2887
      Step 2 : WORKDIR /usr/src/app
       ---> Using cache
       ---> 59299c020e7e
      Step 3 : COPY requirements.txt ./
       ---> 8c017e931c3b
      Removing intermediate container df09e1bed9f2
      Step 4 : COPY rdslogsshipper.py /usr/src/app
       ---> 099a49ca4325
      Removing intermediate container 1b1da24a6699
      Step 5 : RUN pip install --no-cache-dir -r requirements.txt
       ---> Running in 3ed98b30901d
      Collecting boto3 (from -r requirements.txt (line 1))
        Downloading boto3-1.4.6-py2.py3-none-any.whl (128kB)
      Collecting botocore (from -r requirements.txt (line 2))
        Downloading botocore-1.6.7-py2.py3-none-any.whl (3.6MB)
      Collecting s3transfer<0.2.0,>=0.1.10 (from boto3->-r requirements.txt (line 1))
        Downloading s3transfer-0.1.10-py2.py3-none-any.whl (54kB)
      Collecting jmespath<1.0.0,>=0.7.1 (from boto3->-r requirements.txt (line 1))
        Downloading jmespath-0.9.3-py2.py3-none-any.whl
      Collecting python-dateutil<3.0.0,>=2.1 (from botocore->-r requirements.txt (line 2))
        Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
      Collecting docutils>=0.10 (from botocore->-r requirements.txt (line 2))
        Downloading docutils-0.14-py3-none-any.whl (543kB)
      Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore->-r requirements.txt (line 2))
        Downloading six-1.10.0-py2.py3-none-any.whl
      Installing collected packages: six, python-dateutil, docutils, jmespath, botocore, s3transfer, boto3
      Successfully installed boto3-1.4.6 botocore-1.6.7 docutils-0.14 jmespath-0.9.3 python-dateutil-2.6.1 s3transfer-0.1.10 six-1.10.0
       ---> f892d3cb7383
      Removing intermediate container 3ed98b30901d
      Step 6 : COPY . .
       ---> ea7550c04fea
      Removing intermediate container b558b3ebd406
      Successfully built ea7550c04fea
    3. Run the CloudFormation stack and get the names for the Amazon ECR repo and S3 bucket. In the stack, choose Outputs.
    4. Open the ECS console and choose Repositories. The rdslogs repo has been created. Choose View Push Commands and follow the instructions to connect to the repository and push the image for the code that you built in Step 2. The screenshot shows the final result:
    5. Associate the CloudWatch scheduled task with the created Amazon ECS Task Definition, using a new CloudWatch event rule that is scheduled to run at intervals. The following rule is scheduled to run every 15 minutes:
      aws --profile default --region us-west-2 events put-rule --name demo-ecs-task-rule  --schedule-expression "rate(15 minutes)"

      {
          "RuleArn": "arn:aws:events:us-west-2:12345678901:rule/demo-ecs-task-rule"
      }
    6. CloudWatch requires IAM permissions to place a task on the Amazon ECS cluster when the CloudWatch event rule is executed, in addition to an IAM role that can be assumed by CloudWatch Events. This is done in three steps:
      1. Create the IAM role to be assumed by CloudWatch.
        aws --profile default --region us-west-2 iam create-role --role-name Test-Role --assume-role-policy-document file://event-role.json

        {
            "Role": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17", 
                    "Statement": [
                        {
                            "Action": "sts:AssumeRole", 
                            "Effect": "Allow", 
                            "Principal": {
                                "Service": "events.amazonaws.com"
                            }
                        }
                    ]
                }, 
                "RoleId": "AROAIRYYLDCVZCUACT7FS", 
                "CreateDate": "2017-07-14T22:44:52.627Z", 
                "RoleName": "Test-Role", 
                "Path": "/", 
                "Arn": "arn:aws:iam::12345678901:role/Test-Role"
            }
        }

        The following is an example of the event-role.json file used earlier:

        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                      "Service": "events.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }
      2. Create the IAM policy defining the ECS cluster and task definition. You need to get these values from the CloudFormation outputs and resources.
        aws --profile default --region us-west-2 iam create-policy --policy-name test-policy --policy-document file://event-policy.json

        {
            "Policy": {
                "PolicyName": "test-policy", 
                "CreateDate": "2017-07-14T22:51:20.293Z", 
                "AttachmentCount": 0, 
                "IsAttachable": true, 
                "PolicyId": "ANPAI7XDIQOLTBUMDWGJW", 
                "DefaultVersionId": "v1", 
                "Path": "/", 
                "Arn": "arn:aws:iam::123455678901:policy/test-policy", 
                "UpdateDate": "2017-07-14T22:51:20.293Z"
            }
        }

        The following is an example of the event-policy.json file used earlier:

        {
            "Version": "2012-10-17",
            "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "ecs:RunTask"
                  ],
                  "Resource": [
                      "arn:aws:ecs:*::task-definition/"
                  ],
                  "Condition": {
                      "ArnLike": {
                          "ecs:cluster": "arn:aws:ecs:*::cluster/"
                      }
                  }
              }
            ]
        }
      3. Attach the IAM policy to the role.
        aws --profile default --region us-west-2 iam attach-role-policy --role-name Test-Role --policy-arn arn:aws:iam::1234567890:policy/test-policy
    7. Associate the CloudWatch rule created earlier to place the task on the ECS cluster. The following command shows an example. Replace the AWS account ID and region with your settings.
      aws events put-targets --rule demo-ecs-task-rule --targets "Id"="1","Arn"="arn:aws:ecs:us-west-2:12345678901:cluster/test-cwe-blog-ecsCluster-15HJFWCH4SP67","EcsParameters"={"TaskDefinitionArn"="arn:aws:ecs:us-west-2:12345678901:task-definition/test-cwe-blog-taskdef:8"},"RoleArn"="arn:aws:iam::12345678901:role/Test-Role"

      {
          "FailedEntries": [], 
          "FailedEntryCount": 0
      }

    That’s it. The log transfer now runs on the defined schedule.

    To test this, open the Amazon ECS console, select the Amazon ECS cluster that you created, and then choose Tasks, Run New Task. Select the task definition created by the CloudFormation template, and the cluster should be selected automatically. As this runs, the S3 bucket should be populated with the RDS logs for the instance.

    Conclusion

    In this post, you’ve seen that the choices for workloads that need to run at a scheduled time include Lambda with CloudWatch Events or EC2 with cron. However, sometimes the job runs longer than the Lambda execution time limit, or a continuously running EC2 instance is not cost-effective.

    In such cases, you can schedule the tasks on an ECS cluster using CloudWatch rules. In addition, you can use a Spot Fleet cluster with Amazon ECS for cost-conscious workloads that do not have hard requirements on execution time or instance availability in the Spot Fleet. For more information, see Powering your Amazon ECS Cluster with Amazon EC2 Spot Instances and Scheduled Events.

    If you have questions or suggestions, please comment below.

    Darth Beats: Star Wars LEGO gets a musical upgrade

    Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/darth-beats/

    Dan Aldred, Raspberry Pi Certified Educator and creator of the website TeCoEd, has built Darth Beats by managing to fit a Pi Zero W and a Pimoroni Speaker pHAT into a LEGO Darth Vader alarm clock! The Pi force is strong with this one.

    Darth Beats MP3 Player

    Pimoroni Speaker pHAT and Raspberry Pi Zero W embedded into a Lego Darth Vader Alarm clock to create – “Darth Beats MP3 Player”. Video demonstrating all the features and functions of the project. Alarm Clock – https://goo.gl/VSMhG4 Speaker pHAT – https://shop.pimoroni.com/products/speaker-phat

    Darth Beats inspiration: I have a very good feeling about this!

    As we all know, anything you love gets better when you add something else you love: chocolate ice cream + caramel sauce, apple tart + caramel sauce, pizza + caramel sau— okay, maybe not anything, but you get what I’m saying.

    The formula, in the form of “LEGO + Star Wars”, applies to Dan’s LEGO Darth Vader alarm clock. His Darth Vader, however, was sitting around on a shelf, just waiting to be hacked into something even cooler. Then one day, inspiration struck: Dan decided to aim for exponential awesomeness by integrating Raspberry Pi and Pimoroni technology to turn Vader into an MP3 player.

    Darth Beats assembly: always tell me the mods!

    The space inside the LEGO device measures a puny 6×3×3 cm, so cramming in the Zero W and the pHAT was going to be a struggle. But Dan grabbed his dremel and set to work, telling himself to “do or do not. There is no try.”

    Darth Beats dremel

    I find your lack of space disturbing.

    He removed the battery compartment, and added two additional buttons in its place. Including the head, his Darth Beats has seven buttons, which means it is fully autonomous as a music player.

    Darth Beats back buttons

    Almost ready to play a silly remix of Yoda quotes

    Darth Beats can draw its power from a wall socket, or from a portable battery pack, as shown in Dan’s video. Dan used the GPIO Zero Python library to set up ‘on’ and ‘off’ switches, and buttons for skipping tracks and controlling volume.

    For more details on the build process, read his blog, and check out his video log:

    Making Darth Beats

    Short video showing you how I created the “Darth Beats MP3 Player”.

    Accessing Darth Beats: these are the songs you’re looking for

    When you press the ‘on’ switch, the Imperial March sounds before Darth Beats asks “What is thy bidding, my master?”. Then the device is ready to play music. Dan accomplished this by using Cron to run his scripts as soon as the Zero W boots up. MP3 files are played with the help of the Pygame library.
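
    For a rough idea of how those pieces fit together, here is a minimal Python sketch using GPIO Zero and Pygame. The GPIO pin numbers and file paths are assumptions for illustration, not Dan’s actual wiring or code:

    import pygame
    from gpiozero import Button
    from signal import pause

    pygame.mixer.init()
    playlist = ['/home/pi/music/imperial_march.mp3', '/home/pi/music/cantina_band.mp3']
    current = 0

    def play_current():
        pygame.mixer.music.load(playlist[current])
        pygame.mixer.music.play()

    def skip_track():
        global current
        current = (current + 1) % len(playlist)
        play_current()

    play_button = Button(17)   # assumed wiring: GPIO 17 starts playback
    skip_button = Button(27)   # assumed wiring: GPIO 27 skips to the next track
    play_button.when_pressed = play_current
    skip_button.when_pressed = skip_track

    play_current()   # greet the master as soon as the script starts
    pause()          # keep running; launched at boot by a cron @reboot entry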

    Of course, over time it would become boring to only be able to listen to songs that are stored on the Zero W. However, Dan got around this issue by accessing the Zero W remotely. He set up an online file upload system to add and remove MP3 files from the player. To do this, he used Droopy, a file-sharing server software package written by Pierre Duquesne.

    IT’S A TRAP!

    There’s no reason to use this quote, but since it’s the Star Wars line I use most frequently, I’m adding it here anyway. It’s my post, and I can do what I want!

    As you can imagine, there’s little that gets us more excited at Pi Towers than a Pi-powered Star Wars build. Except maybe a Harry Potter-themed project? What are your favourite geeky builds? Are you maybe even working on one yourself? Be sure to send us nerdy joy by sharing your links in the comments!

    The post Darth Beats: Star Wars LEGO gets a musical upgrade appeared first on Raspberry Pi.