Tag Archives: Permissions

Automate analyzing your permissions using IAM access advisor APIs

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/automate-analyzing-permissions-using-iam-access-advisor/

As an administrator that grants access to AWS, you might want to enable your developers to get started with AWS quickly by granting them broad access. However, as your developers gain experience and your applications stabilize, you want to limit permissions to only what they need. To do this, access advisor will determine the permissions your developers have used by analyzing the last timestamp when an IAM entity (for example, a user, role, or group) accessed an AWS service. This information helps you audit service access, remove unnecessary permissions, and set appropriate permissions across different environments. For example, you can grant broad access to services in development accounts and then reduce permissions for access to specific services in production accounts. Finally, as you manage more IAM entities and AWS accounts, you need a way to scale these processes through automation. To help you achieve this automation, you can now use IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.

In this post, I first provide the details of the access advisor APIs. Next, I walk through an example to demonstrate how you can use the AWS CLI to create a report of the last-accessed timestamps for the services used by the roles in your account. In this post, I assume that you’re familiar with access advisor and how to Remove Unnecessary Permissions in Your IAM Policies by Using Service Last Accessed Data from the IAM console. Before I share an example, I’ll describe the new IAM access advisor APIs:

  • generate-service-last-accessed-details: Generates the service last accessed data for an IAM resource (user, role, group, or policy). You need to call this API first to start a job that generates the service last accessed data for the IAM resource. This API returns a JobId that you will use for the other APIs, such as get-service-last-accessed-details, to determine the status of the job completion.
  • get-service-last-accessed-details: Use this to retrieve the service last accessed data for an IAM resource based on the JobID you pass in.
  • get-service-last-accessed-details-with-entities: Use this to retrieve the service last accessed data for a specific AWS service. The API provides you with a list of all the IAM entities who have access to the service and includes the last accessed date for each IAM entity.
  • list-policies-granting-service-access: Use this to retrieve all the IAM policies that grant permissions to the services accessed for an IAM entity. This helps you identify the policies you need to modify to remove any unused permissions.

Now that you understand the different IAM access advisor APIs, I’ll walk through an example to demonstrate how to use them to set permissions based on service last accessed information.

Example use case: Setting permissions for IAM roles

Assume Arnav Desai is a security administrator for Example Corp. He works with several development teams and monitors their access across multiple accounts. To get his development teams up and running quickly, he initially created multiple roles with broad permissions that are based on job function in the development accounts. Now, his developers are ready to deploy workloads to production accounts. The developers need access to configure AWS; however, Arnav wants to grant them access only to what they need. To determine these permissions, he uses access advisor APIs to automate a process that helps him understand the services developers accessed in the last six months. Using this information, he authors policies to grant access to specific services in production. I’ll now show you an example to achieve this in one account using AWS CLI commands.

First, Arnav uses the list-roles command to list the IAM roles in his development account. For this example, there are two roles in his development account: DBAdminRole and NetworkAdminRole.

For each role, he uses the generate-service-last-accessed-details command to generate the service last accessed data for the role. Here’s an example of the command that he uses:


aws iam generate-service-last-accessed-details --arn arn:aws:iam::123456789012:role/DBAdminRole

The command above provides Arnav with a JobId for each role signaling that the job has started generating the service last accessed details. Arnav waits for the job to complete successfully to retrieve the access advisor information. In the meantime, he can call the get-service-last-accessed-details command to view the JobStatus of the job. Once the jobs for both roles are COMPLETED, Arnav can view the service last accessed report for both the roles, as shown below.
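
If you would rather script that wait than re-run the command by hand, here’s a minimal sketch that starts the job for one role and polls get-service-last-accessed-details until the JobStatus is no longer IN_PROGRESS. The role ARN is the example one from this post; adjust it for your own account and add error handling as needed.

#!/usr/bin/env bash
# Minimal sketch: start an access advisor job for one role and poll until it completes.
# Assumes a configured AWS CLI; the role ARN is the example one used in this post.
ROLE_ARN="arn:aws:iam::123456789012:role/DBAdminRole"

JOB_ID=$(aws iam generate-service-last-accessed-details \
  --arn "$ROLE_ARN" --query 'JobId' --output text)

# Poll until the job is no longer IN_PROGRESS.
while true; do
  STATUS=$(aws iam get-service-last-accessed-details \
    --job-id "$JOB_ID" --query 'JobStatus' --output text)
  [ "$STATUS" != "IN_PROGRESS" ] && break
  sleep 2
done

# Once the job is COMPLETED, print the service last accessed data.
aws iam get-service-last-accessed-details --job-id "$JOB_ID" \
  --query 'ServicesLastAccessed[].[ServiceNamespace,LastAuthenticated]' --output table

With that in place, the report for each role looks like the output below.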

DBAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-01T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon DynamoDB",
            "ServiceNamespace": "dynamodb",
            "TotalAuthenticatedEntities": 1
        },
        {
            "LastAuthenticated": "2018-08-25T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon S3",
            "ServiceNamespace": "s3",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the DBAdminRole doesn’t access other services.

NetworkAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-21T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ NetworkAdminRole",
            "ServiceName": "Amazon EC2",
            "ServiceNamespace": "ec2",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the NetworkAdminRole doesn’t access other services.

Based on the output above, you can see that the two roles in development accessed Amazon DynamoDB, Amazon S3, and Amazon EC2 in the last six months. Using this information, Arnav can author a policy to grant access to these specific services for the production accounts.
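
As a sketch of that last step, the commands below create a customer managed policy scoped to the three services the roles actually used and attach it to a production role. The policy contents, names, and account number here are illustrative, and in practice you would scope the actions and resources down further.

# Hypothetical example: create a policy limited to the services the roles actually used
# (DynamoDB, S3, EC2) and attach it to a production role. Names and account ID are placeholders.
cat > ProductionServiceAccess.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:*", "s3:*", "ec2:*"],
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy --policy-name ProductionServiceAccess \
  --policy-document file://ProductionServiceAccess.json

aws iam attach-role-policy --role-name DBAdminRole \
  --policy-arn arn:aws:iam::123456789012:policy/ProductionServiceAccess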

Conclusion

In this post, I reviewed the IAM access advisor APIs and showed how you can use them to determine service last accessed information programmatically. You can use this information to audit access, remove unused permissions, or grant appropriate permissions across your accounts.

If you have comments about retrieving Access Advisor service last accessed information programmatically, submit them in the Comments section below. If you have issues using access advisor commands, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Simplify granting access to your AWS resources by using tags on AWS IAM users and roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/simplify-granting-access-to-your-aws-resources-by-using-tags-on-aws-iam-users-and-roles/

Recently, AWS enabled tags on IAM principals (users and roles). With this update, you can now use attribute-based access control (ABAC) to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, you can use an IAM policy that grants developers access to resources that match their project tag. As the team adds resources to projects, permissions automatically apply based on attributes. No policy update required for each new resource.

In this blog post, I walk through three examples of how you can control access permissions by using tags on IAM principals and AWS resources. It’s important to note that you can use tags to control access to your AWS resources, but only if the AWS service in question supports tag-based permissions. To learn more about AWS services that support tag-based permissions, see AWS Services That Work with IAM.

As a reminder, I introduced the following tagging condition keys in my post about tagging. Using these condition keys in the Condition element of a policy tailors the policy’s permissions and limits its actions and resources.

  • aws:RequestTag: Tags that you request to be added or removed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:TagKeys: Tag keys that are checked before the actions are executed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:PrincipalTag: Tags that exist on the user or role making the call. A global condition key; all actions across all services support it.
  • iam:ResourceTag: Tags that exist on an IAM resource. Supported by all IAM APIs that accept an IAM user or role, and by sts:AssumeRole.

Example 1: Grant IAM users access to your AWS resources by using tags

Assume that you have multiple teams of developers who need permissions to start and stop specific EC2 instances based on their cost center. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key aws:PrincipalTag. This helps ensure that the principal can start and stop an instance only if the value of the instance’s CostCenter tag matches the value of the CostCenter tag on the principal. Attaching this policy to your developer roles or groups simplifies permissions management: you only need to manage a single policy for all the dev teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}
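
For the condition to match, the instance and the principal both need a CostCenter tag with the same value. As an illustration (the instance ID, user name, and tag value are placeholders), you could tag them like this:

# Tag an EC2 instance and an IAM user with the same CostCenter value so the
# ec2:ResourceTag / aws:PrincipalTag comparison in the policy above matches.
# The instance ID, user name, and value are placeholders.
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=CostCenter,Value=12345

aws iam tag-user --user-name dev-team-member --tags Key=CostCenter,Value=12345

Because the comparison is driven entirely by tags, onboarding a new instance or team member is just a tagging operation; the policy itself never changes.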

Example 2: Grant users in an IAM group access to your AWS resources by using tags

Assume there are database administrators in your account who need start, stop, and reboot permissions for specific Amazon RDS instances. In the following policy, I define the start, stop, and reboot actions for Amazon RDS in the Action element of the policy, and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key, aws:PrincipalTag, to select users with the tag, CostCenter=0735. I use the StringEquals condition operator to check for an exact match of the value. I also use the condition key, rds:db-tag, to control access to databases tagged with Project=DataAnalytics. I attach this policy to an IAM group which contains all the database administrators in my account. Now, any database administrator in this group with tag CostCenter=0735 gets access to the RDS instance tagged Project=DataAnalytics.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:DescribeDBInstances",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:RebootDBInstance",
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/CostCenter": "0735",
                    "rds:db-tag/Project": "DataAnalytics"
                }
            }
        }
    ]
}
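
To put this into effect, you attach the policy to the database administrators’ group and make sure the principals and the database carry the matching tags. A brief sketch follows; the group name, policy ARN, user name, and DB instance ARN are placeholders.

# Attach the policy above to the DBA group, tag an administrator with the matching
# CostCenter value, and tag the RDS instance with the matching Project value.
aws iam attach-group-policy --group-name DatabaseAdmins \
  --policy-arn arn:aws:iam::123456789012:policy/RDSAccessByTag

aws iam tag-user --user-name dba-alice --tags Key=CostCenter,Value=0735

aws rds add-tags-to-resource \
  --resource-name arn:aws:rds:us-east-1:123456789012:db:analytics-db \
  --tags Key=Project,Value=DataAnalytics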

Example 3: Use tags to control access to IAM roles

Let’s say a user, Bob in Account A, needs to manage several applications and needs to assume specific roles in Account B. The following policy grants Bob’s IAM user permissions to assume all roles tagged with Project=ExampleCorpABC. In the Action element of the policy, I define sts:AssumeRole, which grants permissions to assume roles. In the Resource element of the policy, I define a wildcard (*) to grant access to all roles, but use the condition key, iam:ResourceTag, in the Condition element to scope down the roles that Bob can assume. As with the previous policy, I use the StringEquals operator to ensure that Bob can assume roles that have the tag, Project=ExampleCorpABC. Now, whenever I create a role in Account B and trust Bob’s account in the role’s trust policy, Bob can only assume this role if it is tagged with Project=ExampleCorpABC.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*",
      "Condition": {
	          "StringEquals": 
		    {"iam:ResourceTag/Project": "ExampleCorpABC"}
      }
    }
  ]
} 
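
On the Account B side, the role Bob assumes must both trust Account A and carry the Project=ExampleCorpABC tag. The following sketch shows that setup and the call Bob would make from Account A; the account IDs, role name, and file name are placeholders.

# In Account B: create a role tagged Project=ExampleCorpABC that trusts Account A.
cat > TrustAccountA.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name AppMaintenanceRole \
  --assume-role-policy-document file://TrustAccountA.json \
  --tags Key=Project,Value=ExampleCorpABC

# From Account A: Bob assumes the role. This succeeds only because the role's
# Project tag matches the iam:ResourceTag condition in his policy.
aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/AppMaintenanceRole \
  --role-session-name bob-session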
 

Summary

You can now tag your IAM principals to control access to your AWS resources, and the three examples I’ve included in this post show how tags can help you simplify access management.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Add Tags to Manage Your AWS IAM Users and Roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/add-tags-to-manage-your-aws-iam-users-and-roles/

We made it easier for you to manage your AWS Identity and Access Management (IAM) resources by enabling you to add tags to your IAM users and roles (also known as IAM principals). Tags enable you to add customizable key-value pairs to resources, and many AWS services support tagging of AWS resources. Now, you can use tags to add custom attributes such as project name and cost center to your IAM principals. Additionally, tags on IAM principals simplify permissions management. For example, you can author a policy that allows a user to assume the roles for a specific project by using a tag. As you add roles with that tag, users gain permissions to assume those roles automatically. In a subsequent post, I will review how you can use tags on IAM principals to control access to your AWS resources.

In this blog post, I introduce the new APIs and condition keys you can use to tag IAM principals, show three example policies that address three tagging use cases, and show how to add tags to IAM principals by using the AWS Console and the CLI. The first example policy grants permissions to tag principals. The second example policy requires specific tags for new users, and the third grants permissions to manage specific tags on principals.

Note: You must have the latest version of the AWS CLI to tag your IAM principals. Follow these instructions to update the AWS CLI.

New IAM APIs for tagging IAM principals

The following list shows the new IAM APIs that you must grant access to using an IAM policy so that you can view and modify tags on IAM principals. These APIs support resource-level permissions so that you can grant permissions to tag only specific principals.

  • iam:ListUserTags: Lists the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:ListRoleTags: Lists the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
  • iam:TagUser: Creates or modifies the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:TagRole: Creates or modifies the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
  • iam:UntagUser: Removes the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:UntagRole: Removes the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>

In addition to the new APIs, tagging parameters are now available for the existing iam:CreateUser and iam:CreateRole APIs to enable you to tag your users and roles when they are created. I show how you can add tags to a new user later in this blog post.

Now that you know the APIs you can use to tag IAM principals, let’s review an example of how to grant permissions to tag by using an IAM policy.

Example policy 1: Grant permissions to tag specific users and all roles

To get started using tags, you must first ensure you grant permissions to do so. The following policy grants permissions to tag one IAM user and all roles.

Note: Replace <ACCOUNT-ID> with your 12-digit account number.

        
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUserTags",
                "iam:ListRoleTags",
                "iam:TagUser",
                "iam:TagRole",
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": [
                "arn:aws:iam:: <ACCOUNT-ID>:user/John",
                "arn:aws:iam:: <ACCOUNT-ID>:role/*"
            ]
        }
    ]
}

This policy lists all the actions required to see and modify tags for IAM principals. The Resource element of the policy grants permissions to tag one user, John, and all roles in the account by specifying the Amazon Resource Name (ARN).
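
With this policy attached, the grantee can view and change tags on John and on any role in the account, but not on other users. A couple of illustrative calls (the tag key/value and the role name are placeholders):

# Allowed by the policy above: tag and list tags on the user John and on any role.
aws iam tag-user --user-name John --tags Key=Department,Value=Engineering
aws iam list-user-tags --user-name John

aws iam tag-role --role-name MyExampleRole --tags Key=Department,Value=Engineering

# Tagging any user other than John is not covered by the policy and is implicitly denied.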

Now that I have reviewed the new APIs you can use to view and modify tags on your IAM principals, let’s go over the new IAM condition keys you can use in policies.

New IAM condition keys for tagging IAM principals

The following list shows the condition keys you can use in your IAM policies to control access by using tags. In this section, I also show examples of how condition keys in policies can help you grant more specific access for tagging IAM principals.

  • aws:RequestTag: Tags that you request to be added or removed from a user or role. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:TagKeys: Tag keys that are checked before the actions are executed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:PrincipalTag: Tags that exist on the user or role making the call. A global condition key; all actions across all services support it.
  • iam:ResourceTag: Tags that exist on the resource. Supported by any IAM API that accepts an IAM user or role, and by sts:AssumeRole.

Now that I have explained both the new APIs and condition keys for tagging IAM users and roles, let’s review two more use cases with tags.

Example policy 2: Require tags for new IAM users

Let’s say I want to apply the same tags to all new IAM users so that I can track them consistently along with my other AWS resources. Now, when you create a user, you can also pass in one or more tags. Let’s say I want to ensure that all the administrators on my team apply a CostCenter tag. I create an IAM policy that includes the actions required to create and tag users. I also use the Condition element to list the tags required to be added to each new user during creation. If an administrator forgets to add a tag, the administrator’s attempt to create the user fails.

Note: This policy applies to new users created by using the AWS CLI.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisRequiresSpecificTagsWhenYouCreateANewUsers",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:TagUser"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    
                        "aws:RequestTag/CostCenter": "*"
                 
            }
	  }
        }
    ]
} 

The preceding policy grants iam:CreateUser and iam:TagUser to allow creating and tagging IAM users from the AWS CLI. The Condition element uses the condition key aws:RequestTag to require that the CostCenter tag is provided during creation.
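
As a quick illustration (the user names are placeholders), the first call below is allowed because the required CostCenter tag is supplied at creation time, while the second is denied because the aws:RequestTag/CostCenter condition is not satisfied:

# Allowed: the required CostCenter tag is supplied at creation time.
aws iam create-user --user-name new-dev --tags Key=CostCenter,Value=1234

# Denied: no CostCenter tag, so the condition in the policy above is not met.
aws iam create-user --user-name another-dev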

Example policy 3: Grant permissions to manage specific tags on IAM principals

Let’s say I want an administrator on my team, Alice, to manage two tags, Project and CostCenter, for all IAM principals in our account. The following policy allows Alice to be able to assign any value to the Project tag, but limits the values she can assign to the CostCenter tag.


{
    "Version": "2012-10-17",
    "Statement": [
       {
            "Sid": "ViewAllTags", 
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles",
				"iam:ListUserTags",
                "iam:ListRoleTags"
            ],
            "Resource": "*"
        },
        {
           "Sid": "TagUserandRoleWithAnyProjectName",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*"
                }
            }
        },
        {
           "Sid": "TagUserandRoleWithTwoCostCenterValues",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": [
                        "1234",
                        "5678"]}}
        },
        {
           "Sid": "UntagUserandRoleProjectCostCenter",
            "Effect": "Allow",
            "Action": [
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringLike": {
                    "aws:TagKeys": [
                        "CostCenter",
                        "Project"
                    ]
                }
            }
        }
    ]
}

This policy permits Alice to view, add, and remove the Project and CostCenter tags for all principals in the account. In the Condition element of the second and third statements of the policy, I use the condition, aws:RequestTag, to define the tags Alice is allowed to add or remove as well as the values she is able to assign to those tags. Alice can assign any value to the tag, Project, but is limited to two values, 1234 and 5678, for the tag, CostCenter.
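
For example (the role name is a placeholder), Alice could run the first two commands below, but the third would be denied because 9999 is not one of the allowed CostCenter values:

# Allowed: any value for the Project tag, and a CostCenter value from the allowed list.
aws iam tag-role --role-name MyExampleRole --tags Key=Project,Value=Phoenix
aws iam tag-role --role-name MyExampleRole --tags Key=CostCenter,Value=1234

# Denied: 9999 is not one of the two CostCenter values permitted by the policy.
aws iam tag-role --role-name MyExampleRole --tags Key=CostCenter,Value=9999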

Now that you understand how to grant permissions to tag IAM principals, I will show you how to run the commands to tag a new user and an existing role.

How to add tags to a new IAM user

Using the CLI
Let’s say that IAM user, John, is a new team member and needs access to AWS. To manage resources, I use the following command to create John and add the CostCenter and EmailID tags.

aws iam create-user --user-name John --tags Key=CostCenter,Value=1234 Key=EmailID,Value=[email protected]

To give John access to the appropriate AWS actions and resources, you can use the CLI to attach policies to John.

Using the console
You can also add tags to a user using the AWS console through the user creation flow as shown below.

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. In the left navigation pane, select Users, and then select Add user.
  3. Type the user name for the new user.
  4. Select the type of access this user will have. You can select programmatic access, access to the AWS Management Console, or both.
  5. Select Next: Permissions.
  6. On the Set permissions page, specify how you want to assign permissions to this set of new users. You can choose between Add user to group, Copy permissions from existing user, or Attach existing policies to user directly.
  7. Select Next: Tags.
  8. On the Add tags (optional) page, add the tags you want to attach to this principal. I add the CostCenter tag key with a value of 1234 and the EmailID tag key with value of [email protected].
     
    Figure 1: Add tags

  9. Select Next: Review.
  10. Once you have reviewed all the information, select Create user. This action creates your user John with the permissions and tags you attached. You can navigate to the user Details page to view this user.

How to add tags to an existing IAM role

Using the CLI
To manage custom data for each role in my account, I need to add tags such as Project and Service to my existing roles. The tag-role command applies tags to one role at a time, so to tag every existing role I can run it for each role returned by list-roles. To be able to run these commands, you must have permissions granted to you in an IAM policy.

I can define the value of the tags for a specific role, Migration, by using the following command:

aws iam tag-role --role-name Migration --tags Key=Project,Value=IAM Key=Service,Value=S3

Using the console
You can use the console to add tags to roles individually. To do this, on the left side, select Roles, and then select the role you want to add tags to.

Figure 2: Add tags to individual roles

To view the existing tags on the role, select the Tags tab. The image shown below shows the Migration role in my account with two existing tags: Project with value IAM and Service with value S3. To add tags or edit the existing tags, select Edit tags.

Figure 3: Edit tags

Summary

When you tag IAM principals, you add custom attributes to the users and roles in your account to make it easier to manage your IAM resources. In this post, I reviewed the new APIs and condition keys and showed three policy examples that address use cases to grant permissions to tag your IAM principals. In a subsequent post, I will review how you can use tags on IAM principals to control access to AWS resources and other accounts.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Delegate permission management to developers by using IAM permissions boundaries

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

Today, AWS released a new IAM feature that makes it easier for you to delegate permissions management to trusted employees. As your organization grows, you might want to allow trusted employees to configure and manage IAM permissions to help your organization scale permission management and move workloads to AWS faster. For example, you might want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. See the following diagram for a visual representation.
 

Figure 1: The intersection of permissions boundary and permissions policy

In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.

IAM administrator tasks

  1. Define the permissions boundary by creating a customer-managed policy.
  2. Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
  3. Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.

Employee tasks

  1. Create a role with the required permissions boundary.
  2. Attach a permissions policy to the role.

Administrator step 1: Define the permissions boundary

As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.


{
  "Version" : "2012-10-17",
  "Statement" : [
  {
    "Effect": "Allow",
    "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
   ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:RequestedRegion": "eu-central-1"
        }
    }
  }
]
}

Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.

$ aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json

Note: You can also use an AWS managed policy as a permissions boundary.

Administrator step 2: Create and attach the permissions policy

Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create and attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT_NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SetPermissionsBoundary",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy"
            ],
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
            "Condition": {
                "StringEquals": {
                    "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
                }
            }
        },
        {
            "Sid": "CreateAndEditPermissionsPolicy",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion"
            ],
            "Resource": "*"
        }
    ]
}

Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.

$ aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json

$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole

Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role

Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
        }
    ]
}

Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.

$ aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json

$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole

As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.

Managing Permissions Boundaries

Changing or removing a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the iam:DeleteUserPermissionsBoundary and iam:DeleteRolePermissionsBoundary actions. Or, if you allow “iam:*” actions, then you must explicitly deny those actions.
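
As an illustration of that explicit deny, you could attach a policy containing a statement like the one in this sketch to delegated administrators such as MyEmployeeRole; the policy and file names are placeholders.

# Sketch: an explicit deny that prevents removing permissions boundaries.
cat > Deny_Boundary_Removal_Text.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "iam:DeleteUserPermissionsBoundary",
        "iam:DeleteRolePermissionsBoundary"
      ],
      "Resource": "*"
    }
  ]
}
EOF

$ aws iam create-policy --policy-name Deny_Boundary_Removal --policy-document file://Deny_Boundary_Removal_Text.json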

Employee step 1: Create a role by providing the permissions boundary

Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.

For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn’t provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable is replaced with the employee’s account number in the command below.

$ aws iam create-role --role-name MyTestAppRole \
--assume-role-policy-document file://Role_Trust_Policy_Text.json \
--permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt

Next, the employee grants permissions to this role by creating the following policy, MyTestApp_DDB_Permissions, with the create-policy command and attaching it to the role with the attach-role-policy command. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": [
                "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
            ]
        }
    ]
}

$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions \
--role-name MyTestAppRole

Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
 

Figure 2: Effective permissions for the IAM role

Summary

We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.

If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Easier way to control access to AWS regions using IAM policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/easier-way-to-control-access-to-aws-regions-using-iam-policies/

We made it easier for you to comply with regulatory standards by controlling access to AWS Regions using IAM policies. For example, if your company requires users to create resources in a specific AWS region, you can now add a new condition to the IAM policies you attach to your IAM principal (user or role) to enforce this for all AWS services. In this post, I review conditions in policies, introduce the new condition, and review a policy example to demonstrate how you can control access across multiple AWS services to a specific region.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element that lets you specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

aws:RequestedRegion condition key

The new global condition key, aws:RequestedRegion, supports all actions across all AWS services. You can use any string operator and specify any AWS region for its value.

  • aws:RequestedRegion: Allows you to specify the region to which the IAM principal (user or role) can make API calls. Operators: all string operators (for example, StringEquals). Value: any AWS region (for example, us-east-1).

I’ll now demonstrate the use of the new global condition key.

Example: Policy with region-level control

Let’s say a group of software developers in my organization is working on a project using Amazon EC2 and Amazon RDS. The project requires a web server running on an EC2 instance using Amazon Linux and a MySQL database instance in RDS. The developers also want to test Amazon Lambda, an event-driven platform, to retrieve data from the MySQL DB instance in RDS for future use.

My organization requires all the AWS resources to remain in the Frankfurt, eu-central-1, region. To make sure this project follows these guidelines, I create a single IAM policy for all the AWS services that this group is going to use and apply the new global condition key aws:RequestedRegion for all the services. This way I can ensure that any new EC2 instances launched or any database instances created using RDS are in Frankfurt. This policy also ensures that any Lambda functions this group creates for testing are also in the Frankfurt region.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "rds:Describe*",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:GetRole",
                "iam:ListInstanceProfiles",
                "iam:AttachRolePolicy",
                "lambda:GetAccountSettings"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "lambda:CreateFunction",
                "lambda:InvokeFunction"
            ],
            "Resource": "*",
      "Condition": {"StringEquals": {"aws:RequestedRegion": "eu-central-1"}}

        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::account-id:role/*"
        }
    ]
}

The first statement in the above example contains all the read-only actions that let my developers use the console for EC2, RDS, and Lambda. The permissions for IAM-related actions are required to launch EC2 instances with a role, enable enhanced monitoring in RDS, and for AWS Lambda to assume the IAM execution role to execute the Lambda function. I’ve combined all the read-only actions into a single statement for simplicity. The second statement is where I give write access to my developers for the three services and restrict the write access to the Frankfurt region using the aws:RequestedRegion condition key. You can also list multiple AWS regions with the new condition key if your developers are allowed to create resources in multiple regions. The third statement grants permissions for the IAM action iam:PassRole required by AWS Lambda. For more information on allowing users to create a Lambda function, see Using Identity-Based Policies for AWS Lambda.
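
To see the condition at work (the AMI ID and instance parameters are placeholders), the same API call succeeds against eu-central-1 but is implicitly denied against any other region, because the second statement of the policy no longer applies:

# Allowed: the request targets eu-central-1, so aws:RequestedRegion matches.
aws ec2 run-instances --region eu-central-1 --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro --count 1

# Implicitly denied: the same call against us-east-1 does not satisfy the condition.
aws ec2 run-instances --region us-east-1 --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro --count 1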

Summary

You can now use the aws:RequestedRegion global condition key in your IAM policies to specify the regions in which the IAM principal (user or role) can make API calls. This capability makes it easier for you to restrict the AWS regions your IAM principals can use, helping you comply with regulatory standards and improve account security. For more information about this global condition key and policy examples using aws:RequestedRegion, see the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Grafana v5.1 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/04/26/grafana-v5.1-released/

v5.1 Stable Release

The recent 5.0 major release contained a lot of new features so the Grafana 5.1 release is focused on smoothing out the rough edges and iterating over some of the new features.

Download Grafana 5.1 Now

Release Highlights

There are two new features included: Heatmap support for Prometheus and a new core data source for Microsoft SQL Server.

Another highlight is the revamp of the Grafana docker container, which makes it easier to run and control. Be aware, however, that there is a breaking change to file permissions that will affect existing containers with data volumes.

We got tons of useful improvement suggestions, bug reports and Pull Requests from our amazing community. Thank you all! See the full changelog for more details.

Improved Scrolling Experience

In Grafana v5.0 we introduced a new scrollbar component. Unfortunately, this introduced a lot of issues and in some scenarios removed the native scrolling functionality. Grafana v5.1 ships with a native scrollbar for all pages, together with a scrollbar component for the dashboard grid and panels that does not override the native scrolling functionality. We hope these changes and improvements make the Grafana user experience much better!

Improved Docker Image

Grafana v5.1 brings an improved official docker image, which should make it easier to run and use, and at the same time gives users more control over how to run it.

We have switched the id of the grafana user running Grafana inside a docker container. Unfortunately, this means that files created prior to 5.1 will not have the correct permissions for later versions, which introduces a breaking change. We made this change so that it would be easier for you to control what user Grafana is executed as.

Please read the updated documentation which includes migration instructions and more information.
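
As a rough sketch of the migration for an existing data volume, based on the user ids documented for this release (old uid 104, new uid 472); verify them against the linked documentation and substitute your own volume name:

# Fix ownership of files written by the pre-5.1 image so the new grafana user (uid 472)
# can read them. "grafana-storage" is a placeholder for your existing data volume.
docker run --rm -v grafana-storage:/var/lib/grafana busybox chown -R 472:472 /var/lib/grafana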

Heatmap Support for Prometheus

The Prometheus datasource now supports transforming Prometheus histograms to the heatmap panel. The Prometheus histogram is a powerful feature, and we’re
really happy to finally allow our users to render those as heatmaps. The Heatmap panel documentation
contains more information on how to use it.

Another improvement is that the Prometheus query editor now supports autocomplete for template variables. More information in the Prometheus data source documentation.

Microsoft SQL Server

Grafana v5.1 now ships with a built-in Microsoft SQL Server (MSSQL) data source plugin that allows you to query and visualize data from any
Microsoft SQL Server 2005 or newer, including Microsoft Azure SQL Database. Do you have metric or log data in MSSQL? You can now visualize
that data and define alert rules on it as with any of Grafana’s other core datasources.

The using Microsoft SQL Server in Grafana documentation has more detailed information on how to get started.

Adding New Panels to Dashboards

The control for adding new panels to dashboards now includes panel search and it is also now possible to copy and paste panels between dashboards.

When you copy a panel in a dashboard, it is displayed in the Paste tab. When you switch to a new dashboard, you can paste the copied panel.

Align Zero-Line for Right and Left Y-axes

The feature request to align the zero-line for right and left Y-axes on the Graph panel is more than 3 years old. It has finally been implemented – more information in the Graph panel documentation.

Other Highlights

  • Table Panel: New enhancements include support for mapping a numeric value/range to text and additional units. More information in the Table panel documentation.
  • New variable interpolation syntax: We now support a new option for rendering variables that gives the user full control of how the value(s) should be rendered. More details in the Variables documentation.
  • Improved workflow for provisioned dashboards. More details here.

Changelog

Check out the CHANGELOG.md file for a complete list of new features, changes, and bug fixes.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The phrase “bot” was the 4th most registered domain keyword within the .COM TLD in 2016 with more than 6000 domains per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhancing SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time, and if your favorite bot framework isn’t supported, feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex so I’ll select that in the drop down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot I need to give the Amazon Registry permissions to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read only permissions to our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it, then I’ll click create to create the role and be taken to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.
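
If you prefer the CLI to the console for this step, the same role setup looks roughly like the following sketch; replace the placeholder account ID and external ID with the values shown on the registration page, and AmazonLexReadOnly is the managed policy mentioned above.

# Sketch: create the cross-account role the Amazon Registry uses to verify the bot.
cat > registry-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::REGISTRY_ACCOUNT_ID:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "EXTERNAL_ID_FROM_REGISTRATION_PAGE" } }
    }
  ]
}
EOF

aws iam create-role --role-name DotBotCrossAccountVerifyRole \
  --assume-role-policy-document file://registry-trust.json

aws iam attach-role-policy --role-name DotBotCrossAccountVerifyRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonLexReadOnly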

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.

 

After I click continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use this to host my new domain. The first thing I’ll do is navigate to the Route53 console and create a hosted zone with the same name as my domain.


Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.
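
For anyone who would rather do this from the CLI than the console, the equivalent steps look roughly like this (the domain is the one from this post; the hosted zone ID and IP address are placeholders):

# Create the hosted zone for the new domain (the caller reference just needs to be unique).
aws route53 create-hosted-zone --name whereml.bot --caller-reference "whereml-bot-$(date +%s)"

# Add a simple A record to the zone. Replace the hosted zone ID and IP address with your own.
cat > record.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "whereml.bot",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id Z1234567890EXAMPLE --change-batch file://record.json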

Next Steps

  • I could and should add to the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily with AWS Certificate Manager (ACM) this is extremely straightforward and I’ve got my subdomains and root domain verified in just a few clicks.
  • I could create a CloudFront distribution to front an S3 static single-page application to host my entire chatbot and invoke Amazon Lex with an Amazon Cognito identity right from the browser.

Randall

Ransomware Update: Viruses Targeting Business IT Servers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ransomware-update-viruses-targeting-business-it-servers/

Ransomware warning message on computer

As ransomware attacks have grown in number in recent months, the tactics and attack vectors also have evolved. While the primary method of attack used to be to target individual computer users within organizations with phishing emails and infected attachments, we’re increasingly seeing attacks that target weaknesses in businesses’ IT infrastructure.

How Ransomware Attacks Typically Work

In our previous posts on ransomware, we described the common vehicles used by hackers to infect organizations with ransomware viruses. Most often, downloaders distribute trojan horses through malicious downloads and spam emails. The emails contain a variety of file attachments, which if opened, will download and run one of the many ransomware variants. Once a user’s computer is infected with a malicious downloader, it will retrieve additional malware, which frequently includes crypto-ransomware. After the files have been encrypted, a ransom payment is demanded of the victim in order to decrypt the files.

What’s Changed With the Latest Ransomware Attacks?

In 2016, a customized ransomware strain called SamSam began attacking the servers in primarily health care institutions. SamSam, unlike more conventional ransomware, is not delivered through downloads or phishing emails. Instead, the attackers behind SamSam use tools to identify unpatched servers running Red Hat’s JBoss enterprise products. Once the attackers have successfully gained entry into one of these servers by exploiting vulnerabilities in JBoss, they use other freely available tools and scripts to collect credentials and gather information on networked computers. Then they deploy their ransomware to encrypt files on these systems before demanding a ransom. Gaining entry to an organization through its IT center rather than its endpoints makes this approach scalable and especially unsettling.

SamSam’s methodology is to scour the Internet searching for accessible and vulnerable JBoss application servers, especially ones used by hospitals. It’s not unlike a burglar rattling doorknobs in a neighborhood to find unlocked homes. When SamSam finds an unlocked home (unpatched server), the software infiltrates the system. It is then free to spread across the company’s network by stealing passwords. As it traverses the network and systems, it encrypts files, preventing access until the victims pay the hackers a ransom, typically between $10,000 and $15,000. The low ransom amount has encouraged some victimized organizations to pay the ransom rather than incur the downtime required to wipe and reinitialize their IT systems.

The success of SamSam is due to its effectiveness rather than its sophistication. SamSam can enter and traverse a network without human intervention. Some organizations are learning too late that securing internet-facing services in their data center from attack is just as important as securing endpoints.

The typical steps in a SamSam ransomware attack are:

  1. Attackers gain access to a vulnerable server: Attackers exploit vulnerable software or weak/stolen credentials.
  2. Attack spreads via remote access tools: Attackers harvest credentials, create SOCKS proxies to tunnel traffic, and abuse RDP to install SamSam on more computers in the network.
  3. Ransomware payload is deployed: Attackers run batch scripts to execute ransomware on compromised machines.
  4. Ransom demand is delivered, requiring payment to decrypt files: Demand amounts vary from victim to victim. Relatively low ransom amounts appear to be designed to encourage quick payment decisions.

What all the organizations successfully exploited by SamSam have in common is that they were running unpatched servers that made them vulnerable to SamSam. Some organizations had their endpoints and servers backed up, while others did not. Some of those without backups they could use to recover their systems chose to pay the ransom money.

Timeline of SamSam History and Exploits

Since its appearance in 2016, SamSam has been in the news with many successful incursions into healthcare, business, and government institutions.

March 2016
SamSam appears

SamSam campaign targets vulnerable JBoss servers
Attackers hone in on healthcare organizations specifically, as they’re more likely to have unpatched JBoss machines.

April 2016
SamSam finds new targets

SamSam begins targeting schools and government.
After initial success targeting healthcare, attackers branch out to other sectors.

April 2017
New tactics include RDP

Attackers shift to targeting organizations with exposed RDP connections, and maintain focus on healthcare.
An attack on Erie County Medical Center costs the hospital $10 million over three months of recovery.
Erie County Medical Center attacked by SamSam ransomware virus

January 2018
Municipalities attacked

• Attack on Municipality of Farmington, NM.
• Attack on Hancock Health.
Hancock Regional Hospital notice following SamSam attack
• Attack on Adams Memorial Hospital
• Attack on Allscripts (Electronic Health Records), which includes 180,000 physicians, 2,500 hospitals, and 7.2 million patients’ health records.

February 2018
Attack volume increases

• Attack on Davidson County, NC.
• Attack on Colorado Department of Transportation.
SamSam virus notification

March 2018
SamSam shuts down Atlanta

• Second attack on Colorado Department of Transportation.
• City of Atlanta suffers a devastating attack by SamSam.
The attack has far-reaching impacts — crippling the court system, keeping residents from paying their water bills, limiting vital communications like sewer infrastructure requests, and pushing the Atlanta Police Department to file paper reports.
Atlanta Ransomware outage alert
• SamSam campaign nets $325,000 in 4 weeks.
Infections spike as attackers launch new campaigns. Healthcare and government organizations are once again the primary targets.

How to Defend Against SamSam and Other Ransomware Attacks

The best way to respond to a ransomware attack is to avoid having one in the first place. Beyond that, making sure your valuable data is backed up and unreachable by ransomware infection will ensure that your downtime and data loss will be minimal or none if you ever suffer an attack.

In our previous post, How to Recover From Ransomware, we listed the ten ways to protect your organization from ransomware.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. Cybersecurity professionals view data backup and recovery (74% in a recent survey) by far as the most effective solution to respond to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as disconnected external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to patch early and patch often to close known vulnerabilities in operating systems, server software, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and human engineering aimed at turning victims into abettors.

Please Tell Us About Your Experiences with Ransomware

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please tell us of your experiences in the comments.

The post Ransomware Update: Viruses Targeting Business IT Servers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.