Implement tenant isolation for Amazon S3 and Aurora PostgreSQL by using ABAC

Post Syndicated from Ashutosh Upadhyay original https://aws.amazon.com/blogs/security/implement-tenant-isolation-for-amazon-s3-and-aurora-postgresql-by-using-abac/

In software as a service (SaaS) systems, which are designed to be used by multiple customers, isolating tenant data is a fundamental responsibility for SaaS providers. The practice of isolating data in a multi-tenant application platform is called tenant isolation. In this post, we describe an approach you can use to achieve tenant isolation in Amazon Simple Storage Service (Amazon S3) and Amazon Aurora PostgreSQL-Compatible Edition databases by implementing attribute-based access control (ABAC). You can also adapt the same approach to achieve tenant isolation in other AWS services.

ABAC in Amazon Web Services (AWS), which uses tags to store attributes, offers advantages over the traditional role-based access control (RBAC) model. You can use fewer permissions policies, update your access control more efficiently as you grow, and last but not least, apply granular permissions for various AWS services. These granular permissions help you to implement an effective and coherent tenant isolation strategy for your customers and clients. Using the ABAC model helps you scale your permissions and simplify the management of granular policies. The ABAC model reduces the time and effort it takes to maintain policies that allow access to only the required resources.

The solution we present here uses the pool model of data partitioning. The pool model helps you avoid the higher costs of duplicated resources for each tenant and the specialized infrastructure code required to set up and maintain those copies.

Solution overview

In a typical customer environment where this solution is implemented, a tenant's access request might land at Amazon API Gateway, together with the tenant identifier, and API Gateway in turn calls an AWS Lambda function. The Lambda function operates with a basic Lambda execution role, which must also have permission to assume the tenant roles. As the request progresses, the Lambda function assumes the appropriate tenant role and makes the necessary calls to Amazon S3 or to an Aurora PostgreSQL-Compatible database. This solution helps you to achieve tenant isolation for objects stored in Amazon S3 and data elements stored in an Aurora PostgreSQL-Compatible database cluster.

Figure 1 shows the tenant isolation architecture for both Amazon S3 and Amazon Aurora PostgreSQL-Compatible databases.

Figure 1: Tenant isolation architecture diagram

As shown in the numbered diagram steps, the workflow for Amazon S3 tenant isolation is as follows:

  1. AWS Lambda sends an AWS Security Token Service (AWS STS) assume role request to AWS Identity and Access Management (IAM).
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to Amazon S3 with the assumed role.
  4. Amazon S3 sends the response back to Lambda.

The diagram also shows the workflow steps for tenant isolation for Aurora PostgreSQL-Compatible databases, as follows:

  1. Lambda sends an STS assume role request to IAM.
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to IAM for database authorization.
  4. IAM validates the request and returns the database password token.
  5. Lambda sends a request to the Aurora PostgreSQL-Compatible database with the database user and password token.
  6. Aurora PostgreSQL-Compatible database returns the response to Lambda.
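To make the workflow concrete, the following is a minimal Python sketch of how these steps map to SDK calls. The role ARN, cluster endpoint, and database user shown here are placeholders, and the complete Lambda functions appear later in this walkthrough.

import boto3

sts = boto3.client('sts')

# Steps 1-2: assume the tenant-scoped role (the ARN below is a placeholder)
creds = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/assumeRole-tenant1',
    RoleSessionName='TenantSession',
    DurationSeconds=900)['Credentials']

# Steps 3-4: request a short-lived IAM database authentication token
rds = boto3.client(
    'rds',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'])
token = rds.generate_db_auth_token(
    DBHostname='<cluster-endpoint>', Port=5432, DBUsername='tenant1_dbuser')

# Steps 5-6: pass the token as the password when opening the PostgreSQL
# connection (the full pg8000-based function is shown in Step 6)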

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account for your workload.
  2. An Amazon S3 bucket.
  3. An Aurora PostgreSQL-Compatible cluster with a database created.

    Note: Make sure to note down the default master database user and password, and make sure that you can connect to the database from your desktop or from another server (for example, from Amazon Elastic Compute Cloud (Amazon EC2) instances).

  4. A security group and inbound rules that are set up to allow an inbound PostgreSQL TCP connection (Port 5432) from Lambda functions. This solution uses regular non-VPC Lambda functions, and therefore the security group of the Aurora PostgreSQL-Compatible database cluster should allow an inbound PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).
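If you prefer to script this prerequisite, the following boto3 sketch adds the inbound rule. The security group ID is a placeholder, and as noted above, opening port 5432 to 0.0.0.0/0 is appropriate only for this demo.

import boto3

ec2 = boto3.client('ec2')

# Allow inbound PostgreSQL (TCP 5432) from anywhere -- demo only
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # placeholder: the Aurora cluster's security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 5432,
        'ToPort': 5432,
        'IpRanges': [{'CidrIp': '0.0.0.0/0', 'Description': 'PostgreSQL for demo Lambda'}]
    }])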

Make sure that you’ve completed the prerequisites before proceeding with the next steps.

Deploy the solution

The following sections describe how to create the IAM roles, IAM policies, and Lambda functions that are required for the solution. These steps also include guidelines on the changes that you’ll need to make to the prerequisite components Amazon S3 and the Aurora PostgreSQL-Compatible database cluster.

Step 1: Create the IAM policies

In this step, you create two IAM policies with the required permissions for Amazon S3 and the Aurora PostgreSQL database.

To create the IAM policies

  1. Open the AWS Management Console.
  2. Choose IAM, choose Policies, and then choose Create policy.
  3. Use the following JSON policy document to create the policy. Update the Resource element to match your S3 bucket name; in this example, the bucket is named sts-ti-demo-<111122223333>, where <111122223333> is a placeholder for your AWS account number.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:Get*",
                    "s3:List*"
                ],
                "Resource": "arn:aws:s3:::sts-ti-demo-<111122223333>/${aws:PrincipalTag/s3_home}/*"
            }
        ]
    }
    

  4. Save the policy with the name sts-ti-demo-s3-access-policy.

    Figure 2: Create the IAM policy for Amazon S3 (sts-ti-demo-s3-access-policy)

  5. Open the AWS Management Console.
  6. Choose IAM, choose Policies, and then choose Create policy.
  7. Use the following JSON policy document to create a second policy. This policy grants an IAM role permission to connect to an Aurora PostgreSQL-Compatible database through a database user that is IAM authenticated. Replace the placeholders with the appropriate Region, account number, and cluster resource ID of the Aurora PostgreSQL-Compatible database cluster, respectively.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
            "arn:aws:rds-db:<us-west-2>:<111122223333>:dbuser:<cluster-ZTISAAAABBBBCCCCDDDDEEEEL4>/${aws:PrincipalTag/dbuser}"
            ]
        }
    ]
}
  8. Save the policy with the name sts-ti-demo-dbuser-policy.

    Figure 3: Create the IAM policy for Aurora PostgreSQL database (sts-ti-demo-dbuser-policy)

Note: Make sure that you use the cluster resource ID for the clustered database. However, if you intend to adapt this solution for your Aurora PostgreSQL-Compatible non-clustered database, you should use the instance resource ID instead.
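If you're not sure where to find the cluster resource ID, it is shown on the cluster's Configuration tab in the RDS console, or you can look it up with a short boto3 call like the sketch below (the cluster identifier is a placeholder).

import boto3

rds = boto3.client('rds')

# The cluster resource ID is the value that starts with "cluster-"
response = rds.describe_db_clusters(DBClusterIdentifier='sts-ti-demo-cluster')  # placeholder identifier
print(response['DBClusters'][0]['DbClusterResourceId'])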

Step 2: Create the IAM roles

In this step, you create two IAM roles for the two different tenants, and also apply the necessary permissions and tags.

To create the IAM roles

  1. In the IAM console, choose Roles, and then choose Create role.
  2. On the Trusted entities page, choose the EC2 service as the trusted entity.
  3. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  4. On the Tags page, add two tags with the following keys and values.

    Tag key    Tag value
    s3_home    tenant1_home
    dbuser     tenant1_dbuser
  5. On the Review screen, name the role assumeRole-tenant1, and then choose Save.
  6. In the IAM console, choose Roles, and then choose Create role.
  7. On the Trusted entities page, choose the EC2 service as the trusted entity.
  8. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  9. On the Tags page, add two tags with the following keys and values.

    Tag key    Tag value
    s3_home    tenant2_home
    dbuser     tenant2_dbuser
  10. On the Review screen, name the role assumeRole-tenant2, and then choose Save.
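The same roles can also be created programmatically. The following hedged boto3 sketch creates the tenant1 role with its tags and attaches the two policies; it uses a temporary EC2 trust policy, just like the console flow, which you replace in Step 3. The account number is a placeholder.

import json
import boto3

iam = boto3.client('iam')
account_id = '111122223333'  # placeholder

# Temporary trust policy (replaced with the Lambda role trust policy in Step 3)
ec2_trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}]
}

iam.create_role(
    RoleName='assumeRole-tenant1',
    AssumeRolePolicyDocument=json.dumps(ec2_trust),
    Tags=[{'Key': 's3_home', 'Value': 'tenant1_home'},
          {'Key': 'dbuser', 'Value': 'tenant1_dbuser'}])

for policy_name in ['sts-ti-demo-s3-access-policy', 'sts-ti-demo-dbuser-policy']:
    iam.attach_role_policy(
        RoleName='assumeRole-tenant1',
        PolicyArn=f'arn:aws:iam::{account_id}:policy/{policy_name}')

Repeat with the role name and tag values for the second tenant.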

Step 3: Create and apply the IAM policies for the tenants

In this step, you create a policy and a role for the Lambda functions. You also create two separate tenant roles, and establish a trust relationship with the role that you created for the Lambda functions.

To create and apply the IAM policies for tenant1

  1. In the IAM console, choose Policies, and then choose Create policy.
  2. Use the following JSON policy document to create the policy. Replace the placeholder <111122223333> with your AWS account number.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": [
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant1",
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant2"
                ]
            }
        ]
    }
    

  3. Save the policy with the name sts-ti-demo-assumerole-policy.
  4. In the IAM console, choose Roles, and then choose Create role.
  5. On the Trusted entities page, select the Lambda service as the trusted entity.
  6. On the Permissions policies page, select sts-ti-demo-assumerole-policy and AWSLambdaBasicExecutionRole.
  7. On the review screen, name the role sts-ti-demo-lambda-role, and then choose Save.
  8. In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box.
  9. Select the assumeRole-tenant1 role and go to the Trust relationship tab.
  10. Choose Edit the trust relationship, and replace the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number, and choose Update trust policy to save the policy.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
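You can apply the same trust policy from code instead of the console. Here is a minimal boto3 sketch, with the account number as a placeholder.

import json
import boto3

iam = boto3.client('iam')

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/sts-ti-demo-lambda-role"},
        "Action": "sts:AssumeRole"}]
}

# Replace the tenant role's trust relationship so that only the Lambda role can assume it
iam.update_assume_role_policy(
    RoleName='assumeRole-tenant1',
    PolicyDocument=json.dumps(trust_policy))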
    

To verify that the policies are applied correctly for tenant1

In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box. Select the assumeRole-tenant1 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, as shown in Figure 4.

Figure 4: The assumeRole-tenant1 Permissions tab

On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities, as shown in Figure 5.

Figure 5: The assumeRole-tenant1 Trust relationships tab

On the Tags tab, verify that the following tags appear, as shown in Figure 6.

Tag key    Tag value
dbuser     tenant1_dbuser
s3_home    tenant1_home

 

Figure 6: The assumeRole-tenant1 Tags tab

To create and apply the IAM policies for tenant2

  1. In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box.
  2. Select the assumeRole-tenant2 role and go to the Trust relationship tab.
  3. Edit the trust relationship, replacing the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

  4. Choose Update trust policy to save the policy.

To verify that the policies are applied correctly for tenant2

In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box. Select the assumeRole-tenant2 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, just as you did for tenant1. On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities.

On the Tags tab, verify that the following tags appear, as shown in Figure 7.

Tag key    Tag value
dbuser     tenant2_dbuser
s3_home    tenant2_home

Figure 7: The assumeRole-tenant2 Tags tab

Step 4: Set up an Amazon S3 bucket

Next, you’ll set up an S3 bucket that you’ll use as part of this solution. You can either create a new S3 bucket or re-purpose an existing one. The following steps show you how to create two user homes (that is, S3 prefixes, which are also known as folders) in the S3 bucket.

  1. In the AWS Management Console, go to Amazon S3 and select the S3 bucket you want to use.
  2. Create two prefixes (folders) with the names tenant1_home and tenant2_home.
  3. Place two test objects with the names tenant.info-tenant1_home and tenant.info-tenant2_home in the prefixes that you just created, respectively.
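If you'd rather script this step, a short boto3 sketch follows; the bucket name is a placeholder for your own bucket.

import boto3

s3 = boto3.client('s3')
bucket = 'sts-ti-demo-111122223333'  # placeholder: your bucket name

# Uploading an object under each prefix also creates the prefix ("folder") itself
for tenant in ['tenant1', 'tenant2']:
    key = f'{tenant}_home/tenant.info-{tenant}_home'
    s3.put_object(Bucket=bucket, Key=key, Body=f'Test data for {tenant}'.encode())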

Step 5: Set up test objects in Aurora PostgreSQL-Compatible database

In this step, you create a table in Aurora PostgreSQL-Compatible Edition, insert tenant metadata, create a row level security (RLS) policy, create tenant users, and grant permission for testing purposes.

To set up Aurora PostgreSQL-Compatible

  1. Connect to Aurora PostgreSQL-Compatible through a client of your choice, using the master database user and password that you obtained at the time of cluster creation.
  2. Run the following commands to create a table for testing purposes and to insert a couple of testing records.
    CREATE TABLE tenant_metadata (
        tenant_id VARCHAR(30) PRIMARY KEY,
        email     VARCHAR(50) UNIQUE,
        status    VARCHAR(10) CHECK (status IN ('active', 'suspended', 'disabled')),
        tier      VARCHAR(10) CHECK (tier IN ('gold', 'silver', 'bronze')));
    
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant1_dbuser','[email protected]','active','gold');
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant2_dbuser','[email protected]','suspended','silver');
    ALTER TABLE tenant_metadata ENABLE ROW LEVEL SECURITY;
    

  3. Run the following command to query the newly created database table.
    SELECT * FROM tenant_metadata;
    

    Figure 8: The tenant_metadata table content

  4. Run the following command to create the row level security policy.
    CREATE POLICY tenant_isolation_policy ON tenant_metadata
    USING (tenant_id = current_user);
    

  5. Run the following commands to establish two tenant users and grant them the necessary permissions.
    CREATE USER tenant1_dbuser WITH LOGIN;
    CREATE USER tenant2_dbuser WITH LOGIN;
    GRANT rds_iam TO tenant1_dbuser;
    GRANT rds_iam TO tenant2_dbuser;
    
    GRANT select, insert, update, delete ON tenant_metadata to tenant1_dbuser, tenant2_dbuser;
    

  6. Run the following commands to verify the newly created tenant users.
    SELECT usename AS role_name,
      CASE
         WHEN usesuper AND usecreatedb THEN
           CAST('superuser, create database' AS pg_catalog.text)
         WHEN usesuper THEN
            CAST('superuser' AS pg_catalog.text)
         WHEN usecreatedb THEN
            CAST('create database' AS pg_catalog.text)
         ELSE
            CAST('' AS pg_catalog.text)
      END role_attributes
    FROM pg_catalog.pg_user
    WHERE usename LIKE ('tenant%')
    ORDER BY role_name desc;
    

    Figure 9: Verify the newly created tenant users output

Step 6: Set up the AWS Lambda functions

Next, you’ll create two Lambda functions for Amazon S3 and Aurora PostgreSQL-Compatible. You also need to create a Lambda layer for the Python package PG8000.

To set up the Lambda function for Amazon S3

  1. Navigate to the Lambda console, and choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-s3-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Copy the following Python code into the lambda_function.py file that is created in your Lambda function.
    import json
    import os
    import time
    import boto3

    def lambda_handler(event, context):
        bucket_name     =   os.environ['s3_bucket_name']
    
        try:
            login_tenant_id =   event['login_tenant_id']
            data_tenant_id  =   event['s3_tenant_home']
        except:
            return {
                'statusCode': 400,
                'body': 'Error in reading parameters'
            }
    
        prefix_of_role  =   'assumeRole'
        file_name       =   'tenant.info' + '-' + data_tenant_id
    
        # create an STS client object that represents a live connection to the STS service
        sts_client = boto3.client('sts')
        account_of_role = sts_client.get_caller_identity()['Account']
        role_to_assume  =   'arn:aws:iam::' + account_of_role + ':role/' + prefix_of_role + '-' + login_tenant_id
    
        # Call the assume_role method of the STSConnection object and pass the role
        # ARN and a role session name.
        RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        try:
            assumed_role_object = sts_client.assume_role(
                RoleArn         = role_to_assume, 
                RoleSessionName = RoleSessionName, 
                DurationSeconds = 900) #15 minutes
    
        except:
            return {
                'statusCode': 400,
                'body': 'Error in assuming the role ' + role_to_assume + ' in account ' + account_of_role
            }
    
        # From the response that contains the assumed role, get the temporary 
        # credentials that can be used to make subsequent API calls
        credentials=assumed_role_object['Credentials']
        
        # Use the temporary credentials that AssumeRole returns to make a connection to Amazon S3  
        s3_resource=boto3.resource(
            's3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
    
        try:
            obj = s3_resource.Object(bucket_name, data_tenant_id + "/" + file_name)
            return {
                'statusCode': 200,
                'body': obj.get()['Body'].read()
            }
        except:
            return {
                'statusCode': 400,
                'body': 'error in reading s3://' + bucket_name + '/' + data_tenant_id + '/' + file_name
            }
    

  7. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  8. Edit Environment variables to add a key called s3_bucket_name, with the value set to the name of your S3 bucket.
  9. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant1_home"
    }
    

  10. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 10: The result of running the sts-ti-demo-s3-lambda function
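You can run the same positive test outside the console with a short boto3 sketch like the following, using the function name created above.

import json
import boto3

lambda_client = boto3.client('lambda')

event = {"login_tenant_id": "tenant1", "s3_tenant_home": "tenant1_home"}
response = lambda_client.invoke(
    FunctionName='sts-ti-demo-s3-lambda',
    Payload=json.dumps(event).encode())

# Expect statusCode 200 and the contents of tenant1's test object
print(json.loads(response['Payload'].read()))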

Next, create another Lambda function for Aurora PostgreSQL-Compatible. To do this, you first need to create a new Lambda layer.

To set up the Lambda layer

  1. Use the following commands to create a .zip file for Python package pg8000.

    Note: This example is created by using an Amazon EC2 instance running the Amazon Linux 2 Amazon Machine Image (AMI). If you’re using another version of Linux or don’t have the Python 3 or pip3 packages installed, install them by using the following commands. Also note that the target path in the pip3 command embeds a Python version; if your function uses the Python 3.7 runtime as described later, either change the path to python3.7/site-packages or install directly into build/python/ so that the packages resolve for any Python runtime.

    sudo yum update -y 
    sudo yum install python3 
    sudo pip3 install pg8000 -t build/python/lib/python3.8/site-packages/ 
    cd build 
    sudo zip -r pg8000.zip python/
    

  2. Copy the pg8000.zip file that you just created to your local desktop machine or to an S3 bucket location.
  3. Navigate to the Lambda console, choose Layers, and then choose Create layer.
  4. For Name, enter pgdb, and then upload pg8000.zip from your local desktop machine or from the S3 bucket location.

    Note: For more details, see the AWS documentation for creating and sharing Lambda layers.

  5. For Compatible runtimes, choose python3.6, python3.7, and python3.8, and then choose Create.

To set up the Lambda function with the newly created Lambda layer

  1. In the Lambda console, choose Function, and then choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-pgdb-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Choose Layers, and then choose Add a layer.
  7. Choose Custom layer, select pgdb with Version 1 from the drop-down list, and then choose Add.
  8. Copy the following Python code into the lambda_function.py file that was created in your Lambda function.
    import boto3
    import pg8000
    import os
    import time
    import ssl
    
    connection = None
    assumed_role_object = None
    rds_client = None
    
    def assume_role(event):
        global assumed_role_object
        try:
            RolePrefix  = os.environ.get("RolePrefix")
            LoginTenant = event['login_tenant_id']
        
            # create an STS client object that represents a live connection to the STS service
            sts_client      = boto3.client('sts')
            # Prepare input parameters
            role_to_assume  = 'arn:aws:iam::' + sts_client.get_caller_identity()['Account'] + ':role/' + RolePrefix + '-' + LoginTenant
            RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        
            # Call the assume_role method of the STSConnection object and pass the role ARN and a role session name.
            assumed_role_object = sts_client.assume_role(
                RoleArn         =   role_to_assume, 
                RoleSessionName =   RoleSessionName,
                DurationSeconds =   900) #15 minutes 
            
            return assumed_role_object['Credentials']
        except Exception as e:
            print({'Role assumption failed!': {'role': role_to_assume, 'Exception': 'Failed due to :{0}'.format(str(e))}})
            return None
    
    def get_connection(event):
        global rds_client
        creds = assume_role(event)
    
        try:
            # create an RDS client using assumed credentials
            rds_client = boto3.client('rds',
                aws_access_key_id       = creds['AccessKeyId'],
                aws_secret_access_key   = creds['SecretAccessKey'],
                aws_session_token       = creds['SessionToken'])
    
            # Read the environment variables and event parameters
            DBEndPoint   = os.environ.get('DBEndPoint')
            DatabaseName = os.environ.get('DatabaseName')
            DBUserName   = event['dbuser']
    
            # Generates an auth token used to connect to a database with IAM credentials.
            pwd = rds_client.generate_db_auth_token(
                DBHostname=DBEndPoint, Port=5432, DBUsername=DBUserName, Region='us-west-2'
            )
    
            ssl_context             = ssl.SSLContext()
            ssl_context.verify_mode = ssl.CERT_REQUIRED
            ssl_context.load_verify_locations('rds-ca-2019-root.pem')
    
            # create a database connection
            conn = pg8000.connect(
                host        =   DBEndPoint,
                user        =   DBUserName,
                database    =   DatabaseName,
                password    =   pwd,
                ssl_context =   ssl_context)
            
            return conn
        except Exception as e:
            print ({'Database connection failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    def execute_sql(connection, query):
        try:
            cursor = connection.cursor()
            cursor.execute(query)
            columns = [str(desc[0]) for desc in cursor.description]
            results = []
            for res in cursor:
                results.append(dict(zip(columns, res)))
            cursor.close()
            retry = False
            return results    
        except Exception as e:
            print ({'Execute SQL failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    
    def lambda_handler(event, context):
        global connection
        try:
            connection = get_connection(event)
            if connection is None:
                return {'statusCode': 400, "body": "Error in database connection!"}
    
            response = {'statusCode':200, 'body': {
                'db & user': execute_sql(connection, 'SELECT CURRENT_DATABASE(), CURRENT_USER'), \
                'data from tenant_metadata': execute_sql(connection, 'SELECT * FROM tenant_metadata')}}
            return response
        except Exception as e:
            try:
                connection.close()
            except Exception as e:
                connection = None
            return {'statusCode': 400, 'statusDesc': 'Error!', 'body': 'Unhandled error in Lambda Handler.'}
    

  9. Add a certificate file called rds-ca-2019-root.pem into the Lambda project root by downloading it from https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem.
  10. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  11. Edit Environment variables to add the following keys and values.

    Key             Value
    DBEndPoint      Enter the database cluster endpoint URL
    DatabaseName    Enter the database name
    RolePrefix      assumeRole
    Figure 11: Example of environment variables display

  12. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant1_dbuser"
    }
    

  13. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 12: The result of running the sts-ti-demo-pgdb-lambda function
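If you script the function configuration instead of using the console, a hedged boto3 sketch for steps 10 and 11 might look like the following; the endpoint and database name are placeholders.

import boto3

lambda_client = boto3.client('lambda')

# Set the timeout and the environment variables that the function code reads
lambda_client.update_function_configuration(
    FunctionName='sts-ti-demo-pgdb-lambda',
    Timeout=29,
    Environment={'Variables': {
        'DBEndPoint': '<your-cluster-endpoint>',   # placeholder
        'DatabaseName': '<your-database-name>',    # placeholder
        'RolePrefix': 'assumeRole'
    }})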

Step 7: Perform negative testing of tenant isolation

You already performed positive tests of tenant isolation during the Lambda function creation steps. However, it’s also important to perform some negative tests to verify the robustness of the tenant isolation controls.

To perform negative tests of tenant isolation

  1. In the Lambda console, navigate to the sts-ti-demo-s3-lambda function. Update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ objects.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant2_home"
    }
    

  2. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 13: The results of running the sts-ti-demo-s3-lambda function (negative test)

  3. Navigate to the sts-ti-demo-pgdb-lambda function and update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ data elements.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant2_dbuser"
    }
    

  4. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 14: The results of running the sts-ti-demo-pgdb-lambda function (negative test)
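Both negative tests can also be automated. The following sketch invokes each function with a cross-tenant event and checks that the request is rejected; the function names are the ones created earlier in this post.

import json
import boto3

lambda_client = boto3.client('lambda')

negative_tests = [
    ('sts-ti-demo-s3-lambda',   {"login_tenant_id": "tenant1", "s3_tenant_home": "tenant2_home"}),
    ('sts-ti-demo-pgdb-lambda', {"login_tenant_id": "tenant1", "dbuser": "tenant2_dbuser"}),
]

for function_name, event in negative_tests:
    response = lambda_client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(event).encode())
    result = json.loads(response['Payload'].read())
    # Tenant isolation holds only if the cross-tenant request is denied
    assert result['statusCode'] == 400, f'{function_name} did not deny cross-tenant access'
    print(function_name, 'correctly denied cross-tenant access')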

Cleaning up

To de-clutter your environment, remove the roles, policies, Lambda functions, Lambda layers, Amazon S3 prefixes, database users, and the database table that you created as part of this exercise. You can choose to delete the S3 bucket, as well as the Aurora PostgreSQL-Compatible database cluster that we mentioned in the Prerequisites section, to avoid incurring future charges.

Update the security group of the Aurora PostgreSQL-Compatible database cluster to remove the inbound rule that you added to allow a PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).

Conclusion

By taking advantage of attribute-based access control (ABAC) in IAM, you can more efficiently implement tenant isolation in SaaS applications. The solution we presented here helps to achieve tenant isolation in Amazon S3 and Aurora PostgreSQL-Compatible databases by using ABAC with the pool model of data partitioning.

If you run into any issues, you can use Amazon CloudWatch and AWS CloudTrail to troubleshoot.

Author

Ashutosh Upadhyay

Ashutosh works as a Senior Security, Risk, and Compliance Consultant in AWS Professional Services (GFS) and is based in Singapore. Ashutosh started off his career as a developer, and for the past many years has been working in the Security, Risk, and Compliance field. Ashutosh loves to spend his free time learning and playing with new technologies.

Author

Chirantan Saha

Chirantan works as a DevOps Engineer in AWS Professional Services (GFS) and is based in Singapore. Chirantan has 20 years of development and DevOps experience working for large banking, financial services, and insurance organizations. Outside of work, Chirantan enjoys traveling, spending time with family, and helping his children with their science projects.

Tails 4.20 is out

Post Syndicated from original https://lwn.net/Articles/862771/rss

Tails is a privacy-focused distribution, and Tails 4.20 "completely changes how to connect to the Tor network from Tails" with the new Tor Connection assistant.

This new assistant is most useful for users who are at high risk of
physical surveillance, under heavy network censorship, or on a poor
Internet connection:

  • It better protects users who need to go unnoticed when using Tor could look suspicious to someone who monitors their Internet connection (parental control, abusive partner, school or work network, etc.).

  • It allows people who need to connect to Tor using bridges to configure
    them without having to change the default configuration in the Welcome
    Screen.

  • It helps first-time users understand how to connect to a local Wi-Fi
    network.

  • It provides feedback while connecting to Tor and helps troubleshoot
    network problems.

Bivol publishes the reports sent to Velislav Minekov: The erasure of Levski's cell in the Palace costs Yara Bubnova her head

Post Syndicated from Николай Марченко original https://bivol.bg/%D0%B7%D0%B0%D0%BB%D0%B8%D1%87%D0%B5%D0%BD%D0%B0%D1%82%D0%B0-%D0%BA%D0%B8%D0%BB%D0%B8%D1%8F-%D0%BD%D0%B0-%D0%BB%D0%B5%D0%B2%D1%81%D0%BA%D0%B8-%D0%B2-%D0%B4%D0%B2%D0%BE%D1%80%D0%B5%D1%86%D0%B0-%D0%B2.html

Tuesday, 13 July 2021


The "repair and restoration" led by the Ministry of Culture during Vezhdi Rashidov's tenure destroyed the unique appearance of the Royal Palace in Sofia on Tsar Osvoboditel Boulevard, which today houses…

Enable federation to multiple Amazon QuickSight accounts with Microsoft Azure Active Directory

Post Syndicated from Srikanth Baheti original https://aws.amazon.com/blogs/big-data/enable-federation-to-multiple-amazon-quicksight-accounts-with-microsoft-azure-active-directory/

Amazon QuickSight is a scalable, serverless, embeddable, machine learning (ML)-powered business intelligence (BI) service built for the cloud that supports identity federation in both Standard and Enterprise editions. Organizations are working towards centralizing their identity and access strategy across all of their applications, including on-premises, third-party, and applications on AWS. Many organizations use Microsoft Azure Active Directory (Azure AD) to control and manage user authentication and authorization centrally. If your organization uses Azure AD for cloud applications and multiple QuickSight accounts, you can enable federation to all of your QuickSight accounts without needing to create and manage users multiple times. This authorizes users to access QuickSight assets—analyses, dashboards, folders, and datasets—through centrally managed Azure AD.

In this post, we go through the steps to configure federated single sign-on (SSO) between a single Azure AD instance and multiple QuickSight accounts. We demonstrate registering an SSO application in Azure AD, creating roles in Azure AD, and assigning these roles to map to QuickSight roles (admin, author, and reader). These QuickSight roles represent three different personas supported in QuickSight. Administrators can publish the QuickSight app in the Azure App portal to enable users to SSO to QuickSight using their Azure AD credentials.

Prerequisites

To complete this walkthrough, you must have the following prerequisites:

  • An Azure AD subscription
  • One or more QuickSight account subscriptions

Solution overview

The walkthrough includes the following steps:

  1. Register an AWS Single Sign-On (AWS SSO) application in Azure AD.
  2. Configure the application in Azure AD.
  3. Add Azure AD as your SAML identity provider (IdP) in AWS.
  4. Configure AWS Identity and Access Management (IAM) policies.
  5. Configure IAM roles.
  6. Create roles in Microsoft Graph Explorer.
  7. Assign the newly created roles through Graph Explorer to users in Azure AD.
  8. Test the application from Azure AD.

Register an AWS SSO application in Azure AD

To configure the integration of an AWS SSO application in Azure AD, you need to add AWS SSO to your list of managed software as a service (SaaS) apps.

  1. Sign in to the Azure portal using a Microsoft account.
  2. Under Azure services, choose Azure Active Directory.

  1. In the navigation pane, under Manage, choose Enterprise Applications.

  1. Choose All applications.
  2. Choose New application.

  1. In the Browse Azure AD Gallery section, search for AWS Single Sign-On.
  2. Choose AWS Single Sign-On from the results panel and add the application.

  1. For Name, enter Amazon QuickSight.
  2. After the application is created, copy the Object ID value from the application overview.

You need this object ID in the later steps.

Configure an AWS SSO application in Azure AD

Follow these steps to enable Azure AD SSO in the Azure portal.

  1. In the Azure portal, on the AWS SSO application that you registered in the first step, in the Manage section, choose single sign-on.
  2. On the Select a single sign-on method page, choose SAML.
  3. Choose the pencil icon.
  4. For Identifier (Entity ID), enter URN:AMAZON:WEBSERVICES.
  5. For Reply URL, enter https://signin.aws.amazon.com/saml.
  6. Leave Sign on URL blank.
  7. For Relay State, enter https://quicksight.aws.amazon.com.
  8. Leave Logout URL blank.
  9. Choose Save.

  1. On the Set up Single Sign-On with SAML page, under User Attributes & Claims, choose Edit.

  1. In the Additional Claims section, configure SAML token attributes by using the following values:

  • RoleSessionName: source attribute user.userprincipalname, namespace https://aws.amazon.com/SAML/Attributes
  • Role: source attribute user.assignedroles, namespace https://aws.amazon.com/SAML/Attributes
  • SessionDuration: source attribute is a value from 900 seconds (15 minutes) to 43,200 seconds (12 hours), namespace https://aws.amazon.com/SAML/Attributes
  1. In the SAML Signing Certificate section, choose Download to download the federation metadata XML file.

You use this XML document later when setting up the SAML provider in IAM.

Add Azure AD as your SAML IdP in AWS

To configure Azure AD as your SAML IdP, complete the following steps:

  1. Open a new tab in your browser.
  2. Sign in to the IAM console in your AWS account with admin permissions.
  3. On the IAM console, under Access Management in the navigation pane, choose Identity providers.
  4. Choose Add provider.

  1. For Provider name, enter AzureActiveDirectory.
  2. Choose Choose file to upload the metadata document you downloaded in the earlier step.
  3. Choose Add provider.

  1. In the banner message that appears, choose View provider.

  1. Copy the ARN to use in a later step.

  1. Repeat these steps in other accounts where you want to enable SSO.

Configure IAM policies

In this step, you create three IAM policies for mapping to three different roles with permissions in QuickSight (admin, author, and reader).

Use the following steps to set up the QuickSight-Admin-Account1 policy. This policy grants admin privileges in QuickSight to the federated user.

  1. On the IAM console, choose Policies.
  2. Choose Create policy.
  3. Choose JSON and replace the existing text with the code from the following table for QuickSight-Admin-Account1.
Policy name: QuickSight-Admin-Account1

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "quicksight:CreateAdmin",
            "Resource": "*"
        }
    ]
}

Policy name: QuickSight-Author-Account1

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "quicksight:CreateUser",
            "Resource": "*"
        }
    ]
}

Policy name: QuickSight-Reader-Account1

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "quicksight:CreateReader",
            "Resource": "*"
        }
    ]
}

  1. Choose Review policy.
  2. For Name, enter QuickSight-Admin-Account1.
  3. Choose Create policy.
  4. Repeat the steps for QuickSight-Author-Account1 and QuickSight-Reader-Account1.
  5. Repeat these steps in other accounts where you want to enable SSO.
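As an alternative to the console steps above, you can create all three policies with a few lines of boto3. This is a hedged sketch that reuses the policy documents shown earlier.

import json
import boto3

iam = boto3.client('iam')

policies = {
    'QuickSight-Admin-Account1':  'quicksight:CreateAdmin',
    'QuickSight-Author-Account1': 'quicksight:CreateUser',
    'QuickSight-Reader-Account1': 'quicksight:CreateReader',
}

for name, action in policies.items():
    document = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": action, "Resource": "*"}]
    }
    iam.create_policy(PolicyName=name, PolicyDocument=json.dumps(document))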

Configure IAM roles

Next, create the roles that your Azure AD users assume when federating into QuickSight. Use the following steps to set up the admin role:

  1. On the IAM console, choose Roles.
  2. Choose Create role.
  3. For Select type of trusted entity, choose SAML 2.0 federation.
  4. For SAML provider, choose the provider you created earlier (AzureActiveDirectory).
  5. Select Allow programmatic and AWS Management Console access.
  6. For Attribute, choose SAML:aud.
  7. For Value, enter https://signin.aws.amazon.com/saml.

  1. Choose Next: Permissions.
  2. Choose the QuickSight-Admin-Account1 IAM policy you created in the previous step.
  3. Choose Next: Tags.
  4. Choose Next: Review.
  5. For Role name, enter QuickSight-Admin-Role.
  6. For Role description, enter a description.
  7. Choose Create role.

  1. On the IAM console, in the navigation pane, choose Roles.
  2. Choose the QuickSight-Admin-Role role you created to open the role’s properties.
  3. Copy the role ARN to the notepad.
  4. On the Trust Relationships tab, choose Edit Trust Relationship.
  5. Under Trusted Entities, verify that the IdP you created is listed.
  6. Under Conditions, verify that SAML:aud with a value of https://signin.aws.amazon.com/saml is present.

  1. Repeat these steps to create your author and reader roles and attach the appropriate policies:
    1. For QuickSight-Author-Role, use the policy QuickSight-Author-Account1.
    2. For QuickSight-Reader-Role, use the policy QuickSight-Reader-Account1.
  2. Repeat these steps in other accounts where you want to enable SSO.
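Behind the scenes, the console wizard produces a SAML federation trust policy with the SAML:aud condition described above. The following hedged boto3 sketch creates the admin role programmatically; the account number and ARNs are placeholders.

import json
import boto3

iam = boto3.client('iam')
account_id = '111122223333'  # placeholder

saml_trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": f"arn:aws:iam::{account_id}:saml-provider/AzureActiveDirectory"},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }]
}

iam.create_role(
    RoleName='QuickSight-Admin-Role',
    AssumeRolePolicyDocument=json.dumps(saml_trust),
    Description='Admin role assumed through Azure AD federation')
iam.attach_role_policy(
    RoleName='QuickSight-Admin-Role',
    PolicyArn=f'arn:aws:iam::{account_id}:policy/QuickSight-Admin-Account1')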

Create roles in Microsoft Graph Explorer

Optionally, you can create roles within Azure AD with paid subscriptions. Open Microsoft Graph Explorer, and then do the following:

  1. Sign in to the Microsoft Graph Explorer site with the domain account for your tenant.

You need sufficient permissions to create the roles.

  1. To grant permissions, choose the ellipsis (three dots) next to your name and choose Select permissions.

  1. On the Permission list, expand Directory.
  2. Select the three directory-level permissions as shown in the following screenshot and choose Consent.

  1. Sign in to Graph Explorer again, and accept the site usage conditions.
  2. Choose GET for the method, and 1.0 for the version.
  3. In the query box, enter https://graph.microsoft.com/v1.0/servicePrincipals/<objectId> (use the object ID you saved earlier).
  4. In the Response preview pane, copy the response to an editor of your choice to modify.

  1. Extract the appRoles property from the service principal object.

Now you generate new roles for your application. These roles must match the IAM roles in AWS that you created earlier.

  1. From the notepad, use the format <Role ARN>, <IdP ARN> to create your roles:
    1. arn:aws:iam::5xxxxxxxxxx9:role/QuickSight-Admin-Role,arn:aws:iam::5xxxxxxxxxx9:saml-provider/AzureActiveDirectory
    2. arn:aws:iam::5xxxxxxxxxx9:role/QuickSight-Author-Role,arn:aws:iam::5xxxxxxxxxx9:saml-provider/AzureActiveDirectory
    3. arn:aws:iam::5xxxxxxxxxx9:role/QuickSight-Reader-Role,arn:aws:iam::5xxxxxxxxxx9:saml-provider/AzureActiveDirectory
    4. arn:aws:iam::0xxxxxxxxxx2:role/QS-Admin-AZAd-Role,arn:aws:iam::0xxxxxxxxxx2:saml-provider/AzureAd-Acct2
    5. arn:aws:iam::0xxxxxxxxxx2:role/QS-Author-AZAd-Role,arn:aws:iam::0xxxxxxxxxx2:saml-provider/AzureAd-Acct2
    6. arn:aws:iam::0xxxxxxxxxx2:role/QS-Reader-AZAd-Role,arn:aws:iam::0xxxxxxxxxx2:saml-provider/AzureAd-Acct2
  2. The following JSON code is an example of the appRoles property. Create a similar object to add the roles for your application:
            "appRoles": [
                {
                    "allowedMemberTypes": [
                        "User"
                    ],
                    "description": "User",
                    "displayName": "User",
                    "id": "8774f594-1d59-4279-b9d9-59ef09a23530",
                    "isEnabled": true,
                    "origin": "Application",
                    "value": null
                },
                {
                    "allowedMemberTypes": [
                        "User"
                    ],
                    "description": "msiam_access",
                    "displayName": "msiam_access",
                    "id": "e7f1a7f3-9eda-48e0-9963-bd67bf531afd",
                    "isEnabled": true,
                    "origin": "Application",
                    "value": null
                },
                {
                    "allowedMemberTypes": [
                        "User"
                    ],
                    "description": "Raji Quicksight Admin",
                    "displayName": "RajiQSAdmin",
                    "id": "9a07d03d-667f-405d-b5d7-68bec5b64584",
                    "isEnabled": true,
                    "origin": "ServicePrincipal",
                    "value": "arn:aws:iam::0xxxxxxxxxx2:role/QS-Admin-AZAd-Role,arn:aws:iam::0xxxxxxxxxx2:saml-provider/AzureAd-Acct2"
                },
                {
                    "allowedMemberTypes": [
                        "User"
                    ],
                    "description": "Sri Quicksight Admin",
                    "displayName": "SriQSAdmin",
                    "id": "77dd76d1-f897-4093-bf9a-8f3aaf25f30e",
                    "isEnabled": true,
                    "origin": "ServicePrincipal",
                    "value": "arn:aws:iam::5xxxxxxxxxx9:role/QuickSight-Admin-Role,arn:aws:iam::5xxxxxxxxxx9:saml-provider/AzureActiveDirectory"
                }
            ]

New roles must come after the msiam_access entry for the patch operation, as shown in the preceding example. You can also add multiple roles, depending on your organization’s needs. Azure AD sends the value of these roles as the claim value in the SAML response.

When adding new roles, you must provide a new GUID for each ID attribute in the JSON payload. You can use an online GUID generation tool to generate a new, unique GUID for each role.
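If you prefer not to use an online tool, any standard UUID library produces a suitable value. For example, in Python:

import uuid

# Generate one unique GUID per new app role
print(str(uuid.uuid4()))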

  1. In Microsoft Graph Explorer, change the method from GET to PATCH.
  2. Patch the service principal object with the roles you want by updating the appRoles property, like the one shown in the preceding example.
  3. Choose Run Query to run the patch operation. A success message confirms the creation of the role for your AWS application.

After the service principal is patched with new roles, you can assign users and groups to their respective roles.

  1. In the Azure portal, go to the QuickSight application you created and choose Users and Groups.
  2. Create your groups.

We recommend creating a new group for every AWS role in order to assign a particular role to the group. This one-to-one mapping means that one group is assigned to one role. You can then add members to the group.

  1. After you create the groups, choose the group and assign it to the application.

Nested groups are not allowed.

  1. To assign the role to the group, choose the role, then choose Assign.

Test the application

In this section, you test your Azure AD SSO configuration by using Microsoft Applications.

  1. Navigate to Microsoft Applications.
  2. On the My Apps page, choose AWS Single Sign-On.

  1. Choose a specific role for the QuickSight account you want to use.

You’re redirected to the QuickSight console.

Summary

This post provided step-by-step instructions to configure federated SSO between a single Azure AD instance and multiple QuickSight accounts. We also discussed how to create new roles and map users and groups in Azure AD to IAM for secure access into multiple QuickSight accounts.

For information about federating from Azure AD to a single QuickSight account, see Enabling Amazon QuickSight federation with Azure AD.


About the Authors

 

Srikanth Baheti is a Specialized World Wide Sr. Solution Architect for Amazon QuickSight. He started his career as a consultant and worked for multiple private and government organizations. Later he worked for PerkinElmer Health and Sciences & eResearch Technology Inc, where he was responsible for designing and developing high traffic web applications, highly scalable and maintainable data pipelines for reporting platforms using AWS services and Serverless computing.

 

Raji Sivasubramaniam is a Specialist Solutions Architect at AWS, focusing on Analytics. Raji has 20 years of experience in architecting end-to-end Enterprise Data Management, Business Intelligence and Analytics solutions for Fortune 500 and Fortune 100 companies across the globe. She has in-depth experience in integrated healthcare data and analytics with wide variety of healthcare datasets including managed market, physician targeting and patient analytics. In her spare time, Raji enjoys hiking, yoga and gardening.

 

Padmaja Suren is a Senior Solutions Architect specialized in QuickSight. She has 20+ years of experience in building scalable data platforms for Reporting, Analytics and AI/ML using a variety of technologies. Prior to AWS, in her role as BI Architect at ERT, she designed, engineered and cloud-enabled the BI and Analytics platform for the management of large scale clinical trial data conducted across the world. She dedicates her free time on her passion project SanghWE which helps sexual trauma survivors in developing nations heal and recover.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/862767/rss

Security updates have been issued by Debian (sogo), Fedora (libvirt), Gentoo (polkit), Mageia (binutils, freeradius, guile1.8, kernel, kernel-linus, libgrss, mediawiki, mosquitto, php-phpmailer, and webmin), openSUSE (bluez and jdom2), Oracle (kernel and xstream), Scientific Linux (xstream), and SUSE (kernel and python-pip).

Subscription Changes for Computer Backup

Post Syndicated from original https://www.backblaze.com/blog/subscription-changes-for-computer-backup/

Thank you all for being Backblaze customers and fans. We’re writing today’s post to let you know that effective August 16th, 2021 at 5 p.m. Pacific, the prices for the Backblaze Computer Backup service are increasing. At that time, our prices per subscription will change to:

Extension Program FAQ

Why the Change?

In short, because of double-digit growth in customer data storage, significant increases in supply chain costs, and our desire to continue investing in providing you with a great service.

Here’s a little more information:

Data Growth and Component Price Increases

Our Computer Backup service is unlimited (and we mean it). Businesses and individuals can back up as much data from their Macs and PCs as they like, and we back up external drives by default as well. This means that as our customers generate more and more data, our costs can rise while our prices remain fixed.

Over the last 14 years, we have worked diligently to keep our costs low and pass our savings on to customers. We’ve invested in deduplication, compression, and other technologies to continually optimize our storage platform and drive our costs down—savings which we pass on to our customers in the form of storing more data for the same price.

However, the average backup size stored by Computer Backup customers has spiked 15% over just the last two years. Additionally, not only have component prices not fallen at traditional rates, but recently electronic components that we rely on to provide our services have actually increased in price.

The combination of these two trends, along with our desire to continue investing in providing a great service, is driving the need to modestly increase our prices.

The Service Keeps Improving

While the cost of our Computer Backup service is increasing, you’re going to continue getting great value for your money. For example, in just the last two years (most recently with version 8.0), we have:

  • Added Extended Version History, which allows customers to retain their backups for longer—up to one year or even forever.
  • Increased backup speeds—faster networks and more intelligent threading means that you can back up quickly and get protected faster.
  • Optimized the app to be kinder to your computer—less load on the computer means we stay out of the way while keeping you protected, leaving your resources free for whatever else you’re working on.
  • Re-architected the app to reduce strain on SSDs—we’ve rewritten how the app handles copying files for backup, which reduces strain and extends the useful life of SSDs, which are common in newer computers.
  • Improved data access by enhancing our mobile apps—backing up your data is one thing, but accessing them is equally important. Our mobile apps give you access to all of your backed up files on the go.
  • Easing deployment options—for our business customers, installing and managing backups across all of their users’ machines is a huge job; we improved our silent installers and mass deployment tools to make their lives easier.

These are just some of the major improvements we’ve made in recent years—nearly every week we push big and small improvements to our service, upgrading our single sign-on options, optimizing inherit backup state functionality, and much more. (A lot of the investments are under-the-covers to silently make the service function more efficiently and seamlessly.)

Lock In Your Current Price With a Subscription Extension

As a way of thanking you for being a loyal Backblaze customer, we’re giving you the opportunity to lock in your existing Computer Backup pricing for one extra year beyond your current subscription period.

(Read our Extension Program FAQ to learn more.)

Thank you for being a customer. We really appreciate your trust in us and are committed to continuing to provide a service that makes it easy to get your data backed up, access it from anywhere in the world, protect it from ransomware, and to locate your computer should it be lost or stolen.

Answers to Questions You Might Have

Are Backblaze B2 Cloud Storage Prices Changing?

No. While data flowing into our storage cloud is up across the board, our B2 Cloud Storage platform charges for usage by the byte, so customers pay for the amount of data that they use. Meanwhile, Computer Backup is an unlimited service, and the increase in our customers’ average storage amount plus the recent spike in rising hardware costs are contributing factors to the increase.

Will You Raise Prices Again?

We have no plans to raise prices in the future. While we expect the data stored by our customers to continue growing, we also expect that the global supply chain challenges will stabilize. We work hard to drive down the cost of storage and provide a great service at an affordable price and intend to continue doing exactly that.

The post Subscription Changes for Computer Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Старата песен на нов глас

Post Syndicated from Екип на Биволъ original https://bivol.bg/%D1%81%D1%82%D0%B0%D1%80%D0%B0%D1%82%D0%B0-%D0%BF%D0%B5%D1%81%D0%B5%D0%BD-%D0%BD%D0%B0-%D0%BD%D0%BE%D0%B2-%D0%B3%D0%BB%D0%B0%D1%81.html

вторник 13 юли 2021


Бир ферари кармазъ. Май така беше, не знам чалгарски много добре. Но по чалготеки съм  ходил, признавам си без бой. И с бой бих си признал, но за това трябва да…

Building well-architected serverless applications: Implementing application workload security – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-implementing-application-workload-security-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC3: How do you implement application security in your workload?

This post continues part 1 of this security question. Previously, I covered reviewing security awareness documentation such as the Common Vulnerabilities and Exposures (CVE) database. I showed how to use GitHub security features to inspect and manage code dependencies, and how to validate inbound events using Amazon API Gateway request validation.

Required practice: Store secrets that are used in your code securely

Store secrets such as database passwords or API keys in a secrets manager. Using a secrets manager allows for auditing access, easier rotation, and prevents exposing secrets in application source code. There are a number of AWS and third-party solutions to store and manage secrets.

AWS Partner Network (APN) member Hashicorp provides Vault to keep secrets and application data secure. Vault has a centralized workflow for tightly controlling access to secrets across applications, systems, and infrastructure. You can store secrets in Vault and access them from an AWS Lambda function to, for example, access a database. You can use the Vault Agent for AWS to authenticate with Vault, receive the database credentials, and then perform the necessary queries. You can also use the Vault AWS Lambda extension to manage the connectivity to Vault.

AWS Systems Manager Parameter Store allows you to store configuration data securely, including secrets, as parameter values.

AWS Secrets Manager enables you to replace hardcoded credentials in your code with an API call to Secrets Manager to retrieve the secret programmatically. You can protect, rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can also generate secure secrets. By default, Secrets Manager does not write or cache the secret to persistent storage.

Parameter Store integrates with Secrets Manager. For more information, see “Referencing AWS Secrets Manager secrets from Parameter Store parameters.”
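For example, parameters under the /aws/reference/secretsmanager/ path let you read a Secrets Manager secret through the Parameter Store API. This hedged sketch assumes a secret named demo/database-credentials already exists.

import boto3

ssm = boto3.client('ssm')

# Read a Secrets Manager secret through the Parameter Store reference path
response = ssm.get_parameter(
    Name='/aws/reference/secretsmanager/demo/database-credentials',  # assumed secret name
    WithDecryption=True)
print(response['Parameter']['Value'])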

To show how Secrets Manager works, deploy the solution detailed in “How to securely provide database credentials to Lambda functions by using AWS Secrets Manager”.

The AWS CloudFormation stack deploys an Amazon RDS MySQL database with a randomly generated password. This is stored in Secrets Manager using a secret resource. A Lambda function behind an API Gateway endpoint returns the record count in a table from the database, using the required credentials. Lambda function environment variables store the database connection details and which secret to return for the database password. The password is not stored as an environment variable, nor in the Lambda function application code.

Lambda environment variables for Secrets Manager

The application flow is as follows:

  1. Clients call the API Gateway endpoint
  2. API Gateway invokes the Lambda function
  3. The Lambda function retrieves the database secrets using the Secrets Manager API
  4. The Lambda function connects to the RDS database using the credentials from Secrets Manager and returns the query results

View the password secret value, which is randomly generated as part of the stack deployment, in the Secrets Manager console.

Example password stored in Secrets Manager

The Lambda function includes the following code to retrieve the secret from Secrets Manager. The function then uses it to connect to the database securely.

import json
import os

import boto3
import pymysql

secret_name = os.environ['SECRET_NAME']
rds_host = os.environ['RDS_HOST']
name = os.environ['RDS_USERNAME']
db_name = os.environ['RDS_DB_NAME']
# Region for the Secrets Manager client (assumed to come from the Lambda runtime environment)
region_name = os.environ['AWS_REGION']

session = boto3.session.Session()
client = session.client(
    service_name='secretsmanager',
    region_name=region_name
)
# Retrieve the secret value from Secrets Manager
get_secret_value_response = client.get_secret_value(
    SecretId=secret_name
)
...
# The secret is stored as a JSON string; parse it to extract the password
secret = get_secret_value_response['SecretString']
j = json.loads(secret)
password = j['password']
...
# Connect to the RDS MySQL database using the retrieved credentials
conn = pymysql.connect(
    host=rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)

Browsing to the endpoint URL specified in the CloudFormation output displays the number of records. This confirms that the Lambda function has successfully retrieved the secure database credentials and queried the table for the record count.

Lambda function retrieving database credentials

Audit secrets access through a secrets manager

Monitor how your secrets are used to confirm that the usage is expected, and log any changes to them. This helps to ensure that any unexpected usage or change can be investigated, and unwanted changes can be rolled back.

Hashicorp Vault uses Audit devices that keep a detailed log of all requests and responses to Vault. Audit devices can append logs to a file, write to syslog, or write to a socket.

Secrets Manager supports logging API calls with AWS CloudTrail. CloudTrail captures all API calls for Secrets Manager as events. This includes calls from the Secrets Manager console and from code calling the Secrets Manager APIs.

Viewing the CloudTrail event history shows the requests to secretsmanager.amazonaws.com. This shows the requests from the console in addition to the Lambda function.
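
You can also review this activity programmatically by filtering the CloudTrail event history on the Secrets Manager event source. A minimal sketch using boto3:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent events recorded for the Secrets Manager service
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        'AttributeKey': 'EventSource',
        'AttributeValue': 'secretsmanager.amazonaws.com',
    }],
    MaxResults=50,
)

for event in response['Events']:
    print(event['EventTime'], event['EventName'], event.get('Username', ''))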

CloudTrail showing access to Secrets Manager

Secrets Manager also works with Amazon EventBridge so you can trigger alerts when administrator-specified operations occur. You can configure EventBridge rules to alert on deleted secrets or secret rotation. You can also create an alert if anyone tries to use a secret version while it is pending deletion. This can identify and alert when there is an attempt to use an out-of-date secret.
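
As an illustration, the sketch below creates an EventBridge rule that matches Secrets Manager DeleteSecret calls recorded by CloudTrail. The rule name is a hypothetical example, and you would still need to attach a target, such as an SNS topic, to receive the alert.

import json

import boto3

events = boto3.client('events')

# Match Secrets Manager deletion calls captured by CloudTrail (the rule name is a hypothetical example)
events.put_rule(
    Name='alert-on-secret-deletion',
    State='ENABLED',
    EventPattern=json.dumps({
        'source': ['aws.secretsmanager'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {
            'eventSource': ['secretsmanager.amazonaws.com'],
            'eventName': ['DeleteSecret'],
        },
    }),
)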

Enforce least privilege access to secrets

Access to secrets must be tightly controlled because the secrets contain sensitive information. Create AWS Identity and Access Management (IAM) policies that enable minimal access to secrets to prevent credentials being accidentally used or compromised. Secrets that have policies that are too permissive could be misused by other environments or developers. This can lead to accidental data loss or compromised systems. For more information, see “Authentication and access control for AWS Secrets Manager”.
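
For instance, an identity-based policy can limit a function's role to reading a single secret. The following boto3 sketch creates such a policy; the policy name and secret ARN are hypothetical placeholders that you would replace with your own values.

import json

import boto3

iam = boto3.client('iam')

# Allow reading only one specific secret (the policy name and ARN are hypothetical examples)
policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': 'secretsmanager:GetSecretValue',
        'Resource': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-credentials-AbCdEf',
    }],
}

iam.create_policy(
    PolicyName='read-single-db-secret',
    PolicyDocument=json.dumps(policy_document),
)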

Rotate secrets frequently

Rotating your workload secrets is important. This prevents misuse of your secrets since they become invalid within a configured time period.

Secrets Manager allows you to rotate secrets on a schedule or on demand. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise. Secrets Manager creates a CloudFormation stack with a Lambda function to manage the rotation process for you. Secrets Manager has native integrations with Amazon RDS, Amazon Redshift, and Amazon DocumentDB. It populates the function with the Amazon Resource Name (ARN) of the secret. You specify the permissions to rotate the credentials, and how often you want to rotate the secret.

The CloudFormation stack creates a MySecretRotationSchedule resource with a MyRotationLambda function to rotate the secret every 30 days.

MySecretRotationSchedule:
    Type: AWS::SecretsManager::RotationSchedule
    DependsOn: SecretRDSInstanceAttachment
    Properties:
      SecretId: !Ref MyRDSInstanceRotationSecret
      RotationLambdaARN: !GetAtt MyRotationLambda.Arn
      RotationRules:
        AutomaticallyAfterDays: 30
MyRotationLambda:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.7
      Role: !GetAtt MyLambdaExecutionRole.Arn
      Handler: mysql_secret_rotation.lambda_handler
      Description: 'This is a lambda to rotate MySql user passwd'
      FunctionName: 'cfn-rotation-lambda'
      CodeUri: 's3://devsecopsblog/code.zip'
      Environment:
        Variables:
          SECRETS_MANAGER_ENDPOINT: !Sub 'https://secretsmanager.${AWS::Region}.amazonaws.com'

View and edit the rotation settings in the Secrets Manager console.

Secrets Manager rotation settings

Manually rotate the secret by selecting Rotate secret immediately. This invokes the Lambda function, which updates the database password and updates the secret in Secrets Manager.
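
You can also trigger the same rotation from code or a script rather than the console. A minimal boto3 sketch, assuming rotation is already configured for the secret and using a hypothetical secret name:

import boto3

secretsmanager = boto3.client('secretsmanager')

# Start an immediate rotation of the secret (the secret name is a hypothetical example)
secretsmanager.rotate_secret(SecretId='prod/db-credentials')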

View the updated secret in Secrets Manager, where the password has changed.

Secrets Manager password change

Browse to the endpoint URL to confirm you can still access the database with the updated credentials.

Access endpoint with updated Secrets Manager password

You can provide your own code to customize a Lambda rotation function for other databases or services. The code includes the commands required to interact with your secured service to update or add credentials.
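
As a minimal sketch of what such a custom rotation function can look like, the handler below follows the four rotation steps that Secrets Manager passes in the invocation event. The helper functions are hypothetical placeholders for your service-specific logic.

import boto3

secretsmanager = boto3.client('secretsmanager')

def lambda_handler(event, context):
    # Secrets Manager invokes the rotation function with these three fields
    arn = event['SecretId']
    token = event['ClientRequestToken']
    step = event['Step']

    if step == 'createSecret':
        # Generate a new candidate secret and store it as the AWSPENDING version
        create_secret(secretsmanager, arn, token)
    elif step == 'setSecret':
        # Apply the pending credentials to the downstream service (for example, a database user)
        set_secret(secretsmanager, arn, token)
    elif step == 'testSecret':
        # Verify the pending credentials work before promoting them
        test_secret(secretsmanager, arn, token)
    elif step == 'finishSecret':
        # Mark the pending version as AWSCURRENT so clients start using it
        finish_secret(secretsmanager, arn, token)
    else:
        raise ValueError('Unknown rotation step: ' + step)

def create_secret(client, arn, token):
    # Hypothetical placeholder: generate a candidate secret, for example with client.get_random_password(),
    # and store it with client.put_secret_value(..., VersionStages=['AWSPENDING'])
    pass

def set_secret(client, arn, token):
    # Hypothetical placeholder: update the secured service with the pending credentials
    pass

def test_secret(client, arn, token):
    # Hypothetical placeholder: connect to the service with the pending credentials to confirm they work
    pass

def finish_secret(client, arn, token):
    # Hypothetical placeholder: promote the pending version with client.update_secret_version_stage()
    pass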

Conclusion

Implementing application security in your workload involves reviewing and automating security practices at the application code level. By implementing code security, you can protect against emerging security threats. You can improve the security posture by checking for malicious code, including third-party dependencies.

In this post, I continue from part 1, looking at securely storing, auditing, and rotating secrets that are used in your application code.

In the next post in the series, I start to cover the reliability pillar from the Well-Architected Serverless Lens with regulating inbound request rates.

For more serverless learning resources, visit Serverless Land.

Iranian State-Sponsored Hacking Attempts

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/iranian-state-sponsored-hacking-attempts.html

Interesting attack:

Masquerading as UK scholars with the University of London’s School of Oriental and African Studies (SOAS), the threat actor TA453 has been covertly approaching individuals since at least January 2021 to solicit sensitive information. The threat actor, an APT who we assess with high confidence supports Islamic Revolutionary Guard Corps (IRGC) intelligence collection efforts, established backstopping for their credential phishing infrastructure by compromising a legitimate site of a highly regarded academic institution to deliver personalized credential harvesting pages disguised as registration links. Identified targets included experts in Middle Eastern affairs from think tanks, senior professors from well-known academic institutions, and journalists specializing in Middle Eastern coverage.

These connection attempts were detailed and extensive, often including lengthy conversations prior to presenting the next stage in the attack chain. Once the conversation was established, TA453 delivered a “registration link” to a legitimate but compromised website belonging to the University of London’s SOAS radio. The compromised site was configured to capture a variety of credentials. Of note, TA453 also targeted the personal email accounts of at least one of their targets. In subsequent phishing emails, TA453 shifted their tactics and began delivering the registration link earlier in their engagement with the target without requiring extensive conversation. This operation, dubbed SpoofedScholars, represents one of the more sophisticated TA453 campaigns identified by Proofpoint.

The report details the tactics.

News article.

Amazon Simple Queue Service (SQS) – 15 Years and Still Queueing!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-sqs-15-years-and-still-queueing/

Time sure does fly! I wrote about the production launch of Amazon Simple Queue Service (SQS) back in 2006, with no inkling that I would still be blogging fifteen years later, or that this service would still be growing rapidly while remaining fundamental to the architecture of so many different types of web-scale applications.

The first beta of SQS was announced with little fanfare in late 2004. Even though we have added many features since that beta, the original description (“a reliable, highly scalable hosted queue for buffering messages between distributed application components”) still applies. Our customers think of SQS as an infinite buffer in the cloud and use SQS queues to implement the connections between the functional parts of their application architecture.
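
To illustrate the buffering pattern, here is a minimal producer and consumer sketch using boto3; the queue name is a hypothetical example.

import boto3

sqs = boto3.client('sqs')

# Look up the queue URL (the queue name is a hypothetical example)
queue_url = sqs.get_queue_url(QueueName='orders-queue')['QueueUrl']

# Producer side: buffer a message for downstream components
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 123}')

# Consumer side: receive, process, and then delete the message
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get('Messages', []):
    print(message['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])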

Over the years, we have worked hard to keep the SQS interface simple and easy to use, even though there’s a lot of complexity inside. In order to make SQS scalable, durable, and reliable, messages are stored in a fleet that consists of thousands of servers in each AWS Region. Within a region, we save three copies of each message, taking care to distribute the messages across storage nodes and Availability Zones. In addition to this built-in redundant storage, SQS is self-healing and resilient to host failures & network interruptions. Even though SQS is now 15 years old, we continue to improve our scaling and load management applications so that you can always count on SQS to handle your mission-critical workloads.

SQS runs at an incredible scale. For example:

Amazon Product Fulfillment – During Prime Day 2021, traffic to SQS set a new record, peaking at 47.7 million messages per second.

Rapid7 – Amazon customer Rapid7 makes extensive use of SQS. According to Jeff Myers (Platform Software Architect):

Amazon SQS provides us with a simple to use, highly performant, and highly scalable messaging service without any management headaches. It is a crucial component of our architecture that has allowed us to scale to handle 10s of billions of messages per day.

You can visit the Amazon SQS home page to read about other high-scale use cases from NASA, BMW Group, Capital One, and other AWS customers.

Serverless Office Hours
Be sure to join us today (June 13th) for Serverless Office Hours at Noon PT on Twitch.tv. Rumor has it that there will be cake!

Back in Time
Here’s a quick recap of some of the major SQS milestones:

2006 – Production launch. An unlimited number of queues per account and items per queue, with each item up to 8 KB in length. Pay as you go pricing based on messages sent and data transferred.

2007 – Additional functions including control of the visibility timeout and access to the approximate number of queued messages.

2009 – SQS in Europe, and additional control over queue access via AddPermission and RemovePermission. This launch also marked the debut of what we then called the Access Policy Language, which has evolved into today’s IAM Policies.

2010 – A new free tier (100K requests/month then, since expanded to 1 million requests/month), configurable maximum message length (up to 64 KB), and configurable message retention time.

2011 – Additional CloudWatch Metrics for each SQS queue. Later that year we added batch operations (SendMessageBatch and DeleteMessageBatch), delay queues, and message timers.

2012 – Support for long polling, along with SDK support for request batching and client-side buffering.

2013 – Support for even larger payloads (256 KB) and a 50% price reduction.

2014 – Support for dead letter queues, to accept messages that have become stuck due to a timeout or to a processing error within your code.

2015 – Support for extended payloads (up to 2 GB) using the Extended Client Library.

2016 – Support for FIFO queues with exactly-once processing and deduplication and another price reduction.

2017 – Support for server-side encryption of messages and cost allocation tags.

2018 – Support for Amazon VPC Endpoints using AWS PrivateLink and the ability to invoke Lambda functions.

2019 – Support for Tag-on-Create and X-Ray tracing.

2020 – Support for 1-minute metrics for more fine-grained queue monitoring, a new console experience, and result pagination for the ListQueues and ListDeadLetterSourceQueues functions.

2021 – Tiered pricing so that you save money as your usage grows, and a High Throughput mode for FIFO queues.

Today, SQS remains simple, scalable, cost-effective, and highly reliable.

From the AWS Heroes
We asked some of the AWS Heroes to reflect on the success of SQS and to share some of their success stories. Here’s what we learned:

Eric Hammond (Serverless Hero) uses AWS Lambda Dead Letter Queues. They put alarms on the queues and send internal emails as alerts when problems need to be investigated.

Tom McLaughlin (Serverless Hero) has been using SQS since 2015. He said “My favorite use case is anytime someone wants a queue and I don’t want to manage a queuing platform. Which is always.”

Ken Collins (Serverless Hero) was not entirely sure how long he had been using SQS, and offered a range of 2 to 8 years! He uses it to power the Lambdakiq gem, which implements ActiveJob on SQS & Lambda.

Kyuhyun Byun (Serverless Hero) has been using SQS to power a push system that must sustain massive amounts of traffic, and tells us “Thanks to SQS, I don’t consider building a queuing system anymore.”

Prashanth HN (Serverless Hero) has been using SQS since 2017, and considers himself “late to the party.” He used SQS as part of his first serverless application, and finds that it is ideal for connecting services with differing throughput.

Ben Kehoe (Serverless Hero) told us that “I first saw the power of SQS when a colleague showed me how to retain state in a fleet of EC2 Spot instances by storing the state in SQS when an instance was getting shut down, and having new instances check the queue for state on startup.”

Jeremy Daly (Serverless Hero) started using SQS in 2010 as a lightweight queue that fed a facial recognition process on user-supplied images. Today, he often uses it as a buffer to throttle requests to downstream services that are not yet serverless.

Casey Lee (Container Hero) also started to use SQS in 2010 as a replacement for ActiveMQ between loosely coupled Java services. Casey implements auto scaling based on queue depth, and has found it to be an accurate way to handle the load.

Vlad Ionecsu (Container Hero) began his AWS journey with SQS back in 2014. Vlad found that the API was very easy to understand, and used SQS to power his thesis project.

Sheen Brisals (Serverless Hero) started to use SQS in 2018 while building a proof-of-concept that also used Lambda and S3. Sheen appreciates the ability to adjust the characteristics of each queue to create a good match for the message processing functions, and also makes use of both high and low priority queues.

Gojko Adzic (Serverless Hero) began to use SQS in 2013 as a task dispatch for exporters in MindMup. This online mind-mapping application allows large groups of users to collaborate, and requires strict ordering of updates to each document. Gojko used FIFO queues to allow them to process messages for different documents in parallel, with sequential order within each document.

Sebastian Müller (Serverless Hero) started to use SQS in 2016 by building a notification center for website-builder jimdo.com. The center ensures that customers are kept aware of events (orders, support messages, and contact requests) on a timely basis.

Luca Bianchi (Serverless Hero) first used SQS in 2012. He decoupled a pair of microservices running on AWS Elastic Beanstalk, and also created a fan-out processing system for a gamification platform. Today, his favorite SQS use case stores inference jobs and makes them available to a worker process running on Amazon SageMaker.

Peter Hanssens (Serverless Hero) uses SQS to offload tasks that do not need to be processed immediately. Several years ago, while assisting some data scientists, he created an event-driven batch-processing system that used a Lambda function to check a queue every 5 minutes and fire up EC2 instances to build models, while keeping strict control over the maximum number of running instances.

Serkan Ozal (Serverless Hero) has been using SQS since 2013 or so. He focuses on asynchronous message processing and counts on the ability of SQS to handle a huge number of events. He also makes uses of the message visibility feature so that he can re-process messages if necessary.

Matthieu Napoli (Serverless Hero) has been using SQS for about five years, starting out with EC2, and as a replacement for other queues. As he says, “Paired with Lambda, it gives massive parallelism out of the box without having to think too much about it. Plus the built-in failure handling makes it very robust.”

As you can see, there are a multitude of use cases for SQS.

SQS Resources
If you are not already using SQS, then it is time to get in the queue and get started. Here are some resources to help you find your way:

Happy queueing!

Jeff;


Scheduled report generation in Zabbix 5.4

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/scheduled-report-generation-in-zabbix-5-4/14776/

The release of version 5.4 grants Zabbix users the ability to receive scheduled PDF reports in their mailbox, which is a very sought-after feature. This post and the video will cover all-new report-related configuration parameters and walk you through setting up scheduled report generation.

Contents

I. Reporting in Zabbix 5.4 (0:45)
II. Scheduled reports (2:26)
III. Questions & Answers (13:28)

Reporting in Zabbix 5.4

Zabbix 5.4 is our first big step in bringing out-of-the-box reporting for our end users. With this feature, we now have a foundation to build upon in the future and make reporting more robust and versatile over time. Since reports are 100% based on dashboard widgets, it’s only a matter of time until more report-focused widgets get released, thus enabling not only better dashboards but also improved reporting functionality.

  • We have implemented a new web service component responsible for generating reports — of course, you can install this server in a quick and easy fashion by using the provided packages.
  • Reporting works out of the box without the need to deploy or develop any custom scripts.
  • The initial configuration is easy to understand and implement.
  • Reporting will use the existing Email media types to send out these reports.
  • The reports do respect your permissions, as well as roles introduced in Zabbix 5.2.
  • You will be able to test the report before it goes out on its configured schedule, just by clicking the Test button.

Scheduled reports

We have added a new Scheduled reports section, where the list of reports is available, displaying each report's Name, Owner, Repeats (daily, weekly, etc.), the Period for which the report is generated, and the Last sent date.

Scheduled reports

NOTE. When you configure new reports, and they have not been sent out yet, the Last sent date will be set to ‘Never.’

Creating a report

When you create a report, you will also have to fill in a couple of fields:

  • Owner,
  • Name of the report,
  • Dashboard that the report will be based on,
  • Period — whether the report covers the Previous day, Previous week, Previous month, or Previous year,
  • Cycle — how often the report is sent: Daily/Weekly/Monthly/Yearly,
  • Start time (Zabbix server time is used here),
  • Start date and end date.

Creating reports

Receiving a report

When you receive a PDF report in your mailbox, you can also use the {TIME} macro to display the server time both in the subject and the body of the message.

Receiving a report

In the PDF report, you can display any information from the included dashboard – Graphs, Problems, Latest data, and much more. Thanks to all of the available widgets, we will be able to customize our reports in a very granular fashion.

Receiving a report example

The report does respect user permissions. So, in the example above, the report shows only the data to which the user (either the recipient or the report creator) has access.

Permissions

After upgrading to Zabbix 5.4, you will see two new options in the User roles section:

  • Scheduled reports UI element. Under the UI elements, you can grant or deny access to the Scheduled reports section. This is accessible only to Super admin and Admin user Types.

Permissions

If the Scheduled reports UI element is unchecked for the role, the user won’t be able to access the Scheduled report section and will see an error message. The same behavior is true if you use a URL to access the Scheduled reports.

Access to scheduled reports denied message for the users of a user role

You can also manage scheduled report permissions in the Access to actions section by checking the Manage scheduled reports box. This action permission grants or denies the ability to create or edit scheduled reports and is also accessible to Admin and Super admin user types.

Manage scheduled reports

If this check box is unchecked, the users won’t be able to create new or edit existing reports, though they will be able to access the UI section and see the list of reports and how they are configured.

Access to Manage scheduled reports restricted

Recipients of scheduled reports

When you are defining a new report, you can select the recipient. Report subscription can contain a user or a user group.

  • When selecting a user, you can specify to include or exclude the user from the subscription.
  • User group to host group permissions still apply.
  • You can specify which user is going to be generating the report – the recipient or the creator of the report:

Report recipients

For example, if we need to send some extra information to our NOC team that might not be directly available to them, you can select Current user, and the report will be generated with the permissions of the report creator. Since it is the admin that is creating the report, you can add some extra information that wouldn’t be visible to your NOC team or other regular users. They still won’t be able to access it in Zabbix, but they’ll receive it in their mailbox if you configure the report for them.

Report prerequisites

Diving a bit deeper into the technical side of things, we need to set up two additional packages to enable the reports:

  • zabbix-web-service — the additional reporting service, which by default listens on port 10053. The service needs to be reachable from the Zabbix server and can be deployed on the same machine as our frontend or our server. We also have the option to deploy it on a completely separate machine. The zabbix-web-service package should be available if you have added the Zabbix repository.
#yum install zabbix-web-service
  • Google Chrome is required. However, on some distributions, Chromium is reported to also work, though this is not 100% tested. Note that Google Chrome packages are not included in Zabbix. The Google Chrome packages can simply be downloaded from the Google Chrome website and then installed on the zabbix-web-service host.
#wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm

#yum install google-chrome-stable_current_x86_64.rpm

Configuring reports — Web service

We have a whole new configuration file for the web service, which supports many different configuration options:

  • Logging — similar to that for server and for proxy. You can set up debug levels, select the log types, rotations, and so on.
### Option: LogType – system (syslog), file, console (standard output)
### Option: LogFile – log file location
### Option: LogFileSize – size in MB before rotation
### Option: DebugLevel – 0-5
  • List of allowed server addresses that can access this web service.
### Option: AllowedIP – list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers
  • Timeout settings
### Option: Timeout – spend no more than Timeout seconds on processing (Default: 3)
  • Listen port
### Option: ListenPort – the service will listen on this port for connections from the server
(Default: ListenPort=10053)
  • Encryption settings by using certificates. This way the communication with the web service can be secured.
### Option: TLSAccept – unencrypted or cert
### Option: TLSCAFile – pathname of a file containing top-level CA(s) certificates
### Option: TLSCertFile – pathname of a file containing the service certificate
### Option: TLSKeyFile – pathname of a file containing the service private key

Configuring reports — Server

In addition, the server settings now contain report-related parameters:

  • The number of report writer instances.
### Option: StartReportWriters – number of pre-forked report writer instances
(Default: 0)

NOTE. You need to have at least one StartReportWriter

NOTE. The number of the necessary report writers will depend on the number of reports and how often you generate them.

  • Zabbix Web Service URL (to be passed on to the server)
### Option: WebServiceURL – URL to the Zabbix web service, used to perform web-related tasks (no default value)
# Example: http://192.168.1.156:10053/report

You need to make sure that the Zabbix server can reach the configured WebServiceURL, and that incoming traffic to the web service on this port is permitted.
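
Putting these together, a minimal sketch of the relevant zabbix_server.conf lines might look like the following; the address reuses the example above, and the number of report writers is an assumption that you should size to your report volume.

StartReportWriters=3
WebServiceURL=http://192.168.1.156:10053/report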

Configuring reports — Frontend

As the last step, you need to enable communication between the frontend and the web service.

In Administration > General > Other, we have a new configuration parameter where you need to specify your frontend URL that will be reachable by the web service.

Frontend URL

Once this is done, we can create a report.

Reports — testing

After you have created the report, you can test it. You can click the Test button and send out your test report to see if it works. The users to whom you're sending the report need to have an Email media assigned to them in the user settings.

NOTE. Currently, {TIME} macros are resolved only with the scheduled generation and are not available in test reports, though this might change in the future.

Testing reports

Common issues

Some parameters can certainly be misconfigured, so let’s look at the most common issues:

  • Make sure that you have a properly configured Email media assigned to the user that should be receiving the report. Otherwise, they will fail to receive it.

— Make sure that the Email media type settings are properly configured.
— Once you define the media type, if you’re creating it from scratch, make sure that you test the media type and generate a test report.

Media configuration failed

NOTE. Sending out the report failed in this example since no media is configured for the report recipients.

  • Make sure that the correct Web service address is configured on the Zabbix server in the WebServiceURL parameter.

— Confirm that the Zabbix server can connect to the Zabbix web service and that the specified port/IP address is reachable.
— Check your firewall settings if the web service is running on a dedicated machine.
— Make sure that third-party security software, such as SELinux or a firewall, doesn't block the communication.

Wrong WebServiceURL parameter

Otherwise, you will receive an error message on the frontend. The error messages should be detailed enough to point you in the right direction.

  • Make sure that the Web service URL is configured without any typos. Otherwise, you will reach the web service, but the report page will output an error — ‘404 page not found’.
WebServiceURL=http://192.168.1.156:10053/reportwrong

Typos in configuration error message

NOTE. If you see this error message, check for typos in the Zabbix server configuration file for WebServiceURL.

  • Don’t forget to assign the Frontend URL in Administration > General > Other.

— If a URL is misconfigured, you might start receiving empty reports.
— If the URL syntax is wrong, you will receive an error message about the malformed URL.

Malformed URL error message

Frontend URL configuration parameter

  • Google Chrome is not pre-packaged with Zabbix.

— You need to have the Google Chrome package installed separately. You can download Google Chrome from the official Google Chrome website, for instance, by using wget.

— Make sure that Google Chrome is available via the $PATH environment variable. If it isn't, you will receive an error message, and you will need to modify the path variable and make sure the executable is available there.

$PATH environmental variable error

Questions & Answers

Question. What are the possibilities to customize the page size like A4, A3?

Answer. It will be based on how you customize your Dashboard. Currently, you cannot customize the page and select portrait or landscape, for instance.


Community stories: Avye

Post Syndicated from Katie Gouskos original https://www.raspberrypi.org/blog/community-stories-avye-robotics-girls-tech/

We’re excited to share another incredible story from the community — the second in our new series of inspirational short films that celebrate young tech creators across the world.

A young teenager with glasses smiles
Avye discovered robotics at her local CoderDojo and is on a mission to get more girls like her into tech.

These stories showcase some of the wonderful things that young people are empowered to do when they learn how to create with technology. We hope that they will inspire many more young people to get creative with technology too!

Meet Avye

This time, you will meet an accomplished, young community member who is on a quest to encourage more girls to join her and get into digital making.

Help us celebrate Avye by liking and sharing her story on Twitter, Linkedin, or Facebook!

For as long as she can remember, Avye (13) has enjoyed creating things. It was at her local CoderDojo that seven-year-old Avye was introduced to the world of robotics. Avye’s second-ever robot, the Raspberry Pi–powered Voice O’Tronik Bot, went on to win the Hardware category at our Coolest Projects UK event in 2018.

A girl shows off a robot she has built
Avye showcased her Raspberry Pi–powered Voice O’Tronik Bot at Coolest Projects UK in 2018.

Coding and digital making have become an integral part of Avye’s life, and she wants to help other girls discover these skills too. She says, “I believe that it’s important for girls and women to see and be aware of ordinary girls and women doing cool things in the STEM world.” Avye started running her own workshops for girls in their community and in 2018 founded Girls Into Coding. She has now teamed up with her mum Helene, who is committed to helping to drive the Girls Into Coding mission forwards.

I want to get other girls like me interested in tech.

Avye

Avye has received multiple awards to celebrate her achievements, including the Princess Diana Award and Legacy Award in 2019. Most recently, in 2020, Avye won the TechWomen100 Award, the Women in Tech’s Aspiring Teen Award, and the FDM Everywoman in Tech Award!

We cannot wait to see what the future has in store for her. Help us celebrate Avye and inspire others by liking and sharing her story on Twitter, Linkedin, or Facebook!

The post Community stories: Avye appeared first on Raspberry Pi.

After the Elections

Post Syndicated from Bozho original https://blog.bozho.net/blog/3786

The final results have not been announced yet, but by all indications I fall just short of entering the National Assembly. More precisely: 194 votes short.

That may be disappointing, but far from enough to demotivate me even slightly. I have always said that from the outside I can do a lot of work for which, even “inside”, I would have no time left because of procedures and spotlights. Still, it is a minus that I will not be able to persuade and explain from within, and to build consensus on the topic of “modernisation”.

Thank you to all 2,614 people who voted for me personally. That is significant and deeply obliging support, and an incentive to work, regardless of whether my access to the National Assembly will be with an MP’s card or with an expert’s card for the parliamentary group, as I did in the previous parliament. Being an MP is only a tool, not a title or a reward.

Thank you to Democratic Bulgaria, and especially to the volunteers of Yes, Bulgaria in the 24th electoral district (MIR).

Unfortunately, given the political situation that took shape yesterday, I believe the National Assembly will not offer a good environment for rapid modernisation of the public sector. I hope this still changes for the better in the coming weeks.

Once again, thank you for all the support and trust. I will turn them into action and results.

The post After the Elections was published first on БЛОГодаря.

Does free software benefit from ML models being derived works of training data?

Post Syndicated from original https://mjg59.dreamwidth.org/57615.html

Github recently announced Copilot, a machine learning system that makes suggestions for you when you’re writing code. It’s apparently trained on all public code hosted on Github, which means there’s a lot of free software in its training set. Github assert that the output of Copilot belongs to the user, although they admit that it may occasionally produce output that is identical to content from the training set.

Unsurprisingly, this has led to a number of questions along the lines of “If Copilot embeds code that is identical to GPLed training data, is my code now GPLed?”. This is extremely understandable, but the underlying issue is actually more general than that. Even code under permissive licenses like BSD requires retention of copyright notices and disclaimers, and failing to include them is just as much a copyright violation as incorporating GPLed code into a work and not abiding by the terms of the GPL is.

But free software licenses only have power to the extent that copyright permits them to. If your code isn’t a derived work of GPLed material, you have no obligation to follow the terms of the GPL. Github clearly believe that Copilot’s output doesn’t count as a derived work as far as US copyright law goes, and as a result the licenses on the training data don’t apply to the output. Some people have interpreted this as an attack on free software – Copilot may insert code that’s either identical or extremely similar to GPLed code, and claim that there are no license obligations created as a result, effectively allowing the laundering of GPLed code into proprietary software.

I’m completely unqualified to hold a strong opinion on whether Github’s legal position is justifiable or not, and right now I’m also not interested in thinking about it too much. What I think is more interesting is what the impact of either position has on free software. Do we benefit more from a future where the output of Copilot (or similar projects) is considered a derived work of the training data, or one where it isn’t? Having been involved in a bunch of GPL enforcement activities, it’s very easy to think of this as something that weakens the GPL and, as a result, weakens free software. That was my initial reaction, but that’s shifted over the past few days.

Let’s look at the GNU manifesto, specifically this section:

The fact that the easiest way to copy a program is from one neighbor to another, the fact that a program has both source code and object code which are distinct, and the fact that a program is used rather than read and enjoyed, combine to create a situation in which a person who enforces a copyright is harming society as a whole both materially and spiritually; in which a person should not do so regardless of whether the law enables him to.

The GPL makes use of copyright law to ensure that GPLed work can’t be taken from the commons. Anyone who produces a derived work of GPLed code is obliged to provide that work under the same terms. If software weren’t copyrightable, the GPL would have no power. But this is the outcome Stallman wanted! The GPL doesn’t exist because copyright is good, it exists because software being copyrightable is what enables the concept of proprietary software in the first place.

The powers that the GPL uses to enforce sharing of code are used by the authors of proprietary software to reduce that sharing. They attempt to forbid us from examining their code to determine how it works – they argue that anyone who does so is tainted, unable to contribute similar code to free software projects in case they produce a derived work of the original. Broadly speaking, the further the definition of a derived work reaches, the greater the power of proprietary software authors. If Oracle’s argument that APIs are copyrightable had prevailed, it would have been disastrous for free software. If the Apple look and feel suit had established that Microsoft infringed Apple’s copyright, we might be living in a future where we had no free software desktop environments.

When we argue for an interpretation of copyright law that enhances the power of the GPL, we’re also enhancing the power of giant corporations with a lot of lawyers on hand. So let’s look at this another way. If Github’s interpretation of copyright law holds, we can train a model on proprietary code and extract concepts without having to worry about being tainted. The proprietary code itself won’t enter the commons, but the ideas it embodies will. No more worries about whether you’re literally copying the code that implements an algorithm you want to duplicate – simply start typing and let the model remove the risk for you.

There’s a reasonable counter argument about equality here. How much GPL-influenced code is going to end up in proprietary projects when compared to the reverse? It’s not an easy question to answer, but we should bear in mind that the majority of public repositories on Github aren’t under an open source license. Copilot is already claiming to give us access to the concepts embodied in those repositories. Do these provide more value than is given up? I honestly don’t know how to measure that. But what I do know is that free software was founded in a belief that software shouldn’t be constrained by copyright, and our default stance shouldn’t be to argue against the idea that copyright is weaker than we imagined.


App Modularisation at Scale

Post Syndicated from Grab Tech original https://engineering.grab.com/app-modularisation-at-scale

Grab a coffee ☕️, sit back and enjoy reading. 😃

Wanna know how we improved our app’s build time performance and developer experience at Grab? Continue reading…

Where it all began

Imagine you are working on an app that grows continuously as more and more features are added to it; at some point, it becomes challenging to manage the code. Code conflicts increase due to coupling, development slows down, releases take longer to ship, collaboration becomes difficult, and so on.

Grab superapp is one such app that offers many services like booking taxis, ordering food, payments using an e-wallet, transferring money to friends/families, paying at merchants, and many more, across Southeast Asia.

The Grab app initially followed a monolithic architecture, where the entire codebase was held in a single module containing all the UI and business logic for almost all of its features. But as the app grew, new developers were hired, and more features were built, it became difficult to work on the codebase. We had to think of better ways to maintain the codebase, and that’s when the team decided to modularise the app to solve the issues faced.

What is Modularisation?

Modularisation means breaking the monolithic app module into smaller, independent, and interchangeable modules that segregate functionality, so that every module is responsible for executing a specific piece of functionality and contains everything necessary to execute it.

Modularising the Grab app was not an easy task; its complicated structure and the high amount of code coupling brought many challenges along with it.

Approach and Design

We divided the task into the following sub-tasks to ensure that only one out of many functionalities in the app was impacted at a time.

  • Setting up the infrastructure by creating Base/Core modules for Networking, Analytics, Experimentation, Storage, Config, and so on.
  • Building Shared Library modules for Styling, Common-UI, Utils, etc.
  • Incrementally building Feature modules for user-facing features like Payments Home, Wallet Top Up, Peer-to-Merchant (P2M) Payments, GrabCard and many others.
  • Creating Kit modules for the feature to feature module communication. This step helped us in building the feature modules in parallel.
  • Finally, the App module is used as a hub to connect all the other modules together using dependency injection (Dagger).
Modularised app structure

In the above diagram, payments-home, wallet top-up, and grabcard are different features provided by the Grab app. top-up-kit and grabcard-kit are bridges that expose functionalities from topup and grabcard modules to the payments-home module, respectively.

In the process of modularising the Grab app, we ensured that a feature module did not directly depend on other feature modules so that they could be built in parallel using the available CPU cores of the machine, hence reducing the overall build time of the app.

With the Kit module approach, we separated our code into independent layers by depending only on abstractions instead of concrete implementation.

Modularisation Benefits

  • Faster build times and hence faster CI: Gradle build system compiles only the changed modules and uses the binaries of all the non-affected modules from its cache. So the compilation becomes faster. Moreover, independent modules are run in parallel on different threads.
  • Fine dependency graph: Dependencies of a module are well defined.
  • Reusability across other apps: Modules can be used across different apps by converting them into an AAR SDK.
  • Scale and maintainability: Teams can work independently on the modules owned by them without blocking each other.
  • Well-defined code ownership: Clear responsibility on who owns which code.

Limitations

  • Requires more effort and time to modularise an app.
  • Separate configuration files to be maintained for each module.
  • Gradle sync time starts to grow.
  • IDE becomes very slow and its memory usage goes up a lot.
  • Parallel execution of the module depends on the machine’s capabilities.

Where we are now

There are more than 1,000 modules in the Grab app, and counting.

At Grab, we have many sub-teams which take care of different features available in the app. Grab Financial Group (GFG) is one such sub-team that handles everything related to payments in the app. For example: P2P & P2M money transfers, e-Wallet activation, KYC, and so on.

We started modularising payments further in July 2020 as it was already bombarded with too many features and it was difficult for the team to work on the single payments module. The result of payments modularisation is shown in the following chart.

Build time graph of payments module

As of today, we have more than 200 modules in GFG, and more than 95% of the modules take less than 15s to build.

Conclusion

Modularisation has helped us a lot in reducing the overall build time of the app and also, in improving the developer experience by breaking dependencies and allowing us to define code ownership. Having said that, modularisation is not an easy or a small task, especially for large projects with legacy code. However, with careful planning and the right design, modularisation can help in forming a well-structured and maintainable project.

Hope you enjoyed reading. Don’t forget to 👏.

References:

Join Us

Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!


SolarWinds Serv-U FTP and Managed File Transfer CVE-2021-35211: What You Need to Know

Post Syndicated from Erick Galinkin original https://blog.rapid7.com/2021/07/12/solarwinds-serv-u-ftp-and-managed-file-transfer-cve-2021-35211-what-you-need-to-know/

SolarWinds Serv-U FTP and Managed File Transfer CVE-2021-35211: What You Need to Know

On July 12, 2021, SolarWinds confirmed an actively exploited zero-day vulnerability, CVE-2021-35211, in the Serv-U FTP and Managed File Transfer component of SolarWinds Serv-U 15.2.3 HF1 (released May 5, 2021) and all prior versions. Successful exploitation of CVE-2021-35211 could enable an attacker to gain remote code execution on a vulnerable target system. The vulnerability only exists when SSH is enabled in the Serv-U environment.

A hotfix for the vulnerability is available, and we recommend all customers of SolarWinds Serv-U FTP and Managed File Transfer install this hotfix immediately (or, at minimum, disable SSH for a temporary mitigation). SolarWinds has emphasized that CVE-2021-35211 only affects Serv-U Managed File Transfer and Serv-U Secure FTP and does not affect any other SolarWinds or N-able (formerly SolarWinds MSP) products. For further details, see SolarWinds’s advisory.

Details

The SolarWinds advisory cites threat intelligence provided by Microsoft. According to Microsoft, a single threat actor unrelated to this year’s earlier SUNBURST intrusions has exploited the vulnerability against a limited, targeted population of SolarWinds customers. The vulnerability exists in all versions of Serv-U 15.2.3 HF1 and earlier. Though Microsoft provided a proof-of-concept exploit to SolarWinds, there are no public proofs-of-concept as of July 12, 2021.

The vulnerability appears to be in the exception handling functionality in a portion of the software related to processing connections on open sockets. Successful exploitation of the vulnerability will cause the Serv-U product to throw an exception, then will overwrite the exception handler with the attacker’s code, causing remote code execution.

Detection

Since the vulnerability is in the exception handler, looking for exceptions in the DebugSocketLog.txt file may help identify exploitation attempts. Note, however, that exceptions can be thrown for many reasons and the presence of an exception in the log does not guarantee that there has been an exploitation attempt.

IP addresses used by the threat actor include:

98.176.196.89 
68.235.178.32 
208.113.35.58
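
Combining these indicators, a minimal sketch that scans the Serv-U debug log for exception entries and for connections from the published IP addresses might look like the following; the log path is an example and depends on your Serv-U installation directory.

import re

# The path is an example; adjust it to your Serv-U installation directory
LOG_PATH = 'DebugSocketLog.txt'

SUSPECT_IPS = {'98.176.196.89', '68.235.178.32', '208.113.35.58'}
IP_PATTERN = re.compile(r'\d{1,3}(?:\.\d{1,3}){3}')

with open(LOG_PATH, encoding='utf-8', errors='ignore') as log_file:
    for line_number, line in enumerate(log_file, start=1):
        has_exception = 'exception' in line.lower()
        suspect_hits = SUSPECT_IPS.intersection(IP_PATTERN.findall(line))
        if has_exception or suspect_hits:
            print(line_number, line.rstrip())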

Rapid7 does not use SolarWinds Serv-U FTP products anywhere in our environment and is not affected by CVE-2021-35211.

For further information, see SolarWinds’s FAQ here.
