Backblaze Files Registration Statement for Proposed Initial Public Offering

Post Syndicated from Backblaze original https://www.backblaze.com/blog/backblaze-files-registration-statement-for-proposed-initial-public-offering/

San Mateo, California, October 18, 2021 – Backblaze, Inc. today announced that it has publicly filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission (“SEC”) relating to a proposed initial public offering of its Class A common stock. The number of shares to be offered and the price range for the offering have not yet been determined. Backblaze intends to list its Class A common stock on the Nasdaq Global Market under the ticker symbol “BLZE.”

Oppenheimer & Co., William Blair and Raymond James will act as lead book-running managers for the proposed offering, with JMP Securities and B. Riley Securities acting as joint book-running managers. Lake Street will act as co-manager for the proposed offering.

The offering will be made only by means of a prospectus. Copies of the preliminary prospectus related to the offering may be obtained, when available, from Oppenheimer & Co. Inc., Attention: Syndicate Prospectus Department, 85 Broad St., 26th Floor, New York, NY 10004, by telephone at (212) 667-8055, or by email at [email protected]; William Blair & Company, L.L.C. Attention: Prospectus Department, 150 North Riverside Plaza, Chicago, IL 60606, or by telephone at (800) 621-0687 or by email at [email protected]; or Raymond James & Associates, Inc., 880 Carillon Parkway, St. Petersburg, FL 33716, email: [email protected], telephone: 800-248-8863.

A registration statement relating to these securities has been filed with the SEC but has not yet become effective. These securities may not be sold, nor may offers to buy be accepted, prior to the time the registration statement becomes effective. This press release shall not constitute an offer to sell or the solicitation of an offer to buy, nor shall there be any sale of these securities in any state or jurisdiction in which such offer, solicitation, or sale would be unlawful prior to registration or qualification under the securities laws of any such state or jurisdiction.

Investors:
James Kisner
Vice President of Investor Relations
ir@backblaze.com

Press Contact:
Patrick Thomas
Head of Publishing
press@backblaze.com


Migrate to an Amazon Redshift Lake House Architecture from Snowflake

Post Syndicated from Soujanya Konka original https://aws.amazon.com/blogs/big-data/migrate-to-an-amazon-redshift-lake-house-architecture-from-snowflake/

The need to derive meaningful and timely insights increases proportionally with the amount of data being collected. Data warehouses play a key role in storing, transforming, and making data easily accessible to enable a wide range of use cases, such as data mining, business intelligence (BI) and reporting, and diagnostics, as well as predictive, prescriptive, and cognitive analysis.

Several new features of Amazon Redshift address a wide range of data requirements and improve the performance of extract, load, and transform (ELT) jobs and queries. Examples include concurrency scaling, the new RA3 instance types, elastic resize, materialized views, federated query, which lets you query data stored in your Amazon Aurora or Amazon Relational Database Service (Amazon RDS) PostgreSQL operational databases directly from Amazon Redshift, and the SUPER data type, which can store semi-structured data or documents as values. The new distributed and hardware-accelerated cache, AQUA (Advanced Query Accelerator) for Amazon Redshift, delivers up to 10 times more performance than other cloud warehouses. The machine learning (ML) based self-tuning capability that sets sort and distribution keys for tables significantly improves query performance, a task that previously had to be handled manually. For the latest feature releases for AWS services, see What’s New with AWS?

To take advantage of these capabilities and future innovation, you need to migrate from your current data warehouse, like Snowflake, to Amazon Redshift, which involves two primary steps:

  • Migrate raw, transformed, and prepared data from Snowflake to Amazon Simple Storage Service (Amazon S3)
  • Reconfigure data pipelines to move data from sources to Amazon Redshift and Amazon S3, which provide a unified, natively integrated storage layer of our Lake House Architecture

In this post, we show you how to migrate data from Snowflake to Amazon Redshift. We cover the second step, reconfiguring pipelines, in a later post.

Solution overview

Our solution is designed in two stages, as illustrated in the following architecture diagram.

The first part of our Lake House Architecture is to ingest data into the data lake. We use AWS Glue Studio with AWS Glue custom connectors to connect to the source Snowflake database and extract the tables we want and store them in Amazon S3. To accelerate extracting business insights, we load the frequently accessed data into an Amazon Redshift cluster. The infrequently accessed data is cataloged in the AWS Glue Data Catalog as external tables that can be easily accessed from our cluster.

For this post, we consider three tables: Customer, Lineitem, and Orders, from the open-source TPCH_SF10 dataset. An AWS Glue ETL job, created by AWS Glue Studio, moves the Customer and Orders tables from Snowflake into the Amazon Redshift cluster, and the Lineitem table is copied to Amazon S3 as an external table. A view is created in Amazon Redshift to combine internal and external datasets.

Prerequisites

Before we begin, complete the steps required to set up and deploy the solution:

  1. Create an AWS Secrets Manager secret with the credentials to connect to Snowflake: username, password, and warehouse details. For instructions, see Tutorial: Creating and retrieving a secret; a boto3 sketch follows this list.
  2. Download the latest Snowflake JDBC JAR file and upload it to an S3 bucket. You will find this bucket referenced as SnowflakeConnectionbucket in the CloudFormation step.
  3. Identify the tables in your Snowflake database that you want to migrate.
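
For the first prerequisite, the following is a minimal boto3 sketch of creating the secret. The secret name and key names are illustrative assumptions (the keys mirror the ${Username}, ${Password}, and ${warehouse} placeholders used in the JDBC URL base later in this post), so adjust them to whatever your connection expects.

import json

import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Store the Snowflake username, password, and warehouse as a single JSON secret.
# The secret name and values below are placeholders.
secretsmanager.create_secret(
    Name="snowflake/connection",
    SecretString=json.dumps({
        "Username": "snowflake_user",
        "Password": "snowflake_password",
        "warehouse": "COMPUTE_WH",
    }),
)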

Create a Snowflake connector using AWS Glue Studio

To complete a successful connection, you should be familiar with the Snowflake ecosystem and the associated parameters for Snowflake database tables. These can be passed as job parameters during run time. The following screenshot from a Snowflake test account shows the parameter values used in the sample job.

The following screenshot shows the account credentials and database from Secrets Manager.

To create your AWS Glue custom connector for Snowflake, complete the following steps:

  1. On the AWS Glue Studio console, under Connectors, choose Create custom connector.
  2. For Connector S3 URL, browse to the S3 location where you uploaded the Snowflake JDBC connector JAR file.
  3. For Name, enter a logical name.
  4. For Connector type, choose JDBC.
  5. For Class name, enter net.snowflake.client.jdbc.SnowflakeDriver.
  6. Enter the JDBC URL base in the following format: jdbc:snowflake://<snowflakeaccountinfo>/?user=${Username}&password=${Password}&warehouse=${warehouse}.
  7. For URL parameter delimiter, enter &.
  8. Optionally, enter a description to identify your connector.
  9. Choose Create connector.

Set up a Snowflake JDBC connection

To create a JDBC connection to Snowflake, complete the following steps:

  1. On the AWS Glue Studio console, choose Connectors.
  2. Choose the connector you created.
  3. Choose Create connection.

  4. For Name and Description, enter a logical name and description for your reference.
  5. For Connection credential type, choose default.
  6. For AWS Secret, choose the secret created as a part of the prerequisites.
  7. Optionally, you can specify the credentials in plaintext format.
  8. Under Additional options, add the following key-value pairs:
    1. Key db with the Snowflake database name
    2. Key schema with the Snowflake database schema
    3. Key warehouse with the Snowflake warehouse name
  9. Choose Create connection.

Configure other resources and permissions using AWS CloudFormation

In this step, we create additional resources with AWS CloudFormation, which includes an Amazon Redshift cluster, AWS Identity and Access Management (IAM) roles with policies, S3 bucket, and AWS Glue jobs to copy tables from Snowflake to Amazon S3 and from Amazon S3 to Amazon Redshift.

  1. Sign in to the AWS Management Console as an IAM power user, preferably an admin user.
  2. Choose your Region as us-east-1.
  3. Choose Launch Stack:
  4. Choose Next.
  5. For Stack name, enter a name for the stack, for example, snowflake-to-aws-blog.
  6. For Secretname, enter the secret name created in the prerequisites.
  7. For SnowflakeConnectionName, enter the Snowflake JDBC connection you created.
  8. For Snowflake Connection bucket, enter the name of the S3 bucket where the Snowflake connector JAR file is uploaded.
  9. For SnowflakeTableNames, enter the list of tables to migrate from Snowflake. For example, Lineitem,customers,order.
  10. For RedshiftTableNames, enter the list of the tables to load into your warehouse (Amazon Redshift). For example, customers,order.
  11. You can specify your choice of Amazon Redshift node type, number of nodes, and Amazon Redshift username and password, or use the default values.
  12. For MasterUserPassword, enter a password for your master user, keeping in mind the following constraints: it must be 8 to 64 characters in length and must contain at least one uppercase letter, one lowercase letter, and one number.
  13. Choose Create stack.
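
If you prefer to launch the stack from code instead of the console, a rough boto3 equivalent of these steps is sketched below. The template URL is a placeholder, and the parameter keys are taken from the names shown above, so verify both against the template behind the Launch Stack button before running.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Parameter keys mirror the names shown in the console walkthrough above;
# verify them against the actual template before running.
cfn.create_stack(
    StackName="snowflake-to-aws-blog",
    TemplateURL="https://example-bucket.s3.amazonaws.com/snowflake-migration.yaml",  # placeholder location
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles
    Parameters=[
        {"ParameterKey": "Secretname", "ParameterValue": "snowflake/connection"},
        {"ParameterKey": "SnowflakeConnectionName", "ParameterValue": "snowflake-jdbc-connection"},
        {"ParameterKey": "SnowflakeConnectionbucket", "ParameterValue": "my-snowflake-connector-bucket"},
        {"ParameterKey": "SnowflakeTableNames", "ParameterValue": "Lineitem,customers,order"},
        {"ParameterKey": "RedshiftTableNames", "ParameterValue": "customers,order"},
        {"ParameterKey": "MasterUserPassword", "ParameterValue": "ChangeMe1234"},  # placeholder
    ],
)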

Run AWS Glue jobs for the data load

The stack takes about 7 minutes to complete. After the stack is deployed successfully, perform the following actions:

  1. On the AWS Glue Studio console, under Databases, choose Connections.
  2. Select the connection redshiftconnection from the list and choose Test Connection.
  3. Choose the IAM role ExecuteGlueSnowflakeJobRole from the drop-down menu and choose Test connection.

If you receive an error, verify or edit the username and password and try again.

  1. After the connection is tested successfully, on the AWS Glue Studio console, select the job Snowflake-s3-load-job.
  2. On the Action menu, choose Run job.

When the job is complete, all the tables mentioned in the SnowflakeTableNames parameter are loaded into your S3 bucket. The time it takes to complete this job varies depending on the number and size of the tables.
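
If you would rather trigger and watch this job from code than from the console, the following is a minimal boto3 sketch; the job name comes from the walkthrough above, and the polling interval is arbitrary.

import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Start the Snowflake-to-S3 job created by the CloudFormation stack and wait for it.
run_id = glue.start_job_run(JobName="Snowflake-s3-load-job")["JobRunId"]

while True:
    state = glue.get_job_run(JobName="Snowflake-s3-load-job", RunId=run_id)["JobRun"]["JobRunState"]
    print("Job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)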

Now we load the identified tables in Amazon Redshift.

  1. Run the job s3-redshift-load-job.
  2. After the job is complete, navigate to the Amazon Redshift console.
  3. Use the query editor to connect to your cluster to verify that the tables specified in RedshiftTableNames are loaded successfully.

You can now view and query datasets from Amazon Redshift. The Lineitem dataset is on Amazon S3 and queried by Amazon Redshift Spectrum. The following screenshot shows how to create an Amazon Redshift external schema that allows you to query Amazon S3 data from Amazon Redshift.
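
For reference, the external schema shown in the screenshot can also be created through the Amazon Redshift Data API. The following is a minimal sketch; the cluster identifier, database, user, Data Catalog database name, and IAM role ARN are placeholders for values from your own environment.

import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Register the AWS Glue Data Catalog database as an external schema so that
# Redshift Spectrum can query the Lineitem data that lives in Amazon S3.
redshift_data.execute_statement(
    ClusterIdentifier="redshift-cluster-1",   # placeholder
    Database="dev",                           # placeholder
    DbUser="awsuser",                         # placeholder
    Sql="""
        CREATE EXTERNAL SCHEMA IF NOT EXISTS snowflake_ext
        FROM DATA CATALOG
        DATABASE 'snowflake_migration_db'
        IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole'
        CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """,
)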

Tables loaded into Amazon Redshift managed storage appear as shown in the following screenshot.

The AWS Glue job, using the standard worker type to move Snowflake data into Amazon S3, completed in approximately 21 minutes, loading overall 2.089 GB (about 76.5 million records). The following screenshot from the Snowflake console shows the tables and their sizes, which we copied to Amazon S3.

You have the ability to customize the AWS Glue worker type, worker nodes, and max concurrency to adjust distribution and workload.

AWS Glue allows parallel data reads from the data store by partitioning the data on a column. You must specify the partition column, the lower partition bound, the upper partition bound, and the number of partitions. This feature enables you to use data parallelism and multiple Spark executors allocated to the Spark application.
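
To make those four settings concrete, the following is a minimal PySpark sketch of a partitioned JDBC read against Snowflake, shown with the plain Spark JDBC reader rather than the AWS Glue Studio connector; the URL, credentials, table, and bounds are placeholders.

from pyspark.sql import SparkSession

# Assumes the Snowflake JDBC driver JAR is already on the Spark classpath (for example, via --jars).
spark = SparkSession.builder.appName("snowflake-partitioned-read").getOrCreate()

# Read LINEITEM in parallel: Spark issues one query per partition, splitting the
# range [lowerBound, upperBound] of the partition column across numPartitions.
lineitem = (
    spark.read.format("jdbc")
    .option("driver", "net.snowflake.client.jdbc.SnowflakeDriver")
    .option("url", "jdbc:snowflake://<snowflakeaccountinfo>/?warehouse=COMPUTE_WH")  # placeholder
    .option("user", "snowflake_user")          # placeholder
    .option("password", "snowflake_password")  # placeholder
    .option("dbtable", "TPCH_SF10.LINEITEM")
    .option("partitionColumn", "L_ORDERKEY")
    .option("lowerBound", "1")
    .option("upperBound", "60000000")          # rough upper key for SF10; adjust for your data
    .option("numPartitions", "10")
    .load()
)
print(lineitem.count())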

This completes our migration from Snowflake to Amazon Redshift, which enables a Lake House Architecture and the ability to analyze data in more ways. Next, we take a step further and discuss features of Amazon Redshift that can help extend this architecture for data democratization and modernize your data warehouse.

Modernize your data warehouse

Amazon Redshift powers the Lake House Architecture, which enables queries from your data lake, data warehouse, and other stores. Amazon Redshift can access the data lake using Redshift Spectrum. Amazon Redshift automatically engages nodes from a separate fleet of Redshift Spectrum nodes. These nodes run queries directly against Amazon S3, run scans and aggregations, and return the data to the compute nodes for further processing.

AWS Lake Formation provides a governance solution for data stored in an Amazon S3-based data lake and offers a central permission model with fine-grained access controls at the column and row level. Lake Formation uses the AWS Glue Data Catalog as a central metadata repository and makes it simple to ingest and catalog data using blueprints and crawlers.

The following screenshot shows the tables from Snowflake represented in the AWS Glue Data Catalog and managed by Lake Formation.

With the Amazon Redshift data lake export feature, you can also save data back in Amazon S3 in open formats like Apache Parquet, to use with other analytics services like Amazon Athena and Amazon EMR.

Distributed storage

Amazon Redshift RA3 gives the flexibility to scale compute and storage independently. Amazon Redshift data is stored on Amazon Redshift managed storage backed by Amazon S3. Distribution of datasets between cluster storage and Amazon S3 allows you to benefit from bringing the appropriate compute to the data depending on your use case. You can query data from Amazon S3 without accessing Amazon Redshift.

Let’s look at an example with the star schema. We can save a fact table that we expect to grow rapidly in Amazon S3, with its schema saved in the Data Catalog, and keep dimension tables in cluster storage. You can use views that union data from both Amazon S3 and the attached Amazon Redshift managed storage.
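
A minimal sketch of that pattern, again through the Redshift Data API: the view unions a local table with the external (Amazon S3) table and is created WITH NO SCHEMA BINDING, which Amazon Redshift requires for views that reference external tables. The cluster, schema, and table names are placeholders.

import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Late-binding view that unions hot rows kept in the cluster with the
# historical rows kept in Amazon S3 (via the external schema created earlier).
redshift_data.execute_statement(
    ClusterIdentifier="redshift-cluster-1",   # placeholder
    Database="dev",                           # placeholder
    DbUser="awsuser",                         # placeholder
    Sql="""
        CREATE OR REPLACE VIEW public.lineitem_all AS
        SELECT * FROM public.lineitem_recent   -- local table in Redshift managed storage (hypothetical name)
        UNION ALL
        SELECT * FROM snowflake_ext.lineitem   -- external table backed by Amazon S3
        WITH NO SCHEMA BINDING;
    """,
)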

Another model for data distribution can be based on whether data is hot or cold, with hot data in Amazon Redshift managed storage and cold data in Amazon S3. In this example, we have the datasets lineitem, customer, and orders. The customer and orders datasets are infrequently updated in comparison to lineitem. We can create an external table to read lineitem data from Amazon S3, with its schema from the Data Catalog database, and load customer and orders into Amazon Redshift tables. The following screenshot shows a join query between the datasets.

It would be interesting to know the overall run statistics for this query, which can be queried from system tables. The following code gets the stats from the preceding query using svl_s3query_summary:

select elapsed, s3_scanned_rows, s3_scanned_bytes,
s3query_returned_rows, s3query_returned_bytes, files, avg_request_parallelism
from svl_s3query_summary
where query = 1918
order by query,segment;

The following screenshot shows query output.

For more information about this query, see Using the SVL_S3QUERY_SUMMARY view.

Automated table optimization

Distribution and sort keys are table properties that define how data is physically stored, and Amazon Redshift can manage them for you. Automatic table optimization continuously observes how queries interact with tables and uses ML to select the best sort and distribution keys to optimize performance for the cluster’s workload. To enhance performance, Amazon Redshift chooses the keys and alters the tables automatically.

In the preceding scenario, the lineitem table had distkey (L_ORDERKEY), the customer table had distribution ALL, and orders had distkey (O_ORDERKEY).

Storage optimization

The choice of data format (such as JSON, CSV, or Parquet) depends on the data size. Redshift Spectrum currently supports Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV data formats. When you choose your format, consider the overall data scanned and I/O efficiency, such as with a small dataset in CSV or JSON format versus the same dataset in columnar Parquet format. In this case, for smaller scans, Parquet consumes more compute capacity compared to CSV, and may eventually take around the same time as CSV. In most cases, Parquet is the optimal choice, but you need to consider other inputs like volume, cost, and latency.

SUPER data type

The SUPER data type offers native support for semi-structured data. It supports nested data formats such as JSON and Ion files. This allows you to ingest, store, and query nested data natively in Amazon Redshift. You can store JSON formatted data in SUPER columns.

You can query the SUPER data type through an easy-to-use SQL extension powered by PartiQL. PartiQL is a SQL-compatible query language that makes it easy to efficiently query data regardless of its format, whether the data is structured or semi-structured.

Pause and resume

Pause and resume lets you easily start and stop a cluster to save costs for intermittent workloads. This way, you can cost-effectively manage a cluster with infrequently accessed data.

You can apply pause and resume via the console, API, and user-defined schedules.
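
For example, through the API, pausing and resuming a cluster is a single call each; a minimal boto3 sketch follows (the cluster identifier is a placeholder).

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Pause the cluster outside business hours and resume it when needed.
redshift.pause_cluster(ClusterIdentifier="redshift-cluster-1")   # placeholder identifier
# ... later ...
redshift.resume_cluster(ClusterIdentifier="redshift-cluster-1")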

AQUA

AQUA for Amazon Redshift is a large high-speed cache architecture on top of Amazon S3 that can scale out to process data in parallel across many nodes. It flips the current paradigm of bringing the data to the compute—AQUA brings the compute to the storage layer so the data doesn’t have to move back and forth between the two, which enables Amazon Redshift to run queries much faster.

Data sharing

The data sharing feature seamlessly allows multiple Amazon Redshift clusters to query data located in RA3 clusters and their managed storage. This is ideal for workloads that are isolated from each other but need to share data for cross-group collaboration, without actually copying the data.

Concurrency scaling

Amazon Redshift automatically adds transient clusters in seconds to serve sudden spikes in concurrent requests with consistently fast performance. For every 1 day of usage, 1 hour of concurrency scaling is available at no charge.

Conclusion

In this post, we discussed an approach to migrate a Snowflake data warehouse to a Lake House Architecture with a central data lake accessible through Amazon Redshift.

We covered how to use AWS Glue to move data from sources like Snowflake into your data lake, catalog it, and make it ready to analyze in a few simple steps. We also saw how to use Lake Formation to enable governance and fine-grained security in the data lake. Lastly, we discussed several new features of Amazon Redshift that make it easy to use, perform better, and scale to meet business demands.


About the Authors

Soujanya Konka is a Solutions Architect and Analytics specialist at AWS, focused on helping customers build their ideas on the cloud. She has expertise in the design and implementation of business information systems and data warehousing solutions. Before joining AWS, Soujanya had stints with companies such as HSBC and Cognizant.

Shraddha Patel is a Solutions Architect and Big data and Analytics Specialist at AWS. She works with customers and partners to build scalable, highly available and secure solutions in the AWS cloud.

“The Venezuelan Laundromat”: Maduro’s Treasurer Extradited to the US, While Specialized Prosecutors Stall Over His Money in Investbank AD

Post Syndicated from Николай Марченко original https://bivol.bg/%D0%BA%D0%BE%D0%B2%D1%87%D0%B5%D0%B6%D0%BD%D0%B8%D0%BA%D1%8A%D1%82-%D0%BD%D0%B0-%D0%BC%D0%B0%D0%B4%D1%83%D1%80%D0%BE-%D0%B5%D0%BA%D1%81%D1%82%D1%80%D0%B0%D0%B4%D0%B8%D1%80%D0%B0%D0%BD-%D0%B2-%D1%81.html

Monday, October 18, 2021


Over the past weekend, the Colombian billionaire Alex Saab Morán, who holds diplomatic status as a “special envoy” of the Venezuelan dictator Nicolás Maduro, was extradited from Cabo Verde to the United States, where he is in…

Field Notes: Perform Automations in Ungoverned Regions During Account Launch Using AWS Control Tower Lifecycle Events

Post Syndicated from Amit Kumar original https://aws.amazon.com/blogs/architecture/field-notes-perform-automations-in-ungoverned-regions-during-account-launch-using-aws-control-tower-lifecycle-events/

This post was co-authored by Amit Kumar; Partner Solutions Architect at AWS, Pavan Kumar Alladi; Senior Cloud Architect at Tech Mahindra, and Thooyavan Arumugam; Senior Cloud Architect at Tech Mahindra.

Organizations use AWS Control Tower to set up and govern secure, multi-account AWS environments. Enterprises with a global presence frequently want to use AWS Control Tower to perform automations during account creation, including in AWS Regions where the AWS Control Tower service is not available. To review the current list of Regions where AWS Control Tower is available, visit the AWS Regional Services List.

This blog post shows you how you can use AWS Control Tower lifecycle events, AWS Service Catalog, and AWS Lambda to perform automation in a Region where the AWS Control Tower service is unavailable. This solution depicts the scenario for a single Region, and it needs to be changed to work in a multi-Region scenario.

We use an AWS CloudFormation template that creates a virtual private cloud (VPC) with a subnet and an internet gateway as an example, and use it in a shared Service Catalog product at the organization level to make it available in child accounts. Every time an AWS Control Tower lifecycle event related to account creation occurs, a Lambda function is initiated to perform automation activities in AWS Regions that are not governed by AWS Control Tower.

The AWS services used in this solution are shown in the following architecture diagram (Figure 1).

Figure 1. Solution architecture

Prerequisites

For this walkthrough, you need the following prerequisites:

  • AWS Control Tower configured with AWS Organizations defined and registered within AWS Control Tower. For this blog post, AWS Control Tower is deployed in AWS Mumbai Region and with an AWS Organizations structure as depicted in Figure 2.
  • Working knowledge of AWS Control Tower.
Figure 2. AWS Organizations structure

Create an AWS Service Catalog product and portfolio, and share at the AWS Organizations level

  1. Sign in to AWS Control Tower management account as an administrator, and select an AWS Region which is not governed by AWS Control Tower (for this blog post, we will use AWS us-west-1 (N. California) as the Region because at this time it is unavailable in AWS Control Tower).
  2. In the AWS Service Catalog console, in the left navigation menu, choose Products.
  3. Choose upload new product. For Product Name enter customvpcautomation, and for Owner enter organizationabc. For method, choose Use a template file.
  4. In Upload a template file, select Choose file, and then select the CloudFormation template you are going to use for automation. In this example, we are going to use a CloudFormation template which creates a VPC with CIDR 10.0.0.0/16, Public Subnet, and Internet Gateway.
Figure 3. AWS Service Catalog product

CloudFormation template: save this as a YAML file before selecting this in the console.

AWSTemplateFormatVersion: 2010-09-09
Description: Template to create a VPC with CIDR 10.0.0.0/16 with a Public Subnet and Internet Gateway. 

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: VPC

  IGW:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: IGW

  VPCtoIGWConnection:
    Type: AWS::EC2::VPCGatewayAttachment
    DependsOn:
      - IGW
      - VPC
    Properties:
      InternetGatewayId: !Ref IGW
      VpcId: !Ref VPC

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Public Route Table

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn:
      - PublicRouteTable
      - VPCtoIGWConnection
    Properties:
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref IGW
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet:
    Type: AWS::EC2::Subnet
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      MapPublicIpOnLaunch: true
      CidrBlock: 10.0.0.0/24
      AvailabilityZone: !Select
        - 0
        - !GetAZs
          Ref: AWS::Region
      Tags:
        - Key: Name
          Value: Public Subnet

  PublicRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    DependsOn:
      - PublicRouteTable
      - PublicSubnet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet

Outputs:

  PublicSubnet:
    Description: Public subnet ID
    Value:
      Ref: PublicSubnet
    Export:
      Name:
        'Fn::Sub': '${AWS::StackName}-SubnetID'

  VpcId:
    Description: The VPC ID
    Value:
      Ref: VPC
    Export:
      Name:
        'Fn::Sub': '${AWS::StackName}-VpcID'
  1. After the CloudFormation template is selected, choose Review, and then choose Create Product.
Figure 4. AWS Service Catalog product

  1. In the AWS Service Catalog console, in the left navigation menu, choose Portfolios, and then choose Create portfolio.
  2. For Portfolio name, enter customvpcportfolio, for Owner, enter organizationabc, and then choose Create.
Figure 5. AWS Service Catalog portfolio

  1. After the portfolio is created, select customvpcportfolio. In the actions dropdown, select Add product to portfolio. Then select customvpcautomation product, and choose Add Product to Portfolio.
  2. Navigate back to customvpcportfolio, and select the portfolio name to see all the details. On the portfolio details page, expand the Groups, roles, and users tab, and choose Add groups, roles, users. Next, select the Roles tab and search for AWSControlTowerAdmin role, and choose Add access.
Figure 6. AWS Service Catalog portfolio role selection

  1. Navigate to the Share section in portfolio details, and choose Share option. Select AWS Organization, and choose Share.

Note: If you get a warning stating “AWS Organizations sharing is not enabled”, then choose Enable and select the organizational unit (OU) where you want this portfolio to be shared. In this case, we have shared it at the Workload OU, where all workload accounts are created.

Figure 7. AWS Service Catalog portfolio sharing

Create an AWS Identity and Access Management (IAM) role

  1. Sign in to AWS Control Tower management account as an administrator and navigate to IAM Service.
  2. In the IAM console, choose Policies in the navigation pane, then choose Create Policy.
  3. Click on Choose a service, and select STS. In the Actions menu, choose All STS Actions, in Resources, choose All resources, and then choose Next: Tags.
  4. Skip the Tag section, go to the Review section, and for Name enter lambdacrossaccountSTS, and then choose Create policy.
  5. In the navigation pane of the IAM console, choose Roles, and then choose Create role. For the use case, select Lambda, and then choose Next: Permissions.
  6. Select AWSServiceCatalogAdminFullAccess and AmazonSNSFullAccess, then choose Next: Tags (skip tag screen if needed), then choose Next: Review.
  7. For Role name, enter Automationnongovernedregions, and then choose Create role.
Figure 8. AWS IAM role permissions

Create an Amazon Simple Notification Service (Amazon SNS) topic

  1. Sign in to the AWS Control Tower management account as an administrator and select the AWS Mumbai Region (Home Region for AWS Control Tower). Navigate to the Amazon SNS service, and on the navigation panel, choose Topics.
  2. On the Topics page, Choose Create topic. On the Create topic page, in the Details section, for Type select Standard, and for Name enter ControlTowerNotifications. Keep default for other options, and then choose Create topic.
  3. In the Details section, in the left navigation pane, choose Subscriptions.
  4. On the Subscriptions page, choose Create subscription. For Protocol, choose Email, for Endpoint enter the email address that should receive the notifications, and then choose Create subscription.

You will receive an email stating that the subscription is in pending status. Follow the email instructions to confirm the subscription. Check in the Amazon SNS Service console to verify subscription confirmation.

Figure 9. Amazon SNS topic creation and subscription

Create an AWS Lambda function

  1. Sign in to AWS Control Tower management account as an administrator and select AWS Mumbai Region (Home Region for AWS Control Tower). Open the Functions page on the Lambda console, and choose Create function.
  2.  In the Create function section, choose Author from scratch.
  3. In the Basic information section:
    1. For Function name, enter NonGovernedCrossAccountAutomation.
    2. For Runtime, choose Python 3.8.
    3. For Role, select Choose an existing role.
    4. For Existing role, select the Lambda role that you created earlier.
  1. Choose Create function.
  2. Copy and paste the following code into the Lambda editor (replace the existing code).
  3. In the File menu, choose Save.

Lambda function code: The Lambda function launches the AWS Service Catalog product, shared at the Organizations level from the AWS Control Tower management account, in each member account in a hub-and-spoke model. Key activities performed by the Lambda function are:

    • Assume role – Assumes the AWSControlTowerExecution role in the child account.
    • Launch product – Launches the AWS Service Catalog product shared with the member account in the non-governed Region.
    • Email notification – Sends notifications to the subscribed recipients.

When this Lambda function is invoked by the AWS Control Tower lifecycle event, it provisions the AWS Service Catalog product in the Region that is not governed by AWS Control Tower.

#####################################################################################
# Description: This Lambda function launches AWS Service Catalog products in
# Regions not governed by AWS Control Tower when new AWS accounts are created
# Environment: Control Tower Env
# Version 1.0
#####################################################################################

import boto3
import os
import time

# Clients in the AWS Control Tower management account. The Service Catalog
# client is pinned to the non-governed Region (us-west-1 in this example).
SSM_Master = boto3.client('ssm')
STS_Master = boto3.client('sts')
SC_Master = boto3.client('servicecatalog', region_name='us-west-1')
SNS_Master = boto3.client('sns')

def lambda_handler(event, context):
    
    if event['detail']['serviceEventDetails']['createManagedAccountStatus']['state'] == 'SUCCEEDED':
        
        account_name = event['detail']['serviceEventDetails']['createManagedAccountStatus']['account']['accountName']
        account_id = event['detail']['serviceEventDetails']['createManagedAccountStatus']['account']['accountId']
        ##Assume role to member account
        assume_role(account_id)  
        
        try:
        
            print("-- Executing Service Catalog Procduct in the account: ", account_name)
            ##Launch Product in member account
            launch_product(os.environ['ProductName'], SC_Member)
            sendmail(f'-- Product Launched successfully ')

        except Exception as err:
        
            print(f'-- Error in executing Service Catalog Product in the account: {err}')
            sendmail(f'-- Error in executing Service Catalog Product in the account: {err}')
    
 ##Function to Assume Role and create session in the Member account.                       
def assume_role(account_id):
    
    global SC_Member, IAM_Member, role_arn
    
    ## Assume the Member account role to execute the SC product.
    role_arn = "arn:aws:iam::$ACCOUNT_NUMBER$:role/AWSControlTowerExecution".replace("$ACCOUNT_NUMBER$", account_id)
    
    ##Assuming Member account Service Catalog.
    Assume_Member_Acc = STS_Master.assume_role(RoleArn=role_arn,RoleSessionName="Member_acc_session")
    aws_access_key_id=Assume_Member_Acc['Credentials']['AccessKeyId']
    aws_secret_access_key=Assume_Member_Acc['Credentials']['SecretAccessKey']
    aws_session_token=Assume_Member_Acc['Credentials']['SessionToken']

    #Session to Connect to IAM and Service Catalog in Member Account                          
    IAM_Member = boto3.client('iam',aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key,aws_session_token=aws_session_token)
    SC_Member = boto3.client('servicecatalog', aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key,aws_session_token=aws_session_token,region_name = "us-west-1")
    
    ##Accepting the portfolio share in the Member account.
    print("-- Accepting the portfolio share in the Member account.")
    
    length = 0
    
    while length == 0:
        
        try:
            
            search_product = SC_Member.search_products()
            length = len(search_product['ProductViewSummaries'])
        
        except Exception as err:
            
            print(err)
        
        if length == 0:

            print("The shared product is not available yet. Waiting..")
            time.sleep(10)
            ##Accept the portfolio share and associate the execution role with it
            ##so the shared product becomes visible in this member account.
            Accept_portfolio = SC_Member.accept_portfolio_share(PortfolioId=os.environ['portfolioID'],PortfolioShareType='AWS_ORGANIZATIONS')
            Associate_principal = SC_Member.associate_principal_with_portfolio(PortfolioId=os.environ['portfolioID'],PrincipalARN=role_arn, PrincipalType='IAM')
        
        else:
            
            print("The products are listed in account.")
    
    print("-- The portfolio share has been accepted and has been assigned the IAM Role principal.")
    
    return SC_Member

##Function to execute product in the Member account.    
def launch_product(ProductName, session):
  
    describe_product = SC_Master.describe_product_as_admin(Name=ProductName)
    created_time = []
    version_ID = []
    
    for version in describe_product['ProvisioningArtifactSummaries']:
        
        describe_provisioning_artifacts = SC_Master.describe_provisioning_artifact(ProvisioningArtifactId=version['Id'],Verbose=True,ProductName=ProductName,)
        
        if describe_provisioning_artifacts['ProvisioningArtifactDetail']['Active'] == True:
            
            created_time.append(describe_provisioning_artifacts['ProvisioningArtifactDetail']['CreatedTime'])
            version_ID.append(describe_provisioning_artifacts['ProvisioningArtifactDetail']['Id'])
    
    latest_version = dict(zip(created_time, version_ID))
    latest_time = max(created_time)
    # The sample CloudFormation template takes no parameters; pass Key/Value
    # pairs here if your product's template defines any.
    launch_provisioned_product = session.provision_product(
        ProductName=ProductName,
        ProvisionedProductName=ProductName,
        ProvisioningArtifactId=latest_version[latest_time],
        ProvisioningParameters=[])
    print("-- The provisioned product ID is : ", launch_provisioned_product['RecordDetail']['ProvisionedProductId'])
    
    return(launch_provisioned_product)   
    
def sendmail(message):

    # Publish a plain-text notification to the Amazon SNS topic configured in
    # the SNSTopicARN environment variable.
    SNS_Master.publish(
        TopicArn=os.environ['SNSTopicARN'],
        Message=message,
        Subject="Alert - Attention Required")
  1. Choose Configuration, then choose Environment variables.
  2. Choose Edit, and then choose Add environment variable for each of the following:
    1. Variable 1: Key as ProductName, and Value as “customvpcautomation” (name of the product created in the previous step).
    2. Variable 2: Key as SNSTopicARN, and Value as “arn:aws:sns:ap-south-1:<accountid>:ControlTowerNotifications” (ARN of the Amazon SNS topic created in the previous step).
    3. Variable 3: Key as portfolioID, and Value as “port-tbmq6ia54yi6w” (ID for the portfolio which was created in the previous step).
Figure 10. AWS Lambda function environment variable

Figure 10. AWS Lambda function environment variable

  1. Choose Save.
  2. On the function configuration page, on the General configuration pane, choose Edit.
  3. Change the Timeout value to 5 min.
  4. Go to Code Section, and choose the Deploy option to deploy all the changes.

Create an Amazon EventBridge rule and initiate with a Lambda function

  1. Sign in to AWS Control Tower management account as an administrator, and select AWS Mumbai Region (Home Region for AWS Control Tower).
  2. On the navigation bar, choose Services, select Amazon EventBridge, and in the left navigation pane, select Rules.
  3. Choose Create rule, and for Name enter NonGovernedRegionAutomation.
  4. Choose Event pattern, and then choose Pre-defined pattern by service.
  5. For Service provider, choose AWS.
  6. For Service name, choose Control Tower.
  7. For Event type, choose AWS Service Event via CloudTrail.
  8. Choose Specific event(s) option, and select CreateManagedAccount.
  9. In Select targets, for Target, choose Lambda. In the Function dropdown, select the Lambda function created earlier, NonGovernedCrossAccountAutomation.
  10. Choose Create.
Figure 11. Amazon EventBridge rule initiated with AWS Lambda
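
The same rule can also be created programmatically. The following is a minimal boto3 sketch; the account ID and ARNs are placeholders, and the event pattern matches the CreateManagedAccount lifecycle event that AWS Control Tower emits through CloudTrail.

import json

import boto3

events = boto3.client("events", region_name="ap-south-1")
lambda_client = boto3.client("lambda", region_name="ap-south-1")

# Rule that fires on the AWS Control Tower CreateManagedAccount lifecycle event.
events.put_rule(
    Name="NonGovernedRegionAutomation",
    EventPattern=json.dumps({
        "source": ["aws.controltower"],
        "detail-type": ["AWS Service Event via CloudTrail"],
        "detail": {"eventName": ["CreateManagedAccount"]},
    }),
)

# Point the rule at the Lambda function and allow EventBridge to invoke it.
events.put_targets(
    Rule="NonGovernedRegionAutomation",
    Targets=[{
        "Id": "NonGovernedCrossAccountAutomation",
        "Arn": "arn:aws:lambda:ap-south-1:111122223333:function:NonGovernedCrossAccountAutomation",  # placeholder
    }],
)
lambda_client.add_permission(
    FunctionName="NonGovernedCrossAccountAutomation",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:ap-south-1:111122223333:rule/NonGovernedRegionAutomation",  # placeholder
)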

Solution walkthrough

    1. Sign in to AWS Control Tower management account as an administrator, and select AWS Mumbai Region (Home Region for AWS Control Tower).
    2. Navigate to the AWS Control Tower Account Factory page, and select Enroll account.
    3. Create a new account and complete the Account Details section. Enter the Account email, Display name, AWS SSO email, and AWS SSO user name, and select the Organizational Unit dropdown. Choose Enroll account.
Figure 12. AWS Control Tower new account creation

      1. Wait for account creation and enrollment to succeed.
Figure 13. AWS Control Tower new account enrollment

      1. Sign out of the AWS Control Tower management account, and log in to the new account. Select the AWS us-west-1 (N. California) Region. Navigate to AWS Service Catalog and then to Provisioned products. Select the Access filter as Account and you will observe that one provisioned product is created and available.
Figure 14. AWS Service Catalog provisioned product

      1. Go to VPC service to verify if a new VPC is created by the AWS Service Catalog product with a CIDR of 10.0.0.0/16.
Figure 15. AWS VPC creation validation

      1. Steps 4 and 5 validate that you are able to perform automation during account creation through AWS Control Tower lifecycle events in non-governed Regions.

Cleaning up

To avoid incurring future charges, clean up the resources created as part of this blog post.

  • Delete the AWS Service Catalog product and portfolio you created.
  • Delete the IAM role, Amazon SNS topic, Amazon EventBridge rule, and AWS Lambda function you created.
  • Delete the AWS Control Tower setup (if created).

Conclusion

In this blog post, we demonstrated how to use AWS Control Tower lifecycle events to perform automation tasks during account creation in Regions not governed by AWS Control Tower. AWS Control Tower provides a way to set up and govern a secure, multi-account AWS environment. With this solution, customers can use AWS Control Tower to automate various tasks during account creation, regardless of whether AWS Control Tower is available in the target Region.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Pavan Kumar Alladi

Pavan Kumar Alladi is a Senior Cloud Architect with Tech Mahindra and is based out of Chennai, India. He has worked with AWS technologies for the past 10 years as a specialist in designing and architecting solutions on AWS Cloud. He is keen on learning and implementing cutting-edge cloud solutions and is extremely passionate about applying cloud services to solve complex real-world business problems. Currently, he leads customer engagements to deliver solutions for Platform Engineering, Cloud Migrations, Cloud Security, and DevOps.


Thooyavan Arumugam

Thooyavan Arumugam is a Senior Cloud Architect at Tech Mahindra’s AWS Practice team. He has over 16 years of industry experience in Cloud infrastructure, network, and security. He is passionate about learning new technologies and helping customers solve complex technical problems by providing solutions using AWS products and services. He provides advisory services to customers and solution design for Cloud Infrastructure (Security, Network), new platform design and Cloud Migrations.

Crawler Hints Update: Cloudflare Supports IndexNow and Announces General Availability

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/cloudflare-now-supports-indexnow/

In the midst of the hottest summer on record, Cloudflare held its first ever Impact Week. We announced a variety of products and initiatives that aim to make the Internet and our planet a better place, with a focus on environmental, social, and governance projects. Today, we’re excited to share an update on Crawler Hints, an initiative announced during Impact Week. Crawler Hints is a service that improves the operating efficiency of the approximately 45% of Internet traffic that comes from web crawlers and bots.

Crawler Hints achieves this efficiency improvement by ensuring that crawlers get information about what they’ve crawled previously and if it makes sense to crawl a website again.

Today we are excited to announce two updates for Crawler Hints:

  1. The first: Crawler Hints now supports IndexNow, a new protocol that allows websites to notify search engines whenever content on their website is created, updated, or deleted. By collaborating with Microsoft and Yandex, Cloudflare can help improve the efficiency of their search engine infrastructure, customer origin servers, and the Internet at large.
  2. The second: Crawler Hints is now generally available to all Cloudflare customers for free. Customers can benefit from these more efficient crawls with a single button click. If you want to enable Crawler Hints, you can do so in the Cache Tab of the Dashboard.

What problem does Crawler Hints solve?

Crawlers help make the Internet work. Crawlers are automated services that travel the Internet looking for… well, whatever they are programmed to look for. To power experiences that rely on indexing content from across the web, search engines and similar services operate massive networks of bots that crawl the Internet to identify the content most relevant to a user query. But because content on the web is always changing, and there is no central clearinghouse for when these changes happen on websites, search engine crawlers have a Sisyphean task. They must continuously wander the Internet, making guesses on how frequently they should check a given site for updates to its content.

Companies that run search engines have worked hard to make the process as efficient as possible, pushing the state-of-the-art for crawl cadence and infrastructure efficiency. But there remains one clear area of waste: excessive crawl.

At Cloudflare, we see traffic from all the major search crawlers, and have spent the last year studying how often these bots revisit a page that hasn’t changed since they last saw it. Every one of these visits is a waste. And, unfortunately, our observation suggests that 53% of this crawler traffic is wasted.

With Crawler Hints, we expect to make this task a bit more tractable by providing an additional heuristic to the people who run these crawlers. This will allow them to know when content has been changed or added to a site instead of relying on preferences or previous changes that might not reflect the true change cadence for a site. Crawler Hints aims to increase the proportion of relevant crawls and limit crawls that don’t find fresh content, improving customer experience and reducing the need for repeated crawls.

Cloudflare sits in a unique position on the Internet to help give crawlers hints about when they should recrawl a site. Don’t knock on a website’s door every 30 seconds to see if anything is new when Cloudflare can proactively tell your crawler when it’s the best time to index new or changed content. That’s Crawler Hints in a nutshell!

If you want to learn more about Crawler Hints, see the original blog.

What is IndexNow?

IndexNow is a standard that was written by Microsoft and Yandex search engines. The standard aims to provide an efficient manner of signaling to search engines and other crawlers for when they should crawl content. Cloudflare’s Crawler Hints now supports IndexNow.

In its simplest form, IndexNow is a simple ping so that search engines know that a URL and its content has been added, updated, or deleted, allowing search engines to quickly reflect this change in their search results.
www.indexnow.org
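
For context, this is roughly what such a ping looks like on the wire. Cloudflare sends these notifications on your behalf once Crawler Hints is enabled, so you do not need to run this yourself; the host, key, and URL below are placeholders, and the key file must also be served from your own site as described at indexnow.org.

import json
import urllib.request

# Notify IndexNow-enabled search engines that a URL changed. The shared
# api.indexnow.org endpoint is used here; individual engines such as Bing and
# Yandex also expose their own /indexnow endpoints.
payload = {
    "host": "www.example.com",                  # placeholder
    "key": "aaaa1111bbbb2222cccc3333dddd4444",  # placeholder key
    "urlList": ["https://www.example.com/updated-page"],
}

req = urllib.request.Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)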

By enabling Crawler Hints on your website, with the simple click of a button, Cloudflare will take care of signaling to these search engines when your content has changed via the IndexNow protocol. You don’t need to do anything else!  

What does this mean for search engine operators? With Crawler Hints you’ll receive a near real-time, pushed feed of change events of Cloudflare websites (that have opted in). This, in turn, will dramatically improve not just the quality of your results, but also the energy efficiency of running your bots.

Collaborating with Industry leaders

Cloudflare is in a unique position to have a sizable portion of the Internet proxied behind us. As a result, we are able to see trends in the way bots access web resources. That visibility allows us to be proactive about signaling which crawls are required vs. not. We are excited to work with partners to make these insights useful to our customers. Search engines are key constituents in this equation. We are happy to collaborate and share this vision of a more efficient Internet with Microsoft Bing and Yandex. We have been testing our interaction via IndexNow with Bing and Yandex for months with some early successes.

This is just the beginning. Crawler Hints is a continuous process that will require working with more and more partners to improve Internet efficiency more generally. While this may take time and participation from other key parts of the industry, we are open to collaborate with any interested participant who relies on crawling to power user experiences.

“The cache data from CDNs is a really valuable signal for content freshness. Cloudflare, as one of the top CDNs, is key in the adoption of IndexNow to become an industry-wide standard with a large portion of the internet actually using it. Cloudflare has built a really easy 1-click button for their users to start using it right away. Cloudflare’s mission of helping build a better Internet resonates well with why I started IndexNow i.e. to build a more efficient and effective Search.”
Fabrice Canel, Principal Program Manager

“Yandex is excited to join IndexNow as part of our long-term focus on sustainability. We have been working with the Cloudflare team in early testing to incorporate their caching signals in our crawling mechanism via the IndexNow API. The results are great so far.”
Maxim Zagrebin, Head of Yandex Search

“DuckDuckGo is supportive of anything that makes search more environmentally friendly and better for end users without harming privacy. We’re looking forward to working with Cloudflare on this proposal.”
Gabriel Weinberg, CEO and Founder

How do Cloudflare customers benefit?

Crawler Hints doesn’t just benefit search engines. For our customers and origin owners, Crawler Hints will ensure that search engines and other bot-powered experiences will always have the freshest version of your content, translating into happier users and ultimately influencing search rankings. Crawler Hints will also mean less traffic hitting your origin, improving resource consumption. Moreover, your site performance will be improved as well: your human customers will not be competing with bots!

And for Internet users? When you interact with bot-fed experiences — which we all do every day, whether we realize it or not, like search engines or pricing tools — these will now deliver more useful results from crawled data, because Cloudflare has signaled to the owners of the bots the moment they need to update their results.

How can I enable Crawler Hints for my website?

Crawler Hints is free to use for all Cloudflare customers and promises to revolutionize web efficiency. If you’d like to see how Crawler Hints can benefit how your website is indexed by the world’s biggest search engines, please feel free to opt in to the service:

  1. Sign in to your Cloudflare Account.
  2. In the dashboard, navigate to the Cache tab.
  3. Click on the Configuration section.
  4. Locate the Crawler Hints sign up card and enable. It’s that easy.

Once you’ve enabled it, we will begin sending hints to search engines about when they should crawl particular parts of your website. Crawler Hints holds tremendous promise to improve the efficiency of the Internet.

What’s next?

We’re thrilled to collaborate with industry leaders Microsoft Bing and Yandex to bring IndexNow to Crawler Hints, and to bring Crawler Hints to a wide audience in general availability. We look forward to working with additional companies that run crawlers to help make this process more efficient for the whole Internet.

2021-10-18 vivacom

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3451

There are things that should not surprise me anymore, but they still manage to.

At some point today my Internet connection went down. I normally use Comnet Sofia, which split off from Comnet and was subsequently bought by Vivacom. After some digging I saw that the pppd log contained the message "Neplatena smetka" ("unpaid bill").

I called, and was redirected to Vivacom's call center. After some waiting (over 10 minutes, which had not happened to me in a while) and navigating menus, I reached some people who could look into what was going on. It turned out that my last payment had run out on October 15, and today, the 18th, they had cut off the service. I had not received a notice from epay, because apparently that part has already been discontinued. When I asked how I could pay online, I was told: you can't, you don't have a customer number yet, you have to go to a store.

I went to their nearby store, where they found me by my national ID number and explained that I could pay for 6 months or 1 year. I explained that this contract had always been month to month, and that there was little point in me paying for that long, given that I would be moving out within a month. They looked, they thought, I called their call center again, and after half an hour the conclusion was still the same: they have no such service, no way. On the other hand, it counts as prepaid, there is no termination or anything of the sort, and I don't owe them anything.
(Apparently I don't even have to return the ONT that Comnet gave me.)

So I politely told them where to go, and I will get by on 3G for a week or two or three until I move.

Also, I don't know whether this is incompetence or entirely deliberate, an attempt to milk as much as possible out of all the customers they bought, but I really hope nobody falls for it.

Passwordless Network Scanning: Same Insights, Less Risk

Post Syndicated from Jimmy Cancilla original https://blog.rapid7.com/2021/10/18/passwordless-network-scanning-same-insights-less-risk/

Password-based credentials are a ubiquitous part of our online lives, but they are prone to vulnerabilities. Combatting those vulnerabilities has been a major hurdle for security professionals, and it has come at a major cost for businesses. We are reinventing the credentialing process for our Network Scan Engine with the release of the Scan Assistant — a safer way to scan assets that limits the inherent drawbacks of credentials.

Passwords as a means of securing computer systems have been around for 60 years. Scholars believe MIT’s Compatible Time-Sharing System was the first to implement a password to allow different users to log in. Since then, passwords have become ubiquitous. Every operating system, website, and WiFi connection utilizes passwords as a means of restricting access.

Unfortunately, this has also proven to be fertile ground for attackers who wish to gain unauthorized access to data and computer systems. Due in part to the popularity — and potential weaknesses — of passwords, businesses have spent enormous amounts of time and money in building robust security programs in order to protect their intellectual property.

As a part of any good security program, companies regularly scan their networks to identify where they are vulnerable. One of the most uncomfortable nuances of network scans is that in order to fully assess a set of targets, the scanner must be able to authenticate to those targets. Providing the necessary credentials to the network scan engine comes with a number of challenges. These include:

  • Increased security risk: Storing credentials within an application immediately makes that application a potential vector for attack. If the application is compromised or misconfigured, an attacker could gain access to a comprehensive list of credentials, giving them the ability to compromise a customer’s network.
  • Credential management: Storing credentials within an application introduces additional operational challenges with managing those credentials. Anytime a credential changes on a target or set of targets, that credential will have to be updated within the application. This results in administrators having to manage the same set of credentials within multiple systems, which can be burdensome and error-prone. Using a centralized credential vault can help mitigate this challenge, but not all organizations are in a position to deploy such a service for every target within their environment.
  • Insufficient permissions: In order for a network scanner to accurately assess and report on the risk for a set of targets, the scanner needs to be capable of collecting sufficient information. Thus, the credentials supplied need to have a broad range of permissions associated with them — ideally, root or administrator-level — so the network scanner can perform a full collection of data. In practice, many organizations are either unaware of this requirement or hesitant to grant that level of access. This can result in collecting incomplete information, leading to reports that don’t fully convey the targets’ vulnerabilities.

Introducing the Scan Assistant

The Engineering team here at Rapid7 has spent a significant amount of time discussing, researching, and brainstorming solutions to the challenges of providing credentials for the purpose of performing network scans. The team decided that the ideal solution for our customers was to eliminate the need for credentials altogether. This led to the development of the Scan Assistant.

The Scan Assistant is a lightweight service that can be installed on each target you’re scanning. It’s designed to work specifically with the InsightVM and Nexpose Network Scan Engine so it can scan targets without the need to provide credentials. When the Network Scan Engine scans a target containing the Scan Assistant, it collects all the necessary information required to fully assess that target.

The Scan Assistant supports both vulnerability and policy scans performed by the Network Scan Engine. Providing coverage for both types of scans was a key requirement for the team. As a result, customers can quickly identify vulnerabilities and validate policies within their network without the operational burden of managing credentials or permissions. Customers will continue to get the exact same insights into their network while simultaneously reducing the risk of managing credentials within the product.

How it works

The Network Scan Engine and the Scan Assistant communicate over an encrypted channel established with TLSv1.2 certificates. When the Scan Engine scans a target, it needs to collect specific pieces of information from that target. The Scan Assistant is designed to provide only the specific data that the Scan Engine needs in order to fully assess the target.

By design, the Scan Assistant does not provide a means of arbitrary filesystem access. Furthermore, all commands sent from the Scan Engine to the Scan Assistant are signed, ensuring that only a Scan Engine holding the correct signing key can request data from a Scan Assistant.
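
This is not Rapid7's implementation, but a minimal sketch of the signed-command idea: the assistant only executes requests whose signature verifies against a public key it ships with. The library, hash choice, and function names here are illustrative assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_command(public_key_pem: bytes, command: bytes, signature: bytes) -> bool:
    # Load the vendor's public key bundled with the assistant (hypothetical input).
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        # Only commands signed with the matching private key verify successfully.
        public_key.verify(signature, command, ec.ECDSA(hashes.SHA512()))
        return True
    except InvalidSignature:
        return False

A command that fails verification is simply rejected before it is ever run, which is what limits the assistant to vendor-approved operations.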

Why it’s better than a credential

Administrative credentials provide the Scan Engine with more access than it needs and put you at risk if those credentials are compromised. The Scan Assistant provides the Scan Engine with only the access it needs, reducing risk.

Root credentials give the Scan Engine unrestricted access to run commands over OpenSSH, which can also introduce risk. It can be a challenge to restrict commands using sudo or similar tools. To solve this problem, the Scan Assistant requires commands to be signed by Rapid7. This reduces risk and transparently limits what the Scan Assistant is allowed to run.

Why it’s secure (in more technical terms)

The Scan Assistant is built on the transport layer security (TLS) protocol and only enables algorithms specified in the Commercial National Security Algorithm Suite (CNSA) by the National Security Agency (NSA). This includes support for Elliptic Curve Diffie-Hellman (ECDH) and Elliptic Curve Digital Signature Algorithm (ECDSA) with the P-521 curve to establish trust with the Scan Engine, and 256-bit Advanced Encryption Standard (AES) to achieve data secrecy between the Scan Engine and Scan Assistant.
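
As a rough illustration (not the Scan Assistant's actual code), a TLS listener can be pinned to TLSv1.2 and an ECDHE-ECDSA / AES-256-GCM cipher suite as shown below; the port number and certificate file names are placeholders.

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Pin the protocol to TLSv1.2 and allow only an ECDHE-ECDSA / AES-256-GCM suite.
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384")
# Certificate and key paths are illustrative only.
context.load_cert_chain(certfile="assistant-cert.pem", keyfile="assistant-key.pem")

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # each accepted connection is already TLS-protected
        conn.close()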

The Network Scan Engine and the Scan Assistant use TLSv1.2 with client certificate authentication: the Scan Assistant verifies the certificate presented by the Scan Engine, but the Scan Engine does not verify the Scan Assistant’s certificate. Each time the Scan Assistant starts, it generates a new certificate for its HTTPS listener, which makes it impossible to track an asset by fingerprinting that certificate, and also means there is no stable certificate for the Scan Engine to verify. In effect, the mechanism is a reverse one-way authentication.

Insight Agent vs. Scan Assistant

At first glance, it may seem that the Insight Agent and the Scan Assistant serve the same purpose. They are both small, background services that get deployed across a fleet of targets for the purpose of vulnerability and policy assessment. However, this is where their similarity ends. The Insight Agent and the Scan Assistant are fundamentally different in terms of the use cases they satisfy.

The Insight Agent is appropriate for assets that have internet connectivity and are capable of periodically publishing data to the platform. For these types of assets, such as laptops and workstations, the Insight Agent is the preferred technology.

The Scan Assistant is intended for assets and environments for which internet connectivity is either unavailable or heavily restricted. This may include assets such as Domain Controllers or database servers. Any device that is effectively air-gapped from the outside world would not be able to use the Insight Agent. These devices must be scanned using the Network Scan Engine in order to assess them for vulnerabilities. In this scenario, the Scan Assistant can help improve the performance of those scans without having to store credentials within the product.

Ultimately, you can deploy both the Insight Agent and the Scan Assistant to different parts of your network in order to provide a fast, secure, and comprehensive vulnerability assessment.

Feature | Insight Agent | Scan Assistant
Collection type | Active – collects data periodically and publishes it to the platform | Passive – collects data only when requested by a Scan Engine
Data collected | Collects all data necessary to perform an assessment | Collects only the data requested by the Scan Engine
Platform connected? | Yes | No
Idle footprint | Periodically beacons health status to the platform when not collecting data | Runs an HTTPS listener waiting for incoming connections; otherwise performs no activity

Breakdown of the differences between the Insight Agent and the Scan Assistant

Performance improvement analysis

Preliminary performance analysis has shown promising improvements when performing scans with the Scan Assistant installed. Vulnerability scans complete faster, and total scan times are more consistent than those of scans that rely on retrieving data via SMB or WMI.

Furthermore, scan times for policy-based scans have shown significant improvement, particularly against servers with a large number of users and groups (such as Domain Controllers). The following chart compares scan times for policy-based scans performed against different types of servers. The team plans to continue to collect and analyze the performance of the Scan Assistant and will share this analysis in a future article.

[Chart] Scan duration comparison between the Scan Assistant and SMB. Note that the timescale is logarithmic, so in most cases the Scan Assistant provides orders-of-magnitude better performance than the SMB protocol.

What’s next

Here are some of the major items we plan to work on next.

  • Add support for additional operating systems, including Linux, Unix, and macOS
  • Support the ability to perform DISA-based policy scans
  • Update the Security Console to support managing certificates on the scan engines

If you have any suggestions for features you would like to see, please speak with your Customer Success Manager.

Downloading the Scan Assistant

The Scan Assistant is currently in early access and is only available for Windows operating systems. If you are interested in the Scan Assistant and would like to deploy it in your environment, reach out to your Customer Success Manager to request access.


[$] A disagreement over get_mm_exe_file()

Post Syndicated from corbet original https://lwn.net/Articles/873066/rss

Differences of opinion over which kernel symbols should be exported to
loadable modules have been anything but uncommon over the years. Often,
these disagreements relate to which kernel capabilities should be available
to proprietary modules. Sometimes, though, the disagreement hinges on the
best way to solve a problem. The recent discussion around the removal of an
export for a core kernel function is a case in point.

Building dynamic Amazon SNS subscriptions for auto scaling container workloads 

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-dynamic-amazon-sns-subscriptions-for-auto-scaling-container-workloads/

This post is written by Mithun Mallick, Senior Specialist Solutions Architect, App Integration.

Amazon Simple Notification Service (SNS) is a serverless publish/subscribe messaging service. It supports a push-based subscription model in which subscribers register an endpoint to receive messages. Amazon Simple Queue Service (SQS) is one such endpoint, used by applications to receive messages published on an SNS topic.

With containerized applications, the container instances poll the queue to receive messages. However, containerized applications can scale out for a variety of reasons, and creating an SQS queue for each new container instance adds maintenance overhead. You must also clean up the SNS-SQS subscription once the instance scales in.

This blog walks through a dynamic subscription solution, which automates the creation, subscription, and deletion of SQS queues for an Auto Scaling group of containers running in Amazon Elastic Container Service (ECS).

Overview

The solution is based on the use of events to achieve the dynamic subscription pattern. ECS uses the concept of tasks to create an instance of a container. You can find more details on ECS tasks in the ECS documentation.

This solution uses the events generated by ECS to manage the complete lifecycle of an SNS-SQS subscription. It uses the task ID as the name of the queue that is used by the ECS instance for pulling messages. More details on the ECS task ID can be found in the task documentation.

The solution also uses Amazon EventBridge to apply rules to ECS events and trigger AWS Lambda functions. The first rule detects the running state of an ECS task and triggers a Lambda function, which creates an SQS queue with the task ID as the queue name. It also grants the SNS topic permission to send messages to the queue and creates the subscription on the topic.

As the container instance starts up, it can send a request to its metadata URL and retrieve the task ID. The task ID is used by the container instance to poll for messages. If the container instance terminates, ECS generates a task stopped event. This event matches a rule in Amazon EventBridge and triggers a Lambda function. The Lambda function retrieves the task ID, deletes the queue, and deletes the subscription from the SNS topic. The solution decouples the container instance from any overhead in maintaining queues, applying permissions, or managing subscriptions. The security permissions for all SNS-SQS management are handled by the Lambda functions.

This diagram shows the solution architecture:

Solution architecture

Events from ECS are sent to the default event bus. Various events are generated as part of the lifecycle of an ECS task. You can find more on the various ECS task states in the ECS task documentation. This solution uses ECS as the container orchestration service, but you can also use Amazon Elastic Kubernetes Service (EKS). For EKS, you must apply the rules to the corresponding EKS task state events.
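
For orientation, here is a trimmed-down sketch of the "ECS Task State Change" event shape that the EventBridge rules and Lambda functions below rely on. The account, region, and cluster values are hypothetical, and the real event carries many more fields.

# Illustrative only: the fields the rules match on and the Lambdas read.
sample_event = {
    "source": "aws.ecs",
    "detail-type": "ECS Task State Change",
    "detail": {
        "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/demo-cluster/0123456789abcdef",
        "desiredStatus": "RUNNING",
        "lastStatus": "RUNNING",
    },
}

# The Lambda functions extract the task ID from the ARN's final path segment:
task_id = sample_event["detail"]["taskArn"].split("/")[-1]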

Walkthrough of the implementation

The code snippets are shortened for brevity. The full source code of the solution is in the GitHub repository. The solution uses AWS Serverless Application Model (AWS SAM) for deployment.

SNS topic

The SNS topic is used to send notifications to the ECS tasks. The following snippet from the AWS SAM template shows the definition of the SNS topic:

  SNSDynamicSubsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: !Ref DynamicSubTopicName

Container instance

The container instance subscribes to the SNS topic using an SQS queue. The container image is a Java class that reads messages from an SQS queue and prints them in the logs. The following code shows some of the message processor implementation:

AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
AmazonSQSResponder responder = AmazonSQSResponderClientBuilder.standard()
        .withAmazonSQS(sqs)
        .build();

SQSMessageConsumer consumer = SQSMessageConsumerBuilder.standard()
        .withAmazonSQS(responder.getAmazonSQS())
        .withQueueUrl(queue_url)
        .withConsumer(message -> {
            System.out.println("The message is " + message.getBody());
            sqs.deleteMessage(queue_url,message.getReceiptHandle());

        }).build();
consumer.start();

The highlighted queue_url is derived from the ID of the ECS task, which is retrieved in the constructor of the class:

// 'map' holds the container's environment variables (for example, System.getenv())
String metaDataURL = map.get("ECS_CONTAINER_METADATA_URI_V4");

HttpGet request = new HttpGet(metaDataURL);
CloseableHttpResponse response = httpClient.execute(request);

HttpEntity entity = response.getEntity();
if (entity != null) {
    String result = EntityUtils.toString(entity);
    String taskARN = JsonPath.read(result, "$['Labels']['com.amazonaws.ecs.task-arn']").toString();
    String[] arnTokens = taskARN.split("/");
    taskId = arnTokens[arnTokens.length-1];
    System.out.println("The task arn : "+taskId);
}

queue_url = sqs.getQueueUrl(taskId).getQueueUrl();

The queue URL is constructed from the task ID of the container, so each queue is dedicated to a single task (container instance) running in ECS.

EventBridge rules

The following event pattern on the default event bus captures events that match the start of the container instance. The rule triggers a Lambda function:

      EventPattern:
        source:
          - aws.ecs
        detail-type:
          - "ECS Task State Change"
        detail:
          desiredStatus:
            - "RUNNING"
          lastStatus:  
            - "RUNNING"

The start rule routes events to a Lambda function that creates a queue with the name as the task ID. It creates the subscription to the SNS topic and grants permission on the queue to receive messages from the topic.

This event pattern matches STOPPED events of the container task. It also triggers a Lambda function to delete the queue and the associated subscription:

      EventPattern:
        source:
          - aws.ecs
        detail-type:
          - "ECS Task State Change"
        detail:
          desiredStatus:
            - "STOPPED"
          lastStatus:  
            - "STOPPED"

Lambda functions

There are two Lambda functions that perform the queue creation, subscription, authorization, and deletion.

The SNS-SQS-Subscription-Service

The following code creates the queue based on the task id, applies policies, and subscribes it to the topic. It also stores the subscription ARN in a Amazon DynamoDB table:

# get the task id from the event
taskArn = event['detail']['taskArn']
taskArnTokens = taskArn.split('/')
taskId = taskArnTokens[len(taskArnTokens)-1]

# the queue is named after the task id
queue_name = taskId
create_queue_resp = sqs_client.create_queue(QueueName=queue_name)
queue_url = create_queue_resp['QueueUrl']
queue_arn = sqs_client.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn'])['Attributes']['QueueArn']

# subscribe the queue to the SNS topic
response = sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
subscription_arn = response['SubscriptionArn']

# record the subscription ARN against the task id for later cleanup
ddbresponse = dynamodb.update_item(
    TableName=SQS_CONTAINER_MAPPING_TABLE,
    Key={
        'id': {
            'S' : taskId.strip()
        }
    },
    AttributeUpdates={
        'SubscriptionArn':{
            'Value': {
                'S': subscription_arn
            }
        }
    },
    ReturnValues="UPDATED_NEW"
)
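
The prose above also mentions granting the topic permission to deliver messages to the queue, a step elided from this snippet. A minimal sketch of that grant, reusing the variable names above (the exact policy in the repository may differ):

import json

# Allow the SNS topic, and only that topic, to send messages to this queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}}
    }]
}
sqs_client.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)}
)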

The cleanup service

The cleanup function is triggered when the container instance is stopped. It fetches the subscription ARN from the DynamoDB table based on the taskId. It deletes the subscription from the topic and deletes the queue. You can modify this code to include any other cleanup actions or trigger a workflow. The main part of the function code is:

taskId = taskArnTokens[len(taskArnTokens)-1]

# look up the subscription ARN recorded when the queue was created
ddbresponse = dynamodb.get_item(TableName=SQS_CONTAINER_MAPPING_TABLE,Key={'id': { 'S' : taskId}})
subscription_arn = ddbresponse['Item']['SubscriptionArn']['S']
snsresp = sns.unsubscribe(SubscriptionArn=subscription_arn)

# the queue shares its name with the task id
queue_url = sqs_client.get_queue_url(QueueName=taskId)['QueueUrl']
queuedelresp = sqs_client.delete_queue(QueueUrl=queue_url)

Conclusion

This blog shows an event-driven approach to handling dynamic SNS subscription requirements. It relies on ECS service events to trigger the appropriate Lambda functions, which create the subscription queue, subscribe it to a topic, and delete it once the container instance is terminated.

The approach also allows the container application logic to focus only on consuming and processing the messages from the queue. It does not need any additional permissions to subscribe or unsubscribe from the topic or apply any additional permissions on the queue. Although the solution has been presented using ECS as the container orchestration service, it can be applied for EKS by using its service events.

For more serverless learning resources, visit Serverless Land.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/873210/rss

Security updates have been issued by Debian (amd64-microcode, libreoffice, linux-4.19, and nghttp2), Fedora (chromium, libopenmpt, vim, and xen), openSUSE (firefox, kernel, krb5, libaom, and opera), Oracle (thunderbird), SUSE (firefox, firefox, rust-cbindgen, iproute2, javapackages-tools, javassist, mysql-connector-java, protobuf, python-python-gflags, and krb5), and Ubuntu (nginx).

Learn the fundamentals of AI and machine learning with our free online course

Post Syndicated from Michael Conterio original https://www.raspberrypi.org/blog/fundamentals-ai-machine-learning-free-online-course/

Join our free online course Introduction to Machine Learning and AI to discover the fundamentals of machine learning and learn to train your own machine learning models using free online tools.

Drawing of a machine learning robot helping a human identify spam at a computer.

Although artificial intelligence (AI) was once the province of science fiction, these days you’re very likely to hear the term in relation to new technologies, whether that’s facial recognition, medical diagnostic tools, or self-driving cars, which use AI systems to make decisions or predictions.

By the end of this free online course, you will have an appreciation for what goes into machine learning and artificial intelligence systems — and why you should think carefully about what comes out.

Machine learning — a brief overview

You’ll also often hear about AI systems that use machine learning (ML). Very simply, we can say that programs created using ML are ‘trained’ on large collections of data to ‘learn’ to produce more accurate outputs over time. One rather funny application you might have heard of is the ‘muffin or chihuahua?’ image recognition task.

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.

More precisely, we would say that a ML algorithm builds a model, based on large collections of data (the training data), without being explicitly programmed to do so. The model is ‘finished’ when it makes predictions or decisions with an acceptable level of accuracy. (For example, it rarely mistakes a muffin for a chihuahua in a photo.) It is then considered to be able to make predictions or decisions using new data in the real world.
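
The course itself uses online tools rather than code, but as a rough sketch of that train-then-evaluate loop, here is a tiny classification example using scikit-learn and its built-in iris dataset (nothing here is part of the course materials):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Split the data so the model is judged on examples it has never seen.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model 'learns' from the training data without being explicitly programmed...
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# ...and is 'finished' when its accuracy on new data is acceptable.
print("Accuracy on unseen data:", model.score(X_test, y_test))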

It’s important to understand AI and ML — especially for educators

But how does all this actually work? If you don’t know, it’s hard to judge what the impacts of these technologies might be, and how we can be sure they benefit everyone — an important discussion that needs to involve people from across all of society. Not knowing can also be a barrier to using AI, whether that’s for a hobby, as part of your job, or to help your community solve a problem.

Some things that machine learning and AI systems can be built into: streetlamps, waste-collecting vehicles, cars, traffic lights.

For teachers and educators, it’s particularly important to have a good foundational knowledge of AI and ML, as they need to teach their learners what young people need to know about these technologies and how they impact their lives. (We’ve also got a free seminar series about teaching these topics.)

To help you understand the fundamentals of AI and ML, we’ve put together a free online course: Introduction to Machine Learning and AI. Over four weeks, at two hours per week, you’ll learn how machine learning can be used to solve problems, without going too deeply into the mathematical details. You’ll also get to grips with the different ways that machines ‘learn’, and you will try out online tools such as Machine Learning for Kids and Teachable Machine to design and train your own machine learning programs.

What types of problems and tasks are AI systems used for?

As well as finding out how these AI systems work, you’ll look at the different types of tasks that they can help us address. One of these is classification — working out which group (or groups) something fits in, such as distinguishing between positive and negative product reviews, identifying an animal (or a muffin) in an image, or spotting potential medical problems in patient data.

You’ll also learn about other types of tasks ML programs are used for, such as regression (predicting a numerical value from a continuous range) and knowledge organisation (spotting links between different pieces of data or clusters of similar data). Towards the end of the course you’ll dive into one of the hottest topics in AI today: neural networks, which are ML models whose design is inspired by networks of brain cells (neurons).

Drawing of a small machine learning neural network.

Before an ML program can be trained, you need to collect data to train it with. During the course you’ll see how tools from statistics and data science are important for ML — but also how ethical issues can arise both when data is collected and when the outputs of an ML program are used.

By the end of the course, you will have an appreciation for what goes into machine learning and artificial intelligence systems — and why you should think carefully about what comes out.

Sign up to the course today, for free

The Introduction to Machine Learning and AI course is open for you to sign up to now. Sign-ups will pause after 12 December. Once you sign up, you’ll have access for six weeks. During this time you’ll be able to interact with your fellow learners, and before 25 October, you’ll also benefit from the support of our expert facilitators. So what are you waiting for?

Share your views as part of our research

As part of our research on computing education, we would like to find out about educators’ views on machine learning. Before you start the course, we will ask you to complete a short survey. As a thank you for helping us with our research, you will be offered the chance to take part in a prize draw for a £50 book token!

Learn more about AI, its impacts, and teaching learners about them

To develop your computing knowledge and skills, you might also want to:

If you are a teacher in England, you can develop your teaching skills through the National Centre for Computing Education, which will give you free upgrades for our courses (including Introduction to Machine Learning and AI) so you’ll receive certificates and unlimited access.

The post Learn the fundamentals of AI and machine learning with our free online course appeared first on Raspberry Pi.

Tunnel: Cloudflare’s Newest Homeowner

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/observe-and-manage-cloudflare-tunnel/

Cloudflare Tunnel connects your infrastructure to Cloudflare. Your team runs a lightweight connector in your environment, cloudflared, and services can reach Cloudflare and your audience through an outbound-only connection without the need for opening up holes in your firewall.

Whether the services are internal apps protected with Zero Trust policies, websites running in Kubernetes clusters in a public cloud environment, or a hobbyist project on a Raspberry Pi — Cloudflare Tunnel provides a stable, secure, and highly performant way to serve traffic.

Starting today, with our new UI in the Cloudflare for Teams Dashboard, users who deploy and manage Cloudflare Tunnel at scale now have easier visibility into their tunnels’ status, routes, uptime, connectors, cloudflared version, and much more. On the Teams Dashboard you will also find an interactive guide that walks you through setting up your first tunnel.  

Getting Started with Tunnel

We wanted to start by making the tunnel onboarding process more transparent for users. We understand that not all users are intimately familiar with the command line, nor are they always deploying Tunnel in an environment or OS they’re most comfortable with. To alleviate that burden, we designed a comprehensive onboarding guide with pathways for macOS, Windows, and Linux for our two primary onboarding flows:

  1. Connecting an origin to Cloudflare
  2. Connecting a private network via WARP to Tunnel

Our new onboarding guide walks through each command required to create, route, and run your tunnel successfully while also highlighting relevant validation commands to serve as guardrails along the way. Once completed, you’ll be able to view and manage your newly established tunnels.

Managing your tunnels

When thinking about the new user interface for tunnel we wanted to concentrate our efforts on how users gain visibility into their tunnels today. It was important that we provide the same level of observability, but through the lens of a visual, interactive dashboard. Specifically, we strove to build a familiar experience like the one a user may see if they were to run cloudflared tunnel list to show all of their tunnels, or cloudflared tunnel info if they wanted to better understand the connection status of a specific tunnel.

In the interface, you can quickly search by name or filter by name, status, uptime, or creation date. This allows users to easily identify and manage the tunnels they need, when they need them. We also included other key metrics such as Status and Uptime.

A tunnel’s status depends on the health of its connections:

  • Active: This means your tunnel is running and has a healthy connection to the Cloudflare network.
  • Inactive: This means your tunnel is not running and is not connected to Cloudflare.
  • Degraded: This means one or more of your four long-lived TCP connections to Cloudflare have been disconnected, but traffic is still being served to your origin.

A tunnel’s uptime is also calculated from the health of its connections. We track uptime from the UTC timestamp at which the first of the four long-lived TCP connections is established with the Cloudflare edge. If that connection is terminated, we continue tracking uptime as long as one of the other three connections continues to serve traffic. If no connections are active, uptime resets to zero.
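
As a rough sketch of the uptime rule just described (not Cloudflare's implementation, and the connection data structure is hypothetical):

from datetime import datetime, timezone

def tunnel_uptime_seconds(connections, now=None):
    """connections: list of dicts like {"established_at": datetime, "active": bool}."""
    now = now or datetime.now(timezone.utc)
    # If no connection is serving traffic, uptime resets to zero.
    if not any(c["active"] for c in connections):
        return 0
    # Otherwise, uptime runs from the earliest connection ever established.
    first_established = min(c["established_at"] for c in connections)
    return int((now - first_established).total_seconds())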

Tunnel Routes and Connectors

Last year, shortly after the announcement of Named Tunnels, we released a new feature that allowed users to utilize the same Named Tunnel to serve traffic to many different services through the use of Ingress Rules. In the new UI, if you’re running your tunnels in this manner, you’ll be able to see these various services reflected by hovering over the route’s value in the dashboard. Today, this includes routes for DNS records, Load Balancers, and Private IP ranges.

Even more recently, we announced highly available and highly scalable instances of cloudflared, known more commonly as “cloudflared replicas.” To view your cloudflared replicas, select and expand a tunnel. Then you will identify how many cloudflared replicas you’re running for a given tunnel, as well as the corresponding connection status, data center, IP address, and version. And ultimately, when you’re ready to delete a tunnel, you can do so directly from the dashboard as well.

What’s next

Moving forward, we’re excited to begin incorporating more Cloudflare Tunnel analytics into our dashboard. We also want to continue making Cloudflare Tunnel the easiest way to connect to Cloudflare. In order to do that, we will focus on improving our onboarding experience for new users and look forward to bringing more of that functionality into the Teams Dashboard. If you have things you’re interested in having more visibility around in the future, let us know below!

EC: Second Call for Proposals on Media Ownership Monitoring

Post Syndicated from nellyo original https://nellyo.wordpress.com/2021/10/18/call2-media-ownership/

The European Commission has issued a second call for proposals for the EU co-funded media ownership monitoring system. This call will complement the ongoing first pilot project (Austria, Belgium, Czechia, Denmark, Finland, Germany, Greece, Hungary, Italy, Lithuania, the Netherlands, Portugal, Slovenia, Spain, and Sweden) and will provide a media ownership database for the remaining 12 member states (Bulgaria, Poland, Slovakia, Estonia, Latvia, Malta, Cyprus, Ireland, France, Romania, Croatia, Luxembourg).

The media ownership monitor will systematically assess the relevant legal frameworks as well as the risks to transparency of media ownership. It will also flag potential risks to media pluralism and provide valuable information for a better understanding of the news media market. The tool will make this information accessible to everyone through an interactive online platform, presenting the results in formats adapted to the needs of different users.

Interested consortia working in the field of media freedom and pluralism at the European, regional, and local level can apply to this call until 15 December. The maximum EU support for the project is EUR 500,000. The duration is 12 months, with the start expected in autumn 2022.

This initiative is part of wider efforts in the area of media freedom and pluralism, as set out in the European Democracy Action Plan.

More information about this and other calls in the media field, both current and under preparation, can be found here.

The first pilot project for media ownership monitoring will deliver a database containing media ownership information and a systematic assessment of both the relevant legal frameworks and the risks to media ownership transparency. The EUROMO project started in September 2021 and is run by a consortium led by Paris-Lodron University Salzburg (PLUS).

EU support: up to EUR 1,000,000

The Missouri Governor Doesn’t Understand Responsible Disclosure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/the-missouri-governor-doesnt-understand-responsible-disclosure.html

The Missouri governor wants to prosecute the reporter who discovered a security vulnerability in a state website and then reported it to the state.

The newspaper agreed to hold off publishing any story while the department fixed the problem and protected the private information of teachers around the state.

[…]

According to the Post-Dispatch, one of its reporters discovered the flaw in a web application allowing the public to search teacher certifications and credentials. No private information was publicly visible, but teacher Social Security numbers were contained in HTML source code of the pages.

The state removed the search tool after being notified of the issue by the Post-Dispatch. It was unclear how long the Social Security numbers had been vulnerable.

[…]

Chris Vickery, a California-based data security expert, told The Independent that it appears the department of education was “publishing data that it shouldn’t have been publishing.”

“That’s not a crime for the journalists discovering it,” he said. “Putting Social Security numbers within HTML, even if it’s ‘non-display rendering’ HTML, is a stupid thing for the Missouri website to do and is a type of boneheaded mistake that has been around since day one of the Internet. No exploit, hacking or vulnerability is involved here.”

In explaining how he hopes the reporter and news organization will be prosecuted, [Gov.] Parson pointed to a state statute defining the crime of tampering with computer data. Vickery said that statute wouldn’t work in this instance because of a recent decision by the U.S. Supreme Court in the case of Van Buren v. United States.

One hopes that someone will calm the governor down.

Brian Krebs has more.
