Tag Archives: SNI

SNIFFlab – Create Your Own MITM Test Environment

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/11/snifflab-create-mitm-test-environment/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

SNIFFlab – Create Your Own MITM Test Environment

SNIFFlab is a set of scripts in Python that enable you to create your own MITM test environment for packet sniffing through a WiFi access point.

Essentially it’s a WiFi hotspot that continually collects all the packets transmitted across it. All connected clients’ HTTPS communications are subjected to a “man-in-the-middle” attack, whereby they can later be decrypted for analysis.

What is SNIFFLab MITM Test Environment

In our environment, dubbed Snifflab, a researcher simply connects to the Snifflab WiFi network, is prompted to install a custom certificate authority on the device, and then can use their device as needed for the test.

Read the rest of SNIFFlab – Create Your Own MITM Test Environment now! Only available at Darknet.

Federate Database User Authentication Easily with IAM and Amazon Redshift

Post Syndicated from Thiyagarajan Arumugam original https://aws.amazon.com/blogs/big-data/federate-database-user-authentication-easily-with-iam-and-amazon-redshift/

Managing database users through federation allows you to manage authentication and authorization procedures centrally. Amazon Redshift now supports database authentication with IAM, enabling user authentication through enterprise federation. There is no need to manage separate database users and passwords, which further eases database administration. You can now manage users outside of AWS and authenticate them for access to an Amazon Redshift data warehouse. Do this by integrating IAM authentication and a third-party SAML 2.0 identity provider (IdP), such as AD FS, PingFederate, or Okta. In addition, database users can be created automatically at their first login, based on corporate permissions.

In this post, I demonstrate how you can extend the federation to enable single sign-on (SSO) to the Amazon Redshift data warehouse.

SAML and Amazon Redshift

AWS supports Security Assertion Markup Language (SAML) 2.0, which is an open standard for identity federation used by many IdPs. SAML enables federated SSO, which enables your users to sign in to the AWS Management Console. Users can also make programmatic calls to AWS API actions by using assertions from a SAML-compliant IdP. For example, if you use Microsoft Active Directory for corporate directories, you may be familiar with how Active Directory and AD FS work together to enable federation. For more information, see the Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0 AWS Security Blog post.

Amazon Redshift now provides the GetClusterCredentials API operation, which allows you to generate temporary database user credentials for authentication. You can set up an IAM permissions policy that generates these credentials for connecting to Amazon Redshift. Extending the IAM authentication, you can configure federation of AWS access through a SAML 2.0–compliant IdP. An IAM role can be configured to permit federated users to call the GetClusterCredentials action and generate temporary credentials to log in to Amazon Redshift databases. You can also set up policies to restrict access to Amazon Redshift clusters, databases, database user names, and user groups.

Amazon Redshift federation workflow

In this post, I demonstrate how you can use a JDBC- or ODBC-based SQL client to log in to the Amazon Redshift cluster using this feature. The SQL clients used with the Amazon Redshift JDBC or ODBC drivers automatically manage the process of calling the GetClusterCredentials action, retrieving the database user credentials, and establishing a connection to your Amazon Redshift database. You can also use your database application to programmatically call the GetClusterCredentials action, retrieve database user credentials, and connect to the database. I demonstrate these features using an example company to show how different database user accounts can be managed easily using federation.

The following steps show how the SSO process works (a minimal code sketch of steps 4 through 7 follows the list):

  1. JDBC/ODBC
  2. Authenticate using Corp Username/Password
  3. IdP sends SAML assertion
  4. Call STS to assume role with SAML
  5. STS Returns Temp Credentials
  6. Use Temp Credentials to get Temp cluster credentials
  7. Connect to Amazon Redshift using temp credentials
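
To make the flow concrete, here is a minimal Python (boto3) sketch of steps 4 through 7. It is not part of the original walkthrough: it assumes you have already obtained a base64-encoded SAML assertion from your IdP, and the role and SAML provider ARNs are illustrative placeholders. The JDBC and ODBC drivers described later perform these calls for you automatically.

import boto3
import pg  # PyGreSQL, the same driver used in the partner example later in this post

SAML_ASSERTION = "<base64-encoded SAML response from the IdP>"
ROLE_ARN = "arn:aws:iam::123456789012:role/ExampleCorpRedshiftRole"        # placeholder
PRINCIPAL_ARN = "arn:aws:iam::123456789012:saml-provider/ExampleCorpADFS"  # placeholder

# Steps 4 and 5: exchange the SAML assertion for temporary AWS credentials
sts = boto3.client("sts", region_name="us-west-2")
assumed = sts.assume_role_with_saml(
    RoleArn=ROLE_ARN,
    PrincipalArn=PRINCIPAL_ARN,
    SAMLAssertion=SAML_ASSERTION,
)
creds = assumed["Credentials"]

# Step 6: use the temporary AWS credentials to get temporary cluster credentials
redshift = boto3.client(
    "redshift",
    region_name="us-west-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cluster_creds = redshift.get_cluster_credentials(
    DbUser="nancy",
    DbName="analytics",
    ClusterIdentifier="examplecorp-dw",
    AutoCreate=True,
)

# Step 7: connect to Amazon Redshift using the temporary database credentials
endpoint = redshift.describe_clusters(
    ClusterIdentifier="examplecorp-dw")["Clusters"][0]["Endpoint"]
conn_string = "host=%s port=%s dbname=analytics user=%s password=%s" % (
    endpoint["Address"], endpoint["Port"],
    cluster_creds["DbUser"], cluster_creds["DbPassword"])
conn = pg.connect(dbname=conn_string)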

Walkthrough

Example Corp. is using Active Directory (IdP host: demo.examplecorp.com) to manage federated access for users in its organization. It has an AWS account, 123456789012, and currently manages an Amazon Redshift cluster with the cluster ID “examplecorp-dw” and database “analytics” in the us-west-2 region for its Sales and Data Science teams. It wants the following access:

  • Sales users can access the examplecorp-dw cluster using the sales_grp database group.
  • Sales users access examplecorp-dw through a JDBC-based SQL client.
  • Sales users access examplecorp-dw through an ODBC connection for their reporting tools.
  • Data Science users access the examplecorp-dw cluster using the data_science_grp database group.
  • Partners access the examplecorp-dw cluster and query using the partner_grp database group.
  • Partners are not federated through Active Directory and are provided with separate IAM user credentials (with the IAM user name examplecorpsalespartner).
  • Partners can connect to the examplecorp-dw cluster programmatically, using a language such as Python.
  • All users are automatically created in Amazon Redshift when they log in for the first time.
  • (Optional) Internal users do not specify database user or group information in their connection string; it is assigned automatically.
  • Data warehouse users can use SSO for the Amazon Redshift data warehouse using the preceding permissions.

Step 1: Set up IdPs and federation

The Enabling Federation to AWS Using Windows Active Directory post demonstrated how to prepare Active Directory and enable federation to AWS. Using those instructions, you can establish trust between your AWS account and the IdP and enable user access to AWS using SSO.  For more information, see Identity Providers and Federation.

For this walkthrough, assume that this company has already configured SSO to their AWS account: 123456789012 for their Active Directory domain demo.examplecorp.com. The Sales and Data Science teams are not required to specify database user and group information in the connection string. The connection string can be configured by adding SAML Attribute elements to your IdP. Configuring these optional attributes enables internal users to conveniently avoid providing the DbUser and DbGroup parameters when they log in to Amazon Redshift.

The user-name attribute can be set up as follows, with a user ID (for example, nancy) or an email address (for example, nancy@examplecorp.com):

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbUser">  
  <AttributeValue>user-name</AttributeValue>
</Attribute>

The AutoCreate attribute can be defined as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/AutoCreate">
    <AttributeValue>true</AttributeValue>
</Attribute>

The sales_grp database group can be included as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbGroups">
    <AttributeValue>sales_grp</AttributeValue>
</Attribute>

For more information about attribute element configuration, see Configure SAML Assertions for Your IdP.

Step 2: Create IAM roles for access to the Amazon Redshift cluster

The next step is to create IAM policies with permissions to call GetClusterCredentials and provide authorization for Amazon Redshift resources. To grant a SQL client the ability to retrieve the cluster endpoint, region, and port automatically, include the redshift:DescribeClusters action with the Amazon Redshift cluster resource in the IAM role.  For example, users can connect to the Amazon Redshift cluster using a JDBC URL without the need to hardcode the Amazon Redshift endpoint:

Previous:  jdbc:redshift://endpoint:port/database

Current:  jdbc:redshift:iam://clustername:region/dbname

Use IAM to create the following policies. You can also use an existing user or role and assign these policies. For example, if you already created an IAM role for IdP access, you can attach the necessary policies to that role. Here is the policy created for sales users for this example:

Sales_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
            ]
        }
    ]
}

The policy uses the following parameter values:

  • Region: us-west-2
  • AWS Account: 123456789012
  • Cluster name: examplecorp-dw
  • Database group: sales_grp
  • IAM role: AIDIODR4TAW7CSEXAMPLE
Each statement in this policy is described below.

Policy statement:

{
    "Effect": "Allow",
    "Action": [
        "redshift:DescribeClusters"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
    ]
}

Description:

Allow users to retrieve the cluster endpoint, region, and port automatically for the Amazon Redshift cluster examplecorp-dw. This specification uses the resource format arn:aws:redshift:region:account-id:cluster:clustername. For example, the SQL client JDBC can be specified in the format jdbc:redshift:iam://clustername:region/dbname.

For more information, see Amazon Resource Names.

Policy statement:

{
    "Effect": "Allow",
    "Action": [
        "redshift:GetClusterCredentials"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
        "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
    ],
    "Condition": {
        "StringEquals": {
            "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
        }
    }
}

Description:

Generates a temporary token to authenticate into the examplecorp-dw cluster. “arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}” restricts the corporate user name to the database user name for that user. This resource is specified using the format: arn:aws:redshift:region:account-id:dbuser:clustername/dbusername.

The Condition block enforces that the AWS user ID should match “AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com”, so that individual users can authenticate only as themselves. The AIDIODR4TAW7CSEXAMPLE role has the Sales_DW_IAM_Policy policy attached.

Policy statement:

{
    "Effect": "Allow",
    "Action": [
        "redshift:CreateClusterUser"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
    ]
}

Description:

Automatically creates database users in examplecorp-dw when they log in for the first time. Subsequent logins reuse the existing database user.

Policy statement:

{
    "Effect": "Allow",
    "Action": [
        "redshift:JoinGroup"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
    ]
}

Description:

Allows sales users to join the sales_grp database group through the resource “arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp” that is specified in the format arn:aws:redshift:region:account-id:dbgroup:clustername/dbgroupname.

Similar policies can be created for Data Science users with access to join the data_science_grp group in examplecorp-dw. You can now attach the Sales_DW_IAM_Policy policy to the role that is mapped to the IdP application for SSO. For more information about how to define the claim rules, see Configuring SAML Assertions for the Authentication Response.

Because partners are not authorized using Active Directory, they are provided with IAM credentials and added to the partner_grp database group. The Partner_DW_IAM_Policy is attached to the IAM users for partners. The following policy allows partners to log in using the IAM user name as the database user name.

Partner_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "redshift:DbUser": "${aws:username}"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/partner_grp"
            ]
        }
    ]
}

"redshift:DbUser": "${aws:username}" forces an IAM user to use the IAM user name as the database user name.

With the previous steps configured, you can now establish the connection to Amazon Redshift through JDBC- or ODBC-supported clients.

Step 3: Set up database user access

Before you start connecting to Amazon Redshift using the SQL client, set up the database groups for appropriate data access. Log in to your Amazon Redshift database as superuser to create a database group, using CREATE GROUP.

Log in to examplecorp-dw/analytics as superuser and create the following groups:

CREATE GROUP sales_grp;
CREATE GROUP datascience_grp;
CREATE GROUP partner_grp;

Use the GRANT command to define access permissions to database objects (tables/views) for the preceding groups.

Step 4: Connect to Amazon Redshift using the JDBC SQL client

Assume that sales user “nancy” is using the SQL Workbench client and JDBC driver to log in to the Amazon Redshift data warehouse. The following steps help set up the client and establish the connection:

  1. Download the latest Amazon Redshift JDBC driver from the Configure a JDBC Connection page
  2. Build the JDBC URL with the IAM option in the following format:
    jdbc:redshift:iam://examplecorp-dw:us-west-2/sales_db

Because the redshift:DescribeClusters action is assigned to the preceding IAM roles, it automatically resolves the cluster endpoints and the port. Otherwise, you can specify the endpoint and port information in the JDBC URL, as described in Configure a JDBC Connection.

Identify the following JDBC options for providing the IAM credentials (see the “Prepare your environment” section) and configure them in the SQL Workbench Connection Profile:

plugin_name=com.amazon.redshift.plugin.AdfsCredentialsProvider
idp_host=demo.examplecorp.com (the name of the corporate identity provider host)
idp_port=443 (the port of the corporate identity provider host)
user=examplecorp\nancy (corporate user name)
password=*** (corporate user password)

The SQL workbench configuration looks similar to the following screenshot:

Now, “nancy” can connect to examplecorp-dw by authenticating with the corporate Active Directory. Because the SAML attribute elements are already configured for nancy, she logs in as database user nancy and is assigned to the sales_grp database group. Similarly, other Sales and Data Science users can connect to the examplecorp-dw cluster. A custom Amazon Redshift ODBC driver can also be used to connect using a SQL client. For more information, see Configure an ODBC Connection.

Step 5: Connecting to Amazon Redshift using JDBC SQL Client and IAM Credentials

This optional step is necessary only when you want to enable users that are not authenticated with Active Directory. Partners are provided with IAM credentials that they can use to connect to the examplecorp-dw Amazon Redshift cluster. These IAM users have the Partner_DW_IAM_Policy attached, which allows them to join the partner_grp database group in Amazon Redshift. The following JDBC URL enables them to connect to the Amazon Redshift cluster:

jdbc:redshift:iam://examplecorp-dw/analytics?AccessKeyID=XXX&SecretAccessKey=YYY&DbUser=examplecorpsalespartner&DbGroup=partner_grp&AutoCreate=true

The AutoCreate option automatically creates a new database user the first time the partner logs in. There are several other options available to conveniently specify the IAM user credentials. For more information, see Options for providing IAM credentials.
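
As a programmatic counterpart to the JDBC URL above (and not part of the original post), the following boto3 sketch requests the same temporary credentials; it assumes the partner’s IAM user credentials are already configured on the machine, and the parameter values simply mirror the URL’s options.

import boto3

# Uses the partner's IAM user credentials from the environment or ~/.aws/credentials
redshift = boto3.client("redshift", region_name="us-west-2")

# Programmatic equivalent of the DbUser, DbGroup, and AutoCreate options in the JDBC URL
cluster_creds = redshift.get_cluster_credentials(
    ClusterIdentifier="examplecorp-dw",
    DbName="analytics",
    DbUser="examplecorpsalespartner",
    DbGroups=["partner_grp"],
    AutoCreate=True,
)
print(cluster_creds["DbUser"])  # temporary database user name returned by the API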

Step 6: Connecting to Amazon Redshift using an ODBC client for Microsoft Windows

Assume that another sales user “uma” is using an ODBC-based client to log in to the Amazon Redshift data warehouse using Example Corp Active Directory. The following steps help set up the ODBC client and establish the Amazon Redshift connection in a Microsoft Windows operating system connected to your corporate network:

  1. Download and install the latest Amazon Redshift ODBC driver.
  2. Create a system DSN entry.
    1. In the Start menu, locate the driver folder or folders:
      • Amazon Redshift ODBC Driver (32-bit)
      • Amazon Redshift ODBC Driver (64-bit)
      • If you installed both drivers, you have a folder for each driver.
    2. Choose ODBC Administrator, and then type your administrator credentials.
    3. To configure the driver for all users on the computer, choose System DSN. To configure the driver for your user account only, choose User DSN.
    4. Choose Add.
  3. Select the Amazon Redshift ODBC driver, and choose Finish. Configure the following attributes:
    Data Source Name=any friendly name to identify the ODBC connection
    Database=analytics
    user=uma (corporate user name)
    Auth Type=Identity Provider: AD FS
    password=leave blank (Windows automatically authenticates)
    Cluster ID=examplecorp-dw
    idp_host=demo.examplecorp.com (the name of the corporate IdP host)

This configuration looks like the following:

  4. Choose OK to save the ODBC connection.
  5. Verify that uma is set up with the SAML attributes, as described in the “Set up IdPs and federation” section.

The user uma can now use this ODBC connection to establish the connection to the Amazon Redshift cluster using any ODBC-based tools or reporting tools such as Tableau. Internally, uma authenticates using the IAM role that has the Sales_DW_IAM_Policy attached and is assigned the sales_grp database group.

Step 7: Connecting to Amazon Redshift using Python and IAM credentials

To enable partners to connect to the examplecorp-dw cluster programmatically, use Python on a computer such as an Amazon EC2 instance. Reuse the IAM users that have the Partner_DW_IAM_Policy policy (defined in Step 2) attached.

The following steps show this set up on an EC2 instance:

  1. Launch a new EC2 instance with an IAM role that has the Partner_DW_IAM_Policy attached, as described in Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances. Alternatively, you can attach an existing IAM role to an EC2 instance.
  2. This example uses the Python PostgreSQL driver (PyGreSQL) to connect to your Amazon Redshift cluster. To install PyGreSQL on Amazon Linux, run the following commands as the ec2-user:
    sudo easy_install pip
    sudo yum install postgresql postgresql-devel gcc python-devel
    sudo pip install PyGreSQL

  3. The following code snippet demonstrates programmatic access to Amazon Redshift for partner users:
    #!/usr/bin/env python
    """
    Usage:
    python redshiftscript.py
    
    * Copyright 2014, Amazon.com, Inc. or its affiliates. All Rights Reserved.
    *
    * Licensed under the Amazon Software License (the "License").
    * You may not use this file except in compliance with the License.
    * A copy of the License is located at
    *
    * http://aws.amazon.com/asl/
    *
    * or in the "license" file accompanying this file. This file is distributed
    * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
    * express or implied. See the License for the specific language governing
    * permissions and limitations under the License.
    """
    
    import sys
    import pg
    import boto3
    
    REGION = 'us-west-2'
    CLUSTER_IDENTIFIER = 'examplecorp-dw'
    DB_NAME = 'sales_db'
    DB_USER = 'examplecorpsalespartner'
    
    options = """keepalives=1 keepalives_idle=200 keepalives_interval=200
                 keepalives_count=6"""
    
    set_timeout_stmt = "set statement_timeout = 1200000"
    
    def conn_to_rs(host, port, db, usr, pwd, opt=options, timeout=set_timeout_stmt):
        rs_conn_string = """host=%s port=%s dbname=%s user=%s password=%s
                             %s""" % (host, port, db, usr, pwd, opt)
        print "Connecting to %s:%s:%s as %s" % (host, port, db, usr)
        rs_conn = pg.connect(dbname=rs_conn_string)
        rs_conn.query(timeout)
        return rs_conn
    
    def main():
        # describe the cluster and fetch the IAM temporary credentials
        global redshift_client
        redshift_client = boto3.client('redshift', region_name=REGION)
        response_cluster_details = redshift_client.describe_clusters(ClusterIdentifier=CLUSTER_IDENTIFIER)
        response_credentials = redshift_client.get_cluster_credentials(DbUser=DB_USER,DbName=DB_NAME,ClusterIdentifier=CLUSTER_IDENTIFIER,DurationSeconds=3600)
        rs_host = response_cluster_details['Clusters'][0]['Endpoint']['Address']
        rs_port = response_cluster_details['Clusters'][0]['Endpoint']['Port']
        rs_db = DB_NAME
        rs_iam_user = response_credentials['DbUser']
        rs_iam_pwd = response_credentials['DbPassword']
        # connect to the Amazon Redshift cluster
        conn = conn_to_rs(rs_host, rs_port, rs_db, rs_iam_user,rs_iam_pwd)
        # execute a query
        result = conn.query("SELECT sysdate as dt")
        # fetch results from the query
        for dt_val in result.getresult() :
            print dt_val
        # close the Amazon Redshift connection
        conn.close()
    
    if __name__ == "__main__":
        main()

You can save this Python program in a file (redshiftscript.py) and execute it at the command line as ec2-user:

python redshiftscript.py

Now partners can connect to the Amazon Redshift cluster using the Python script, and authentication is federated through the IAM user.

Summary

In this post, I demonstrated how to use federated access using Active Directory and IAM roles to enable single sign-on to an Amazon Redshift cluster. I also showed how partners outside an organization can be managed easily using IAM credentials.  Using the GetClusterCredentials API action, now supported by Amazon Redshift, lets you manage a large number of database users and have them use corporate credentials to log in. You don’t have to maintain separate database user accounts.

Although this post demonstrated the integration of IAM with AD FS and Active Directory, you can replicate this solution with your choice of SAML 2.0–compliant third-party identity providers (IdPs), such as PingFederate or Okta. For the different supported federation options, see Configure SAML Assertions for Your IdP.

If you have questions or suggestions, please comment below.


Additional Reading

Learn how to establish federated access to your AWS resources by using Active Directory user attributes.


About the Author

Thiyagarajan Arumugam is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.

 

Some notes on the KRACK attack

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/some-notes-on-krack-attack.html

This is my interpretation of the KRACK attacks paper that describes a way of decrypting encrypted WiFi traffic with an active attack.

tl;dr: Wow. Everyone needs to be afraid. (Well, worried — not panicked.) It means in practice, attackers can decrypt a lot of wifi traffic, with varying levels of difficulty depending on your precise network setup. My post last July about the DEF CON network being safe was in error.

Details

This is not a crypto bug but a protocol bug (a pretty obvious and trivial protocol bug).
When a client connects to the network, the access-point will at some point send random “key” data to use for encryption. Because this packet may be lost in transmission, it can be repeated many times.
What the hacker does is just repeatedly send this packet, potentially hours later. Each time they do so, the “keystream” resets back to its starting conditions. The obvious patch that device vendors will make is to only accept the first such packet received, and ignore all the duplicates.
At this point, the protocol bug becomes a crypto bug. We know how to break crypto when we have two keystreams from the same starting position. It’s not always reliable, but reliable enough that people need to be afraid.
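
To make that concrete, here is a small Python sketch (mine, not from the paper) of why keystream reuse is fatal for a stream cipher: XORing two ciphertexts that were encrypted with the same keystream cancels the keystream out, leaving the XOR of the two plaintexts.

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)            # stands in for the per-packet keystream

p1 = b"attack at dawn, bring snacks"
p2 = b"the password is hunter2 okay"

c1 = xor(p1, keystream)               # two packets encrypted with the SAME keystream
c2 = xor(p2, keystream)

# The attacker never sees the keystream, but c1 XOR c2 == p1 XOR p2
leak = xor(c1, c2)
assert leak == xor(p1, p2)

# If the attacker can guess part of one plaintext (e.g., protocol headers),
# they immediately recover the corresponding part of the other:
recovered = xor(leak, p1)
print(recovered)                      # prints b'the password is hunter2 okay'
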
Android, though, is the biggest danger. Rather than simply replaying the packet, a packet with key data of all zeroes can be sent. This allows attackers to set up a fake WiFi access-point and man-in-the-middle all traffic.
In a related case, the access-point/base-station can sometimes also be attacked, affecting the stream sent to the client.
Not only is sniffing possible, but in some limited cases, injection. This allows the traditional attack of adding bad code to the end of HTML pages in order to trick users into installing a virus.

This is an active attack, not a passive attack, so in theory, it’s detectable.

Who is vulnerable?

Everyone, pretty much.
The hacker only needs to be within range of your WiFi. Your neighbor’s teenage kid is going to be downloading and running the tool in order to eavesdrop on your packets.
The hacker doesn’t need to be logged into your network.
It affects all of WPA1/WPA2, both the personal version with passwords that we use at home, and the enterprise version with certificates we use in enterprises.
It can’t defeat SSL/TLS or VPNs. Thus, if you feel your laptop is safe surfing the public WiFi at airports, then your laptop is still safe from this attack. With Android, it does allow running tools like sslstrip, which can fool many users.
Your home network is vulnerable. Many devices will be using SSL/TLS, so are fine, like your Amazon Echo, which you can continue to use without worrying about this attack. Other devices, like your Philips light bulbs, may not be so protected.

How can I defend myself?

Patch.
More to the point, measure your current vendors by how long it takes them to patch. Throw away gear by those vendors that took a long time to patch and replace it with vendors that took a short time.
High-end access-points that contain “WIPS” (WiFi Intrusion Prevention Systems) features should be able to detect this and block vulnerable clients from connecting to the network (once the vendor upgrades the systems, of course). Even low-end access-points, like the $30 ones you get for home, can easily be updated to prevent packet sequence numbers from going back to the start (i.e. from the keystream resetting back to the start).
At some point, you’ll need to run the attack against yourself, to make sure all your devices are secure. Since you’ll be constantly allowing random phones to connect to your network, you’ll need to check their vulnerability status before connecting them. You’ll need to continue doing this for several years.
Of course, if you are using SSL/TLS for everything, then your danger is mitigated. This is yet another reason why you should be using SSL/TLS for internal communications.
Most security vendors will add things to their products/services to defend you. While valuable in some cases, it’s not a defense. The defense is patching the devices you know about, and preventing vulnerable devices from attaching to your network.
If I remember correctly, DEF CON uses Aruba. Aruba contains WIPS functionality, which means by the time DEF CON rolls around again next year, they should have the feature to deny vulnerable devices from connecting, and specifically to detect an attack in progress and prevent further communication.
However, for an attacker near an Android device using a low-powered WiFi, it’s likely they will be able to conduct man-in-the-middle without any WIPS preventing them.

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL technically refers to a predecessor of the TLS protocol. To keep things simple I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern, and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.
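
As a quick illustration (not from the original announcement), the snippet below uses Python’s standard ssl module to open a TLS connection with an explicit SNI name; server_hostname is what ends up in the ClientHello, and the hostname here is just an example value.

import socket
import ssl

hostname = "www.example.com"  # example value; the SNI name the client asks a certificate for

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    # server_hostname is sent in the TLS ClientHello as the SNI extension,
    # which lets the server (or an ALB) pick the matching certificate
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print(tls.version(), cert["subject"])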

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>
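
If you script your infrastructure instead, a roughly equivalent call with boto3 looks like the sketch below; the listener and certificate ARNs are placeholders, not values from this post.

import boto3

elbv2 = boto3.client("elbv2")

# Equivalent of: aws elbv2 add-listener-certificates ...
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy",  # placeholder
    Certificates=[
        {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id"}  # placeholder
    ],
)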

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-”), the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates.
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall

Spotify Threatened Researchers Who Revealed ‘Pirate’ History

Post Syndicated from Andy original https://torrentfreak.com/spotify-threatened-researchers-who-revealed-pirate-history-171006/

As one of the members of Sweden’s infamous Piratbyrån (Piracy Bureau), Rasmus Fleischer was also one of early key figures at The Pirate Bay. Over the years he’s been a writer, researcher, debater, and musician, and in 2012 he finished his PhD thesis on “music’s political economy.”

As part of a five-person research team (Pelle Snickars, Patrick Vonderau, Anna Johansson, Rasmus Fleischer, Maria Eriksson) funded by the Swedish Research Council, Fleischer has co-written a book about the history of Spotify.

Titled ‘Spotify Teardown – Inside the Black Box of Streaming Music’, the publication is set to shine light on the history of the now famous music service while revealing quite a few past secrets.

With its release scheduled for 2018, Fleischer has already teased a few interesting nuggets, not least that Spotify’s early beta version used ‘pirate’ MP3 files, some of them sourced from The Pirate Bay.

Fleischer says that following an interview earlier this year with DI.se, in which he revealed that Spotify distributed unlicensed music between May 2007 and October 2008, Spotify looked at ways to try and stop his team’s research. However, the ‘pirate’ angle wasn’t the clear target; another facet of the team’s research was.

“Building on the tradition of ‘breaching experiments’ in ethnomethodology, the research group sought to break into the hidden infrastructures of digital music distribution in order to study its underlying norms and structures,” project leader Pelle Snickars previously revealed.

With this goal, the team conducted experiments to see if the system was open to abuse or could be manipulated, as Fleischer now explains.

“For example, some hundreds of robot users were created to study whether the same listening behavior results in different recommendations depending on whether the user was registered as male or female,” he says.

“We have also investigated on a small scale the possibilities of manipulating the system. However, we have not collected any data about real users. Our proposed methods appeared several years ago in our research funding application, which was approved by the Swedish Research Council, which was already noted in 2013.”

Fleischer says that Spotify had been aware of the project for several years but it wasn’t until this year, after he spoke of Spotify’s past as a ‘pirate’ service, that pressure began to mount.

“On May 19, our project manager received a letter from Benjamin Helldén-Hegelund, a lawyer at Spotify. The timing was hardly a coincidence. Spotify demanded that we ‘confirm in writing’ that we had ‘ceased activities contrary to their Terms of Use’,” Fleischer reveals.

A corresponding letter to the Swedish Research Council detailed Spotify’s problems with the project.

“Spotify is particularly concerned about the information that has emerged regarding the research group’s methods in the project. The data indicate that the research team has deliberately taken action that is explicitly in violation of Spotify’s Terms of Use and by means of technical methods they sought to conceal these breaches of conditions,” the letter read.

“The research group has worked, among other things, to artificially increase the number of plays and manipulate Spotify’s services using scripts or other automated processes.

“Spotify assumes that the systematic breach of its conditions has not been known to the Swedish Research Council and is convinced that the Swedish Research Council is convinced that the research undertaken with the support of the Swedish Research Council in all respects meets ethical guidelines and is carried out reasonably and in accordance with applicable law.”

Fleischer admits that part of the research was concerned with the possibility of artificially increasing the number of plays, but he says that was carried out on a small scale without any commercial gain.

“The purpose was simply to test if it is true that Spotify could be manipulated on a larger scale, as claimed by journalists who did similar experiments. It is also true that we ‘sought to hide these crimes’ by using a VPN connection,” he says.

Fleischer says that Spotify’s lawyer blended complaints together, such as correlating terms of service violations with violation of research ethics, while presenting the same as grounds for legal action.

“The argument was quite ridiculous. Nevertheless, the letter could not be interpreted as anything other than an attempt by Spotify to prevent us from pursuing the research project,” he notes.

This week, however, it appears the dispute has reached some kind of conclusion. In a posting on his Copyriot blog (Swedish), Fleischer reveals that Spotify has informed the Swedish Research Council that the case has been closed, meaning that the research into the streaming service can continue.

“It must be acknowledged that Spotify’s threats have taken both time and power from the project. This seems to be the purpose when big companies go after researchers who they perceive as uncomfortable. It may not be possible to stop the research but it can be delayed,” Fleischer says.

“Sure [Spotify] dislikes people being reminded of how the service started as a pirate service. But instead of inviting an open dialogue, lawyers are sent out for the purpose of slowing down researchers.”

Spotify Teardown. Inside the Black Box of Streaming Music is to be published by MIT Press in 2018.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

timeShift(GrafanaBuzz, 1w) Issue 16

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/06/timeshiftgrafanabuzz-1w-issue-16/

Welcome to another issue of TimeShift. In addition to the roundup of articles and plugin updates, we had a big announcement this week – Early Bird tickets to GrafanaCon EU are now available! We’re also accepting CFPs through the end of October, so if you have a topic in mind, don’t wait until the last minute; please send it our way. Speakers who are selected will receive a comped ticket to the conference.


Early Bird Tickets Now Available

We’ve released a limited number of Early Bird tickets before General Admission tickets are available. Take advantage of this discount before they’re sold out!

Get Your Early Bird Ticket Now

Interested in speaking at GrafanaCon? We’re looking for technical and non-technical talks of all sizes. Submit a CFP Now.


From the Blogosphere

Get insights into your Azure Cosmos DB: partition heatmaps, OMS, and More: Microsoft recently announced the ability to access a subset of Azure Cosmos DB metrics via Azure Monitor API. Grafana Labs built an Azure Monitor Plugin for Grafana 4.5 to visualize the data.

How to monitor Docker for Mac/Windows: Brian was tired of guessing about the performance of his development machines and test environment. Here, he shows how to monitor Docker with Prometheus to get a better understanding of a dev environment in his quest to monitor all the things.

Prometheus and Grafana to Monitor 10,000 servers: This article covers enokido’s process of choosing a monitoring platform. He identifies three possible solutions, outlines the pros and cons of each, and discusses why he chose Prometheus.

GitLab Monitoring: It’s fascinating to see Grafana dashboards with production data from companies around the world. For instance, we’ve previously highlighted the huge number of dashboards Wikimedia publicly shares. This week, we found that GitLab also has public dashboards to explore.

Monitoring a Docker Swarm Cluster with cAdvisor, InfluxDB and Grafana | The Laboratory: It’s important to know the state of your applications in a scalable environment such as Docker Swarm. This video covers an overview of Docker, VM’s vs. containers, orchestration and how to monitor Docker Swarm.

Introducing Telemetry: Actionable Time Series Data from Counters: Learn how to use counters from multiple disparate sources, devices, operating systems, and applications to generate actionable time series data.

ofp_sniffer Branch 1.2 (docker/influxdb/grafana) Upcoming Features: This video demo shows off some of the upcoming features for OFP_Sniffer, an OpenFlow sniffer to help network troubleshooting in production networks.


Grafana Plugins

Plugin authors add new features and bugfixes all the time, so it’s important to always keep your plugins up to date. To update plugins from on-prem Grafana, use the grafana-cli tool. If you are using Hosted Grafana, you can update with 1 click! If you have questions or need help, hit up our community site, where the Grafana team and members of the community are happy to help.

UPDATED PLUGIN

PNP for Nagios Data Source – The latest release for the PNP data source has some fixes and adds a mathematical factor option.

Update

UPDATED PLUGIN

Google Calendar Data Source – This week, there was a small bug fix for the Google Calendar annotations data source.

Update

UPDATED PLUGIN

BT Plugins – Our friends at BT have been busy. All of the BT plugins in our catalog received an update this week. The plugins are the Status Dot Panel, the Peak Report Panel, the Trend Box Panel and the Alarm Box Panel.

Changes include:

  • Custom dashboard links now work in Internet Explorer.
  • The Peak Report panel no longer supports click-to-sort.
  • The Status Dot panel tooltips now look like Grafana tooltips.


This week’s MVC (Most Valuable Contributor)

Each week we highlight some of the important contributions from our amazing open source community. This week, we’d like to recognize a contributor who did a lot of work to improve Prometheus support.

pdoan017
Thanks to Alin Sinpalean for his Prometheus PR that aligns the step and interval parameters. Alin got a lot of feedback from the Prometheus community and spent a lot of time and energy explaining, debating and iterating before the PR was ready.
Thank you!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Wow – Excited to be a part of exploring data to find out how Mexico City is evolving.

We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

Tell Me More


What do you think?

That’s a wrap! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Browser hacking for 280 character tweets

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/browser-hacking-for-280-character-tweets.html

Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole, allowing anybody to make large tweets with a little bit of hacking. The hacking skills needed are basic hacking skills, which I thought I’d write up in a blog post.


Specifically, the skills you will exercise are:

  • basic command-line shell
  • basic HTTP requests
  • basic browser DOM editing

The short instructions

The basic instructions were found in tweets like the following:
These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

The command-line

The basics of most hacking start with knowledge of the command-line. This is the “Terminal” app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.
In the beginning, the command-line was all computers had. To do anything on a computer, you had to type a “command” telling it what to do. What we see as the modern graphical screen is a layer on top of the command-line, one that translates clicks of the mouse into the raw commands.
On most systems, the command-line is known as “bash”. This is what you’ll find on Linux and macOS. Windows historically has had a different command-line that uses slightly different syntax, though in the last couple years, they’ve also supported “bash”. You’ll have to install it first, such as by following these instructions.
You’ll see me use commands that may not yet be installed on your “bash” command-line, like nc and curl. You’ll need to run a command to install them, such as:
sudo apt-get install nc curl
The thing to remember about the command-line is that the mouse doesn’t work. You can’t click to move the cursor as you normally do in applications. That’s because the command-line predates the mouse by decades. Instead, you have to use arrow keys.
I’m not going to spend much effort discussing the command-line, as a complete explanation is beyond the scope of this document. Instead, I’m assuming the reader either already knows it, or will learn-from-example as we go along.

Web requests

The basics of how the web works are really simple. A request to a web server is just a small packet of text, such as the following, which does a search on Google for the search-term “penguin” (presumably, you are interested in knowing more about penguins):
GET /search?q=penguin HTTP/1.0
Host: www.google.com
User-Agent: human
The command we are sending to the server is GET, meaning get a page. We are accessing the URL /search, which on Google’s website, is how you do a search. We are then sending the parameter q with the value penguin. We also declare that we are using version 1.0 of the HTTP (hyper-text transfer protocol).
Following the first line there are a number of additional headers. In one header, we declare the Host name that we are accessing. Web servers can contain many different websites, with different names, so this header is usually important.
We also add the User-Agent header. The “user-agent” means the “browser” that you use, like Edge, Chrome, Firefox, or Safari. It allows servers to send content optimized for different browsers. Since we are sending web requests without a browser here, we are joking around saying human.
Here’s what happens when we use the nc program to send this to a google web server:
The first part is us typing, until we hit the [enter] key to create a blank line. After that point is the response from the Google server. We get back a result code (OK), followed by more headers from the server, and finally the contents of the webpage, which goes on for many screens. (We’ll talk about what web pages look like below).
Note that a lot of HTTP headers are optional and really have little influence on what’s going on. They are just junk added to web requests. For example, we see Google report a P3P header, which is some relic of 2002 that nobody uses anymore, as far as I can tell. Indeed, if you follow the URL in the P3P header, Google pretty much says exactly that.
I point this out because the request I show above is a simplified one. In practice, most requests contain a lot more headers, especially Cookie headers. We’ll see that later when making requests.
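
If you’d rather script the request than type it into nc, here’s a minimal Python sketch that sends the same raw request over a socket and prints the start of whatever comes back (the host, path, and headers are exactly the ones shown above):

import socket

request = (
    "GET /search?q=penguin HTTP/1.0\r\n"
    "Host: www.google.com\r\n"
    "User-Agent: human\r\n"
    "\r\n"                      # the blank line that ends the headers
)

sock = socket.create_connection(("www.google.com", 80))
sock.sendall(request.encode("ascii"))

# Read the response (status line, headers, then the HTML body)
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

print(response.decode("utf-8", errors="replace")[:500])  # first few hundred characters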

Using cURL instead

Sending the raw HTTP request to the server, and getting raw HTTP/HTML back, is annoying. The better way of doing this is with the tool known as cURL, or plainly, just curl. You may be familiar with the older command-line tool wget. cURL is similar, but more flexible.
To use curl for the experiment above, we’d do something like the following. We are saving the web page to “penguin.html” instead of just spewing it on the screen.
Underneath, cURL builds an HTTP header just like the one we showed above, and sends it to the server, getting the response back.

Web-pages

Now let’s talk about web pages. When you look at the web page we got back from Google while searching for “penguin”, you’ll see that it’s intimidatingly complex. I mean, it intimidates me. But it all starts from some basic principles, so we’ll look at some simpler examples.
The following is text of a simple web page:
<html>
<body>
<h1>Test</h1>
<p>This is a simple web page</p>
</body>
</html>
This is HTML, “hyper-text markup language”. As its name implies, we “mark up” text, such as declaring the first text as a level-1 header (H1) and the following text as a paragraph (P).
In a web browser, this gets rendered as something that looks like the following. Notice how a header is formatted differently from a paragraph. Also notice that web browsers can use local files as well as make remote requests to web servers:
You can right-mouse click on the page and do a “View Source”. This will show the raw source behind the web page:
Web pages don’t just contain marked-up text. They contain two other important features, style information that dictates how things appear, and script that does all the live things that web pages do, from which we build web apps.
So let’s add a little bit of style and scripting to our web page. First, let’s view the source we’ll be adding:
In our header (H1) field, we’ve added the attribute to the markup giving this an id of mytitle. In the style section above, we give that element a color of blue, and tell it to align to the center.
Then, in our script section, we’ve told it that when somebody clicks on the element “mytitle”, it should send an “alert” message of “hello”.
This is what our web page now looks like, with the center blue title:
When we click on the title, we get a popup alert:
Thus, we see an example of the three components of a webpage: markup, style, and scripting.

Chrome developer tools

Now we go off the deep end. Right-mouse click on “Test” (not normal click, but right-button click, to pull up a menu). Select “Inspect”.
You should now get a window that looks something like the following. Chrome splits the screen in half, showing the web page on the left and its debug tools on the right.
This looks similar to what “View Source” shows, but it isn’t. Instead, it’s showing how Chrome interpreted the source HTML. For example, our style/script tags should’ve been marked up with a head (header) tag. We forgot it, but Chrome adds it in anyway.
What Google is showing us is called the DOM, or document object model. It shows us all the objects that make up a web page, and how they fit together.
For example, it shows us how the style information for #mytitle is created. It first starts with the default style information for an h1 tag, and then how we’ve changed it with our style specifications.
We can edit the DOM manually. Just double click on things you want to change. For example, in this screen shot, I’ve changed the style spec from blue to red, and I’ve changed the header and paragraph text. The original file on disk hasn’t changed, but I’ve changed the DOM in memory.
This is a classic hacking technique. If you don’t like things like paywalls, for example, just right-click on the element blocking your view of the text, “Inspect” it, then delete it. (This works for some paywalls).
This edits the markup and style info, but changing the scripting stuff is a bit more complicated. To do that, click on the [Console] tab. This is the scripting console, and allows you to run code directly as part of the webpage. We are going to run code that resets what happens when we click on the title. In this case, we are simply going to change the message to “goodbye”.
Now when we click on the title, we indeed get the message:
Again, a common way to get around paywalls is to run some code like this that changes which functions will be called.

Putting it all together

Now let’s put this all together in order to hack Twitter to allow us (the non-chosen) to tweet 280 characters. Review Dildog’s instructions above.
The first step is to get to Chrome Developer Tools. Dildog suggests F12. I suggest right-clicking on the Tweet button (or Reply button, as I use in my example) and doing “Inspect”, as I describe above.
You’ll now see your screen split in half, with the DOM toward the right, similar to how I describe above. However, Twitter’s app is really complex. Well, not really complex, it’s all basic stuff when you come right down to it. It’s just so much stuff — it’s a large web app with lots of parts. So we have to dive in without understanding everything that’s going on.
The Tweet/Reply button we are inspecting is going to look like this in the DOM:
The Tweet/Reply button is currently greyed out because it has the "disabled" attribute. Double-click on that attribute and remove it. The class attribute also contains a "disabled" token; double-click on it and remove just that word, without touching the rest. This changes the button from disabled to enabled: it won't be greyed out, and it'll respond when you click on it.
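If you'd rather not hand-edit the DOM, the same change can be made from the [Console] tab. The selector below is hypothetical, since Twitter's real class names differ and change over time:

// Hypothetical selector: grab the greyed-out Tweet/Reply button,
// then strip both the attribute and the class token that disable it.
var btn = document.querySelector('button.tweet-action');
btn.removeAttribute('disabled');
btn.classList.remove('disabled');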
Now click on it. You’ll get an error message, as shown below:
What we’ve done here is bypass what’s known as client-side validation. The script in the web page prevented sending Tweets longer than 140 characters. Our editing of the DOM changed that, allowing us to send a bad request to the server. Bypassing client-side validation this way is the source of a lot of hacking.
But Twitter still does server-side validation as well. They know any client-side validation can be bypassed, and are in on the joke. They tell us hackers “You’ll have to be more clever”. So let’s be more clever.
In order to make longer 280-character tweets work for select customers, they had to change something on the server side. What they added was a "weighted_character_count=true" parameter in the HTTP request. We just need to repeat the request we generated above, adding this parameter.
In theory, we could do this by fiddling with the scripting. Dildog describes a different way: copy the request out of the browser, edit it, then send it from the command line using cURL.
We've used the [Elements] and [Console] tabs in Chrome's DevTools. Now we are going to use the [Network] tab. This lists all the requests the web page has made to the server. The Twitter app is constantly making requests to refresh the content of the web page. The request we made trying to do a long tweet is called "create", and it's shown in red because it failed.
Google Chrome gives us a number of ways to duplicate the request. The most useful is that it can copy the request as a full cURL command we can just paste onto the command line. We don't even need to know cURL; it takes care of everything for us. On Windows, since you have two command lines, it gives you a choice between the older Windows cmd.exe and the newer bash.exe. I use the bash version, since I don't know where to get a Windows command-line version of cURL.exe.
There's a lot going on here. The first thing to notice is the long xxxxxx strings. Those aren't in the original screenshot; I edited the picture, because they are session cookies. If you inserted them into your browser, you'd hijack my Twitter session, and be able to tweet as me (such as making Carlos Danger style tweets). That's why I had to remove them from the example.
At the top of the screen is the URL that we are accessing, which is https://twitter.com/i/tweet/create. Much of the rest of the screen uses the cURL -H option to add a header. These are all the HTTP headers that I describe above. Finally, at the bottom, is the --data section, which contains the data bits related to the tweet, especially the tweet itself.
We need to either edit the URL above to read https://twitter.com/i/tweet/create?weighted_character_count=true, or add &weighted_character_count=true to the --data section at the bottom (either works). Remember: the mouse doesn't work on the command line, so you have to use the cursor keys to navigate backwards through the line. Also, since the line is larger than the screen, it wraps onto several visual lines, even though it's all a single line as far as the command line is concerned.
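Stripped down to its shape, the edited command looks roughly like the following; the "..." parts stand in for the many headers, cookies, and form fields that the real copied command contains, and are not something you can type literally:

curl 'https://twitter.com/i/tweet/create?weighted_character_count=true' \
  -H '...the headers copied from DevTools, including the session cookies...' \
  --data '...the form fields copied from DevTools, including the tweet text...'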
Now just hit [return] on your keyboard, and the tweet will be sent to the server, which at the moment, works. Presto!
Twitter will either enable or disable the feature for everyone in a few weeks, at which point, this post won’t work. But the reason I’m writing this is to demonstrate the basic hacking skills. We manipulate the web pages we receive from servers, and we manipulate what’s sent back from our browser back to the server.

Easier: hack the scripting

Instead of messing with the DOM and editing the HTTP request, the better solution would be to change the scripting that does both DOM client-side validation and HTTP request generation. The only reason Dildog above didn’t do that is that it’s a lot more work trying to find where all this happens.
Others have, though. @Zemnmez did just that, though his technique works for the alternate TweetDeck client (https://tweetdeck.twitter.com) instead of the default client. Go copy his code from here, then paste it into the DevTools scripting [Console]. It'll go in and replace some of the scripting functions, much like my simpler example above.
The console will show a stream of error messages because TweetDeck has bugs; ignore those.
Now you can effortlessly do long tweets as normal, without all the messing around I’ve spent so much text in this blog post describing.
Now, as I've mentioned before, you are only editing what's going on in the current web page. If you refresh the page, or close it, everything will be lost, and you'll have to re-open the DevTools scripting console and re-paste the code. An easier way of doing this is to use the [Sources] tab instead of [Console] and use the "Snippets" feature to save this bit of code in your browser, to make it easier next time.
Even easier is to use a userscript extension like Tampermonkey (for Chrome) or Greasemonkey (for Firefox) that'll take care of this for you. They'll save the script, and automatically run it when they see you open the TweetDeck webpage again.
Easier still is to use one of the several Chrome extensions written in the past day specifically designed to bypass the 140-character limit. Since the purpose of this blog post is to show you how to tamper with your browser yourself, rather than help you with Twitter, I won't list them.

Conclusion

Tampering with the web-page the server gives you, and the data you send back, is a basic hacker skill. In truth, there is a lot to this. You have to get comfortable with the command-line, using tools like cURL. You have to learn how HTTP requests work. You have to understand how web pages are built from markup, style, and scripting. You have to be comfortable using Chrome’s DevTools for messing around with web page elements, network requests, scripting console, and scripting sources.
So it’s rather a lot, actually.
My hope with this page is to show you a practical application of all this, without getting too bogged down in fully explaining how every bit works.

Using Enhanced Request Authorizers in Amazon API Gateway

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-enhanced-request-authorizers-in-amazon-api-gateway/

Recently, AWS introduced a new type of authorizer in Amazon API Gateway, enhanced request authorizers. Previously, custom authorizers received only the bearer token included in the request and the ARN of the API Gateway method being called. Enhanced request authorizers receive all of the headers, query string, and path parameters as well as the request context. This enables you to make more sophisticated authorization decisions based on parameters such as the client IP address, user agent, or a query string parameter alongside the client bearer token.

Enhanced request authorizer configuration

From the API Gateway console, you can declare a new enhanced request authorizer by selecting the Request option as the AWS Lambda event payload:

Create enhanced request authorizer

 

Just like normal custom authorizers, API Gateway can cache the policy returned by your Lambda function. With enhanced request authorizers, however, you can also specify the values that form the unique key of a policy in the cache. For example, if your authorization decision is based on both the bearer token and the IP address of the client, both values should be part of the unique key in the policy cache. The identity source parameter lets you specify these values as mapping expressions:

  • The bearer token appears in the Authorization header
  • The client IP address is stored in the sourceIp parameter of the request context.

Configure identity sources

 

Using enhanced request authorizers with Swagger

You can also define enhanced request authorizers in your Swagger (Open API) definitions. In the following example, you can see that all of the options configured in the API Gateway console are available as custom extensions in the API definition. For example, the identitySource field is a comma-separated list of mapping expressions.

securityDefinitions:
  IpAuthorizer:
    type: "apiKey"
    name: "IpAuthorizer"
    in: "header"
    x-amazon-apigateway-authtype: "custom"
    x-amazon-apigateway-authorizer:
      authorizerResultTtlInSeconds: 300
      identitySource: "method.request.header.Authorization, context.identity.sourceIp"
      authorizerUri: "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:XXXXXXXXXX:function:py-ip-authorizer/invocations"
      type: "request"

After you have declared your authorizer in the security definitions section, you can use it in your API methods:

---
swagger: "2.0"
info:
  title: "request-authorizer-demo"
basePath: "/dev"
paths:
  /hello:
    get:
      security:
      - IpAuthorizer: []
...

Enhanced request authorizer Lambda functions

Enhanced request authorizer Lambda functions receive an event object that is similar to proxy integrations. It contains all of the information about a request, excluding the body.

{
    "methodArn": "arn:aws:execute-api:us-east-1:XXXXXXXXXX:xxxxxx/dev/GET/hello",
    "resource": "/hello",
    "requestContext": {
        "resourceId": "xxxx",
        "apiId": "xxxxxxxxx",
        "resourcePath": "/hello",
        "httpMethod": "GET",
        "requestId": "9e04ff18-98a6-11e7-9311-ef19ba18fc8a",
        "path": "/dev/hello",
        "accountId": "XXXXXXXXXXX",
        "identity": {
            "apiKey": "",
            "sourceIp": "58.240.196.186"
        },
        "stage": "dev"
    },
    "queryStringParameters": {},
    "httpMethod": "GET",
    "pathParameters": {},
    "headers": {
        "cache-control": "no-cache",
        "x-amzn-ssl-client-hello": "AQACJAMDAAAAAAAAAAAAAAAAAAAAAAAAAAAA…",
        "Accept-Encoding": "gzip, deflate",
        "X-Forwarded-For": "54.240.196.186, 54.182.214.90",
        "Accept": "*/*",
        "User-Agent": "PostmanRuntime/6.2.5",
        "Authorization": "hello"
    },
    "stageVariables": {},
    "path": "/hello",
    "type": "REQUEST"
}

The following enhanced request authorizer snippet is written in Python and compares the source IP address against a list of valid IP addresses. The comments in the code explain what happens in each step.

...
VALID_IPS = ["58.240.195.186", "201.246.162.38"]

def lambda_handler(event, context):

    # Read the client’s bearer token.
    jwtToken = event["headers"]["Authorization"]
    
    # Read the source IP address for the request from
    # the API Gateway context object.
    clientIp = event["requestContext"]["identity"]["sourceIp"]
    
    # Verify that the client IP address is allowed.
    # If it’s not valid, raise an exception to make sure
    # that API Gateway returns a 401 status code.
    if clientIp not in VALID_IPS:
        raise Exception('Unauthorized')
    
    # Only allow hello users in!
    # (validate_jwt, AuthPolicy, and HttpVerb are helpers defined
    # elsewhere in the full example; here the bearer token is
    # validated directly and also serves as the principal ID.)
    if not validate_jwt(jwtToken):
        raise Exception('Unauthorized')

    # Use the values from the event object to populate the
    # required parameters in the policy object.
    policy = AuthPolicy(jwtToken, event["requestContext"]["accountId"])
    policy.restApiId = event["requestContext"]["apiId"]
    policy.region = event["methodArn"].split(":")[3]
    policy.stage = event["requestContext"]["stage"]
    
    # Use the scopes from the bearer token to make a 
    # decision on which methods to allow in the API.
    policy.allowMethod(HttpVerb.GET, '/hello')

    # Finally, build the policy.
    authResponse = policy.build()

    return authResponse
...
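For reference, the authResponse that the policy helper builds is a standard custom authorizer response: a principal ID plus an IAM policy document. For a permitted request in this example it would look roughly like the following (values are illustrative):

{
    "principalId": "hello",
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": "arn:aws:execute-api:us-east-1:XXXXXXXXXX:xxxxxx/dev/GET/hello"
            }
        ]
    }
}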

Conclusion

API Gateway customers build complex APIs, and authorization decisions often go beyond the simple properties in a JWT token. For example, users may be allowed to call the “list cars” endpoint but only with a specific subset of filter parameters. With enhanced request authorizers, you have access to all request parameters. You can centralize all of your application’s access control decisions in a Lambda function, making it easier to manage your application security.

CJEU: Regulation No 1215/2012 and international jurisdiction over harm to the reputation of a legal person on the internet

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/09/22/1215-2012/

The Opinion of Advocate General Bobek in Case C‑194/16 Bolagsupplysningen OÜ and Ingrid Ilsjan v Svensk Handel AB, a reference for a preliminary ruling from Estonia, is now available.

The Opinion opens its account of the facts with a certain poetic flair:

An Estonian company operating in Sweden was blacklisted on the website of a Swedish employers' federation for allegedly questionable business practices. As inevitably happens in the era of anonymous internet bravery, universally known for its refined style, insightful understanding and restraint, the website attracted some hostile comments from its readers.

The Estonian company brought an action against the Swedish federation before the Estonian courts. It complains that the published information adversely affects its honour and reputation, and asks the Estonian courts to order the Swedish federation to rectify the information and remove the comments from its website, and to award compensation for the harm it claims to have suffered as a result of the information and comments published online.

This led to a reference for a preliminary ruling from the Riigikohus (Supreme Court, Estonia), raising, in essence, three questions:

  • First, may the Estonian courts establish jurisdiction to hear this action on the basis of the applicant's "centre of interests", a special ground of jurisdiction that the Court has so far applied to natural persons but not to legal persons?
  • If so, second, how should the "centre of interests" of a legal person be determined?
  • Third, if the jurisdiction of the Estonian courts is limited to situations in which the damage occurred in Estonia, the referring court asks whether it may order the Swedish federation to rectify and remove the information at issue.

The Opinion

Bobek starts from the observation that, under recital 15 of Regulation (EU) No 1215/2012, the rules of jurisdiction should be "highly predictable and founded on the principle that jurisdiction is generally based on the defendant's domicile".

He then states that he sees no ground for distinguishing between natural and legal persons when it comes to international jurisdiction over claims in non-contractual liability for harm to reputation; for the purposes of international jurisdiction they should be treated in the same way.

A review of the case-law to date, reading the Court's judgments in Shevill and eDate together,

shows that, at present, where harm to reputation through the publication of information on the internet is alleged, an applicant who is a natural person has a choice between four types of forum. Three of them have "full" jurisdiction: they can rule on claims for the entire damage. The fourth has only "partial" jurisdiction: it can rule only on the damage caused within its own State. Full jurisdiction lies with the courts of general jurisdiction (the defendant's domicile) and with two heads of special jurisdiction (the place where the damage occurred, which in most cases coincides with the defendant's domicile, and the place where the applicant's centre of interests is located). In addition, the courts of all the remaining Member States may be seised with partial jurisdiction, since information published on the internet is accessible in every Member State.

Bobek also surveys the case-law of the ECtHR: that Court has dealt with the reputation of legal persons, and Fayed v. the United Kingdom is cited here, though Steel and Morris v. the United Kingdom, where the affected party was McDonald's, is also well known.

He thus reaches the conclusion [60] that

uniform rules on jurisdiction must exist in EU law for determining the court having jurisdiction over claims such as the one at issue in the main proceedings.

In conclusion:

"–      Article 7(2) of Regulation No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters (recast) is to be interpreted as meaning that a legal person claiming that its personality rights have been infringed by the publication of information on the internet may bring an action for the entirety of the harm sustained before the courts of the Member State in which its centre of interests is located.

–      The centre of interests of a legal person is located in the Member State in which that person carries out its main professional activity, provided that the allegedly harmful information is capable of affecting its professional activity in that State." [131]

 

Filed under: Digital, EU Law, Media Law Tagged: съд на ес

Can an Army of Bitcoin “Bounty Hunters” Deter Pirates?

Post Syndicated from Ernesto original https://torrentfreak.com/can-an-army-of-bitcoin-bounty-hunters-deter-pirates-170917/

When we first heard of the idea to use Bitcoin bounties to track down pirated content online, we scratched our heads.

Snitching on copyright infringers is not a new concept, but the idea of instant cash rewards through cryptocurrency was quite novel.

In theory, it’s pretty straightforward. Content producers can add a unique identifying watermark into movies, eBooks, or other digital files before they’re circulated. When these somehow leak to the public, the bounty hunters use the watermark to claim their Bitcoin, alerting the owner in the process.

This helps to spot leaks early on, even on networks where automated tools don’t have access, and identify the source at the same time.

Two years have passed and it looks like the idea was no fluke. Custos, the South African company that owns the technology, has various copyright holders on board and recently announced a new partnership with book publisher Erudition Digital.

With help from anti-piracy outfit Digimarc, the companies will add identifying watermarks to eBook releases, counting on the bounty hunters to keep an eye out for leaks. These bounty hunters don’t have to be anti-piracy experts. On the contrary, pirates are more than welcome to help out.

“The Custos approach is revolutionary in that it attacks the economy of piracy by targeting uploaders rather than downloaders, turning downloaders into an early detection network,” the companies announced a few days ago.

“The result is pirates turn on one another, sowing seeds of distrust amongst their communities. As a result, the Custos system is capable of penetrating hard-to-reach places such as the dark web, peer-to-peer networks, and even email.”



Devon Weston, Director of Market Development for Digimarc Guardian, believes that this approach is the next level in anti-piracy efforts. It complements the automated detection tools that have been available in the past by providing access to hard-to-reach places.

“Together, this suite of products represents the next generation in technical measures against eBook piracy,” Weston commented on the partnership.

TorrentFreak reached out to Custos COO Fred Lutz to find out what progress the company has made in recent years. We were informed that they have been protecting thousands of copies every month, ranging from pre-release movie content to eBooks.

At the moment the company works with a selected group of “bounty hunters,” but they plan to open the extraction tool to the public in the near future, so everyone can join in.

“So far we have carefully seeded the free bounty extractor tool in relevant communities with great success. However, in the next phase, we will open the bounty hunting to the general public. We are just careful not to grow the bounty hunting community faster than the number of bounties in the wild require,” Lutz tells us.

The Bitcoin bounties themselves vary in size based on the specific use case. For a movie screener, they are typically anything between $10 and $50. However, for the most sensitive content, they can be $100 or more.

“We can also adjust the bounty over time based on the customer’s needs. A low-quality screener that was very sensitive prior to cinematic release does not require as large a bounty after cam-rips becomes available,” Lutz notes.

Thus far, roughly 50 Bitcoin bounties have been claimed. Some of these were planted by Custos themselves, as an incentive for the bounty hunters. Not a very high number, but that doesn’t mean that it’s not working.

“While this number might seem a bit small compared to the number of copies we protect, our aim is first and foremost not to detect leaks, but to pose a credible threat of quick detection and being caught.”

People who receive content protected by Custos are made aware of the watermarks, which may make them think twice about sharing it. If that’s the case, then it’s having effect without any bounties being claimed.

The question remains how many people will actively help to spot bounties. The success of the system largely depends on volunteers, and not all pirates are eager to rat on the people that provide free content.

On the other hand, there’s also room to abuse the system. In theory, people could claim the bounties on their own eBooks and claim that they’ve lost their e-reader. That would be fraud, of course, but since the bounties are in Bitcoin this isn’t easy to prove.

That brings us to the final question. What happens if a claimed bounty identifies a leaker? Custos admits that this alone isn't enough evidence to pursue a legal case, but the measures that are taken in response are up to the copyright holders.

“A claim of a bounty is never a sufficient legal proof of piracy, instead, it is an invaluable first piece of evidence on which a legal case could be built if the client so requires. Legal prosecution is definitely not always the best approach to dealing with leaks,” Lutz says.

Time will tell if the Bitcoin bounty approach works…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pimoroni’s ‘World’s Thinnest Raspberry Pi 3’

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/pimoroni-thinnest-pi/

The Raspberry Pi is not a chunky computer. Nonetheless, tech treasure merchants Pimoroni observed that at almost 20mm tall, it’s still a little on the large side for some applications. So, in their latest live-streamed YouTube Bilge Tank episode, they stripped a Pi 3 down to the barest of bones.

Pimoroni Thinnest Raspberry Pi 3 desoldered pi

But why?

The Raspberry Pi is easy to connect to peripherals. Grab a standard USB mouse, keyboard, and HDMI display, plug them in, and you’re good to go.

desoldered pi

But it’s possible to connect all these things without the bulky ports, if you’re happy to learn how, and you’re in possession of patience and a soldering iron. You might want to do this if, after prototyping your project using the Pi’s standard ports, you want to embed it as a permanent part of a slimmed-down final build. Safely removing the USB ports, the Ethernet port and GPIO pins lets you fit your Pi into really narrow spaces.

As Jon explains:

A lot of the time people want to integrate a Raspberry Pi into a project where there's a restricted amount of space, but they still want the power of the Raspberry Pi 3's processor.

While the Raspberry Pi Zero and Zero W are cheaper and have a smaller footprint, you might want to take advantage of the greater power the Pi 3 offers.

How to slim down a Raspberry Pi 3

Removing components is a matter of snipping in the right places and desoldering with a hot air gun and a solder sucker, together with the judicious application of brute force. I should emphasise, as the Pimoroni team do, that this is something you should only do with care, after making sure you know what you’re doing.

Pimoroni Thinnest Raspberry Pi 3 desoldered pi

The project was set to take half an hour, though Jon and Sandy ended up taking slightly more time than planned. You can watch the entire process below.

Bilge Tank 107 – The World’s Slimmest Raspberry Pi 3

This week, we attempt to completely strip down a Raspberry Pi 3, removing the USB, Ethernet, HDMI, audio jack, CSI/DSI connectors, and GPIO header in an audacious attempt to create the world’s slimmest Raspberry Pi 3 (not officially ratified by the Guinness Book of World Records).

If Pimoroni’s video has given you ideas, you’ll also want to check out N-O-D-E‘s recent Raspberry Pi 3 Slim build. N-O-D-E takes a similar approach, and adds new micro USB connectors to one end of the board for convenience. If you decide to give something like this a go, please let us know how it went: tell us in the comments, or on Raspberry Pi’s social channels.

The post Pimoroni’s ‘World’s Thinnest Raspberry Pi 3’ appeared first on Raspberry Pi.

State of MAC address randomization

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/state-of-mac-address-randomization.html

tldr: I went to DragonCon, a conference of 85,000 people, to sniff WiFi packets and test how many phones now use MAC address randomization. Almost all iPhones nowadays do, but it seems only a third of Android phones do.

Ten years ago at BlackHat, we presented the “data seepage” problem, how the broadcasts from your devices allow you to be tracked. Among the things we highlighted was how WiFi probes looking to connect to access-points expose the unique hardware address burned into the phone, the MAC address. This hardware address is unique to your phone, shared by no other device in the world. Evildoers, such as the NSA or GRU, could install passive listening devices in airports and train-stations around the world in order to track your movements. This could be done with $25 devices sprinkled around a few thousand places — within the budget of not only a police state, but also the average hacker.

In 2014, with the release of iOS 8, Apple addressed this problem by randomizing the MAC address. Every time you restart your phone, it picks a new, random, hardware address for connecting to WiFi. This causes a few problems: every time you restart your iOS devices, your home network sees a completely new device, which can fill up your router’s connection table. Since that table usually has at least 100 entries, this shouldn’t be a problem for your home, but corporations and other owners of big networks saw their connection tables suddenly get big with iOS 8.

In 2015, Google added the feature to Android as well. However, even though most Android phones today support this feature in theory, it’s usually not enabled.

Recently, I went to DragonCon in order to test out how well this works. DragonCon is a huge sci-fi/fantasy conference in Atlanta in August, second to San Diego's ComicCon in popularity. It's spread across several neighboring hotels in the downtown area. A lot of the traffic funnels through the Marriott Marquis hotel, which has a large open area where, from above, you can see thousands of people at a time.

And, with a laptop, see their broadcast packets.

So I went up to a higher floor and set up my laptop to capture "probe" broadcasts coming from phones and record their hardware MAC addresses. I've done this in years past, before address randomization, to record the popularity of iPhones. The first three bytes of an old-style, non-randomized address identify the manufacturer. This time, I should see far fewer manufacturer IDs, and mostly just random addresses instead.
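The post doesn't include the capture script, but the gist is easy to sketch with Scapy. This is a simplified sketch, not the author's actual tool: it only checks the locally administered bit to decide whether an address is randomized (rather than distinguishing 3-byte from 6-byte randomization), and it assumes a WiFi adapter already in monitor mode:

# Count unique probe-request source addresses, split into "randomized"
# (locally administered bit set) versus vendor OUI prefixes.
from collections import Counter
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

seen = set()
counts = Counter()

def is_randomized(mac):
    # Randomized MACs set the locally administered bit,
    # which is bit 0x02 of the first octet.
    return bool(int(mac.split(":")[0], 16) & 0x02)

def handle(pkt):
    if not pkt.haslayer(Dot11ProbeReq):
        return
    mac = pkt.addr2
    if mac is None or mac in seen:
        return
    seen.add(mac)
    counts["(random)" if is_randomized(mac) else mac[:8]] += 1

# Assumes an interface already in monitor mode, e.g. "wlan0mon".
sniff(iface="wlan0mon", prn=handle, store=False, timeout=3600)
print(counts.most_common(25))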

I recorded 9,095 unique probes over a couple of hours. I'm not sure exactly how long, because my laptop would go to sleep occasionally due to lack of activity on the keyboard. I should probably set up a Raspberry Pi somewhere next year to get a more consistent result.

A quick summary of the results are:

The 9,000 devices were split almost evenly between Apple and Android. Almost all of the Apple devices randomized their addresses. About a third of the Android devices randomized. (This assumes Android only randomizes the final 3 bytes of the address, and that Apple randomizes all 6 bytes — my assumption may be wrong).

A table of the major results is below. A little explanation:

  • The first item in the table is the number of phones that randomized the full 6 bytes of the MAC address. I’m guessing these are either mostly or all Apple iOS devices. They are nearly half of the total, or 4498 out of 9095 unique probes.
  • The second number is those that randomized the final 3 bytes of the MAC address, but left the first three bytes identifying themselves as Android devices. I’m guessing this represents all the Android devices that randomize. My guesses may be wrong, maybe some Androids randomize the full 6 bytes, which would get them counted in the first number.
  • The following numbers are phones from major Android manufacturers like Motorola, LG, HTC, Huawei, OnePlus, ZTE. Remember: the first 3 bytes of an un-randomized address identifies who made it. There are roughly 2500 of these devices.
  • There is a count for 309 Apple devices. These are either older iOS devices pre iOS 8, or which have turned off the feature (some corporations demand this), or which are actually MacBooks instead of phones.
  • The vendor of the access-points that Marriott uses is "Ruckus". They have a lot of access-points in the hotel.
  • The “TCT mobile” entry is actually BlackBerry. Apparently, BlackBerry stopped making phones and instead just licenses the software/brand to other hardware makers. If you buy a BlackBerry from the phone store, it’s likely going to be a TCT phone instead.
  • I'm assuming the "Amazon" devices are Kindle e-readers.
  • Lastly, I’d like to point out the two records for “Ford”. I was capturing while walking out of the building, I think I got a few cars driving by.

(random)  4498
(Android)  1562
Samsung  646
Motorola  579
Murata  505
LG  412
Apple  309
HTC-phone  226
Huawei  66
Ruckus  60
OnePlus Tec  40
ZTE  23
TCT mobile  20
Amazon Tech  19
Nintendo  17
Intel  14
Microsoft  9
-hp-  8
BLU Product  8
Kyocera  8
AsusTek  6
Yulong Comp  6
Lite-On  4
Sony Mobile  4
Z-COM, INC.  4
ARRIS Group  2
AzureWave  2
Barnes&Nobl  2
Canon  2
Ford Motor  2
Foxconn  2
Google, Inc  2
Motorola (W  2
Sonos, Inc.  2
SparkLAN Co  2
Wi2Wi, Inc  2
Xiaomi Comm  2
Alps Electr  1
Askey  1
BlackBerry  1
Chi Mei Com  1
Clover Netw  1
CNet Techno  1
eSSys Co.,L  1
GoPro  1
InPro Comm  1
JJPlus Corp  1
Private  1
Quanta  1
Raspberry P  1
Roku, Inc.  1
Sonim Techn  1
Texas Instr  1
TP-LINK TEC  1
Vizio, Inc  1

New – Application Load Balancing via IP Address to AWS & On-Premises Resources

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-application-load-balancing-via-ip-address-to-aws-on-premises-resources/

I told you about the new AWS Application Load Balancer last year and showed you how to use it to implement Layer 7 (application) routing to EC2 instances and to microservices running in containers.

Some of our customers are building hybrid applications as part of a longer-term move to AWS. These customers have told us that they would like to use a single Application Load Balancer to spread traffic across a combination of existing on-premises resources and new resources running in the AWS Cloud. Other customers would like to spread traffic to web or database servers that are scattered across two or more Virtual Private Clouds (VPCs), host multiple services on the same instance with distinct IP addresses but a common port number, and to offer support for IP-based virtual hosting for clients that do not support Server Name Indication (SNI). Another group of customers would like to host multiple instances of a service on the same instance (perhaps within containers), while using multiple interfaces and security groups to implement fine-grained access control.

These situations arise within a broad set of hybrid, migration, disaster recovery, and on-premises use cases and scenarios.

Route to IP Addresses
In order to address these use cases, Application Load Balancers can now route traffic directly to IP addresses. These addresses can be in the same VPC as the ALB, a peer VPC in the same region, on an EC2 instance connected to a VPC by way of ClassicLink, or on on-premises resources at the other end of a VPN connection or AWS Direct Connect connection.

Application Load Balancers already group targets into target groups. As part of today's launch, each target group now has a target type attribute:

instance – Targets are registered by way of EC2 instance IDs, as before.

ip – Targets are registered as IP addresses. You can use any IPv4 address from the load balancer’s VPC CIDR for targets within load balancer’s VPC and any IPv4 address from the RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or the RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer’s VPC (this includes Peered VPC, EC2-Classic, and on-premises targets reachable over Direct Connect or VPN).

Each target group has a load balancer and health check configuration, and publishes metrics to CloudWatch, as has always been the case.

Let's say that you are in the transition phase of an application migration to AWS, or want to use AWS to augment on-premises resources with EC2 instances, and you need to distribute application traffic across both your AWS and on-premises resources. You can achieve this by registering all the resources (AWS and on-premises) to the same target group and associating that target group with a load balancer. Alternatively, you can use DNS-based weighted load balancing across AWS and on-premises resources using two load balancers, i.e. one load balancer for AWS and another for on-premises resources. In a scenario where application-A back-ends are in a VPC and application-B back-ends are on-premises, you can put the back-ends for each application in different target groups and use content-based routing to route traffic to each target group.

Creating a Target Group
Here’s how I create a target group that sends traffic to some IP addresses as part of the process of creating an Application Load Balancer. I enter a name (ip-target-1) and select ip as the Target type:

Then I enter IP address targets. These can be from the VPC that hosts the load balancer:

Or they can be other private IP addresses within one of the private ranges listed above, for targets outside of the VPC that hosts the load balancer:

After I review the settings and create the load balancer, traffic will be sent to the designated IP addresses as soon as they pass the health checks. Each load balancer can accommodate up to 1000 targets.

I can examine my target group and edit the set of targets at any time:

As you can see, one of my targets was not healthy when I took this screen shot (this was by design). Metrics are published to CloudWatch for each target group; I can see them in the Console and I can create CloudWatch Alarms:
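Everything shown above in the console can also be scripted. A rough AWS CLI sketch (the VPC ID, target group ARN, and addresses are placeholders):

# Create a target group whose targets are registered by IP address.
aws elbv2 create-target-group \
    --name ip-target-1 \
    --protocol HTTP \
    --port 80 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Register in-VPC and on-premises targets by IP address. For addresses
# outside the load balancer's VPC, the availability zone is given as "all".
aws elbv2 register-targets \
    --target-group-arn <target-group-arn> \
    --targets Id=10.0.1.15 Id=10.20.1.5,AvailabilityZone=all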

Available Now
This feature is available now and you can start using it today in all AWS Regions.

Jeff;

 

GitMiner – Advanced Tool For Mining Github

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/08/gitminer-advanced-tool-mining-github/?utm_source=darknet&utm_medium=rss&utm_campaign=feed

GitMiner is an advanced search tool for automating searches on GitHub; it enables mining GitHub for useful or potentially dangerous information, for example specific vulnerable or useful WordPress files. The tool aims to facilitate mining code and snippets on GitHub through the site's search page. What is Mining Github? GitHub is a web-based Git […]

The post GitMiner – Advanced Tool For Mining Github appeared first on Darknet.

Approved Reseller programme launch PLUS more Pi Zero resellers

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/approved-reseller/

Ever since the launch of the first Raspberry Pi back in 2012, one thing that has been critical to us is to make our products easy to buy in as many countries as possible.

Buying a Raspberry Pi is certainly much simpler nowadays than it was when we were just starting out. Nevertheless, we want to go even further, and so today we are introducing an Approved Reseller programme. With this programme, we aim to recognise those resellers that represent Raspberry Pi products well, and make purchasing them easy for their customers.

The Raspberry Pi Approved Reseller programme

We’re launching the programme in eleven countries today: the UK, Ireland, France, Spain, Portugal, Italy, the Netherlands, Belgium, Luxembourg, Greece and South Africa. Over the next few weeks, you will see us expand it to at least 50 countries.

We will link to the Approved Resellers’ websites directly from our Products page via the “Buy now” button. For customers who want to buy for business applications we have also added a “Buy for business” button. After clicking it, you will be able to select your country from a drop down menu. Doing so will link you directly to the local websites of our two licensed partners, Premier Farnell and Electrocomponents.

Our newest Raspberry Pi Zero resellers

On top of this we are also adding 6 new Raspberry Pi Zero resellers, giving 13 countries direct access to the Raspberry Pi Zero for the first time. We are particularly excited that these countries include Brazil and India, since they both have proved difficult to supply in the past.

The full list of new resellers is:

  • Hong Kong and China
  • Brazil
  • India
  • Czech Republic and Slovakia
  • Slovenia, Croatia, Serbia and Bosnia-Herzegovina
  • Romania, Bulgaria and Hungary
  • Mexico

The post Approved Reseller programme launch PLUS more Pi Zero resellers appeared first on Raspberry Pi.