Tag Archives: authentication

New – Managed Device Authentication for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-device-authentication-for-amazon-workspaces/

Amazon WorkSpaces allows you to access a virtual desktop in the cloud from the web and from a wide variety of desktop and mobile devices. This flexibility makes WorkSpaces ideal for environments where users have the ability to use their existing devices (often known as BYOD, or Bring Your Own Device). In these environments, organizations sometimes need the ability to manage the devices which can access WorkSpaces. For example, they may have to regulate access based on the client device operating system, version, or patch level in order to help meet compliance or security policy requirements.

Managed Device Authentication
Today we are launching device authentication for WorkSpaces. You can now use digital certificates to manage client access from Apple OSX and Microsoft Windows. You can also choose to allow or block access from iOS, Android, Chrome OS, web, and zero client devices. You can implement policies to control which device types you want to allow and which ones you want to block, with control all the way down to the patch level. Access policies are set for each WorkSpaces directory. After you have set the policies, requests to connect to WorkSpaces from a client device are assessed and either blocked or allowed. In order to make use of this feature, you will need to distribute certificates to your client devices using Microsoft System Center Configuration Manager or a mobile device management (MDM) tool.
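
If you prefer to script the allow/block settings rather than use the console, a roughly equivalent AWS CLI call looks like the hedged sketch below. The directory ID is a placeholder, and certificate distribution for trusted Windows and macOS devices still happens through SCCM or your MDM tool; this snippet only toggles device-type access for one directory.

# Hedged sketch: allow Windows and macOS clients, block web and Android access
# for a single WorkSpaces directory. The directory ID is hypothetical.
aws workspaces modify-workspace-access-properties \
    --resource-id d-926734567e \
    --workspace-access-properties \
        DeviceTypeWindows=ALLOW,DeviceTypeOsx=ALLOW,DeviceTypeWeb=DENY,DeviceTypeAndroid=DENY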

Here’s how you set your access control options from the WorkSpaces Console:

Here’s what happens if a client is not authorized to connect:

 

Available Today
This feature is now available in all Regions where WorkSpaces is available.

Jeff;

 

New Technique to Hijack Social Media Accounts

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/new_technique_t.html

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim’s account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don’t have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, identity also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in these cases is attached to my work badge or passport, and the systems that check my access use these documents to decide whether I can get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito allows building, securing, and scaling a solution to handle user management and authentication, and to sync across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services:

  • Amazon Cognito Federated Identities (identity pools)
  • Amazon Cognito User Pools
  • Amazon Cognito Sync

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
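
To make that flow concrete, here is a hedged sketch using the AWS CLI; the identity pool ID and IdP token are placeholders, and a real application would typically make the same calls through an AWS SDK rather than the CLI.

# Hedged sketch: exchange an identity provider token for temporary AWS credentials.
POOL_ID="us-east-1:11111111-2222-3333-4444-555555555555"   # hypothetical identity pool
IDP_TOKEN="<token returned by the identity provider>"       # placeholder

# Resolve (or create) the Cognito identity for this authenticated user
IDENTITY_ID=$(aws cognito-identity get-id \
    --identity-pool-id "$POOL_ID" \
    --logins "graph.facebook.com=$IDP_TOKEN" \
    --query IdentityId --output text)

# Trade the identity and IdP token for short-lived AWS credentials, backed by STS
# and the authenticated IAM role attached to the identity pool
aws cognito-identity get-credentials-for-identity \
    --identity-id "$IDENTITY_ID" \
    --logins "graph.facebook.com=$IDP_TOKEN"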

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

 



Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Amazon QuickSight Now Supports Federated Single Sign-On Using SAML 2.0

Post Syndicated from Jose Kunnackal original https://aws.amazon.com/blogs/big-data/amazon-quicksight-now-supports-federated-single-sign-on-using-saml-2-0/

Since launch, Amazon QuickSight has enabled business users to quickly and easily analyze data from a wide variety of data sources with superfast visualization capabilities enabled by SPICE (Superfast, Parallel, In-memory Calculation Engine). When setting up Amazon QuickSight access for business users, administrators have a choice of authentication mechanisms: Amazon QuickSight–specific credentials, AWS credentials, or, in the case of Amazon QuickSight Enterprise Edition, existing Microsoft Active Directory credentials. Although each of these mechanisms provides a reliable, secure authentication process, they all require end users to enter their credentials every time they log in to Amazon QuickSight. In addition, the current invitation model for user onboarding requires administrators to add users to Amazon QuickSight accounts either via email invitations or via AD group membership, which can contribute to delays in user provisioning.

Today, we are happy to announce two new features that will make user authentication and provisioning simpler: Federated Single Sign-On (SSO) and just-in-time (JIT) user creation.

Federated Single Sign-On

Federated SSO authentication to web applications (including the AWS Management Console) and Software-as-a-Service products has become increasingly popular, because Federated SSO lets organizations consolidate end-user authentication to external applications.

Traditionally, SSO involves the use of a centralized identity store (such as Active Directory or LDAP) to authenticate the user against applications within a corporate network. The growing popularity of SaaS and web applications created the need to authenticate users outside corporate networks. Federated SSO makes this scenario possible. It provides a mechanism for external applications to direct authentication requests to the centralized identity store and to receive back an authentication response and token, along with its validity. SAML is the most common protocol used as a basis for Federated SSO capabilities today.

With Federated SSO in place, business users sign in to their Identity Provider portals with existing credentials and access QuickSight with a single click, without having to enter any QuickSight-specific passwords or account names. This makes it simple for users to access Amazon QuickSight for data analysis needs.

Federated SSO also enables administrators to impose additional security requirements for Amazon QuickSight access (through the identity provider portal) depending on details such as where the user is accessing from or what device is used for access. This access control lets administrators comply with corporate policies regarding data access and also enforce additional security for sensitive data handling in Amazon QuickSight.

Setting up federated authentication in Amazon QuickSight is straightforward. You follow the same sequence of steps you would use to set up federated access for the AWS Management Console, and then set up redirection to ensure that users land directly on Amazon QuickSight.
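
For reference, the IAM side of that setup can be scripted roughly as follows. This is a hedged sketch rather than the exact steps from this post: the provider name, role name, account ID, and metadata file are placeholders, and the trust policy is the standard one used for SAML federation.

# Register the identity provider's SAML metadata with IAM (names are placeholders)
aws iam create-saml-provider \
    --name MyCorpIdP \
    --saml-metadata-document file://idp-metadata.xml

# Trust policy allowing federated users from that IdP to call AssumeRoleWithSAML
cat > quicksight-saml-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::111122223333:saml-provider/MyCorpIdP" },
    "Action": "sts:AssumeRoleWithSAML",
    "Condition": { "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" } }
  }]
}
EOF

aws iam create-role \
    --role-name QuickSightFederatedUser \
    --assume-role-policy-document file://quicksight-saml-trust.json

In the identity provider, you then point the relay state (or default landing URL) at https://quicksight.aws.amazon.com so that users land directly in Amazon QuickSight after federation.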

Let’s take a look at how this works. The following diagram illustrates the authentication flow between Amazon QuickSight and a third-party identity provider with Federated SSO in place with SAML 2.0.

  1. The Amazon QuickSight user browses to the organization’s identity provider portal, and authenticates using existing credentials.
  2. The federation service requests user authentication from the organization’s identity store, based on credentials provided.
  3. The identity store authenticates the user, and returns the authentication response to the federation service.
  4. The federation service posts the SAML assertion to the user’s browser.
  5. The user’s browser posts the SAML assertion to the AWS Sign-In SAML endpoint. AWS Sign-In processes the SAML request, authenticates the user, and forwards the authentication token to Amazon QuickSight.
  6. Amazon QuickSight uses the authentication token from AWS Sign-In, and authorizes user access.

Federated SSO using SAML 2.0 is now available for Amazon QuickSight Standard Edition, with support for Enterprise Edition coming shortly. You can enable federated access by using any identity provider compliant with SAML 2.0. These identity providers include Microsoft Active Directory Federation Services, Okta, Ping Identity, and Shibboleth. To set up your Amazon QuickSight account for Federated SSO, follow the guidance here.

Just-in-time user creation

With this release, we are also launching a new permissions-based user provisioning model in Amazon QuickSight. Administrators can use the existing AWS permissions management mechanisms in place to enable Amazon QuickSight permissions for their users. Once these required permissions are in place, users can onboard themselves to QuickSight without any additional administrator intervention. This approach simplifies user provisioning and enables onboarding of thousands of users by simply granting the right permissions.

Administrators can choose to assign either of the permissions below, which will result in the user being able to sign up to QuickSight either as a user or an administrator.

quicksight:CreateUser
quicksight:CreateAdmin

If you have an AWS account that is already signed up for QuickSight, and you would like to add yourself as a new user, add one of the permissions above and access https://quicksight.aws.amazon.com.

You will see a screen that requests your email address. Once you provide this, you will be added to the QuickSight account as a user or administrator, as specified by your permissions!
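
A hedged sketch of granting that permission with the AWS CLI follows; the user name, policy name, and resource scoping are placeholders, and you would swap quicksight:CreateUser for quicksight:CreateAdmin to allow administrator sign-up instead.

# Inline policy that lets an IAM user self-provision in Amazon QuickSight
cat > quicksight-self-signup.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "quicksight:CreateUser",
    "Resource": "*"
  }]
}
EOF

aws iam put-user-policy \
    --user-name alice \
    --policy-name QuickSightSelfSignup \
    --policy-document file://quicksight-self-signup.json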

Switch to a Federated SSO user: If you are already an Amazon QuickSight Standard Edition user using authentication based on user name and password, and you want to switch to using Federated SSO, follow these steps:

  1. Sign in using the Federated SSO option to the AWS Management console as you do today. Ensure that you have the permissions for QuickSight user/admin creation assigned to you.
  2. Access https://quicksight.aws.amazon.com.
  3. Provide your email address, and sign up for Amazon QuickSight as an Amazon QuickSight user or admin.
  4. Delete the existing Amazon QuickSight user that you no longer want to use.
  5. Assign resources and data to the new role-based user from step 1. (Amazon QuickSight will prompt you to do this when you delete a user. For more information, see Deleting a User Account.)
  6. Continue as the new, role-based user.

Learn more

To learn more about these capabilities and start using them with your identity provider, see [Managing-SSO-user-guide-topic] in the Amazon QuickSight User Guide.

Stay engaged

If you have questions and suggestions, you can post them on the Amazon QuickSight Discussion Forum.

Not an Amazon QuickSight user?

See the Amazon QuickSight page to get started for free.

 

 

How to Update AWS CloudHSM Devices and Client Instances to the Software and Firmware Versions Supported by AWS

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-update-aws-cloudhsm-devices-and-client-instances-to-the-software-and-firmware-versions-supported-by-aws/

As I explained in my previous Security Blog post, a hardware security module (HSM) is a hardware device designed with the security of your data and cryptographic key material in mind. It is tamper-resistant hardware that prevents unauthorized users from attempting to pry open the device, plug in any extra devices to access data or keys such as subtokens, or damage the outside housing. The HSM device AWS CloudHSM offers is the Luna SA 7000 (also called Safenet Network HSM 7000), which is created by Gemalto. Depending on the firmware version you install, many of the security properties of these HSMs will have been validated under Federal Information Processing Standard (FIPS) 140-2, a standard issued by the National Institute of Standards and Technology (NIST) for cryptography modules. These standards are in place to protect the integrity and confidentiality of the data stored on cryptographic modules.

To help ensure its continued use, functionality, and support from AWS, we suggest that you update your AWS CloudHSM device software and firmware as well as the client instance software to current versions offered by AWS. As of the publication of this blog post, the current non-FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.20.2/firmware, and the current FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.10.9/firmware. (The firmware version determines FIPS validation.) It is important to know your current versions before updating so that you can follow the correct update path.

In this post, I demonstrate how to update your current CloudHSM devices and client instances so that you are using the most current versions of software and firmware. If you contact AWS Support for CloudHSM hardware and application issues, you will be required to update to these supported versions before proceeding. Also, any newly provisioned CloudHSM devices will use these supported software and firmware versions only, and AWS does not offer “downgrade options.”

Note: Before you perform any updates, check with your local CloudHSM administrator and application developer to verify that these updates will not conflict with your current applications or architecture.

Overview of the update process

To update your client and CloudHSM devices, you must use both update paths offered by AWS. The first path involves updating the software on your client instance, also known as a control instance. The second path updates the software and then the firmware on your CloudHSM devices. The CloudHSM software must be updated before the firmware because the firmware depends on the software to work correctly.

As I demonstrate in this post, the correct update order is:

  1. Updating your client instance
  2. Updating your CloudHSM software
  3. Updating your CloudHSM firmware

To update your client instance, you must have the private SSH key you created when you first set up your client environment. This key is used to connect via SSH protocol on port 22 of your client instance. If you have more than one client instance, you must repeat this connection and update process on each of them. The following diagram shows the flow of an SSH connection from your local network to your client instances in the AWS Cloud.

Diagram that shows the flow of an SSH connection from your local network to your client instances in the AWS Cloud

After you update your client instance to the most recent software (5.3.13), you then must update the CloudHSM device software and firmware. First, you must initiate an SSH connection from any one client instance to each CloudHSM device, as illustrated in the following diagram. A successful SSH connection will have you land at the Luna shell, denoted by lunash:>. Second, you must be able to initiate a Secure Copy (SCP) of files to each device from the client instance. Because the software and firmware updates require an elevated level of privilege, you must have the Security Officer (SO) password that you created when you initialized your CloudHSM devices.

Diagram illustrating the initiation of an SSH connection from any one client instance to each CloudHSM device

After you have completed all updates, you can receive enhanced troubleshooting assistance from AWS, if you need it. When new versions of software and firmware are released, AWS performs extensive testing to ensure your smooth transition from version to version.

Detailed guidance for updating your client instance, CloudHSM software, and CloudHSM firmware

1.  Updating your client instance

Let’s start by updating your client instances. My client instance and CloudHSM devices are in the eu-west-1 region, but these steps work the same in any AWS region. Because Gemalto offers client instances in both Linux and Windows, I will cover steps to update both. I will start with Linux. Please note that all commands should be run as the “root” user.

Updating the Linux client

  1. SSH from your local network into the client instance. You can do this from Linux or Windows. Typically, you would do this from the directory where you have stored your private SSH key, using a command like the following in a terminal or PuTTY. This initiates the SSH connection by pointing to the path of your SSH key and denoting the user name and IP address of your client instance.
    ssh -i /Users/Bob/Keys/CloudHSM_SSH_Key.pem <user_name>@<client_instance_IP>

  2. After the SSH connection is established, you must stop all applications and services on the instance that are using the CloudHSM device. This is required because you are removing old software and installing new software in its place. After you have stopped all applications and services, you can move on to remove the existing version of the client software.
    /usr/safenet/lunaclient/bin/uninstall.sh

This command will remove the old client software, but will not remove your configuration file or certificates. These will be saved in the Chrystoki.conf file of your /etc directory and in your /usr/safenet/lunaclient/cert directory. Do not delete these files because you will lose the configuration of your CloudHSM devices and client connections.

  3. Download the new client software package: cloudhsm-safenet-client. Extract the archive, and then run the install script.
    SafeNet-Luna-client-5-4-9/linux/64/install.sh

    Make sure you choose the Luna SA option when presented with it. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the Chrystoki.conf file, located at /etc/Chrystoki.conf, has the same path and name for the certificates as when you created them. (The path or names should not have changed, but you should still verify they are the same as before the update.)

  4. Check to ensure that the PATH environment variable points to the directory /usr/safenet/lunaclient/bin to ensure no issues when you restart applications and services. The update process for your Linux client instance is now complete.
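
As a quick way to script the check in step 4, assuming the default install path:

# Confirm the Luna client binaries are on PATH; add them for this session if not
echo "$PATH" | grep -q "/usr/safenet/lunaclient/bin" || \
    export PATH="$PATH:/usr/safenet/lunaclient/bin"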

Updating the Windows client

Use the following steps to update your Windows client instances:

  1. Use Remote Desktop Protocol (RDP) from your local network into the client instance. You can accomplish this with the RDP application of your choice.
  2. After you establish the RDP connection, stop all applications and services on the instance that are using the CloudHSM device. This is required because you will remove old software and install new software in its place or overwrite it. If your client software version is older than 5.4.1, you need to completely remove it and all patches by using Programs and Features in the Windows Control Panel. If your client software version is 5.4.1 or newer, proceed without removing the software. Your configuration file will remain intact in the crystoki.ini file of your C:\Program Files\SafeNet\Lunaclient\ directory. All certificates are preserved in the C:\Program Files\SafeNet\Lunaclient\cert\ directory. Again, do not delete these files, or you will lose all configuration and client connection data.
  3. After you have completed these steps, download the new client software: cloudhsm-safenet-client. Extract the archive from the downloaded file, and launch the SafeNet-Luna-client-5-4-9\win\64\Lunaclient.msi installer. Choose the Luna SA option when it is presented to you. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the crystoki.ini file, which is located at C:\Program Files\SafeNet\Lunaclient\crystoki.ini, has the same path and name for the certificates as when you created them. (The path and names should not have changed, but you should still verify they are the same as before the update.)
  4. Make one last check to ensure the PATH environment variable points to the directory C:\Program Files\SafeNet\Lunaclient\ to help ensure no issues when you restart applications and services. The update process for your Windows client instance is now complete.

2.  Updating your CloudHSM software

Now that your clients are up to date with the most current software version, it’s time to move on to your CloudHSM devices. A few important notes:

  • Back up your data to a Luna SA Backup device. AWS does not sell or support the Luna SA Backup devices, but you can purchase them from Gemalto. We do, however, offer the steps to back up your data to a Luna SA Backup device. Do not update your CloudHSM devices without backing up your data first.
  • If the names of your clients used for Network Trust Link Service (NTLS) connections have a capital “T” as the eighth character, the clients will not work after this update. This is because of a Gemalto naming convention. Before upgrading, ensure that you modify your client names accordingly (a short check is sketched after this list). The NTLS connection uses two-way digital certificate authentication and SSL data encryption to protect sensitive data transmitted between your CloudHSM device and the client instances.
  • The syslog configuration for the CloudHSM devices will be lost. After the update is complete, notify AWS Support and we will update the configuration for you.
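
The client-name check mentioned above can be scripted. This is a small helper of my own rather than part of the AWS procedure; feed it the names reported by lunash (for example, via the client list command).

# Flag NTLS client names whose eighth character is a capital "T" so they can be
# renamed before the update. The example names are placeholders.
for name in LunaAppTClient websrv01; do
    if [ "${name:7:1}" = "T" ]; then
        echo "Rename before updating: $name"
    fi
done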

Now on to updating the software versions. There are actually three different update paths to follow, and I will cover each. Depending on the current software versions on your CloudHSM devices, you might need to follow all three or just one.

Updating the software from version 5.1.x to 5.1.5

If you are running any version of the software older than 5.1.5, you must first update to version 5.1.5 before proceeding. To update to version 5.1.5:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command from your client instance to copy the lunasa_update-5.1.5-2.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.1.5-2.spkg manager@<hsm_ip_address>:

    <private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM elastic network interface (ENI). The ENI is the network endpoint that permits access to your CloudHSM device. The IP address was supplied to you when the CloudHSM device was provisioned.

  5. Use the following command to connect to your CloudHSM device and log in with your Security Officer (SO) password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.1.5-2.spkg -authcode <auth_code>

    lunash:> package update lunasa_update-5.1.5-2.spkg -authcode <auth_code>

    The value you will use for <auth_code> is contained in the lunasa_update-5.1.5-2.auth file found in the 630-010165-018_reva.tar archive you downloaded in Step 2.

  7. Reboot the CloudHSM device by running the following command.
    lunash:> sysconf appliance reboot

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.1.5. You can now move on to update to version 5.3.10.

Updating the software to version 5.3.10

You can update to version 5.3.10 only if you are currently running version 5.1.5. To update to version 5.3.10:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the v 5.3.10 Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command to copy the lunasa_update-5.3.10-7.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.3.10-7.spkg manager@<hsm_ip_address>:

    <private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  5. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.3.10-7.spkg -authcode <auth_code>

    lunash:> package update lunasa_update-5.3.10-7.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the lunasa_update-5.3.10-7.auth file found in the SafeNet-Luna-SA-5-3-10.zip archive you downloaded in Step 2.

  7. Reboot the CloudHSM device by running the following command.
    lunash:> sysconf appliance reboot

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.10. You can now move on to update to version 5.3.13.

Note: Do not configure your applog settings at this point; you must first update the software to version 5.3.13 in the following step.

Updating the software to version 5.3.13

You can update to version 5.3.13 only if you are currently running version 5.3.10. If you are not already running version 5.3.10, follow the two update paths mentioned previously in this section.

To update to version 5.3.13:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command to copy the lunasa_update-5.3.13-1.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.3.13-1.spkg manager@<hsm_ip_address>:

<private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  5. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.3.13-1.spkg -authcode <auth_code>

    lunash:> package update lunasa_update-5.3.13-1.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the lunasa_update-5.3.13-1.auth file found in the SafeNet-Luna-SA-5-3-13.zip archive that you downloaded in Step 2.

  7. When updating to this software version, the option to update the firmware also is offered. If you do not require a version of the firmware validated under FIPS 140-2, accept the firmware update to version 6.20.2. If you do require a version of the firmware validated under FIPS 140-2, do not accept the firmware update and instead update by using the steps in the next section, “Updating your CloudHSM FIPS 140-2 validated firmware.”
  8. After updating the CloudHSM device, reboot it by running the following command.
    lunash:> sysconf appliance reboot

  9. Disable NTLS IP checking on the CloudHSM device so that it can operate within its VPC. To do this, run the following command.
    lunash:> ntls ipcheck disable

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.13. If you don’t need the FIPS 140-2 validated firmware, you will have also updated the firmware to version 6.20.2. If you do need the FIPS 140-2 validated firmware, proceed to the next section.

3.  Updating your CloudHSM FIPS 140-2 validated firmware

To update the FIPS 140-2 validated version of the firmware to 6.10.9, use the following steps:

  1. Download version 6.10.9 of the firmware package.
  2. Extract all files from the archive.
  3. Run the following command to copy the 630-010430-010_SPKG_LunaFW_6.10.9.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> 630-010430-010_SPKG_LunaFW_6.10.9.spkg manager@<hsm_ip_address>:

<private_key_file> is the private portion of your SSH key pair, and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  4. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  5. Run the following commands to verify and then install the updated Luna SA firmware package.
    lunash:> package verify 630-010430-010_SPKG_LunaFW_6.10.9.spkg -authcode <auth_code>

    lunash:> package update 630-010430-010_SPKG_LunaFW_6.10.9.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the 630-010430-010_SPKG_LunaFW_6.10.9.auth file found in the 630-010430-010_SPKG_LunaFW_6.10.9.zip archive that you downloaded in Step 1.

  6. Run the following command to update the firmware of the CloudHSM devices.
    lunash:> hsm update firmware

  7. After you have updated the firmware, reboot the CloudHSM devices to complete the installation.
    lunash:> sysconf appliance reboot

Summary

In this blog post, I walked you through how to update your existing CloudHSM devices and clients so that they are using supported client, software, and firmware versions. Per AWS Support and CloudHSM Terms and Conditions, your devices and clients must use the most current supported software and firmware for continued troubleshooting assistance. Software and firmware versions regularly change based on customer use cases and requirements. Because AWS tests and validates all updates from Gemalto, you must install all updates for firmware and software by using our package links described in this post and elsewhere in our documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing this solution, please start a new thread on the CloudHSM forum.

– Tracy

Github Dorks – Github Security Scanning Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/tanX0m9KJM0/

Github search is quite a powerful and useful feature and can be used to search for sensitive data in repositories. This Github security scanning tool comes with a collection of Github dorks that can reveal sensitive personal and/or other proprietary organisational information such as private keys, credentials, authentication tokens and so on….

Read the full post at darknet.org.uk

Several openSUSE services disabled due to a security breach

Post Syndicated from corbet original https://lwn.net/Articles/722591/rss

The openSUSE project has announced that its authentication system has been breached and a number of services have been shut down or put into read-only mode. “This includes the openSUSE OBS, wiki, and forums.

The scope and impact of the breach is not yet fully clear. The disabling of authentication is to ensure the protection of our systems and user data while the situation is fully investigated.

Based on the information available at this time, there is a possibility that the breach is limited to users of non-openSUSE infrastructure that shares the same authentication system.” There does not appear to be reason to worry that the download infrastructure has been compromised.

Test Your Streaming Data Solution with the New Amazon Kinesis Data Generator

Post Syndicated from Allan MacInnis original https://aws.amazon.com/blogs/big-data/test-your-streaming-data-solution-with-the-new-amazon-kinesis-data-generator/

When building a streaming data solution, most customers want to test it with data that is similar to their production data. Creating this data and streaming it to your solution can often be the most tedious task in testing the solution.

Amazon Kinesis Streams and Amazon Kinesis Firehose enable you to continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Amazon Kinesis Analytics gives you the ability to use standard SQL to analyze and aggregate this data in real-time. It’s easy to create an Amazon Kinesis stream or Firehose delivery stream with just a few clicks in the AWS Management Console (or a few commands using the AWS CLI or Amazon Kinesis API). However, to generate a continuous stream of test data, you must write a custom process or script that runs continuously, using the AWS SDK or CLI to send test records to Amazon Kinesis. Although this task is necessary to adequately test your solution, it means more complexity and longer development and testing times.

Wouldn’t it be great if there were a user-friendly tool to generate test data and send it to Amazon Kinesis? Well, now there is—the Amazon Kinesis Data Generator (KDG).

KDG overview

The KDG simplifies the task of generating data and sending it to Amazon Kinesis. The tool provides a user-friendly UI that runs directly in your browser. With the KDG, you can do the following:

  • Create templates that represent records for your specific use cases
  • Populate the templates with fixed data or random data
  • Save the templates for future use
  • Continuously send thousands of records per second to your Amazon Kinesis stream or Firehose delivery stream

The KDG is open source, and you can find the source code on the Amazon Kinesis Data Generator repo in GitHub. Because the tool is a collection of static HTML and JavaScript files that run directly in your browser, you can start using it immediately without downloading or cloning the project. It is enabled as a static site in GitHub, and we created a short URL to access it.

To get started immediately, check it out at http://amzn.to/datagen.

Using the KDG

Getting started with the KDG requires only three short steps:

  1. Create an Amazon Cognito user in your AWS account (first-time only).
  2. Use this user’s credentials to log in to the KDG.
  3. Create a record template for your data.

When you’ve completed these steps, you can then send data to Streams or Firehose.

Create an Amazon Cognito user

The KDG is a great example of a mobile application that uses Amazon Cognito for a user repository and user authentication, and the AWS JavaScript SDK to communicate with AWS services directly from your browser. For information about how to build your own JavaScript application that uses Amazon Cognito, see Use Amazon Cognito in your website for simple AWS authentication on the AWS Mobile Blog.

Before you can start sending data to your Amazon Kinesis stream, you must create an Amazon Cognito user in your account who can write to Streams and Firehose. When you create the user, you create a username and password for that user. You use those credentials to sign in to the KDG. To simplify creating the Amazon Cognito user in your account, we created a Lambda function and a CloudFormation template. For more information about creating the Amazon Cognito user in your AWS account, see Configure Your AWS Account.

Note:  It’s important that you use the URL provided by the output of the CloudFormation stack the first time that you access the KDG. This URL contains parameters needed by the KDG. The KDG stores the values of these parameters locally, so you can then access the tool using the short URL, http://amzn.to/datagen.
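
If you would rather launch that CloudFormation stack from the CLI than from the console, the call looks roughly like the sketch below. The template URL and parameter names here are placeholders; use the exact ones given on the Configure Your AWS Account page.

aws cloudformation create-stack \
    --stack-name Kinesis-Data-Generator-Cognito-User \
    --template-url "https://<bucket-from-the-kdg-setup-page>/cognito-setup.template" \
    --parameters ParameterKey=Username,ParameterValue=kdg-user \
                 ParameterKey=Password,ParameterValue='<strong-password>' \
    --capabilities CAPABILITY_IAM

# The KDG sign-in URL is emitted as a stack output once creation completes
aws cloudformation describe-stacks \
    --stack-name Kinesis-Data-Generator-Cognito-User \
    --query "Stacks[0].Outputs"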

Log in to the KDG

After you create an Amazon Cognito user in your account, the next step is to log in to the KDG. To do this, provide the username and password that you created earlier.

On the main page, you can configure your data templates and send data to an Amazon Kinesis stream or Firehose delivery stream.

The basic configuration is simple enough. All fields on the page are required:

  • Region: Choose the AWS Region that contains the Amazon Kinesis stream or Firehose delivery stream to receive your streaming data.
  • Stream/firehose name: Choose the name of the stream or delivery stream to receive your streaming data.
  • Records per second: Enter the number of records to send to your stream or delivery stream each second.
  • Record template: Enter the raw data, or a template that represents your data structure, to be used for each record sent by the KDG. For information about creating templates for your data, see the “Creating Record Templates” section, later in this post.

When you set the Records per second value, consider that the KDG isn’t intended to be a data producer for load-testing your application. However, it can easily send several thousand records per second from a single tab in your browser, which is plenty of data for most applications. In testing, the KDG has produced 80,000 records per second to a single Amazon Kinesis stream, but your mileage may vary. The maximum rate at which it produces records depends on your computer’s specs and the complexity of your record template.

Ensure that your stream or delivery stream is scaled appropriately:

  • 1,000 records/second or 1 MB/second to an Amazon Kinesis stream
  • 5,000 records/second or 5 MB/second to a Firehose delivery stream

Otherwise, Amazon Kinesis may reject records, and you won’t achieve your desired throughput. For more information about adding capacity to a stream by adding more shards, see Resharding a Stream. For information about increasing the capacity of a delivery stream, see Amazon Kinesis Firehose Limits.
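
For an Amazon Kinesis stream, checking and adjusting capacity takes only a couple of CLI calls. This is a hedged example with placeholder names and counts; size the shard count for your own test throughput.

# Check the stream's current shard count and status
aws kinesis describe-stream-summary --stream-name kdg-test-stream

# Scale to 4 shards (roughly 4,000 records/second or 4 MB/second of ingest)
aws kinesis update-shard-count \
    --stream-name kdg-test-stream \
    --target-shard-count 4 \
    --scaling-type UNIFORM_SCALING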

Create record templates

The Record Template field is a free-text field where you can enter any text that represents a single streaming data record. You can create a single line of static data, so that each record sent to Amazon Kinesis is identical. Or, you can format the text as a template.

In this case, the KDG substitutes portions of the template with fake or random data before sending the record. This lets you introduce randomness or variability in each record that is sent in your data stream. The KDG uses Faker.js, an open source library, to generate fake data. For more information, see the faker.js project page in GitHub. The easiest way to see how this works is to review an example.

To simulate records being sent from a weather sensor Internet of Things (IoT) device, you want each record to be formatted in JSON. The following is an example of what a final record must look like:

{
	"sensorId": 40,
	"currentTemperature": 76,
	"status": "OK"
} 

For this use case, you want to simulate sending data from one of 50 sensors, so the sensorID field can be an integer between 1 and 50. The temperature value can range between 10 and 150, so the currentTemperature field should contain a value in this range. Finally, the status value can be one of three possible values: OK, FAIL, and WARN. The KDG template format uses moustache syntax (double curly-braces) to enclose items that should be replaced before the record is sent to Amazon Kinesis. To model the record, the template looks like this:

{
    "sensorId": {{random.number(50)}},
    "currentTemperature": {{random.number(
        {
            "min":10,
            "max":150
        }
    )}},
    "status": "{{random.arrayElement(
        ["OK","FAIL","WARN"]
    )}}"
}

Take a look at one more example, simulating a stream of records that represent rows from an Apache access log. A single Apache access log entry might look like this:

76.0.56.179 - - [29/Apr/2017:16:32:11 -05:00] "GET /wp-admin" 200 8233 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0 rv:6.0; CY) AppleWebKit/535.0.0 (KHTML, like Gecko) Version/4.0.3 Safari/535.0.0"

The following example shows how to create a template for the Apache access log:

{{internet.ip}} - - [{{date.now("DD/MMM/YYYY:HH:mm:ss Z")}}] "{{random.weightedArrayElement({"weights":[0.6,0.1,0.1,0.2],"data":["GET","POST","DELETE","PUT"]})}} {{random.arrayElement(["/list","/wp-content","/wp-admin","/explore","/search/tag/list","/app/main/posts","/posts/posts/explore"])}}" {{random.weightedArrayElement({"weights": [0.9,0.04,0.02,0.04], "data":["200","404","500","301"]})}} {{random.number(10000)}} "-" "{{internet.userAgent}}"

For more information about creating your own templates, see the Record Template section of the KDG documentation.

The KDG saves the templates that you create in your local browser storage. As long as you use the same browser on the same computer, you can reuse up to five templates.

Summary

Testing your streaming data solution has never been easier. Get started today by visiting the KDG hosted UI or its Amazon Kinesis Data Generator page in GitHub. The project is licensed under the Apache 2.0 license, so feel free to clone and modify it for your own use as necessary. And of course, please submit any issues or pull requests via GitHub.

If you have any questions or suggestions, please add them below.

 


About the Author

Allan MacInnis is a Solutions Architect at Amazon Web Services. He works with our customers to help them build streaming data solutions using Amazon Kinesis. In his spare time, he enjoys mountain biking and spending time with his family.

 

 



Criminals are Now Exploiting SS7 Flaws to Hack Smartphone Two-Factor Authentication Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/criminals_are_n.html

I’ve previously written about the serious vulnerabilities in the SS7 phone routing system. Basically, the system doesn’t authenticate messages. Now, criminals are using it to hack smartphone-based two-factor authentication systems:

In short, the issue with SS7 is that the network believes whatever you tell it. SS7 is especially used for data-roaming: when a phone user goes outside their own provider’s coverage, messages still need to get routed to them. But anyone with SS7 access, which can be purchased for around 1000 Euros according to The Süddeutsche Zeitung, can send a routing request, and the network may not authenticate where the message is coming from.

That allows the attacker to direct a target’s text messages to another device, and, in the case of the bank accounts, steal any codes needed to login or greenlight money transfers (after the hackers obtained victim passwords).

Intel AMT on wireless networks

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/48837.html

More details about Intel’s AMT vulnerability have been released – it’s about the worst case scenario, in that it’s a total authentication bypass that appears to exist independent of whether the AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn’t super bad, since Shodan indicated that there were only a few thousand machines on the public internet and accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn’t a likely initial vector.

I’ve since done some more playing and come to the conclusion that it’s rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option – if you simply provision AMT it won’t be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:

  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it’s been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it – there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you’ve installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel’s wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.

Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you’re running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn’t received a firmware update, they’ll be able to do so without needing any valid credentials.

If you’re a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn’t block client to client traffic, of course


Amazon Chime Update – Use Your Existing Active Directory, Claim Your Domain

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-chime-update-use-your-existing-active-directory-claim-your-domain/

I first told you about Amazon Chime this past February (Amazon Chime – Unified Communications Service) and told you how I connect and collaborate with people all over the world.

Since the launch, Amazon Chime has quickly become the communication tool of choice within the AWS team. I participate in multiple person-to-person and group chats throughout the day, and frequently “Chime In” to Amazon Chime-powered conferences to discuss upcoming launches and speaking opportunities.

Today we are adding two new features to Amazon Chime: the ability to claim a domain as your own and support for your existing Active Directory.

Claiming a Domain
Claiming a domain gives you the authority to manage Amazon Chime usage for all of the users in the domain. You can make sure that new employees sign up for Amazon Chime in an official fashion and you can suspend accounts for employees that leave the organization.

To claim a domain, you assert that you own a particular domain name and then back up the assertion by adding a TXT record to your domain’s DNS. You must do this for each domain and subdomain that your organization uses for email addresses.

Here’s how I would claim one of my own domains:

After I click on Verify this domain, Amazon Chime provides me with the record for my DNS:

After I do this, the domain’s status will change to Pending Verification. Once Amazon Chime has confirmed that the new record exists as expected, the status will change to Verified and the team account will become an enterprise account.
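
If the domain’s DNS is hosted in Amazon Route 53, adding the verification record can also be done from the CLI. This is a hedged sketch: the hosted zone ID, domain, and record value are placeholders, and the value must be the exact string the Amazon Chime console displays.

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1234567890ABC \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com.",
          "Type": "TXT",
          "TTL": 300,
          "ResourceRecords": [{"Value": "\"<verification-string-from-the-chime-console>\""}]
        }
      }]
    }'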

Active Directory Support
This feature allows your users to sign in to Amazon Chime using their existing Active Directory identity and credentials. After you have set it up, you can enable and take advantage of advanced AD security features such as password rotation, password complexity rules, and multi-factor authentication. You can also control the allocation of Amazon Chime’s Plus and Pro licenses on a group-by-group basis (check out Plans and Pricing to learn more about each type of license).

In order to use this feature, you must be using an Amazon Chime enterprise account. If you are using a team account, follow the directions at Create an Enterprise Account before proceeding.

Then you will need to set up a directory with the AWS Directory Service. You have two options at this point:

  1. Use the AWS Directory Service AD Connector to connect to your existing on-premises Active Directory instance.
  2. Use Microsoft Active Directory, configured for standalone use. Read How to Create a Microsoft AD Directory for more information on this option.

After you have set up your directory, you can connect to it from within the Amazon Chime console by clicking on Settings and Active directory and choosing your directory from the drop-down:

After you have done this you can select individual groups within the directory and assign the appropriate subscriptions (Plus or Pro) on a group-by-group basis.

With everything set up as desired, your users can log in to Amazon Chime using their existing directory credentials.

These new features are available now and you can start using them today!

If you would like to learn more about Amazon Chime, you can watch the recent AWS Tech Talk: Modernize Meetings with Amazon Chime:

Here is the presentation from the talk:

Jeff;

 

Forging Voice

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/forging_voice.html

LyreBird is a system that can accurately reproduce the voice of someone, given a large amount of sample inputs. It’s pretty good — listen to the demo here — and will only get better over time.

The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.

I don’t think we’re ready for this. We use people’s voices to authenticate them all the time, in all sorts of different ways.

EDITED TO ADD (5/11): This is from 2003 on the topic.

Visualize Big Data with Amazon QuickSight, Presto, and Apache Spark on Amazon EMR

Post Syndicated from Luis Wang original https://aws.amazon.com/blogs/big-data/visualize-big-data-with-amazon-quicksight-presto-and-apache-spark-on-amazon-emr/

Last December, we introduced the Amazon Athena connector in Amazon QuickSight, in the Derive Insights from IoT in Minutes using AWS IoT, Amazon Kinesis Firehose, Amazon Athena, and Amazon QuickSight post.

The connector allows you to visualize your big data easily in Amazon S3 using Athena’s interactive query engine in a serverless fashion. This turned out to be a very popular combination, as customers benefit from the speed, agility, and cost benefit that serverless business intelligence (BI) and analytics architecture brings.

Today, we’re excited to announce two new native connectors in QuickSight for big data analytics: Presto and Spark. With the Presto and SparkSQL connector in QuickSight, you can easily create interactive visualizations over large datasets using Amazon EMR.

EMR provides a simple and cost effective way to run highly distributed processing frameworks such as Presto and Spark when compared to on-premises deployments. EMR provides you with the flexibility to define specific compute, memory, storage, and application parameters and optimize your analytic requirements.

In this post, I walk you through connecting QuickSight to an EMR cluster running Presto. If you’d like a walkthrough with Spark, let us know in the comments section!

Presto overview

Presto is an open source, distributed SQL query engine for running interactive analytic queries against data sources ranging from gigabytes to petabytes. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and window functions. Presto can run on multiple data sources, including Amazon S3.

Presto’s execution framework is fundamentally different from that of Hive/MapReduce. Presto has a custom query and execution engine where the stages of execution are pipelined, similar to a directed acyclic graph (DAG), and all processing occurs in memory to reduce disk I/O. This pipelined execution model can run multiple stages in parallel and streams data from one stage to another as the data becomes available. This reduces end-to-end latency and makes Presto a great tool for ad hoc data exploration over large data sets.
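
As a point of reference, once the cluster is up you can run this kind of ad hoc aggregation directly from the EMR master node with the Presto CLI. The table and column names below are placeholders, not data from this walkthrough.

presto-cli --catalog hive --schema default --execute "
    SELECT status, count(*) AS requests
    FROM   access_logs
    GROUP  BY status
    ORDER  BY requests DESC"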

Walkthrough

Use the following steps to connect QuickSight to an EMR cluster running Presto:

  1. Create an EMR cluster with the latest 5.5.0 release.
  2. Configure LDAP for user authentication in QuickSight.
  3. Configure SSL using a QuickSight supported certificate authority (CA).
  4. Create tables for Presto in the Hive metastore.
  5. Whitelist the QuickSight IP address range in your EMR master security group rules.
  6. Connect QuickSight to Presto and create some visualizations.

Prerequisites

You need to run Presto version 0.167, at a minimum, which is the first release that supports LDAP authentication. LDAP authentication is a requirement for the Presto and Spark connectors, and QuickSight refuses to connect if LDAP is not configured on your cluster.

Create an EMR cluster with release version 5.5.0

In the EMR console, use the Quick Create option to create a cluster.  For this post, use most of the default settings with a few exceptions. To install both Presto and Spark on your cluster (and customize other settings), create your cluster from the Advanced Options wizard instead.

Make sure that EMR release 5.5.0 is selected and under Applications, choose Presto. If you have an EC2 key pair, you can use it. Otherwise, create a key pair (.PEM file) and then return to this page to create the cluster. 

Make sure that you configure your cluster’s security group inbound rules to allow SSH from your machine’s IP address range. 
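
The Quick Create settings described above correspond roughly to the following CLI call. It is a sketch rather than the exact command behind the console: the key pair name, instance type, and instance count are placeholders you should adjust for your account.

aws emr create-cluster \
    --name "presto-quicksight-demo" \
    --release-label emr-5.5.0 \
    --applications Name=Hadoop Name=Hive Name=Presto \
    --instance-type m4.large \
    --instance-count 3 \
    --ec2-attributes KeyName=<your-key-pair> \
    --use-default-roles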

Configure LDAP for user authentication in QuickSight

After your cluster is in a running state, connect using SSH to your cluster to configure LDAP authentication.

To SSH into your EMR cluster, use the following commands in the terminal:

chmod 600 ~/YOUR_PEM_FILE.pem
ssh -i ~/YOUR_PEM_FILE.pem hadoop@YOUR_MASTER_PUBLIC_DNS_FROM_EMR_CLUSTER

After you log in, install OpenLDAP, configure it, and create users in the directory. For more about configuring LDAP, see Editing /etc/openldap/slapd.conf in the OpenLDAP documentation.

# Install LDAP Server
 sudo yum install openldap openldap-servers openldap-clients
# Create the config files
sudo cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
sudo cp /usr/share/openldap-servers/slapd.conf.obsolete /etc/openldap/slapd.conf
# Bounce LDAP
sudo service slapd restart

After LDAP is installed and restarted, you issue a couple of commands to set the LDAP root password. First, run the following command and enter a root password for LDAP when prompted:

slappasswd

Save the output hash, which looks like this:

{SSHA}DmD616c3yZyKndsccebZK/vmWiaQde83

Now, prepare the commands to set the password for the LDAP root. Make sure to replace the hash below with the one that you generated in the previous step:

cat > /tmp/config.ldif <<EOF
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}DmD616c3yZyKndsccebZK/vmWiaQde83

dn: olcDatabase={2}bdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}DmD616c3yZyKndsccebZK/vmWiaQde83
-
replace: olcRootDN
olcRootDN: cn=dev,dc=example,dc=com
-
replace: olcSuffix
olcSuffix: dc=example,dc=com
EOF

Run the following command to execute the above commands against LDAP:

sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/config.ldif

Next, create a user account with a password in the LDAP directory with the following commands. When prompted for a password, use the LDAP root password that you created in the previous step.

cat > /tmp/accounts.ldif <<EOF
dn: dc=example,dc=com
objectclass: domain
objectclass: top
dc: example

dn: ou=dev,dc=example,dc=com
objectclass: organizationalUnit
ou: dev
description: Container for developer entries

dn: uid=<REPLACE_WITH_YOUR_USER_NAME>,ou=dev,dc=example,dc=com
uid: <REPLACE_WITH_YOUR_USER_NAME>
objectClass: inetOrgPerson
userPassword: <REPLACE_WITH_STRONG_PASSWORD>
sn: <REPLACE_WITH_SURNAME>
cn: dev
EOF

ldapadd -D "cn=dev,dc=example,dc=com" -W -f /tmp/accounts.ldif

You now have OpenLDAP configured on your EMR cluster running Presto and a user that you later use to authenticate against when connecting to Presto.
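
Before moving on, it’s worth confirming that the new user can bind to the directory. The following check is a sketch that assumes the user and suffix created above; you are prompted for the password you set in accounts.ldif:

ldapsearch -x -H ldap://localhost \
  -D "uid=YOUR_USER_NAME,ou=dev,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(uid=YOUR_USER_NAME)"

If the entry comes back without an error, the account is ready for Presto to bind against.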

Configure SSL using a QuickSight supported certificate authority

To ensure that any communication between QuickSight and Presto is secured, QuickSight requires that the connection be established with SSL enabled. You need to obtain a certificate from a certificate authority (CA) that QuickSight trusts. You can find the full list of public CAs accepted by QuickSight in the Network and Database Configuration Requirements topic.

To set up SSL on LDAP and Presto, obtain the following three SSL certificate files from your CA and store them in the /home/hadoop/ directory.

  • Certificate key file
  • Certificate file
  • CA certificate

Configure the keys in LDAP with the following commands:

cat > /tmp/ca.ldif <<EOF
dn: cn=config
changetype: modify
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /home/hadoop/certificateKey.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /home/hadoop/certificate.pem
-
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /home/hadoop/cacertificate.pem
EOF

sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f /tmp/ca.ldif

Now, enable SSL in LDAP by editing the /etc/sysconfig/ldap file and setting SLAPD_LDAPS=yes:

sudo vi /etc/sysconfig/ldap

SLAPD_LDAPS=yes

sudo service slapd restart
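
Optionally, confirm that slapd is now answering TLS on port 636. This is a quick sketch using openssl and the CA certificate path used above:

openssl s_client -connect localhost:636 -CAfile /home/hadoop/cacertificate.pem </dev/null

A successful handshake with a verify return code of 0 means LDAPS is ready for Presto.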

Use the following commands to generate a keystore. You will be prompted to provide a password for the keystore.

openssl pkcs12 -inkey certificateKey.pem -in certificate.pem -export -out server-key.p12

keytool -importkeystore -srckeystore server-key.p12 -srcstoretype PKCS12 -destkeystore server.keystore
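
Optionally, list the keystore contents to confirm that the key was imported; you are prompted for the keystore password that you just set:

keytool -list -keystore server.keystore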

Edit the configuration files for Presto in EMR.

SERVERNAME=<PUBLIC_DNS_NAME_OF_EMR_CLUSTER>
cd /etc/presto/conf

# Enable LDAPS auth for Presto
echo http-server.authentication.type=LDAP | sudo tee -a config.properties
echo authentication.ldap.url=ldaps://${SERVERNAME}:636 | sudo tee -a config.properties
echo authentication.ldap.user-bind-pattern=uid=\${USER},OU=dev,DC=example,DC=com | sudo tee -a config.properties

# Enable SSL for the Presto server
echo http-server.https.enabled=true | sudo tee -a config.properties
echo http-server.https.port=<PORT_NUMBER> | sudo tee -a config.properties
echo http-server.https.keystore.path=/home/hadoop/server.keystore | sudo tee -a config.properties
echo http-server.https.keystore.key=<KEYSTORE_PASSWORD> | sudo tee -a config.properties

# Bounce Presto to pick up the new config
sudo pkill presto
# wait until presto is up
while [[ 1 ]]; do pgrep presto; if [ $? -eq 0 ]; then break; else echo -n .; sleep 1; fi; done
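
To confirm that Presto came back up with HTTPS and LDAP authentication enabled, you can query the coordinator’s /v1/info REST endpoint. This is a sketch; give the server a few seconds after the restart, and note that curl prompts for the LDAP user’s password:

# If verification fails because of a missing intermediate certificate, -k gives a quick (insecure) check
curl --cacert /home/hadoop/cacertificate.pem -u <USERNAME> https://${SERVERNAME}:<PORT_NUMBER>/v1/info

A small JSON document describing the coordinator indicates that the server is up.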

Create tables for Presto in the Hive metastore

Now that you have a running EMR cluster with Presto and LDAP set up, you can load some sample data into the cluster for analysis. Use the same CloudFront log sample data set that is available for Athena. The following commands start the Hive CLI and create an external table over the sample data set in Amazon S3:

# Run hive
hive

#Create table and load data
CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
  Date Date,
  Time STRING,
  Location STRING,
  Bytes INT,
  RequestIP STRING,
  Method STRING,
  Host STRING,
  Uri STRING,
  Status INT,
  Referrer STRING,
  OS String,
  Browser String,
  BrowserVersion String
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "^(?!#)([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+[^\()+[\()([^\;]+).*\%20([^\/]+)[\/](.*)$"
) LOCATION 's3://athena-examples/cloudfront/plaintext/'; 

# quit hive
quit;

Try to query the data using the Presto CLI with the following commands:

#Run the Presto CLI
presto-cli --server https://<PUBLIC_DNS_NAME_OF_EMR_CLUSTER>:<PORT_NUMBER> --user <USERNAME> --password --catalog hive

#Issue query to Presto
SELECT * FROM cloudfront_logs limit 10;

You should see an output from Presto like the following:
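
If the query returns rows, the connection and authentication are working. As a further check, you can run an aggregation similar to the one you build in QuickSight later; this is a sketch using the Presto CLI’s non-interactive --execute option:

presto-cli --server https://<PUBLIC_DNS_NAME_OF_EMR_CLUSTER>:<PORT_NUMBER> \
  --user <USERNAME> --password --catalog hive --schema default \
  --execute "SELECT OS, count(*) AS requests FROM cloudfront_logs GROUP BY OS ORDER BY requests DESC"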

Whitelist the QuickSight IP address range in your EMR master security group rules

Now you’re ready to connect QuickSight to Presto. For QuickSight to connect, Presto must be reachable from QuickSight’s public endpoints, so add the QuickSight IP address ranges to your EMR master node security group.
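
If you prefer the command line, you can add the inbound rule with the AWS CLI. This is a minimal sketch; the security group ID is a placeholder, the port is the HTTPS port you configured for Presto, and the correct QuickSight IP address range for your region is listed in the QuickSight Network and Database Configuration Requirements topic:

aws ec2 authorize-security-group-ingress \
  --group-id YOUR_EMR_MASTER_SECURITY_GROUP_ID \
  --protocol tcp --port <PORT_NUMBER> \
  --cidr QUICKSIGHT_IP_RANGE_FOR_YOUR_REGION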

Connect QuickSight to Presto and create some visualizations

If you have not already signed up for QuickSight, you can do so at https://quicksight.aws. QuickSight offers a perpetual free tier that includes 1 user and 1 GB of capacity.

After you’re signed up for QuickSight, navigate to the New Analysis page and the New Data Set page. You see the new Presto and Spark connector as in the following screenshot.

Open the Presto connector, provide the connection details in the modal window, and choose Create data source.

Select the default schema and choose the cloudfront_logs table that you just created.

In QuickSight, you can choose between importing the data into SPICE for analysis or directly querying your data in Presto. SPICE is an in-memory optimized columnar engine in QuickSight that enables fast, interactive visualization as you explore your data. For this post, choose to import the data into SPICE and choose Visualize.

In the analysis view, you can see the notification that shows import is complete with 4996 rows imported. On the left, you see the list of fields available in the data set and below, the various types of visualizations from which you can choose.

QuickSight makes it easy for you to create visualizations and analyze data with AutoGraph, a feature that automatically selects the best visualization for you based on selected fields.

To create a visualization, select the fields on the left panel. In this case, look at the number of connections to CloudFront ordered by the various OS types by selecting the OS field. Additionally, you can select the Bytes field to look at total bytes transferred by OS instead of the count.

Summary

You just finished creating an EMR cluster, setting up Presto and LDAP with SSL, and using QuickSight to visualize your data. I hope this post was helpful. Feel free to reach out if you have any questions or suggestions.

Learn more

To learn more about these capabilities and start using them in your dashboards, check out the QuickSight User Guide.

Stay engaged

If you have questions and suggestions, you can post them on the QuickSight forum. 

Not a QuickSight user?

Go to the QuickSight website to get started for FREE.

 

Intel’s remote AMT vulnerability

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/48429.html

Intel just announced a vulnerability in their Active Management Technology stack. Here’s what we know so far.

Background

Intel chipsets for some years have included a Management Engine, a small microprocessor that runs independently of the main CPU and operating system. Various pieces of software run on the ME, ranging from code to handle media DRM to an implementation of a TPM. AMT is another piece of software running on the ME, albeit one that takes advantage of a wide range of ME features.

Active Management Technology

AMT is intended to provide IT departments with a means to manage client systems. When AMT is enabled, any packets sent to the machine’s wired network port on port 16992 will be redirected to the ME and passed on to AMT – the OS never sees these packets. AMT provides a web UI that allows you to do things like reboot a machine, provide remote install media or even (if the OS is configured appropriately) get a remote console. Access to AMT requires a password – the implication of this vulnerability is that that password can be bypassed.

Remote management

AMT has two types of remote console: emulated serial and full graphical. The emulated serial console requires only that the operating system run a console on that serial port, while the graphical environment requires drivers on the OS side. However, an attacker who enables emulated serial support may be able to use that to configure grub to enable a serial console. The remote graphical console seems to be problematic under Linux, but some people claim to have it working, in which case an attacker would be able to interact with your graphical console as if you were physically present. Yes, this is terrifying.

Remote media

AMT supports providing an ISO remotely. In older versions of AMT (before 11.0) this was in the form of an emulated IDE controller. In 11.0 and later, this takes the form of an emulated USB device. The nice thing about the latter is that any image provided that way will probably be automounted if there’s a logged in user, which probably means it’s possible to use a malformed filesystem to get arbitrary code execution in the kernel. Fun!

The other part of the remote media is that systems will happily boot off it. An attacker can reboot a system into their own OS and examine drive contents at their leisure. This doesn’t let them bypass disk encryption in a straightforward way[1], so you should probably enable that.

How bad is this

That depends. Unless you’ve explicitly enabled AMT at any point, you’re probably fine. The drivers that allow local users to provision the system would require administrative rights to install, so as long as you don’t have them installed then the only local users who can do anything are the ones who are admins anyway. If you do have it enabled, though…

How do I know if I have it enabled?

Yeah this is way more annoying than it should be. First of all, does your system even support AMT? AMT requires a few things:

1) A supported CPU
2) A supported chipset
3) Supported network hardware
4) The ME firmware to contain the AMT firmware

Merely having a “vPro” CPU and chipset isn’t sufficient – your system vendor also needs to have licensed the AMT code. Under Linux, if lspci doesn’t show a communication controller with “MEI” in the description, AMT isn’t running and you’re safe. If it does show an MEI controller, that still doesn’t mean you’re vulnerable – AMT may still not be provisioned. If you reboot you should see a brief firmware splash mentioning the ME. Hitting ctrl+p at this point should get you into a menu which should let you disable AMT.
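
As a rough illustration, a check from a Linux shell might look like the following. This is a sketch: an MEI/HECI device only means the interface exists, not that AMT is provisioned, and the port probe has to be run from another machine on the same wired network against your machine’s address (192.0.2.10 here is a placeholder):

# Look for the Management Engine Interface (HECI is an older name for the same thing)
lspci | grep -iE 'mei|heci'
# Probe the ports the AMT web UI normally uses, from a different host
nmap -p 16992,16993 192.0.2.10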

What do we not know?

We have zero information about the vulnerability, other than that it allows unauthenticated access to AMT. One big thing that’s not clear at the moment is whether this affects all AMT setups, setups that are in Small Business Mode, or setups that are in Enterprise Mode. If the latter, the impact on individual end-users will be basically zero – Enterprise Mode involves a bunch of effort to configure and nobody’s doing that for their home systems. If it affects all systems, or just systems in Small Business Mode, things are likely to be worse.

What should I do?

Make sure AMT is disabled. If it’s your own computer, you should then have nothing else to worry about. If you’re a Windows admin with untrusted users, you should also disable or uninstall LMS (Intel’s Local Manageability Service) by following these instructions.

Does this mean every Intel system built since 2008 can be taken over by hackers?

No. Most Intel systems don’t ship with AMT. Most Intel systems with AMT don’t have it turned on.

Does this allow persistent compromise of the system?

Not in any novel way. An attacker could disable Secure Boot and install a backdoored bootloader, just as they could with physical access.

But isn’t the ME a giant backdoor with arbitrary access to RAM?

Yes, but there’s no indication that this vulnerability allows execution of arbitrary code on the ME – it looks like it’s just (ha ha) an authentication bypass for AMT.

Is this a big deal anyway?

Yes. Fixing this requires a system firmware update in order to provide new ME firmware (including an updated copy of the AMT code). Many of the affected machines are no longer receiving firmware updates from their manufacturers, and so will probably never get a fix. Anyone who ever enables AMT on one of these devices will be vulnerable. That’s ignoring the fact that firmware updates are rarely flagged as security critical (they don’t generally come via Windows update), so even when updates are made available, users probably won’t know about them or install them.

Avoiding this kind of thing in future

Users ought to have full control over what’s running on their systems, including the ME. If a vendor is no longer providing updates then it should at least be possible for a sufficiently desperate user to pay someone else to do a firmware build with the appropriate fixes. Leaving firmware updates at the whims of hardware manufacturers who will only support systems for a fraction of their useful lifespan is inevitably going to end badly.

How certain are you about any of this?

Not hugely – the quality of public documentation on AMT isn’t wonderful, and while I’ve spent some time playing with it (and related technologies) I’m not an expert. If anything above seems inaccurate, let me know and I’ll fix it.

[1] Eh well. They could reboot into their own OS, modify your initramfs (because that’s not signed even if you’re using UEFI Secure Boot) such that it writes a copy of your disk passphrase to /boot before unlocking it, wait for you to type in your passphrase, reboot again and gain access. Sealing the encryption key to the TPM would avoid this.

comment count unavailable comments

How to Enable the Use of Remote Desktops by Deploying Microsoft Remote Desktop Licensing Manager on AWS Microsoft AD

Post Syndicated from Ron Cully original https://aws.amazon.com/blogs/security/how-to-enable-the-use-of-remote-desktops-by-deploying-microsoft-remote-desktop-licensing-manager-on-aws-microsoft-ad/

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, now supports Microsoft Remote Desktop Licensing Manager (RD Licensing). By using AWS Microsoft AD as the directory for your Remote Desktop Services solution, you reduce the time it takes to deploy remote desktop solutions on Amazon EC2 for Windows Server instances, and you enable your users to use remote desktops with the credentials they already know. In this blog post, I explain how to deploy RD Licensing Manager on AWS Microsoft AD to enable your users to sign in to remote desktops by using credentials stored in an AWS Microsoft AD or an on-premises Active Directory (AD) domain.

Enable your AWS Microsoft AD users to open remote desktop sessions

To use RD Licensing, you must authorize RD Licensing servers in the same Active Directory domain as the Windows Remote Desktop Session Hosts (RD Session Hosts) by adding them to the Terminal Service Licensing Server security group in AD. This new release grants your AWS Microsoft AD administrative account permissions to do this. As a result, you can now deploy RD Session Hosts in the AWS Cloud without the extra time and effort to set up and configure your own AD domain on Amazon EC2 for Windows Server.

The following diagram illustrates the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD and shows what happens when users connect to remote desktops.

Diagram illustrating the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD

In detail, here is how the process works, as it is illustrated in the preceding diagram:

  1. Create an AWS Microsoft AD directory and create users in the directory. You can add user accounts (in this case jsmith) using Active Directory Users and Computers on an EC2 for Windows Server instance that you joined to the domain.
  2. Create EC2 for Windows Server instances to use as your RD Licensing servers (RDLS1 in the preceding diagram). Add the instances to the same domain to which you will join your Windows Remote Desktop Session Hosts (RD Session Hosts).
  3. Configure your EC2 for Windows Server instances as RD Licensing servers and add them to the Terminal Service Licensing Servers security group in AWS Microsoft AD. You can connect to the instances from the AWS Management Console to configure RD Licensing. You also can use Active Directory Users and Computers to add the RD Licensing servers to the security group, thereby authorizing the instances for RD Licensing.
  4. Install your Remote Desktop Services client access licenses (RDS CALs) on the RD Licensing server. You can connect to the instances from the AWS Management Console to install the RDS CALs.
  5. Create other hosts for use as RD Session Hosts (RDSH1 in the diagram). Add the hosts to the same domain as your RD Licensing servers.
  6. A user (in this case jsmith) attempts to open an RDS session.
  7. The RD Session Host requests an RDS CAL from the RD Licensing Server.
  8. The RD Licensing Server returns an RDS CAL to the RD Session Host.

Because the user exists in AWS Microsoft AD, authentication happens against AWS Microsoft AD. The order of authentication relative to session creation depends on whether you configure your RD Session Host for Network Level Authentication.

Enable your users to open remote desktop sessions with their on-premises credentials

If you have an on-premises AD domain with users, your users can open remote desktop sessions with their on-premises credentials if you create a forest trust from AWS Microsoft AD to your Active Directory. The trust enables using on-premises credentials without the need for complex directory synchronization or replication. The following diagram illustrates how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a. With the trust in place, AWS Microsoft AD refers the RD Session Host to the on-premises domain for authentication.

Diagram illustrating how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a

Summary

In this post, I have explained how to authorize RD Licensing in AWS Microsoft AD to support EC2-based remote desktop sessions for AWS managed users and on-premises AD managed users. To learn more about how to use AWS Microsoft AD, see the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, please start a new thread on the Directory Service forum.

– Ron

Looking at the Netgear Arlo home IP camera

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/48215.html

Another in the series of looking at the security of IoT type objects. This time I’ve gone for the Arlo network connected cameras produced by Netgear, specifically the stock Arlo base system with a single camera. The base station is based on a Broadcom 5358 SoC with an 802.11n radio, along with a single Broadcom gigabit ethernet interface. Other than it only having a single ethernet port, this looks pretty much like a standard Netgear router. There’s a convenient unpopulated header on the board that turns out to be a serial console, so getting a shell is only a few minutes work.

Normal setup is straightforward. You plug the base station into a router, wait for all the lights to come on and then you visit arlo.netgear.com and follow the setup instructions – by this point the base station has connected to Netgear’s cloud service and you’re just associating it to your account. Security here is straightforward: you need to be coming from the same IP address as the Arlo. For most home users with NAT this works fine. I sat frustrated as it repeatedly failed to find any devices, before finally moving everything behind a backup router (my main network isn’t NATted) for initial setup. Once you and the Arlo are on the same IP address, the site shows you the base station’s serial number for confirmation and then you attach it to your account. Next step is adding cameras. Each base station is broadcasting an 802.11 network on the 2.4GHz spectrum. You connect a camera by pressing the sync button on the base station and then the sync button on the camera. The camera associates with the base station via WDS and now you’re up and running.

This is the point where I get bored and stop following instructions, but if you’re using a desktop browser (rather than using the mobile app) you appear to need Flash in order to actually see any of the camera footage. Bleah.

But back to the device itself. The first thing I traced was the initial device association. What I found was that once the device is associated with an account, it can’t be attached to another account. This is good – I can’t simply request that devices be rebound to my account from someone else’s. Further, while the serial number is displayed to the user to disambiguate between devices, it doesn’t seem to be what’s used internally. Tracing the logon traffic from the base station shows it sending a long random device ID along with an authentication token. If you perform a factory reset, these values are regenerated. The device to account mapping seems to be based on this random device ID, which means that once the device is reset and bound to another account there’s no way for the initial account owner to regain access (other than resetting it again and binding it back to their account). This is far better than many devices I’ve looked at.

Performing a factory reset also changes the WPA PSK for the camera network. Newsky Security discovered that doing so originally reset it to 12345678, which is, uh, suboptimal? That’s been fixed in newer firmware, along with their discovery that the original random password choice was not terribly random.

All communication from the base station to the cloud seems to be over SSL, and everything validates certificates properly. This also seems to be true for client communication with the cloud service – camera footage is streamed back over port 443 as well.

Most of the functionality of the base station is provided by two daemons, xagent and vzdaemon. xagent appears to be responsible for registering the device with the cloud service, while vzdaemon handles the camera side of things (including motion detection). All of this is running as root, so in the event of any kind of vulnerability the entire platform is owned. For such a single purpose device this isn’t really a big deal (the only sensitive data it has is the camera feed – if someone has access to that then root doesn’t really buy them anything else). They’re statically linked and stripped so I couldn’t be bothered spending any significant amount of time digging into them. In any case, they don’t expose any remotely accessible ports and only connect to services with verified SSL certificates. They’re probably not a big risk.

Other than the dependence on Flash, there’s nothing immediately concerning here. What is a little worrying is a family of daemons running on the device and listening to various high numbered UDP ports. These appear to be provided by Broadcom and a standard part of all their router platforms – they’re intended for handling various bits of wireless authentication. It’s not clear why they’re listening on 0.0.0.0 rather than 127.0.0.1, and it’s not obvious whether they’re vulnerable (they mostly appear to receive packets from the driver itself, process them and then stick packets back into the kernel so who knows what’s actually going on), but since you can’t set one of these devices up in the first place without it being behind a NAT gateway it’s unlikely to be of real concern to most users. On the other hand, the same daemons seem to be present on several Broadcom-based router platforms where they may end up being visible to the outside world. That’s probably investigation for another day, though.

Overall: pretty solid, frustrating to set up if your network doesn’t match their expectations, wouldn’t have grave concerns over having it on an appropriately firewalled network.

comment count unavailable comments

Protecting Your Account

Post Syndicated from Tim Nufire original https://www.backblaze.com/blog/protecting-your-account/


Editor’s Note: This is a copy of an email sent to our customers on 4/28/17. The Backblaze login database has in no way been compromised. That said, we have seen a number of automated login attempts to our site and wanted to alert our users of the risk. See below for more info.
=====
Dear Customer –

Over the last 72 hours, our security team has noticed an increase in automated attempts to log into our users’ accounts using credentials stolen from other websites. To protect your account, we recommend that you:

● Change your password
● Add Two-Factor Authentication for additional security

NOTE: The Backblaze login database has not been compromised – the credentials were stolen from other sources.

Regrettably, we live in an era where companies have been breached and their customers’ credentials have been leaked – Dropbox, Adobe, and LinkedIn are just a few high-profile examples. What happens in these attacks is that the attacker acquires “the Dropbox list” and simply tries those usernames and passwords on another site. If your credentials were leaked in one of those hacks and you used the same username/password combination to sign up for other services (such as ours), you are vulnerable.

While we have a number of methods in place to thwart nefarious attacks, there is a limit to what we can do to prevent someone from signing in to an account with a valid username and password. We are sending this message to you today because we know that some of our users’ credentials are in these stolen lists.

Changing your password now ensures that you’re not using a password that was previously leaked. Adding Two-Factor Authentication provides an extra layer of security and protection if you end up on one of these lists in the future.

Thank you,

Tim
Chief Cloud Officer
Backblaze

The post Protecting Your Account appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Reading Analytics and Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/reading_analyti.html

Interesting paper: “The rise of reading analytics and the emerging calculus of reading privacy in the digital world,” by Clifford Lynch:

Abstract: This paper studies emerging technologies for tracking reading behaviors (“reading analytics”) and their implications for reader privacy, attempting to place them in a historical context. It discusses what data is being collected, to whom it is available, and how it might be used by various interested parties (including authors). I explore means of tracking what’s being read, who is doing the reading, and how readers discover what they read. The paper includes two case studies: mass-market e-books (both directly acquired by readers and mediated by libraries) and scholarly journals (usually mediated by academic libraries); in the latter case I also provide examples of the implications of various authentication, authorization and access management practices on reader privacy. While legal issues are touched upon, the focus is generally pragmatic, emphasizing technology and marketplace practices. The article illustrates the way reader privacy concerns are shifting from government to commercial surveillance, and the interactions between government and the private sector in this area. The paper emphasizes U.S.-based developments.