Additional Pricing Options for AWS Marketplace Products

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/additional-pricing-options-for-aws-marketplace-products/

Forward-looking ISVs (Independent Software Vendors) are making great use of AWS Marketplace. Users can find, buy, and start using products in minutes, without having to procure hardware or install any software. This streamlined delivery method can help ISVs discover new customers while also shortening the sales cycle. The user pays for the products via their existing AWS account, per the regular AWS billing cycle.
As part of the on-boarding process for AWS Marketplace, each ISV has the freedom to determine the price of the software. The ISV can elect to offer prices for monthly and/or annual usage, generally with a discount. For software that is traditionally licensed on a dimension other than time, ISVs create multiple entries in AWS Marketplace, each representing a licensing option along their chosen dimension.
This model has worked out well for many types of applications. However, as usual, there’s room to do even better!
More Pricing Options
ISVs have told us that they would like more flexibility in how they package and price their software, and we are happy to oblige. Some would like to extend the per-seat model without having to create multiple entries. Others would like to charge along other dimensions: a vendor of security products might want to charge by the number of hosts scanned, while a vendor of analytics products might want to charge based on the amount of data processed.
In order to accommodate all of these options, ISVs can now track and report on usage based on a pricing dimension that makes sense for their product (number of hosts scanned, amount of data processed, and so forth). They can also establish a per-unit price for this usage ($0.50 per host, $0.25 per GB of data, and so forth). Charges for this usage will appear on the user’s AWS bill.
I believe that this change will open the door to an even wider variety of products in the AWS Marketplace.
Implementing New Pricing Options
If you are an ISV and would like to use this new model to price your AWS Marketplace products, you need to add a little bit of code to your app. You simply measure usage along the appropriate dimension(s) and then call a new AWS API function to report the usage. You must send this data (also known as a metering record) once per hour, even if there's no usage for the hour. AWS Marketplace expects each running copy of the application to generate a metering record every hour in order to confirm that the application is still functioning properly. If the application stops sending records, AWS will email the customer and ask them to adjust their network configuration.
Here’s a sample call to the new MeterUsage function:
AWSMarketplaceMetering::MeterUsage("4w1vgsrkqdkypbz43g7qkk4uz","2015-05-19T07:31:23Z", "HostsScanned", 2);
The parameters are as follows:

AWS Marketplace product code.
Timestamp (UTC), in ISO-8601 format.
Usage dimension.
Usage quantity.

The usage data will be made available to you as part of the daily and monthly seller reports.
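For reference, here is a minimal sketch of what the hourly metering call might look like using the AWS Marketplace Metering Service client in the AWS SDK for Java. The product code, dimension name, and quantity are the placeholder values from the sample call above; the client construction and the handling of the returned metering record ID are assumptions to adapt to your own application.

import java.util.Date;

import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.marketplacemetering.AWSMarketplaceMeteringClient;
import com.amazonaws.services.marketplacemetering.model.MeterUsageRequest;
import com.amazonaws.services.marketplacemetering.model.MeterUsageResult;

public class HourlyMeteringReporter {
    public static void main(String[] args) {
        // Use the EC2 instance role for credentials (an assumption; adjust to your setup)
        final AWSMarketplaceMeteringClient metering =
            new AWSMarketplaceMeteringClient(new InstanceProfileCredentialsProvider());

        // Report two hosts scanned during the current hour, using the
        // placeholder product code and dimension from the sample call above
        final MeterUsageRequest request = new MeterUsageRequest()
            .withProductCode("4w1vgsrkqdkypbz43g7qkk4uz") // AWS Marketplace product code
            .withTimestamp(new Date())                    // timestamp for this metering record
            .withUsageDimension("HostsScanned")           // pricing dimension
            .withUsageQuantity(2)                         // usage quantity for the hour
            .withDryRun(false);                           // true validates permissions without recording usage

        final MeterUsageResult result = metering.meterUsage(request);
        System.out.println("Metering record accepted: " + result.getMeteringRecordId());
    }
}
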
Some Examples
Here are a couple of examples of products that are already making use of this new pricing option. As you can see from their Infrastructure Fees, these vendors have chosen to price their products along a variety of interesting (and relevant) dimensions:
SoftNAS Cloud NAS:

 
Aspera faspex On-Demand:

Chef Server:

Trend Micro Deep Security:

Available Now
This new pricing option is available now and you can start using it today!
— Jeff;

How to Use the New AWS Encryption SDK to Simplify Data Encryption and Improve Application Availability

Post Syndicated from Greg Rubin original https://blogs.aws.amazon.com/security/post/TxGBG3U5VUS2HY/How-to-Use-the-New-AWS-Encryption-SDK-to-Simplify-Data-Encryption-and-Improve-Ap

The AWS Cryptography team is happy to announce the AWS Encryption SDK. This new SDK makes encryption easier for developers while minimizing errors that could lessen the security of your applications. The new SDK does not require you to be an AWS customer, but it does include ready-to-use examples for AWS customers.

Developers using encryption often face two problems:

  1. How to correctly generate and use a key to encrypt data.
  2. How to protect the key after it has been used.

The library provided in the new AWS Encryption SDK addresses the first problem by transparently implementing the low-level details using the cryptographic provider available in your development environment. The library helps address the second problem by providing intuitive interfaces to let you choose how you want to protect your keys. Developers can then focus on the core of the applications they are building, instead of on the complexities of encryption. In this blog post, I will show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your keys in ways that help improve application availability by not tying you to a single region or key management solution.

Envelope encryption and the new SDK

An important concept to understand when using the AWS Encryption SDK is envelope encryption (also known as hybrid encryption). Different algorithms have different strengths, and no single algorithm fits every use case. For example, solutions with good key management properties (such as RSA or AWS Key Management Service [KMS]) often do not work well with large amounts of data. Envelope encryption solves this problem by encrypting bulk data with a single-use data key appropriate for large amounts of data (such as AES-GCM). Envelope encryption then encrypts that data key with a master key that uses an algorithm or other solution appropriate for key management.

Another advantage of envelope encryption is that a single message can be encrypted so that multiple recipients can decrypt it. Rather than having everyone share a key (which is usually insecure) or encrypting the entire message multiple times (which is impractical), only the data key is encrypted multiple times by using each recipient’s keys. This significantly reduced amount of duplication makes encrypting with multiple keys far more practical.

The downside of envelope encryption is implementation complexity. All clients must be able to generate and parse the data formats, handle multiple keys and algorithms, and ideally remain reasonably forward and backward compatible.
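
To make that bookkeeping concrete, here is a small, illustrative sketch of manual envelope encryption using only the JCE: a fresh AES-GCM data key encrypts the message, and an RSA key pair stands in for the master key that wraps it. The key sizes, padding choice, and output handling are assumptions chosen for illustration; the data format, algorithm negotiation, and multi-recipient handling are exactly the pieces the AWS Encryption SDK manages for you.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ManualEnvelopeEncryption {
    public static void main(final String[] args) throws Exception {
        // An RSA key pair standing in for a master key held in KMS, an HSM, etc.
        final KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        final KeyPair masterKey = kpg.generateKeyPair();

        // 1. Generate a single-use AES data key for the bulk data
        final KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        final SecretKey dataKey = keyGen.generateKey();

        // 2. Encrypt the bulk data with the data key (AES-GCM)
        final byte[] plaintext = "example application secret".getBytes(StandardCharsets.UTF_8);
        final byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        final Cipher dataCipher = Cipher.getInstance("AES/GCM/NoPadding");
        dataCipher.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        final byte[] encryptedData = dataCipher.doFinal(plaintext);

        // 3. Wrap (encrypt) the data key with the master key
        final Cipher keyCipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        keyCipher.init(Cipher.ENCRYPT_MODE, masterKey.getPublic());
        final byte[] encryptedDataKey = keyCipher.doFinal(dataKey.getEncoded());

        // You must now store the IV, the wrapped data key, and the ciphertext together
        // in a format of your own devising; that bookkeeping is what the SDK formalizes.
        System.out.println("ciphertext bytes: " + encryptedData.length
                + ", wrapped data key bytes: " + encryptedDataKey.length);
    }
}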

How does the AWS Encryption SDK help me?

The AWS Encryption SDK comes with a carefully designed and reviewed data format that supports multiple secure algorithm combinations (with room for future expansion) and has no limits on the types or algorithms of the master keys. The AWS Encryption SDK itself is a production-ready reference Java implementation with direct support for KMS and the Java Cryptography Architecture (JCA/JCE), which includes support for AWS CloudHSM and other PKCS #11 devices. Implementations of this SDK in other languages are currently being developed.

One benefit of the AWS Encryption SDK is that it takes care of the low-level cryptographic details so that you can focus on moving data. Next, I will show how little code you need to build a powerful and secure multiregion solution.

Example 1: Encrypting application secrets under multiple regional KMS master keys for high availability

Many customers want to build systems that not only span multiple Availability Zones, but also multiple regions. Such deployments can be challenging when data is encrypted with KMS because you cannot share KMS customer master keys (CMKs) across regions. With envelope encryption, you can work around this limitation by encrypting the data key with multiple KMS CMKs in different regions. Applications running in each region can use the local KMS endpoint to decrypt the ciphertext for faster and more reliable access.

For all examples, I will assume that I am running on Amazon EC2 instances configured with IAM roles for EC2. This lets us avoid credential management and take advantage of built-in logic that routes requests to the nearest endpoints. These examples also assume that the AWS SDK for Java (different from the AWS Encryption SDK) and Bouncy Castle are available.

The encryption logic has a very simple high-level design. After reading in some parameters from the command line, I get the master keys and use them to encrypt the file (as shown in the following code example). I will provide the missing methods later in this post.

public static void main(final String[] args) throws Exception {
    // Get parameters from the command line
    final String inputFile = args[0];
    final String outputFile = args[1];

    // Get all the master keys we'll use
    final MasterKeyProvider<?> provider = getMasterKeyProvider();

    // Actually encrypt the data
    encryptFile(provider, inputFile, outputFile);
}

Create master keys and combine them into a single master key provider

The following code example shows how you can encrypt data under CMKs in three US regions: us-east-1, us-west-1, and us-west-2. The example assumes that you have already set up the CMKs and created aliases named alias/exampleKey in each region for each CMK. For more information about creating CMKs and aliases, see Creating Keys in the AWS KMS documentation.

This example then uses the MultipleProviderFactory to combine all the master keys into a single master key provider. Note that the first master key is the one used to generate the new data key and the other master keys are used to encrypt the new data key.

private static MasterKeyProvider<?> getMasterKeyProvider() {
    // Get credentials from Roles for EC2
    final AWSCredentialsProvider creds = 
        new InstanceProfileCredentialsProvider();

    // Get KmsMasterKeys for the regions we care about
    final String aliasName = "alias/exampleKey";
    final KmsMasterKey useast1 = KmsMasterKey.getInstance(creds,
                "arn:aws:kms:us-east-1:" + ACCOUNT_ID + ":" + aliasName);
    final KmsMasterKey uswest1 = KmsMasterKey.getInstance(creds,
                "arn:aws:kms:us-west-1:" + ACCOUNT_ID + ":" + aliasName);
    final KmsMasterKey uswest2 = KmsMasterKey.getInstance(creds,
                "arn:aws:kms:us-west-2:" + ACCOUNT_ID + ":" + aliasName);

    return MultipleProviderFactory.buildMultiProvider(
        useast1, uswest1, uswest2);
}

The logic to construct a MasterKeyProvider could easily be built once by your central security team and then reused across your company to both simplify development and ensure that all encrypted data meets corporate standards.

Encrypt the data

The data you encrypt can come from anywhere and be distributed however you like. In the following code example, I am reading a file from disk and writing out an encrypted copy. The AWS Encryption SDK integrates directly with Java’s streams to make this easy.

private static void encryptFile(final MasterKeyProvider<?> provider,
        final String inputFile, final String outputFile) throws IOException {
    // Get an instance of the encryption logic
    final AwsCrypto crypto = new AwsCrypto();

    // Open the files for reading and writing
    try (
            final FileInputStream in = new FileInputStream(inputFile);
            final FileOutputStream out = new FileOutputStream(outputFile);
            // Wrap the output stream in encryption logic
            // It is important that this is closed because it adds footers
            final CryptoOutputStream<?> encryptingStream =
                    crypto.createEncryptingStream(provider, out)) {
        // Copy the data over
        IOUtils.copy(in, encryptingStream);
    }
}

This file could contain, for example, secret application configuration data (such as passwords, certificates, and the like) that is then sent to EC2 instances as EC2 user data upon launch.

Decrypt the data

The following code example decrypts the contents of the EC2 user data and writes it to the specified file. The AWS SDK for Java defaults to using KMS in the local region, so decryption proceeds quickly without cross-region calls.

public static void main(String[] args) throws Exception {
    // Get parameters from the command line
    final String outputFile = args[0];

    // Create a master key provider that points to the local
    // KMS stack and uses Roles for EC2 to get credentials.
    final KmsMasterKeyProvider provider = new KmsMasterKeyProvider(
        new InstanceProfileCredentialsProvider());

    // Get an instance of the encryption logic
    final AwsCrypto crypto = new AwsCrypto();

    // Open a stream to read the user data
    // and a stream to write out the decrypted file
    final URL userDataUrl = new URL("http://169.254.169.254/latest/user-data");
    try (
            final InputStream in = userDataUrl.openStream();
            final FileOutputStream out = new FileOutputStream(outputFile);
            // Wrap the input stream in decryption logic
            final CryptoInputStream<?> decryptingStream =
                    crypto.createDecryptingStream(provider, in)) {
        // Copy the data over
        IOUtils.copy(decryptingStream, out);
    }
}

Congratulations! You have just encrypted data under master keys in multiple regions and have code that will always decrypt the data by using the local KMS stack. This gives you higher availability and lower latency for decryption, while still only needing to manage a single ciphertext.

Example 2: Encrypting application secrets under master keys from different providers for escrow and portability

Another reason why you might want to encrypt data under multiple master keys is to avoid relying on a single provider for your keys. By not tying yourself to a single key management solution, you help improve your applications’ availability. This approach also might help if you have compliance, data loss prevention, or disaster recovery requirements that require multiple providers.

You can use the same technique demonstrated previously in this post to encrypt your data to an escrow or additional decryption master key that is independent from your primary provider. This example demonstrates how to use an additional master key, which is an RSA public key with the private key stored in a key management infrastructure independent from KMS such as an offline Hardware Security Module (HSM). (Creating and managing the RSA key pair are out of scope for this blog.)

Encrypt the data with a public master key

Just like the previous code example that created a number of KmsMasterKeys to encrypt data, the following code example creates one more MasterKey for use with the RSA public key. The example uses a JceMasterKey because it uses a java.security.PublicKey object from the Java Cryptography Extensions (JCE). The example then passes the new MasterKey into the MultipleProviderFactory (along with all the other master keys). The example loads the public key from a file called rsa_public_key.der.

private static MasterKeyProvider<?> getMasterKeyProvider()
      throws IOException, GeneralSecurityException {
    // Get credentials from Roles for EC2
    final AWSCredentialsProvider creds =
        new InstanceProfileCredentialsProvider();

    // Get KmsMasterKeys for the regions we care about
    final String aliasName = "alias/exampleKey";
    final KmsMasterKey useast1 = KmsMasterKey.getInstance(creds,
            "arn:aws:kms:us-east-1:" + ACCOUNT_ID + ":" + aliasName);
    final KmsMasterKey uswest1 = KmsMasterKey.getInstance(creds,
            "arn:aws:kms:us-west-1:" + ACCOUNT_ID + ":" + aliasName);
    final KmsMasterKey uswest2 = KmsMasterKey.getInstance(creds,
            "arn:aws:kms:us-west-2:" + ACCOUNT_ID + ":" + aliasName);

    // Load the RSA public key from a file and make a MasterKey from it.
    final byte[] rsaBytes = Files.readAllBytes(
        new File("rsa_public_key.der").toPath());
    final KeyFactory rsaFactory = KeyFactory.getInstance("RSA");
    final PublicKey rsaKey = rsaFactory.generatePublic(
        new X509EncodedKeySpec(rsaBytes));
    final JceMasterKey rsaMasterKey =
        JceMasterKey.getInstance(rsaKey, null,
            "escrow-provider", "escrow",
            "RSA/ECB/OAEPWithSHA-256AndMGF1Padding");

    return MultipleProviderFactory.buildMultiProvider(
        useast1, uswest1, uswest2, rsaMasterKey);
}

Decrypt the data with the private key

Many HSMs support the standard Java KeyStore interface or at least supply PKCS #11 drivers, which allow you to use the existing Java KeyStore implementations. The following decryption code example uses an RSA private key from a KeyStore.

public static void main(String[] args) throws Exception {
    // Get parameters from the command line
    final String inputFile = args[0];
    final String outputFile = args[1];

    // Get the KeyStore
    // In a production system, this would likely be backed by an HSM.
    // For this example, it will simply be a file on disk
    final char[] keystorePassword = "example".toCharArray();
    final KeyStore keyStore = KeyStore.getInstance("JKS");
    try (final FileInputStream fis = new FileInputStream("cryptosdk.jks")) {
        keyStore.load(fis, keystorePassword);
    }

    // Create a master key provider from the keystore.
    // Be aware that because KMS isn’t being used, it cannot directly
    // protect the integrity or authenticity of this data.
    final KeyStoreProvider provider = new KeyStoreProvider(keyStore,
            new PasswordProtection(keystorePassword),
            "escrow-provider", "RSA/ECB/OAEPWithSHA-256AndMGF1Padding");

    // Get an instance of the encryption logic
    final AwsCrypto crypto = new AwsCrypto();

    // Open a stream to read the encrypted file
    // and a stream to write out the decrypted file
    try (
            final FileInputStream in = new FileInputStream(inputFile);
            final FileOutputStream out = new FileOutputStream(outputFile);
            // Wrap the input stream in decryption logic
            final CryptoInputStream<?> decryptingStream =
                    crypto.createDecryptingStream(provider, in)) {
        // Copy the data over
        IOUtils.copy(decryptingStream, out);
    }
}

Conclusion

Envelope encryption is powerful, but traditionally, it has been challenging to implement. The new AWS Encryption SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards.

We are excited about releasing the AWS Encryption SDK and can’t wait to hear what you do with it. If you have comments about the new SDK or anything in this blog post, please add a comment in the “Comments” section below. If you have implementation or usage questions, start a new thread on the KMS forum.

– Greg

Import Zeppelin notes from GitHub or JSON in Zeppelin 0.5.6 on Amazon EMR

Post Syndicated from Jonathan Fritz original https://blogs.aws.amazon.com/bigdata/post/Tx1Y66KB4QZTVJL/Import-Zeppelin-notes-from-GitHub-or-JSON-in-Zeppelin-0-5-6-on-Amazon-EMR

Jonathan Fritz is a Senior Product Manager for Amazon EMR

Many Amazon EMR customers use Zeppelin to create interactive notebooks to run workloads with Spark using Scala, Python, and SQL. These customers have found Amazon EMR to be a great platform for running Zeppelin because of strong integration with other AWS services and the ability to quickly create a fully configured Spark environment. Many customers have already discovered Amazon S3 to be a useful way to durably store and move their notebook files between EMR clusters. 

With the latest Zeppelin release (0.5.6) included on Amazon EMR release 4.4.0, you can now import notes using links to S3 JSON files, raw file URLs in GitHub, or local files. You can also download a note as a JSON file. This new functionality makes it easier to save and share Zeppelin notes, and it allows you to version your notes during development. The import feature is located on the Zeppelin home screen, and the export feature is located on the toolbar for each note. Additionally, you can still configure Zeppelin to store all of its notebooks in S3 by adding this configuration for zeppelin-env when creating your cluster (just make sure you have already created the S3 bucket before creating your cluster):

[
  {
    "Classification": "zeppelin-env",
    "Properties": {
    },
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "ZEPPELIN_NOTEBOOK_STORAGE": "org.apache.zeppelin.notebook.repo.S3NotebookRepo",
          "ZEPPELIN_NOTEBOOK_S3_BUCKET": "my-zeppelin-bucket-name",
          "ZEPPELIN_NOTEBOOK_USER": "user"
        },
        "Configurations": [
        ]
      }
    ]
  }
]
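
If you create the cluster programmatically rather than through the console or CLI, the same classification can be passed via the EMR API. The following sketch uses the AWS SDK for Java to launch an EMR 4.4.0 cluster with Zeppelin and the S3 notebook storage settings shown above; the application name, instance types, roles, and bucket name are illustrative assumptions that you should check against your own environment.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.Application;
import com.amazonaws.services.elasticmapreduce.model.Configuration;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;

public class LaunchZeppelinCluster {
    public static void main(String[] args) {
        // Properties for the nested "export" classification shown in the JSON above
        final Map<String, String> exportProps = new HashMap<String, String>();
        exportProps.put("ZEPPELIN_NOTEBOOK_STORAGE",
                "org.apache.zeppelin.notebook.repo.S3NotebookRepo");
        exportProps.put("ZEPPELIN_NOTEBOOK_S3_BUCKET", "my-zeppelin-bucket-name");
        exportProps.put("ZEPPELIN_NOTEBOOK_USER", "user");

        final Configuration zeppelinEnv = new Configuration()
                .withClassification("zeppelin-env")
                .withConfigurations(new Configuration()
                        .withClassification("export")
                        .withProperties(exportProps));

        final AmazonElasticMapReduceClient emr =
                new AmazonElasticMapReduceClient(new InstanceProfileCredentialsProvider());

        final RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("zeppelin-notebook-cluster")
                .withReleaseLabel("emr-4.4.0")
                .withApplications(new Application().withName("Zeppelin-Sandbox")) // assumed application name
                .withConfigurations(zeppelinEnv)
                .withServiceRole("EMR_DefaultRole")          // assumed default EMR roles
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withInstances(new JobFlowInstancesConfig()
                        .withInstanceCount(3)
                        .withMasterInstanceType("m3.xlarge") // illustrative instance types
                        .withSlaveInstanceType("m3.xlarge")
                        .withKeepJobFlowAliveWhenNoSteps(true));

        final RunJobFlowResult result = emr.runJobFlow(request);
        System.out.println("Started cluster: " + result.getJobFlowId());
    }
}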

Below is a screenshot of the import note functionality. You can specify the URL for a JSON in S3 or a raw file in GitHub here:

Join us at Strata + Hadoop World Conference in San Jose, March 29-31

Post Syndicated from Jorge A. Lopez original https://blogs.aws.amazon.com/bigdata/post/Tx93TE0FN99SCR/Join-us-at-Strata-Hadoop-World-Conference-in-San-Jose-March-29-31

Jorge A. Lopez is responsible for Big Data Solutions Marketing at AWS

Visit us

Come see the AWS Big Data team at Booth #736, where big data experts will be happy to answer your questions, hear about your specific requirements, and help you with your big data initiatives.

Click to reserve a consultation slot with AWS big data experts!

Catch a presentation

Get technical details and best practices from AWS experts.  Hear directly from customers and learn from the experience of other organizations that are deploying big data solutions on AWS. Below is a list of some AWS-related sessions:

Real-world smart applications with Amazon Machine Learning
Alex Ingerman (Amazon Web Services)
4:20 pm–5:00 pm Wednesday, March 30

Building a scalable architecture for processing streaming data on AWS
Siva Raghupathy (Amazon Web Services), Manjeet Chayel (Amazon Web Services)
5:10 pm–5:50 pm Wednesday, March 30

Netflix: Making big data small 
Daniel Weeks (Netflix)
11:00 am–11:40 am Thursday, March 31

Data applications and infrastructure at Coursera
Roshan Sumbaly (Coursera Inc), Pierre Barthelemy (Coursera)
11:50 am–12:30 pm Thursday, March 31

Self-service, interactive analytics at multipetabyte scale in capital markets regulation on the cloud
Scott Donaldson (FINRA) and Matt Cardillo (FINRA)
1:50 pm–2:30 pm Thursday, March 31

Hope to see you there!

 


What’s the Diff: Time Machine vs. Time Capsule

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/whats-diff-time-machine-vs-time-capsule/


What’s the Diff is here to explain in plain language what makes up the computer terminology we talk about, to help give you a clearer idea of what it is and how it works.

Apple tries to make things really easy and non-intimidating for people who aren’t computer experts. But backing up your data can be intimidating, no way around it.
Let’s try to demystify a couple of things related to backing up on the Mac that can be confusing to people new to the platform, and even not so new to the platform. This week we’re talking about Time Machine and Time Capsule.
To summarize, Time Machine is the Mac’s built-in backup software. Time Capsule is a network device sold by Apple that works with Time Machine, but does a lot more too.
Interested in finding out more? Come take a look.
What is Time Machine?
There are different ways you can back up your Mac – several companies offer backup software that does the job, including Backblaze. We’ll get to why Backblaze is important later. But Time Machine is Apple’s solution to this problem. It’s free, it’s included on the Mac, and it’s pretty foolproof.
You have to turn on Time Machine yourself, but that’s just a matter of flipping a switch. Time Machine works whenever the Mac is on. With Time Machine, your Mac keeps hourly backups for the previous 24 hours, daily backups for the previous month, and weekly backups for all previous months, until the Time Machine disk is full.
This means you always can restore your Mac to its most recent working state. Time Machine also gives you a window into the past with each of those snapshots, so you can restore deleted files or even previously saved versions of files.

For more about how to use Time Machine, read our Mac Backup Guide.

Time Machine works with external hard drives. Some Network Attached Storage (NAS) makers like Synology and QNAP enable their devices to be configured to work as network-based Time Machine servers. You can also use Time Machine with a stand-alone drive from Seagate or Western Digital, for example.
Time Machine is designed to work as a local, primary backup of your Mac – meaning the data stays physically close to the computer, and is intended to be the first line of defense should you have to recover. If anything happens to your computer, Time Machine and the hard drive it’s backing up to can be used to restore your computer to right where it was before the problem happened.
Apple has its own Time Machine network server, too. And this is where confusion sets in for some of us, because it’s so similarly named. I’m talking about Time Capsule.
What is Time Capsule?
Apple sells a network device called a Time Capsule which is designed to work with Time Machine. Time Capsule currently comes in 2 terabyte (TB) and 3TB capacities.
Time Capsule isn’t just a hard drive. It’s a full-on network router, one that supports IEEE 802.11ac networking, the same fast Wi-Fi networking supported on most newer computers and mobile devices.
Apple makes it easy to configure a new Time Capsule using an app called AirPort Utility which you can find in your Mac’s Utilities folder. Once it’s up and running on your network, the Time Capsule is visible to any Mac on the network as a valid Time Machine backup location.
This makes Time Capsule a great way to make sure all your Macs are backed up all the time. While its backup features are Mac-specific, Time Capsule works as a network router with devices from other manufacturers too.
What’s wrong with Time Machine?
For many of us backing up to an external hard drive with Time Machine or using a Time Capsule on our network is as much backup as we think we need. In fact it’s probably more backup than we ever had before. Better safe than sorry, eh?
Well, as I’ve said before, Time Machine and Time Capsule are good primary backup systems. But they shouldn’t be your only backup. Because with either Time Machine or a Time Capsule, you’re depending on a single hard drive to store all of your precious data.

All hard drives fail. It’s just a matter of time. We at Backblaze happen to know something about this – we use a lot of hard drives, and we track which ones work and which ones don’t work so well. You’re welcome to read our latest Hard Drive Reliability Review for more details.

A single drive means a single point of failure. If something happens to your Mac and your Time Machine backup drive or Time Capsule isn’t working, you’re not going to be able to recover.
Your backup system is only as good as your last backup
There’s one thing worse than having a Time Machine backup that doesn’t work, and that’s having one that’s out of date, or not having one at all. It’s not uncommon for someone to run Time Machine on an external hard drive once, put it in a drawer, and forget about it again until there’s a problem.
Network problems can disrupt the transfer of data to your Time Capsule. Time Machine will nag you to fix things that go wrong, but you can turn off the nags too.
At the risk of self-promotion, that’s why adding Backblaze Personal Backup to the mix is so vitally important. You set up Backblaze and then forget it. And all of your important files are backed up safely, securely, and quickly to our servers. The best part is that Backblaze will work with Time Machine or Time Capsule to provide both onsite and offsite data backup.
If you need one file back or an entire drive’s worth of files, we can deliver. You can download those files from any web browser and access them from your iPhone or Android phone, or you can even order a flash drive or hard drive to be delivered with your backup on it.
But enough about Backblaze. Hopefully we’ve given you some good info about Time Machine and Time Capsule. Still confused? Have a question? Let us know in the comments. And if you have ideas for things you’d like to see featured in future installments of What’s the Diff, please let us know!
Coming next week on What’s the Diff: Thunderbolt vs. USB
The post What’s the Diff: Time Machine vs. Time Capsule appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

There’s no conspiracy behind the FBI-v-Apple postponement

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/03/theres-no-conspiracy-behind-fbi-v-apple.html

The FBI says it may have found another way to get data off an iPhone, and thus asked to postpone a hearing about whether Apple can be forced to do it. I thought I’d write a couple of comments. Specifically, people are looking for reasons to believe that the FBI, or Apple, or both are acting in bad faith, and that everything that happens is some sort of conspiracy. As far as I can tell, all evidence is that they are acting in good faith.

Orin Kerr writes:

If that happens, neither side will look good in the short term. The FBI won’t look good because it went to court and claimed it had no alternatives when an alternative existed. The whole case was for nothing, which will raise suspicions about why the government filed the case and the timing of this new discovery. But Apple won’t look good either. Apple claimed that the sky would fall if it had to create the code in light of the risk outsiders might steal it and threaten the privacy of everyone. If outsiders already have a way in without Apple’s help, then the sky has already fallen. Apple just didn’t know it.

I don’t agree.

It’s perfectly reasonable that alternatives for the FBI didn’t exist a few weeks ago, but exist now. Once the case hit the news, jailbreakers and 0day hackers could have looked for a bug to exploit, then created just the solution the FBI wants. They can do it in only a couple of weeks, something that would take Apple much longer, because they are vastly more motivated to do the work.

Conversely, Apple doesn’t claim the “sky will fall”. It only claims that developing a backdoor will make life easier for the hackers. Imagine that the hackers are charging the FBI $1 million. From Apple’s perspective, the sky hasn’t fallen, as the iPhone is safe from anybody who can’t afford $1 million. Conversely, if some tool leaked out on GitHub, so that anybody could download it, then, relatively speaking, the sky will have fallen for Apple. The point is that this isn’t a black-or-white, sky-falling issue, but one that sits in a vast grey area somewhere in between.

Thus, the evidence is that both sides appear to be acting in good faith. The FBI exhausted all alternatives at the time, but then hackers created a new alternative. Apple doesn’t want to do anything more that could help those hackers.

The FBI and Apple are, of course, aware of how this one case fits into their long-term plans. Thus, we know that what they say in public, and what they file in their briefs, have a larger agenda than just this case. But at the same time, the FBI could not have started this process if an alternative had been available at the time, or Apple would have contested the order by simply pointing out the alternative. That this didn’t happen means that both the FBI and Apple were unaware of a better alternative. Thus, as far as I can tell, there’s no conspiracy here.

New – CloudWatch Metrics for Spot Fleets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloudwatch-metrics-for-spot-fleets/

You can launch an EC2 Spot fleet with a couple of clicks. Once launched, the fleet allows you to draw resources from multiple pools of capacity, giving you access to cost-effective compute power regardless of the fleet size (from one instance to many thousands). For more information about this important EC2 feature, read my posts: Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with One Request and Spot Fleet Update – Console Support, Fleet Scaling, CloudFormation.
I like to think of each Spot fleet as a single, collective entity. After a fleet has been launched, it is an autonomous group of EC2 instances. The instances may come and go from time to time as Spot prices change (and your mix of instances is altered in order to deliver results as cost-effectively as possible) or if the fleet’s capacity is updated, but the fleet itself retains its identity and its properties.
New Spot Fleet Metrics In order to make it even easier for you to manage, monitor, and scale your Spot fleets as collective entities, we are introducing a new set of Spot fleet CloudWatch metrics.
The metrics are reported across multiple dimensions: for each Spot fleet, for each Availability Zone utilized by each Spot fleet, for each EC2 instance type within the fleet, and for each Availability Zone / instance type combination.
The following metrics are reported for each Spot fleet (you will need to enable EC2 Detailed Monitoring in order to ensure that they are all published):

AvailableInstancePoolsCount
BidsSubmittedForCapacity
CPUUtilization
DiskReadBytes
DiskReadOps
DiskWriteBytes
DiskWriteOps
EligibleInstancePoolCount
FulfilledCapacity
MaxPercentCapacityAllocation
NetworkIn
NetworkOut
PendingCapacity
StatusCheckFailed
StatusCheckFailed_Instance
StatusCheckFailed_System
TargetCapacity
TerminatingCapacity

Some of the metrics will give you insight into the operation of the Spot fleet bidding process. For example:

AvailableInstancePoolsCount – Indicates the number of instance pools included in the Spot fleet request.
BidsSubmittedForCapacity – Indicates the number of bids that have been made for Spot fleet capacity.
EligibleInstancePoolCount – Indicates the number of instance pools that are eligible for Spot instance requests. A pool is ineligible when either (1) the Spot price is higher than the On-Demand price or (2) the bid price is lower than the Spot price.
FulfilledCapacity – Indicates the amount of capacity that has been fulfilled for the fleet.
PercentCapacityAllocation – Indicates the percent of capacity allocated for the given dimension. You can use this in conjunction with the instance type dimension to determine the percent of capacity allocated to a given instance type.
PendingCapacity – The difference between TargetCapacity and FulfilledCapacity.
TargetCapacity – The currently requested target capacity for the Spot fleet.
TerminatingCapacity – The fleet capacity for instances that have received Spot instance termination notices.

These metrics will allow you to determine the overall status and performance of each of your Spot fleets. As you can see from the names of the metrics, you can easily observe the disk, CPU, and network resources consumed by the fleet. You can also get a sense for the work that is happening behind the scenes as bids are placed on your behalf for Spot capacity.
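If you want to pull these numbers programmatically, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes that the Spot fleet metrics are published to the AWS/EC2Spot CloudWatch namespace and keyed by a FleetRequestId dimension; the fleet request ID below is a placeholder, so substitute your own:

# Sketch: fetch hourly FulfilledCapacity for one Spot fleet from CloudWatch.
# Assumes the "AWS/EC2Spot" namespace and the "FleetRequestId" dimension;
# the fleet request ID and region are placeholders.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2Spot",
    MetricName="FulfilledCapacity",
    Dimensions=[{"Name": "FleetRequestId",
                 "Value": "sfr-12345678-90ab-cdef-1234-567890abcdef"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one data point per hour
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])

The same call works for any of the fleet-level metrics listed above; simply change MetricName (and the statistic) to suit.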
You can further inspect the following metrics across the Availability Zone and/or instance type dimensions:

CPUUtilization
DiskReadBytes
DiskReadOps
DiskWriteBytes
FulfilledCapacity
NetworkIn
NetworkOut
StatusCheckFailed
StatusCheckFailed_Instance
StatusCheckFailed_System

These metrics will allow you to see if you have an acceptable distribution of load across Availability Zones and/or instance types.
You can aggregate these metrics using Max, Min, or Avg in order to observe the overall utilization of your fleet. However, be aware that Avg does not always make sense across a fleet composed of two or more instance types!
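To illustrate that caveat, here is a sketch that looks at CPUUtilization per instance type rather than blending the whole fleet into a single average. It assumes the metric is published with both the FleetRequestId and InstanceType dimensions; the fleet ID and instance types are placeholders:

# Sketch: per-instance-type CPUUtilization, so a fleet that mixes small and
# large instances isn't collapsed into one misleading fleet-wide average.
# The fleet request ID, region, and instance types are placeholders.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
fleet_id = "sfr-12345678-90ab-cdef-1234-567890abcdef"

for instance_type in ["c4.large", "c4.8xlarge"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2Spot",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "FleetRequestId", "Value": fleet_id},
            {"Name": "InstanceType", "Value": instance_type},
        ],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,              # five-minute data points
        Statistics=["Average", "Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(instance_type, point["Timestamp"], point["Average"], point["Maximum"])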
Available Now The new metrics are available now. —
Jeff;

AWS Week in Review – March 14, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-14-2016/

Let’s take a quick look at what happened in AWS-land last week:

Monday
March 14

We announced that the Developer Preview of AWS SDK for C++ is Now Available.
We celebrated Ten Years in the AWS Cloud.
We launched Amazon EMR 4.4.0 with Sqoop, HCatalog, Java 8, and More.
The AWS Compute Blog announced the Launch of AWS Lambda and Amazon API Gateway in the EU (Frankfurt) Region.
The Amazon Simple Email Service Blog announced that Amazon SES Now Supports Custom Email From Domains.
The AWS Java Blog talked about Using Amazon SQS with Spring Boot and Spring JMS.
The AWS Partner Network Blog urged you to Take Advantage of AWS Self-Paced Labs.
The AWS Windows and .NET Developer Blog showed you how to Retrieve Request Metrics from the AWS SDK for .NET.
The AWS Government, Education, & Nonprofits Blog announced the New Amazon-Busan Cloud Innovation and Technology Center.
We announced Lumberyard Beta 1.1 is Now Available.
Botmetric shared AWS Security Best Practices: Network Security.
CloudCheckr listed 5 AWS Security Traps You Might be Missing.
Serverless Code announced that ServerlessConf is Here!
Cloud Academy launched 2 New AWS Courses – (Advanced Techniques for AWS Monitoring, Metrics and Logging and Advanced Deployment Techniques on AWS).
Cloudonaut reminded you to Avoid Sharing Key Pairs for EC2.
8KMiles talked about How Cloud Computing Can Address Healthcare Industry Challenges.
Evident discussed the CIS Foundations Benchmark for AWS Security.
Talkin’ Cloud shared 10 Facts About AWS as it Celebrates 10 Years.
The Next Platform reviewed Ten Years of AWS And a Status Check for HPC Clouds.
ZephyCloud is AWS-powered Wind Farm Design Software.

Tuesday
March 15

We announced the AWS Database Migration Service.
We announced that AWS CloudFormation Now Supports Amazon GameLift.
The AWS Partner Network Blog reminded everyone that Friends Don’t Let Friends Build Data Centers.
The Amazon GameDev Blog talked about Using Autoscaling to Control Costs While Delivering Great Player Experiences.
We updated the AWS SDK for JavaScript, the AWS SDK for Ruby, and the AWS SDK for Go.
Calorious talked about Uploading Images into Amazon S3.
Serverless Code showed you How to Use LXML in Lambda.
The Acquia Developer Center talked about Open-Sourcing Moonshot.
Concurrency Labs encouraged you to Hatch a Swarm of AWS IoT Things Using Locust, EC2 and Get Your IoT Application Ready for Prime Time.

Wednesday
March 16

We announced an S3 Lifecycle Management Update with Support for Multipart Upload and Delete Markers.
We announced that the EC2 Container Service is Now Available in the US West (Oregon) Region.
We announced that Amazon ElastiCache now supports the R3 node family in AWS China (Beijing) and AWS South America (Sao Paulo) Regions.
We announced that AWS IoT Now Integrates with Amazon Elasticsearch Service and CloudWatch.
We published the Puppet on the AWS Cloud: Quick Start Reference Deployment.
We announced that Amazon RDS Enhanced Monitoring is now available in the Asia Pacific (Seoul) Region.
I wrote about Additional Failover Control for Amazon Aurora (this feature was launched earlier in the year).
The AWS Security Blog showed you How to Set Up Uninterrupted, Federated User Access to AWS Using AD FS.
The AWS Java Blog talked about Migrating Your Databases Using AWS Database Migration Service.
We updated the AWS SDK for Java and the AWS CLI.
CloudWedge asked Cloud Computing: Cost Saver or Additional Expense?
Gathering Clouds reviewed New 2016 AWS Services: Certificate Manager, Lambda, Dev SecOps.

Thursday
March 17

We announced the new Marketplace Metering Service for 3rd Party Sellers.
We announced Amazon VPC Endpoints for Amazon S3 in South America (Sao Paulo) and Asia Pacific (Seoul).
We announced AWS CloudTrail Support for Kinesis Firehose.
The AWS Big Data Blog showed you How to Analyze a Time Series in Real Time with AWS Lambda, Amazon Kinesis and Amazon DynamoDB Streams.
The AWS Enterprise Blog showed you How to Create a Cloud Center of Excellence in your Enterprise, and then talked about Staffing Your Enterprise’s Cloud Center of Excellence.
The AWS Mobile Development Blog showed you How to Analyze Device-Generated Data with AWS IoT and Amazon Elasticsearch Service.
Stelligent initiated a series on Serverless Delivery.
CloudHealth Academy talked about Modeling RDS Reservations.
N2W Software talked about How to Pre-Warm Your EBS Volumes on AWS.
ParkMyCloud explained How to Save Money on AWS With ParkMyCloud.

Friday
March 18

The AWS Government, Education, & Nonprofits Blog told you how AWS GovCloud (US) Helps ASD Cut Costs by 50% While Dramatically Improving Security.
The Amazon GameDev Blog discussed Code Archeology: Crafting Lumberyard.
Calorious talked about Importing JSON into DynamoDB.
DZone Cloud Zone talked about Graceful Shutdown Using AWS AutoScaling Groups and Terraform.

Saturday
March 19

DZone Cloud Zone wants to honor some Trailblazing Women in the Cloud.

Sunday
March 20

Cloudability talked about How Atlassian Nailed the Reserved Instance Buying Process.
DZone Cloud Zone talked about Serverless Delivery Architectures.
Gorillastack explained Why the Cloud is THE Key Technology Enabler for Digital Transformation.

New & Notable Open Source

Tumbless is a blogging platform based only on S3 and your browser.
aws-amicleaner cleans up old, unused AMIs and related snapshots.
alexa-aws-administration helps you to do various administration tasks in your AWS account using an Amazon Echo.
aws-s3-zipper takes an S3 bucket folder and zips it for streaming.
aws-lambda-helper is a collection of helper methods for Lambda.
CloudSeed lets you describe a list of AWS stack components, then configure and build a custom stack.
aws-ses-sns-dashboard is a Go-based dashboard with SES and SNS notifications.
snowplow-scala-analytics-sdk is a Scala SDK for working with Snowplow-enriched events in Spark using Lambda.
StackFormation is a lightweight CloudFormation stack manager.
aws-keychain-util is a command-line utility to manage AWS credentials in the OS X keychain.

New SlideShare Presentations

Account Separation and Mandatory Access Control on AWS.
Crypto Options in AWS.
Security Day IAM Recommended Practices.
What’s Nearly New.

New Customer Success Stories

AdiMap measures online advertising spend, app financials, and salary data. Using AWS, AdiMap builds predictive financial models without spending millions on compute resources and hardware, providing scalable financial intelligence and reducing time to market for new products.
Change.org is the world’s largest and fastest growing social change platform, with more than 125 million users in 196 countries starting campaigns and mobilizing support for local causes and global issues. The organization runs its website and business intelligence cluster on AWS, and runs its continuous integration and testing on Solano CI from APN member Solano Labs.
Flatiron Health has been able to reach 230 cancer clinics and 2,200 clinicians across the United States with a solution that captures and organizes oncology data, helping to support cancer treatments. Flatiron moved its solution to AWS to improve speed to market and to minimize the time and expense that the startup company needs to devote to its IT infrastructure.
Global Red specializes in lifecycle marketing, including strategy, data, analytics, and execution across all digital channels. By re-architecting and migrating its data platform and related applications to AWS, Global Red reduced the time to onboard new customers for its advertising trading desk and marketing automation platforms by 50 percent.
GMobi primarily sells its products and services to Original Design Manufacturers and Original Equipment Manufacturers in emerging markets. By running its “over the air” firmware updates, mobile billing, and advertising software development kits in an AWS infrastructure, GMobi has grown to support 120 million users while maintaining more than 99.9 percent availability.
Time Inc.’s new chief technology officer joined the renowned media organization in early 2014, and promised big changes. With AWS, Time Inc. can leverage security features and functionality that mirror the benefits of cloud computing, including rich tools, best-in-class industry standards and protocols and lower costs.
Seaco Global is one of the world’s largest shipping companies. By using AWS to run SAP applications, it also reduced the time needed to complete monthly business processes to just one day, down from four days in the past.

New YouTube Videos

AWS Database Migration Service.
Introduction to Amazon WorkSpaces.
AWS Pop-up Loft.
Save the Date – AWS re:Invent 2016.

Upcoming Events

March 22nd – Live Event (Seattle, Washington) – AWS Big Data Meetup – Intro to SparkR.
March 22nd – Live Broadcast – VoiceOps: Commanding and Controlling Your AWS environments using Amazon Echo and Lambda.
March 23rd – Live Event (Atlanta, Georgia) – AWS Key Management Service & AWS Storage Services for a Hybrid Cloud (Atlanta AWS Community).
April 6th – Live Event (Boston, Massachusetts) – AWS at Bio-IT World.
April 18th & 19th – Live Event (Chicago, Illinois) – AWS Summit – Chicago.
April 20th – Live Event (Melbourne, Australia) – Inaugural Melbourne Serverless Meetup.
April 26th – Live Event (Sydney, Australia) – AWS Partner Summit.
April 26th – Live Event (Sydney, Australia) – Inaugural Sydney Serverless Meetup.
ParkMyCloud 2016 AWS Cost-Reduction Roadshow.
AWS Loft – San Francisco.
AWS Loft – New York.
AWS Loft – Tel Aviv.
AWS Zombie Microservices Roadshow.
AWS Public Sector Events.
AWS Global Summit Series.

Help Wanted

AWS Careers.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.
Jeff;

NAXSI – Open-Source WAF For Nginx

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/_qy86kN6c34/

NAXSI is an open-source WAF (Web Application Firewall) for Nginx which, by default, can block 99% of the known patterns involved in website vulnerabilities. NAXSI means Nginx Anti XSS & SQL Injection. Technically, it is a third-party Nginx module, available as a package for many UNIX-like platforms. This module, by default, reads a small subset…

Read the full post at darknet.org.uk

Ten days to enter our Astro Pi competition

Post Syndicated from Rachel Rayns original https://www.raspberrypi.org/blog/ten-days-enter-astro-pi-competition/

Calling all space coders! A quick announcement:
T minus ten days to the deadline of our latest Astro Pi competition.
You have until 12 noon on Thursday 31st March to submit your Sonic Pi tunes and MP3 player code.
Send your code to space
British ESA astronaut Tim Peake wants students to compose music in Sonic Pi for him to listen to. Tim needs to be able to listen to your tunes on one of the Astro Pi flight units, so we are also looking for a Python program to turn the units into an MP3 media player. You need to be aged 18 or under and live in the UK to enter.
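If you are wondering roughly what an MP3 player entry might involve, here is a minimal, unofficial sketch in Python using pygame’s mixer. It is only an illustration of the idea, not a competition template, and the playlist location is a placeholder:

# Minimal sketch of an MP3 player loop using pygame's mixer.
# Only an illustration of the idea; the playlist path is a placeholder.
import glob
import pygame

pygame.mixer.init()
playlist = sorted(glob.glob("/home/pi/music/*.mp3"))

for track in playlist:
    pygame.mixer.music.load(track)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        pygame.time.wait(500)   # poll until the current track finishes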
We have some fantastic competition judges: musicians including synthpop giants OMD and film composer Ilan Eshkeri, as well as experts from the aerospace industry and our own crack team of developers.
If you haven’t used Sonic Pi before, here is a brilliant introduction from our Education Team:
Getting Started With Sonic Pi | Raspberry Pi Learning Resources
Sonic Pi is an open-source programming environment designed for creating new sounds with code through live coding; it was developed by Dr Sam Aaron at the University of Cambridge. He uses the software to perform live with his band.

You can find all the competition information, including how to enter, at astro-pi.org/coding-challenges.
The post Ten days to enter our Astro Pi competition appeared first on Raspberry Pi.

Biweekly roundup: doubling down

Post Syndicated from Eevee original https://eev.ee/dev/2016/03/20/weekly-roundup-doubling-down/

March’s theme is video games, I guess?

It’s actually been two weeks since the last roundup, but there’s an excellent reason for that!

  • doom: As previously mentioned, someone started a “just get something done” ZDoom mapping project, so I made a map! I spent a solid seven days doing virtually nothing but working on it. And it came out pretty fantastically, I think. The final project is still in a bug-fixing phase, but I’ll link it when it’s done.

  • blog: I wrote about how maybe we could tone down the JavaScript, and it was phenomenally popular. People are still linking it anew on Twitter. That’s pretty cool. I also wrote a ton of developer commentary for my Doom map, which I’ll finish in the next few days and publish once the mapset is actually released. And I combed through my Doom series to edit a few things that are fixed in recent ZDoom and SLADE releases.

  • veekun: I managed to generate a YAML-based data file for Pokémon Red directly from game data. There’s still a lot of work to do to capture moves and places and other data, but this is a great start.

  • SLADE: In my 3D floor preview branch, the sides of simple 3D floors now render. There is so much work left to do here but the basics are finally there. Also fixed about nine papercuts I encountered while making my map, though some others remain.

  • mario maker: I made a level but have neglected to write about it here yet. Oops.

  • art: I drew most of the next part of Pokémon Yellow but then got kinda distracted by Doom stuff. I redrew last year’s Pi Day comic for the sake of comparison. I also started on Mel’s birthday present, which involves something astoundingly difficult that I’ve never tried before.

  • irl: I replaced my case fans, and it was a nightmare. “Toolless” fasteners are awful.

Pouring a solid week into one thing is weird; I feel like I haven’t drawn or touched Runed Awakening in ages, now. I’d like to get back to those.

I also still want to rig a category for posts about stuff I’m releasing, and also do something with that terrible “projects” page, so hopefully I’ll get to those soon.
