
Best Practices for Running Apache Kafka on AWS

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-kafka-on-aws/

This post was written in partnership with Intuit to share learnings, best practices, and recommendations for running an Apache Kafka cluster on AWS. Thanks to Vaishak Suresh and his colleagues at Intuit for their contribution and support.

Intuit, in their own words: Intuit, a leading enterprise customer for AWS, is a creator of business and financial management solutions. For more information on how Intuit partners with AWS, see our previous blog post, Real-time Stream Processing Using Apache Spark Streaming and Apache Kafka on AWS.

Apache Kafka is an open-source, distributed streaming platform that enables you to build real-time streaming applications.

The best practices described in this post are based on our experience in running and operating large-scale Kafka clusters on AWS for more than two years. Our intent for this post is to help AWS customers who are currently running Kafka on AWS, and also customers who are considering migrating on-premises Kafka deployments to AWS.

AWS offers Amazon Kinesis Data Streams, a Kafka alternative that is fully managed.

Running your Kafka deployment on Amazon EC2 provides a high performance, scalable solution for ingesting streaming data. AWS offers many different instance types and storage option combinations for Kafka deployments. However, given the number of possible deployment topologies, it’s not always trivial to select the strategy most suitable for your use case.

In this blog post, we cover the following aspects of running Kafka clusters on AWS:

  • Deployment considerations and patterns
  • Storage options
  • Instance types
  • Networking
  • Upgrades
  • Performance tuning
  • Monitoring
  • Security
  • Backup and restore

Note: While implementing Kafka clusters in a production environment, make sure also to consider factors like your number of messages, message size, monitoring, failure handling, and any operational issues.

Deployment considerations and patterns

In this section, we discuss various deployment options available for Kafka on AWS, along with pros and cons of each option. A successful deployment starts with thoughtful consideration of these options. Considering availability, consistency, and operational overhead of the deployment helps when choosing the right option.

Single AWS Region, Three Availability Zones, All Active

One typical deployment pattern (all active) is in a single AWS Region with three Availability Zones (AZs). One Kafka cluster is deployed in each AZ along with Apache ZooKeeper and Kafka producer and consumer instances as shown in the illustration following.

In this pattern, this is the Kafka cluster deployment:

  • Kafka producers and a Kafka cluster are deployed in each AZ.
  • Data is distributed evenly across the three Kafka clusters by using an Elastic Load Balancer.
  • Kafka consumers aggregate data from all three Kafka clusters.

Kafka cluster failover occurs this way:

  • Mark down all Kafka producers
  • Stop consumers
  • Debug and restack Kafka
  • Restart consumers
  • Restart Kafka producers

Following are the pros and cons of this pattern.

Pros:
  • Highly available
  • Can sustain the failure of two AZs
  • No message loss during failover
  • Simple deployment

Cons:
  • Very high operational overhead:
    • All changes need to be deployed three times, one for each Kafka cluster
    • Maintaining and monitoring three Kafka clusters
    • Maintaining and monitoring three consumer clusters
A restart is required for patching and upgrading brokers in a Kafka cluster. In this approach, a rolling upgrade is done separately for each cluster.

Single Region, Three Availability Zones, Active-Standby

Another typical deployment pattern (active-standby) is in a single AWS Region with a single Kafka cluster and Kafka brokers and Zookeepers distributed across three AZs. Another similar Kafka cluster acts as a standby as shown in the illustration following. You can use Kafka mirroring with MirrorMaker to replicate messages between any two clusters.

In this pattern, this is the Kafka cluster deployment:

  • Kafka producers are deployed on all three AZs.
  • Only one Kafka cluster is deployed across three AZs (active).
  • ZooKeeper instances are deployed on each AZ.
  • Brokers are spread evenly across all three AZs.
  • Kafka consumers can be deployed across all three AZs.
  • Standby Kafka producers and a Multi-AZ Kafka cluster are part of the deployment.

Kafka cluster failover occurs this way:

  • Switch traffic to the standby Kafka producers cluster and the standby Kafka cluster.
  • Restart consumers to consume from the standby Kafka cluster.

Following are the pros and cons of this pattern.

Pros:
  • Less operational overhead when compared to the first option
  • Only one Kafka cluster to manage and consume data from
  • Can handle single-AZ failures without activating a standby Kafka cluster

Cons:
  • Added latency due to cross-AZ data transfer among Kafka brokers
  • For Kafka versions before 0.10, replicas for topic partitions have to be assigned so they’re distributed to the brokers on different AZs (rack awareness)
  • The cluster can become unavailable in case of a network glitch, where ZooKeeper does not see the Kafka brokers
  • Possibility of in-transit message loss during failover

Intuit recommends using a single Kafka cluster in one AWS Region, with brokers distributed across three AZs (single Region, three AZs). This approach offers stronger fault tolerance, because the failure of a single AZ won’t cause Kafka downtime.

Storage options

There are two options for file storage in Amazon EC2: ephemeral storage (instance store) and Amazon Elastic Block Store (Amazon EBS).

Ephemeral storage is local to the Amazon EC2 instance. It can provide high IOPS based on the instance type. On the other hand, Amazon EBS volumes offer higher resiliency and you can configure IOPS based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. Your choice of storage is closely related to the type of workload supported by your Kafka cluster.

Kafka provides built-in fault tolerance by replicating data partitions across a configurable number of instances. If a broker fails, you can recover it by fetching all the data from other brokers in the cluster that host the other replicas. Depending on the amount of data, this transfer can prolong the recovery process and increase network traffic, both of which eventually affect the cluster’s performance.
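
To make the replication point concrete, here is a minimal sketch that uses the kafka-python client to create a topic replicated across three brokers. The broker addresses and topic name are placeholders, and the library choice is just one common option.

from kafka.admin import KafkaAdminClient, NewTopic

# Placeholder broker addresses; replace with your own bootstrap servers.
admin = KafkaAdminClient(bootstrap_servers="broker1:9092,broker2:9092,broker3:9092")

# 12 partitions, each replicated to 3 brokers, so a single broker (or, with
# rack awareness, a single AZ) can fail without losing committed data.
topic = NewTopic(name="clickstream", num_partitions=12, replication_factor=3)
admin.create_topics([topic])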

The following contrasts the benefits of using an instance store versus using EBS for storage.

Instance store:
  • Instance storage is recommended for large- and medium-sized Kafka clusters. For a large cluster, read/write traffic is distributed across a high number of brokers, so the loss of a broker has less of an impact. For smaller clusters, however, quick recovery of a failed node is more important, and recovering a failed broker takes longer and requires more network traffic.
  • Storage-optimized instances like h1, i3, and d2 are an ideal choice for distributed applications like Kafka.

EBS:
  • The primary advantage of using EBS in a Kafka deployment is that it significantly reduces data-transfer traffic when a broker fails or must be replaced. The replacement broker joins the cluster much faster.
  • Data stored on EBS is persisted in case of an instance failure or termination. The broker’s data stored on an EBS volume remains intact, and you can mount the EBS volume to a new EC2 instance. Most of the replicated data for the replacement broker is already available in the EBS volume and need not be copied over the network from another broker. Only the changes made after the original broker failure need to be transferred across the network, which makes this process much faster.
Intuit chose EBS because of their frequent instance restacking requirements and also other benefits provided by EBS.

Generally, Kafka deployments use a replication factor of three. However, EBS does its own replication within the service, so Intuit chose a replication factor of two instead of three.

Instance types

The choice of instance types is generally driven by the type of storage required for your streaming applications on a Kafka cluster. If your application requires ephemeral storage, h1, i3, and d2 instances are your best option.

Intuit used r3.xlarge instances for their brokers and r3.large for ZooKeeper, with ST1 (throughput optimized HDD) EBS for their Kafka cluster.

Here are sample benchmark numbers from Intuit tests.

Configuration:
  • r3.xlarge
  • ST1 EBS
  • 12 brokers
  • 12 partitions

Aggregate broker bytes: 346.9 MB/s

If you need EBS storage, then AWS offers the newer-generation R4 instance. The R4 instance is superior to the R3 in many ways:

  • It has a faster processor (Broadwell).
  • EBS is optimized by default.
  • It features networking based on Elastic Network Adapter (ENA), with up to 10 Gbps on smaller sizes.
  • It costs 20 percent less than R3.

Note: It’s always best practice to check for the latest changes in instance types.

Networking

The network plays a very important role in a distributed system like Kafka. A fast and reliable network ensures that nodes can communicate with each other easily. The available network throughput controls the maximum amount of traffic that Kafka can handle. Network throughput, combined with disk storage, is often the governing factor for cluster sizing.

If you expect your cluster to receive high read/write traffic, select an instance type that offers 10-Gb/s performance.

In addition, choose an option that keeps interbroker network traffic on the private subnet while still allowing clients to connect to the brokers. Communication between brokers and clients uses the same network interface and port. For more details, see the documentation about IP addressing for EC2 instances.

If you are deploying in more than one AWS Region, you can connect the two VPCs in the two AWS Regions using cross-region VPC peering. However, be aware of the networking costs associated with cross-AZ deployments.

Upgrades

Kafka has a history of not being backward compatible, but its support of backward compatibility is getting better. During a Kafka upgrade, you should keep your producer and consumer clients on a version equal to or lower than the version you are upgrading from. After the upgrade is finished, you can start using a new protocol version and any new features it supports. There are three upgrade approaches available, discussed following.

Rolling or in-place upgrade

In a rolling or in-place upgrade scenario, upgrade one Kafka broker at a time. Take into consideration the recommendations for doing rolling restarts to avoid downtime for end users.

Downtime upgrade

If you can afford the downtime, you can take your entire cluster down, upgrade each Kafka broker, and then restart the cluster.

Blue/green upgrade

Intuit followed the blue/green deployment model for their workloads, as described following.

If you can afford to create a separate Kafka cluster and upgrade it, we highly recommend the blue/green upgrade scenario. In this scenario, we recommend that you keep your clusters up-to-date with the latest Kafka version. For additional details on Kafka version upgrades, see the Kafka upgrade documentation.

The following illustration shows a blue/green upgrade.

In this scenario, the upgrade plan works like this:

  • Create a new Kafka cluster on AWS.
  • Create a new Kafka producers stack to point to the new Kafka cluster.
  • Create topics on the new Kafka cluster.
  • Test the green deployment end to end (sanity check).
  • Using Amazon Route 53, change the new Kafka producers stack on AWS to point to the new green Kafka environment that you have created.

The roll-back plan works like this:

  • Using Amazon Route 53, switch the Kafka producers stack back to point to the old Kafka environment (see the sketch after this list).
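
The Route 53 switch used in both the cutover and the roll-back can be scripted. Here is a hedged boto3 sketch; the hosted zone ID and record names are hypothetical placeholders for your own producer-facing endpoint.

import boto3

route53 = boto3.client("route53")

# Point the producer-facing CNAME at the green cluster; to roll back, run the
# same call with the old (blue) endpoint as the record value.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Cut producers over to the green Kafka cluster",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "kafka.example.internal.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": "green-kafka.example.internal."}],
            },
        }],
    },
)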

For additional details on blue/green deployment architecture using Kafka, see the re:Invent presentation Leveraging the Cloud with a Blue-Green Deployment Architecture.

Performance tuning

You can tune Kafka performance in multiple dimensions. Following are some best practices for performance tuning.

These are some general performance tuning techniques (a producer-side configuration sketch follows this list):

  • If throughput is less than network capacity, try the following:
    • Add more threads
    • Increase batch size
    • Add more producer instances
    • Add more partitions
  • To improve latency when acks = -1, increase your num.replica.fetchers value.
  • For cross-AZ data transfer, tune your buffer settings for sockets and for OS TCP.
  • Make sure that num.io.threads is greater than the number of disks dedicated for Kafka.
  • Adjust num.network.threads based on the number of producers plus the number of consumers plus the replication factor.
  • Your message size affects your network bandwidth. To get higher performance from a Kafka cluster, select an instance type that offers 10 Gb/s performance.
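
To illustrate the batching, compression, and acknowledgment suggestions above on the producer side, here is a hedged kafka-python sketch. The broker addresses, topic name, and specific values are placeholders to tune against your own throughput and latency targets.

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092,broker2:9092,broker3:9092",
    acks="all",                 # equivalent to acks=-1: wait for all in-sync replicas
    batch_size=262144,          # larger batches improve throughput (default is 16384 bytes)
    linger_ms=10,               # wait briefly so batches can fill before sending
    compression_type="snappy",  # reduces network bandwidth per message
    buffer_memory=67108864,     # more buffering for bursts (default is 32 MB)
)

producer.send("clickstream", b"example-payload")
producer.flush()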

For Java and JVM tuning, try the following:

  • Minimize GC pauses by using the Oracle JDK, which uses the new G1 garbage-first collector.
  • Try to keep the Kafka heap size below 4 GB.

Monitoring

Knowing whether a Kafka cluster is working correctly in a production environment is critical. Sometimes, just knowing that the cluster is up is enough, but Kafka applications have many moving parts to monitor. In fact, it can easily become confusing to understand what’s important to watch and what you can set aside. Items to monitor range from simple metrics about the overall rate of traffic, to producers, consumers, brokers, controller, ZooKeeper, topics, partitions, messages, and so on.

For monitoring, Intuit used several tools, including New Relic, Wavefront, Amazon CloudWatch, and AWS CloudTrail. Our recommended monitoring approach follows.

For system metrics, we recommend that you monitor:

  • CPU load
  • Network metrics
  • File handle usage
  • Disk space
  • Disk I/O performance
  • Garbage collection
  • ZooKeeper

For producers, we recommend that you monitor:

  • Batch-size-avg
  • Compression-rate-avg
  • Waiting-threads
  • Buffer-available-bytes
  • Record-queue-time-max
  • Record-send-rate
  • Records-per-request-avg

For consumers, we recommend that you monitor (a sketch for pulling these client metrics programmatically follows this list):

  • Fetch-rate
  • Fetch-latency-avg
  • Bytes-consumed-rate
  • Records-consumed-rate
  • Records-lag-max
  • Records-per-request-avg
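
One lightweight way to surface the client-side metrics listed above is the metrics() call exposed by the Kafka clients. This hedged kafka-python sketch, with placeholder brokers, topic, and group, prints them so they can be forwarded to a monitoring system such as CloudWatch.

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="broker1:9092")
consumer = KafkaConsumer("clickstream", bootstrap_servers="broker1:9092", group_id="reporting")

# metrics() returns a dict keyed by metric group; producer and consumer metrics
# such as record-send-rate and records-lag-max typically appear here.
for group, metrics in producer.metrics().items():
    print(group, metrics)
for group, metrics in consumer.metrics().items():
    print(group, metrics)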

Security

Like most distributed systems, Kafka provides the mechanisms to transfer data with relatively high security across the components involved. Depending on your setup, security might involve different services such as encryption, Kerberos, Transport Layer Security (TLS) certificates, and advanced access control list (ACL) setup in brokers and ZooKeeper. The following tells you more about the Intuit approach. For details on Kafka security not covered in this section, see the Kafka documentation.

Encryption at rest

For EBS-backed EC2 instances, you can enable encryption at rest by using Amazon EBS volumes with encryption enabled. Amazon EBS uses AWS Key Management Service (AWS KMS) for encryption. For more details, see Amazon EBS Encryption in the EBS documentation. For instance store–backed EC2 instances, you can enable encryption at rest by using Amazon EC2 instance store encryption.
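
As an illustration, you can request encryption when provisioning a broker's data volume. A minimal boto3 sketch follows; the Availability Zone, size, and KMS key alias are placeholders, and you can omit KmsKeyId to use the account's default EBS encryption key.

import boto3

ec2 = boto3.client("ec2")

# Create a KMS-encrypted ST1 volume for a Kafka broker's log directory.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1000,                       # GiB
    VolumeType="st1",
    Encrypted=True,
    KmsKeyId="alias/kafka-ebs-key",  # placeholder KMS key alias
)
print(volume["VolumeId"])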

Encryption in transit

Kafka can use TLS to encrypt client and internode communications.
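
For example, a client connects to a TLS listener by supplying certificates. Here is a hedged kafka-python sketch with placeholder certificate paths; it assumes the brokers already expose an SSL listener.

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9093",          # placeholder TLS listener
    security_protocol="SSL",
    ssl_cafile="/etc/kafka/certs/ca.pem",      # CA that signed the broker certificates
    ssl_certfile="/etc/kafka/certs/client.pem",
    ssl_keyfile="/etc/kafka/certs/client.key",
)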

Authentication

Authentication of connections to brokers, whether from clients (producers and consumers), other brokers, or tools, uses either Secure Sockets Layer (SSL) or Simple Authentication and Security Layer (SASL).

Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration.
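
As a sketch of what a Kerberos-authenticated client might look like with kafka-python, assuming the brokers expose a SASL_SSL listener with GSSAPI enabled and the host already holds a valid Kerberos ticket (all paths and names are placeholders):

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="broker1:9094",        # placeholder SASL_SSL listener
    security_protocol="SASL_SSL",
    sasl_mechanism="GSSAPI",                 # Kerberos
    sasl_kerberos_service_name="kafka",
    ssl_cafile="/etc/kafka/certs/ca.pem",
)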

Authorization

In Kafka, authorization is pluggable and integration with external authorization services is supported.

Backup and restore

The type of storage used in your deployment dictates your backup and restore strategy.

The best way to back up a Kafka cluster based on instance storage is to set up a second cluster and replicate messages using MirrorMaker. Kafka’s mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. Depending on your setup and requirements, your backup cluster might be in the same AWS Region as your main cluster or in a different one.

For EBS-based deployments, you can enable automatic snapshots of EBS volumes to back up volumes. You can easily create new EBS volumes from these snapshots to restore. We recommend storing backup files in Amazon S3.
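
For example, the snapshot-and-restore flow can be scripted with boto3. The volume ID and Availability Zone in this sketch are placeholders; in practice you would run it on a schedule, for example from a CloudWatch Events rule.

import boto3

ec2 = boto3.client("ec2")

# Snapshot a broker's data volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Nightly backup of Kafka broker data volume",
)
print(snapshot["SnapshotId"])

# To restore, create a new volume from the snapshot and attach it to the replacement broker.
restored = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId=snapshot["SnapshotId"],
    VolumeType="st1",
)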

For more information on how to back up in Kafka, see the Kafka documentation.

Conclusion

In this post, we discussed several patterns for running Kafka in the AWS Cloud. AWS also provides Amazon Kinesis Data Streams as a fully managed alternative: there are no servers to manage or scaling cliffs to worry about, you can scale the size of your streaming pipeline in seconds without downtime, data replication across Availability Zones is automatic, and you get security out of the box. Kinesis Data Streams is tightly integrated with a wide variety of AWS services, such as Lambda, Redshift, and Elasticsearch, and it supports open-source frameworks like Storm, Spark, and Flink. You can also look at the Kafka-Kinesis Connector for moving data between the two.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Implement Serverless Log Analytics Using Amazon Kinesis Analytics and Real-time Clickstream Anomaly Detection with Amazon Kinesis Analytics.


About the Author

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 

 

New – Encryption at Rest for DynamoDB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-encryption-at-rest-for-dynamodb/

At AWS re:Invent 2017, Werner encouraged his audience to “Dance like nobody is watching, and encrypt like everyone is”:

The AWS team is always eager to add features that make it easier for you to protect your sensitive data and to help you to achieve your compliance objectives. For example, in 2017 we launched encryption at rest for SQS and EFS, additional encryption options for S3, and server-side encryption of Kinesis Data Streams.

Today we are giving you another data protection option with the introduction of encryption at rest for Amazon DynamoDB. You simply enable encryption when you create a new table and DynamoDB takes care of the rest. Your data (tables, local secondary indexes, and global secondary indexes) will be encrypted using AES-256 and a service-default AWS Key Management Service (KMS) key. The encryption adds no storage overhead and is completely transparent; you can insert, query, scan, and delete items as before. The team did not observe any changes in latency after enabling encryption and running several different workloads on an encrypted DynamoDB table.

Creating an Encrypted Table
You can create an encrypted table from the AWS Management Console, API (CreateTable), or CLI (create-table). I’ll use the console! I enter the name and set up the primary key as usual:

Before proceeding, I uncheck Use default settings, scroll down to the Encryption section, and check Enable encryption. Then I click Create and my table is created in encrypted form:

I can see the encryption setting for the table at a glance.
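
For the API route mentioned earlier, here is a minimal boto3 sketch that creates a similarly encrypted table; the attribute name and throughput values are illustrative.

import boto3

dynamodb = boto3.client("dynamodb")

# SSESpecification enables encryption at rest with the service-default AWS KMS key.
dynamodb.create_table(
    TableName="reg-users",
    AttributeDefinitions=[{"AttributeName": "UserName", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "UserName", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    SSESpecification={"Enabled": True},
)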

When my compliance team asks me to show them how DynamoDB uses the key to encrypt the data, I can create an AWS CloudTrail trail, insert an item, and then scan the table to see the calls to the AWS KMS API. Here’s an extract from the trail:

{
  "eventTime": "2018-01-24T00:06:34Z",
  "eventSource": "kms.amazonaws.com",
  "eventName": "Decrypt",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "dynamodb.amazonaws.com",
  "userAgent": "dynamodb.amazonaws.com",
  "requestParameters": {
    "encryptionContext": {
      "aws:dynamodb:tableName": "reg-users",
      "aws:dynamodb:subscriberId": "1234567890"
    }
  },
  "responseElements": null,
  "requestID": "7072def1-009a-11e8-9ab9-4504c26bd391",
  "eventID": "3698678a-d04e-48c7-96f2-3d734c5c7903",
  "readOnly": true,
  "resources": [
    {
      "ARN": "arn:aws:kms:us-west-2:1234567890:key/e7bd721d-37f3-4acd-bec5-4d08c765f9f5",
      "accountId": "1234567890",
      "type": "AWS::KMS::Key"
    }
  ]
}

Available Now
This feature is available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions and you can start using it today.

There’s no charge for the encryption; you will be charged for the calls that DynamoDB makes to AWS KMS on your behalf.

Jeff;

 

Yet Another FBI Proposal for Insecure Communications

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/yet_another_fbi.html

Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:

Law enforcement can also partner with private industry to address a problem we call “Going Dark.” Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.

The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.

I encourage you to carefully consider your company’s interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.

Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.

Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.

I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.

The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.

Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: “Don’t Panic.” But forcing companies to build an alternate means to access the plaintext that the user can’t control is an enormous vulnerability.

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE) to learn more.

Amazon Route 53 – This is a highly available Domain Name Server. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.

Jeff;

 

"Responsible encryption" fallacies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/responsible-encryption-fallacies.html

Deputy Attorney General Rod Rosenstein gave a speech recently calling for “Responsible Encryption” (aka. “Crypto Backdoors”). It’s full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it’s the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite: how law enforcement is not needed. They made no arrests in the case. A year later, they still haven’t a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law-enforcement didn’t.

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC didn’t have the FBI investigate the attacks from Russia likely because they didn’t want the FBI reading all their files and finding wrongdoing by the DNC. It’s not that they did anything actually wrong; it’s more like that famous quote from Richelieu: “Give me six words written by the most honest of men and I’ll find something to hang him by”. Give all your internal emails over to the FBI and I’m certain they’ll find something to hang you by, if they want.
Or consider the case of Andrew Auernheimer. He found AT&T’s website made public user accounts of the first iPad, so he copied some down and posted them to a news site. AT&T had denied the problem, so making the problem public was the only way to force them to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It’s not that law enforcement is bad, it’s that it’s not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can’t go

Rosenstein repeats the frequent claim in the encryption debate:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection

Of course our society has places “impervious to detection”, protected by both legal and natural barriers.

An example of a legal barrier is how spouses can’t be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn’t Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the “rule of law” and how it applies to everyone, “protecting people from abuse by the government”? It obviously doesn’t: there’s one rule for government and a different rule for the people, and the rule for government means there are lots of places law enforcement can’t go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn’t apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can’t read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can’t go backwards in time.

I mention this because encryption is a natural barrier. It’s their job to overcome this barrier if they can, to crack crypto and so forth. It’s not our job to do it for them.

It’s like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that is always listening. This suddenly creates evidence that the police want our help in gathering, such as keeping the camera turned on all the time and recording to disk, in case the police later get a warrant and want to peer backward in time at what happened in our living rooms. The “nothing is impervious” argument applies here as well, and it’s equally bogus here. By not recording our own activities for the police, we aren’t somehow breaking some long-standing tradition.

And this is the scary part. It’s not that we are breaking some ancient tradition that there’s no place the police can’t go (with a warrant). Instead, crypto backdoors break the tradition that never before have I been forced to help the police eavesdrop on me, even before I’m a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers, but here Rosenstein is insisting I help the police against myself.

Balance between privacy and public safety

Rosenstein repeats the frequent claim that encryption upsets the balance between privacy and public safety:

Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.

This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from “Going Dark” as his side claims, the problem we are confronted with is “Going Light”, where the police state monitors our every action.

You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.

And that’s their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, laptop. Modern cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).

Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.

And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out, the rest are on your computer or backed up to the cloud.

Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not “balance” but the premise for a novel more dystopic than 1984.

If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?

The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.

“But it’s not conclusive”

Rosenstein defends the “going light” (“Golden Age of Surveillance”) by pointing out it’s not always enough for conviction. Nothing secures a conviction better than a person’s own words admitting to the crime, captured by surveillance. This other data, while copious, often fails to convince a jury beyond a reasonable doubt.

This is nonsense. Police got along well enough before the digital age, before such widespread messaging. They solved terrorist and child abduction cases just fine in the 1980s. Sure, somebody’s GPS location isn’t by itself enough — until you go there and find all the buried bodies, which leads to a conviction. “Going dark” imagines that somehow, the evidence they’ve been gathering for centuries is going away. It isn’t. It’s still here, and matches up with even more digital evidence.

Conversely, a person’s own words are not as conclusive as you think. There’s always missing context. We quickly get back to the Richelieu “six words” problem, where captured communications are twisted to convict people, with defense lawyers trying to untwist them.

Rosenstein’s claim may be true, that a lot of criminals will go free because the other electronic data isn’t convincing enough. But I’d need to see that claim backed up with hard studies, not thrown out for emotional impact.

Terrorists and child molesters

You can always tell the lack of seriousness of law enforcement when they bring up terrorists and child molesters.

To be fair, sometimes we do need to talk about terrorists. There are things unique to terrorism where we may need to give government explicit powers to address those unique concerns. For example, the NSA buys mobile phone 0day exploits in order to hack terrorist leaders in tribal areas. This is a good thing.

But when terrorists use encryption the same way everyone else does, then it’s not a unique reason to sacrifice our freedoms to give the police extra powers. Either it’s a good idea for all crimes or no crimes — there’s nothing particular about terrorism that makes it an exceptional crime. Dead people are dead. Any rational view of the problem relegates terrorism to be a minor problem. More citizens have died since September 8, 2001 from their own furniture than from terrorism. According to studies, the hot water from the tap is more of a threat to you than terrorists.

Yes, government should do what they can to protect us from terrorists, but no, it’s not so bad a threat that it requires the imposition of a military/police state. When people use terrorism to justify their actions, it’s because they are trying to form a military/police state.

A similar argument works with child porn. Here’s the thing: the pervs aren’t exchanging child porn using the services Rosenstein wants to backdoor, like Apple’s Facetime or Facebook’s WhatsApp. Instead, they are exchanging child porn using custom services they build themselves.

Again, I’m (mostly) on the side of the FBI. I support their idea of buying 0day exploits in order to hack the web browsers of visitors to the secret “PlayPen” site. This is something that’s narrow to this problem and doesn’t endanger the innocent. On the other hand, their calls for crypto backdoors endanger the innocent while doing effectively nothing to address child porn.

Terrorists and child molesters are a clichéd, non-serious excuse to appeal to our emotions to give up our rights. We should not give in to such emotions.

Definition of “backdoor”

Rosenstein claims that we shouldn’t call backdoors “backdoors”:

No one calls any of those functions [like key recovery] a “back door.”  In fact, those capabilities are marketed and sought out by many users.

He’s partly right in that we rarely refer to PGP’s key escrow feature as a “backdoor”.

But that’s because the term “backdoor” refers less to how it’s done and more to who is doing it. If I set up a recovery password with Apple, I’m the one doing it to myself, so we don’t call it a backdoor. If it’s the police, spies, hackers, or criminals, then we call it a “backdoor” — even if it’s identical technology.

Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by “backdoor“. By “no one”, Rosenstein is including Wikipedia, which is obviously incorrect.

Though in truth, it’s not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the “legitimate” law enforcement in the United States but not for the “illegitimate” police states like Russia and China.

I feel for Rosenstein, because the term “backdoor” does have a pejorative connotation, which can be considered unfair. But that’s like saying the word “murder” is a pejorative term for killing people, or “torture” is a pejorative term for torture. The bad connotation exists because we don’t like government surveillance. I mean, honestly calling this feature “government surveillance feature” is likewise pejorative, and likewise exactly what it is that we are talking about.

Providers

Rosenstein focuses his arguments on “providers”, like Snapchat or Apple. But this isn’t the question.

The question is whether a “provider” like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.

Telegram is a Russian company that provides end-to-end encryption. Anybody can download their software in order to communicate so that American law enforcement can’t eavesdrop. They aren’t going to put in a backdoor for the U.S. If we succeed in putting backdoors in Apple and WhatsApp, all this means is that criminals are going to install Telegram.

If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, it still doesn’t solve the problem, because users can just build their own end-to-end encryption app that has no provider. It’s like email: some use the major providers like GMail, others set up their own email server.

Ultimately, this means that any law mandating “crypto backdoors” is going to target users not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that’s not what’s happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.

This continues the argument I made above. Government backdoors is not something that forces Internet services to eavesdrop on us — it forces us to help the government spy on ourselves.

Rosenstein tries to address this by pointing out that it’s still a win if major providers like Apple and Facetime are forced to add backdoors, because they are the most popular, and some terrorists/criminals won’t move to alternate platforms. This is false. People with good intentions, who are unfairly targeted by a police state, the ones where police abuse is rampant, are the ones who use the backdoored products. Those with bad intentions, who know they are guilty, will move to the safe products. Indeed, Telegram is already popular among terrorists because they believe American services are already all backdoored.

Rosenstein is essentially demanding the innocent get backdoored while the guilty don’t. This seems backwards. This is backwards.

Apple is morally weak

The reason I’m writing this post is that Rosenstein makes a few claims that cannot be ignored. One of them is how he explains Apple’s response to government insistence on weakening encryption: Apple did the opposite and strengthened it. He reasons this happens because:

Of course they [Apple] do. They are in the business of selling products and making money. 

We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives. 

He swells with importance. His condescending tone ennobles him while debasing others. But this isn’t how things work. He’s not some white knight above the peasantry, protecting us. He’s a beat cop, a civil servant, who serves us.

A better phrasing would have been:

They are in the business of giving customers what they want.

We are in the business of giving voters what they want.

Both sides are doing the same, giving people what they want. Yes, voters want safety, but they also want privacy. Rosenstein imagines that he’s free to ignore our demands for privacy as long as he’s fulfilling his duty to protect us. He has explicitly rejected what people want, “we use a different measure of success”. He imagines it’s his job to tell us where the balance between privacy and safety lies. That’s not his job, that’s our job. We, the people (and our representatives), make that decision, and his job is to do what he’s told. His measure of success is how well he fulfills our wishes, not how well he satisfies his imagined criteria.

That’s why those of us on this side of the debate doubt the good intentions of those like Rosenstein. He criticizes Apple for wanting to protect our rights and freedoms, and declares that he measures success differently.

They are willing to be vile

Rosenstein makes this argument:

Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies. 

Let me translate this for you:

Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.

What Rosenstein is admitting here is that his requests are those of a police state.

Constitutional Rights

Rosenstein says:

There is no constitutional right to sell warrant-proof encryption.

Maybe. It’s something the courts will have to decide. There are many 1st, 2nd, 3rd, 4th, and 5th Amendment issues here.

The reason we have the Bill of Rights is because of the abuses of the British Government. For example, they quartered troops in our homes, as a way of punishing us, and as a way of forcing us to help in our own oppression. The troops weren’t there to defend us against the French, but to defend us against ourselves, to shoot us if we got out of line.

And that’s what crypto backdoors do. We are forced to be agents of our own oppression. The principles Rosenstein enumerates apply just as well to a wide range of additional surveillance. With little change, his speech could equally argue that the constant TV video surveillance from 1984 should be made law.

Let’s go back and look at Apple. It is not some base company exploiting consumers for profit. Apple doesn’t have guns; they cannot make people buy their product. If Apple doesn’t provide customers what they want, then customers vote with their feet, and go buy an Android phone. Apple isn’t providing encryption/security in order to make a profit — it’s giving customers what they want in order to stay in business.

Conversely, if we citizens don’t like what the government does, tough luck: they’ve got the guns to enforce their edicts. We can’t easily vote with our feet and walk to another country. A “democracy” is far less democratic than capitalism. Apple is a minority, selling phones to 45% of the population, and that’s fine: the minority get the phones they want. In a democracy, where citizens vote on the issue, those 45% are screwed, as the 55% impose their unwanted will onto the remainder.

That’s why we have the Bill of Rights, to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court agrees that it is in the current Constitution, it is the sort of right that might exist regardless of what the Constitution says.

Obliged to speak the truth

Here is another part of his speech that I feel cannot be ignored. We have to discuss this:

Those of us who swear to protect the rule of law have a different motivation.  We are obliged to speak the truth.

The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.

This is not true. Sure, he’s obliged to tell the absolute truth in court. He’s also obliged to be truthful in general about facts in his personal life, such as not lying on his tax return (the sort of thing that can get lawyers disbarred).

But he’s not obliged to tell his spouse his honest opinion whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn’t fall into this category. He can say with impunity that either global warming doesn’t exist, or that it’ll cause a biblical deluge within 5 years. Both are factually untrue, but it’s not going to get him fired.

And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement’s job harder than with backdoors, nobody honestly believes it can “disable” law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with “impunity”.

I feel bad here. It’s a terrible thing to question your opponent’s character this way. But Rosenstein made this unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that “going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity”. I feel it’s a bald-faced lie, but you don’t need to take my word for it. Read his own words yourself and judge his integrity.

Conclusion

Rosenstein’s speech includes repeated references to ideas like “oath”, “honor”, and “duty”. It reminds me of Col. Jessup’s speech in the movie “A Few Good Men”.

If you’ll recall, it was a rousing speech, “you want me on that wall” and “you use words like honor as a punchline”. Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of “honor”, “oath”, and “duty”.

And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that it’s he who is honorable, while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.

We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It’s because we value privacy and fear government officials who become corrupted by power. It’s not that we fear Trump becoming a dictator; it’s that we fear bureaucrats at Rosenstein’s level becoming drunk on authority, which Rosenstein demonstrably has. His speech is a long train of corrupt ideas pursuing the same object of despotism, a despotism we oppose.

In other words, we oppose crypto backdoors because it’s not a tool of law enforcement, but a tool of despotism.

How to Configure Even Stronger Password Policies to Help Meet Your Security Standards by Using AWS Directory Service for Microsoft Active Directory

Post Syndicated from Ravi Turlapati original https://aws.amazon.com/blogs/security/how-to-configure-even-stronger-password-policies-to-help-meet-your-security-standards-by-using-aws-directory-service-for-microsoft-active-directory/

With AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, you can now create and enforce custom password policies for your Microsoft Windows users. AWS Microsoft AD now includes five empty password policies that you can edit and apply with standard Microsoft password policy tools such as Active Directory Administrative Center (ADAC). With this capability, you are no longer limited to the default Windows password policy. Now, you can configure even stronger password policies and define lockout policies that specify when to lock out an account after login failures.

In this blog post, I demonstrate how to edit these new password policies to help you meet your security standards by using AWS Microsoft AD. I also introduce the password attributes you can modify and demonstrate how to apply password policies to user groups in your domain.

Prerequisites

The instructions in this post assume that you already have the following components running:

  • An active AWS Microsoft AD directory.
  • An Amazon EC2 for Windows Server instance that is domain joined to your AWS Microsoft AD directory and on which you have installed ADAC.

If you still need to meet these prerequisites, do so before proceeding.

Scenario overview

Let’s say I am the Active Directory (AD) administrator of Example Corp. At Example Corp., we have a group of technical administrators, several groups of senior managers, and general, nontechnical employees. I need to create password policies for these groups that match our security standards.

Our general employees have access only to low-sensitivity information. However, our senior managers regularly access confidential information and we want to enforce password complexity (a mix of upper and lower case letters, numbers, and special characters) to reduce the risk of data theft. For our administrators, we want to enforce password complexity policies to prevent unauthorized access to our system administration tools.

Our security standards call for the following enforced password and account lockout policies:

  • General employees – To make it easier for nontechnical general employees to remember their passwords, we do not enforce password complexity. However, we want to enforce a minimum password length of 8 characters and a lockout policy after 6 failed login attempts as a minimum bar to protect against unwanted access to our low-sensitivity information. If a general employee forgets their password and becomes locked out, we let them try again in 5 minutes, rather than require escalated password resets. We also want general employees to rotate their passwords every 60 days with no duplicated passwords in the past 10 password changes.
  • Senior managers – For senior managers, we enforce a minimum password length of 10 characters and require password complexity. An account lockout is enforced after 6 failed attempts with an account lockout duration of 15 minutes. Senior managers must rotate their passwords every 45 days, and they cannot duplicate passwords from the past 20 changes.
  • Administrators – For administrators, we enforce password complexity with a minimum password length of 15 characters. We also want to lock out accounts after 6 failed attempts, have password rotation every 30 days, and disallow duplicate passwords in the past 30 changes. When a lockout occurs, we require a special administrator to intervene and unlock the account so that we can be aware of any potential hacking.
  • Fine-Grained Password Policy administrators – To ensure that only trusted administrators unlock accounts, we have two special administrator accounts (admin and midas) that can unlock accounts. These two accounts have the same policy as the other administrators except they have an account lockout duration of 15 minutes, rather than requiring a password reset. These two accounts are also the accounts used to manage Example Corp.’s password policies.

The following list summarizes how I edit each of the four policies I intend to use.

  • EXAMPLE-PSO-01 (Precedence 10), applied to Fine-Grained Password Policy Administrators – Minimum password length: 15; password complexity: enabled; maximum password age: 30 days; failed logon attempts allowed: 6; account lockout duration: 15 minutes; password history: 24.
  • EXAMPLE-PSO-02 (Precedence 20), applied to Other Administrators – Minimum password length: 15; password complexity: enabled; maximum password age: 30 days; failed logon attempts allowed: 6; account locked until an admin manually unlocks it; password history: 30.
  • EXAMPLE-PSO-03 (Precedence 30), applied to Senior Managers – Minimum password length: 10; password complexity: enabled; maximum password age: 45 days; failed logon attempts allowed: 6; account lockout duration: 15 minutes; password history: 20.
  • EXAMPLE-PSO-05 (Precedence 50), applied to General Employees – Minimum password length: 8; password complexity: disabled; maximum password age: 60 days; failed logon attempts allowed: 6; account lockout duration: 5 minutes; password history: 10.

To implement these password policies, I use 4 of the 5 new password policies available in AWS Microsoft AD:

  1. I first explain how to configure the password policies.
  2. I then demonstrate how to apply the four password policies that match Example Corp.’s security standards for these user groups.

1. Configure password policies in AWS Microsoft AD

To help you get started with password policies, AWS has added the Fine-Grained Pwd Policy Admins AD security group to your AWS Microsoft AD directory. Any user or other security group that is part of the Fine-Grained Pwd Policy Admins group has permissions to edit and apply the five new password policies. By default, your directory Admin is part of the new group and can add other users or groups to this group.

Adding users to the Fine-Grained Pwd Policy Admins user group

Follow these steps to add more users or AD security groups to the Fine-Grained Pwd Policy Admins security group so that they can administer fine-grained password policies:

  1. Launch ADAC from your managed instance.
  2. Switch to the Tree View and navigate to CORP > Users.
  3. Find the Fine-Grained Pwd Policy Admins user group. Add any users or groups in your domain to this group.

Edit password policies

To edit fine-grained password policies, open ADAC from any management instance joined to your domain. Switch to the Tree View and navigate to System > Password Settings Container. You will see the five policies containing the string -PSO- that AWS added to your directory, as shown in the following screenshot. Select a policy to edit it.

Screenshot showing the five new password policies

After editing the password policy, apply the policy by adding users or AD security groups to these policies by choosing Add. The default domain GPO applies if you do not configure any of the five password policies. For additional details about using Password Settings Container, go to Step-by-Step: Enabling and Using Fine-Grained Password Policies in AD on the Microsoft TechNet Blog.

The password attributes you can edit

AWS allows you to edit all of the password attributes except Precedence (I explain more about Precedence in the next section). These attributes include:

  • Password history
  • Minimum password length
  • Minimum password age
  • Maximum password age
  • Store password using reversible encryption
  • Password must meet complexity requirements

You also can enforce the following attributes for account lockout settings:

  • The number of failed login attempts allowed
  • Account lockout duration
  • Reset failed login attempts after a specified duration

For more details about how these attributes affect password enforcement, see AD DS: Fine-Grained Password Policies on Microsoft TechNet.

Understanding password policy precedence

AD password policies have a precedence (a numerical attribute that AD uses to determine the resultant policy) associated with them. Policies with a lower value for Precedence have higher priority than other policies. A user inherits all policies that you apply directly to the user or to any groups to which the user belongs. For example, suppose jsmith is a member of the HR group and also a member of the MANAGERS group. If I apply a policy with a Precedence of 50 to the HR group and a policy with a Precedence of 40 to MANAGERS, the policy with the Precedence value of 40 ranks higher and AD applies that policy to jsmith.

If you apply multiple policies to a user or group, the resultant policy is determined as follows by AD:

  1. If you apply a policy directly to a user, AD enforces the lowest directly applied password policy.
  2. If you did not apply a policy directly to the user, AD enforces the policy with the lowest Precedence value of all policies inherited by the user through the user’s group membership.

For more information about AD fine-grained policies, see AD DS: Fine-Grained Password Policies on Microsoft TechNet.
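
To make the resultant-policy rules above concrete, here is a minimal sketch in Python of the lowest-Precedence-wins logic. The policy names and values are illustrative only; this is not how AD implements the algorithm internally.

# Illustrative sketch of AD's resultant password policy selection.
# Directly applied policies win over inherited ones; among the candidates,
# the policy with the lowest Precedence value is enforced.

def resultant_policy(direct_policies, inherited_policies):
    candidates = direct_policies or inherited_policies
    if not candidates:
        return None  # No fine-grained policy applies; the default domain GPO is used.
    return min(candidates, key=lambda p: p["precedence"])

# jsmith inherits two policies through group membership (hypothetical values):
hr_policy = {"name": "HR-policy", "precedence": 50}
managers_policy = {"name": "MANAGERS-policy", "precedence": 40}

print(resultant_policy([], [hr_policy, managers_policy])["name"])
# -> MANAGERS-policy, because 40 is the lowest Precedence value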

2. Apply password policies to user groups

In this section, I demonstrate how to apply Example Corp.’s password policies. Except in rare cases, I only apply policies by group membership, which ensures that AD does not enforce a lower priority policy on an individual user if I have added them to a group with a higher priority policy.

Because my directory is new, I use a Remote Desktop Protocol (RDP) connection to sign in to the Windows Server instance I domain joined to my AWS Microsoft AD directory. Signing in with the admin account, I launch ADAC to perform the following tasks:

  1. First, I set up my groups so that I can apply password policies to them. Later, I can create user accounts and add them to my groups and AD applies the right policy by using the policy precedence and resultant policy algorithms I discussed previously. I start by adding the two special administrative accounts (admin and midas) that I described previously to the Fine-Grained Pwd Policy Admins. Because AWS Microsoft AD adds my default admin account to Fine-Grained Pwd Policy Admins, I only need to create midas and then add midas to the Fine-Grained Pwd Policy Admins group.
  2. Next, I create the Other Administrators, Senior Managers, and General Employees groups that I described previously, as shown in the following screenshot.
    Screenshot of the groups created

For this post’s example, I use these four policies:

  1. EXAMPLE-PSO-01 (highest priority policy) – For the administrators who manage Example Corp.’s password policies. Applying this highest priority policy to the Fine-Grained Pwd Policy Admins group prevents these users from being locked out if they also are assigned to a different policy.
  2. EXAMPLE-PSO-02 (the second highest priority policy) – For Example Corp.’s other administrators.
  3. EXAMPLE-PSO-03 (the third highest priority policy) – For Example Corp.’s senior managers.
  4. EXAMPLE-PSO-05 (the lowest priority policy) – For Example Corp.’s general employees.

This leaves me one password policy (EXAMPLE-PSO-04) that I can use in the future if needed.

I start by editing the policy, EXAMPLE-PSO-01. To edit the policy, I follow the Edit password policies section from earlier in this post. When finished, I add the Fine-Grained Pwd Policy Admins group to that policy, as shown in the following screenshot. I then repeat the process for each of the remaining policies, as described in the Scenario overview section earlier in this post.

Screenshot of adding the Fine-Grained Pwd Policy Admins group to the EXAMPLE-PSO-01 policy

Although AD enforces new password policies, the timing of policy replication in the directory, the types of attributes that are changed, and the timing of user password changes can all affect how quickly a policy takes effect. In general, after the policies are replicated throughout the directory, attributes that affect account lockout and password age take effect. Attributes that affect the quality of a password, such as password length, take effect when the password is changed. If the password age for a user is in compliance, but their password strength is out of compliance, the user is not forced to change their password. For more information about password policy impact, see this Microsoft TechNet article.

Summary

In this post, I have demonstrated how you can configure strong password policies to meet your security standards by using AWS Microsoft AD. To learn more about AWS Microsoft AD, see the AWS Directory Service home page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

– Ravi

Under the Hood of Server-Side Encryption for Amazon Kinesis Streams

Post Syndicated from Damian Wylie original https://aws.amazon.com/blogs/big-data/under-the-hood-of-server-side-encryption-for-amazon-kinesis-streams/

Customers are using Amazon Kinesis Streams to ingest, process, and deliver data in real time from millions of devices or applications. Use cases for Kinesis Streams vary, but a few common ones include IoT data ingestion and analytics, log processing, clickstream analytics, and enterprise data bus architectures.

Within milliseconds of data arrival, applications (KCL, Apache Spark, AWS Lambda, Amazon Kinesis Analytics) attached to a stream are continuously mining value or delivering data to downstream destinations. Customers are then scaling their streams elastically to match demand. They pay incrementally for the resources that they need, while taking advantage of a fully managed, serverless streaming data service that allows them to focus on adding value closer to their customers.

These benefits are great; however, AWS learned that many customers could not take advantage of Kinesis Streams unless their data-at-rest within a stream was encrypted. Many customers did not want to manage encryption on their own, so they asked for a fully managed, automatic, server-side encryption mechanism leveraging centralized AWS Key Management Service (AWS KMS) customer master keys (CMK).

Motivated by this feedback, AWS added another fully managed, low cost aspect to Kinesis Streams by delivering server-side encryption via KMS managed encryption keys (SSE-KMS) in the following regions:

  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)

In this post, I cover the mechanics of the Kinesis Streams server-side encryption feature. I also share a few best practices and considerations so that you can get started quickly.

Understanding the mechanics

The following section walks you through how Kinesis Streams uses CMKs to encrypt a message in the PutRecord or PutRecords path before it is propagated to the Kinesis Streams storage layer, and then decrypt it in the GetRecords path after it has been retrieved from the storage layer.

When server-side encryption is enabled—which takes just a few clicks in the console—the partition key and payload for every incoming record are encrypted automatically as they flow into Kinesis Streams, using the selected CMK. When data is at rest within a stream, it’s encrypted.

When records are retrieved through a GetRecords request from the encrypted stream, they are decrypted automatically as they are flowing out of the service. That means your Kinesis Streams producers and consumers do not need to be aware of encryption. You have a fully managed data encryption feature at your fingertips, which can be enabled within seconds.

AWS also makes it easy to audit the application of server-side encryption. You can use the AWS Management Console for instant stream-level verification; the responses from PutRecord, PutRecords, and GetRecords; or AWS CloudTrail.

Calling PutRecord or PutRecords

When server-side encryption is enabled for a particular stream, Kinesis Streams and KMS perform the following actions when your applications call PutRecord or PutRecords. The Amazon Kinesis Producer Library (KPL) uses PutRecords.

 

  1. Data is sent from a customer’s producer (client) to a Kinesis stream using TLS via HTTPS. Data in transit to a stream is encrypted by default.
  2. After data is received, it is momentarily stored in RAM within a front-end proxy layer.
  3. Kinesis Streams authenticates the producer, then impersonates the producer to request input keying material from KMS.
  4. KMS creates key material, encrypts it by using CMK, and sends both the plaintext and encrypted key material to the service, encrypted with TLS.
  5. The client uses the plaintext key material to derive data encryption keys (data keys) that are unique per-record.
  6. The client encrypts the payload and partition key using the data key in RAM within the front-end proxy layer and removes the plaintext data key from memory.
  7. The client appends the encrypted key material to the encrypted data.
  8. The plaintext key material is securely cached in memory within the front-end layer for reuse, until it expires after 5 minutes.
  9. The client delivers the encrypted message to a back-end store where it is stored at rest and fetchable by an authorized consumer through a GetRecords call. The Amazon Kinesis Client Library (KCL) calls GetRecords to retrieve records from a stream.
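
As a concrete illustration of the producer side, here is a minimal sketch using boto3, the AWS SDK for Python. The stream name and region are placeholders; the point is that server-side encryption requires no extra parameters from the producer.

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# No encryption-related parameters are needed; the service encrypts the
# partition key and payload at rest using the stream's CMK.
response = kinesis.put_record(
    StreamName="my-encrypted-stream",   # placeholder stream with SSE enabled
    Data=b"hello world",
    PartitionKey="test",
)

# When SSE is enabled, the response includes EncryptionType "KMS".
print(response.get("EncryptionType"), response["SequenceNumber"])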

Calling GetRecords

Kinesis Streams and KMS perform the following actions when your applications call GetRecords on a server-side encrypted stream.

 

  1. When a GetRecords call is made, the front-end proxy layer retrieves the encrypted record from its back-end store.
  2. The consumer (client) makes a request to KMS using a token generated by the customer’s request. KMS authorizes it.
  3. The client requests that KMS decrypt the encrypted key material.
  4. KMS decrypts the encrypted key material and sends the plaintext key material to the client.
  5. Kinesis Streams derives the per-record data keys from the decrypted key material.
  6. If the calling application is authorized, the client decrypts the payload and removes the plaintext data key from memory.
  7. The client delivers the payload over TLS and HTTPS to the consumer requesting the records. Data in transit to a consumer is encrypted by default.
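
The consumer side is equally transparent. The following boto3 sketch (stream name and shard ID are placeholders) reads records that come back already decrypted:

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

shard_iterator = kinesis.get_shard_iterator(
    StreamName="my-encrypted-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

result = kinesis.get_records(ShardIterator=shard_iterator, Limit=10)
for record in result["Records"]:
    # Data is plaintext here; EncryptionType indicates it was encrypted at rest.
    print(record.get("EncryptionType"), record["Data"])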

Verifying server-side encryption

Auditors or administrators often ask for proof that server-side encryption was or is enabled. Here are a few ways to do this.

To check if encryption is enabled now for your streams:

  • Use the AWS Management Console or the DescribeStream API operation. You can also see what CMK is being used for encryption.
  • See encryption in action by looking at responses from PutRecord, PutRecords, or GetRecords calls. When encryption is enabled, the EncryptionType parameter is set to “KMS”. If encryption is not enabled, EncryptionType is not included in the response.

Sample PutRecord response

{
    "SequenceNumber": "49573959617140871741560010162505906306417380215064887298",
    "ShardId": "shardId-000000000000",
    "EncryptionType": "KMS"
}

Sample GetRecords response

{
    "Records": [
        {
            "Data": "aGVsbG8gd29ybGQ=", 
            "PartitionKey": "test", 
            "ApproximateArrivalTimestamp": 1498292565.825, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "495735762417140871741560010162505906306417380215064887298"
        }, 
        {
            "Data": "ZnJvZG8gbGl2ZXMK", 
            "PartitionKey": "3d0d9301-3c30-4c48-a9a8-e485b2982b28", 
            "ApproximateArrivalTimestamp": 1498292801.747, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "49573959617140871741560010162507115232237011062036103170"
        }
    ], 
    "NextShardIterator": "AAAAAAAAAAEvFypHZDx/4bJVAS34puwdiNcwssKqbh/XhRK7HSYRq3RS+YXJnVKJ8j0gQUt94bONdqQYHk9X9JHgefMUDKzDzndy5WbZWO4CS3hRdMdrbmJ/9KoR4lOfZvqTLt6JWQjDqXv0IaKs06/LHYcEA3oPcyQLOTJHdJl2EzplCTZnn/U295ovxvqF9g9DY8y2nVoMkdFLmdcEMVXjhCDKiRIt", 
    "MillisBehindLatest": 0
}

To check if encryption was enabled, use CloudTrail, which logs the StartStreamEncryption() and StopStreamEncryption() API calls made against a particular stream.
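
The following boto3 sketch ties these verification options together. The stream name is a placeholder, and the CloudTrail lookup assumes CloudTrail is enabled in the region:

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Check whether encryption is enabled right now, and which CMK is in use.
stream = kinesis.describe_stream(StreamName="my-encrypted-stream")["StreamDescription"]
print(stream.get("EncryptionType"), stream.get("KeyId"))

# Check whether encryption was enabled at some point in the past.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "StartStreamEncryption"}
    ]
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"])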

Getting started

It’s very easy to enable, disable, or modify server-side encryption for a particular stream.

  1. In the Kinesis Streams console, select a stream and choose Details.
  2. Select a CMK and select Enabled.
  3. Choose Save.

You can enable encryption only for a live stream, not upon stream creation. Follow the same process to disable encryption for a stream. To use a different CMK, select it and choose Save.

Each of these tasks can also be accomplished using the StartStreamEncryption and StopStreamEncryption API operations.
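
For example, the following boto3 sketch enables and then disables server-side encryption on a placeholder stream, using the AWS managed key for Kinesis (you can pass a custom CMK ARN instead):

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Enable SSE with the AWS managed key for Kinesis.
kinesis.start_stream_encryption(
    StreamName="my-encrypted-stream",
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)

# Disable SSE again.
kinesis.stop_stream_encryption(
    StreamName="my-encrypted-stream",
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)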

Considerations

There are a few considerations you should be aware of when using server-side encryption for Kinesis Streams:

  • Permissions
  • Costs
  • Performance

Permissions

One benefit of using the “(Default) aws/kinesis” AWS managed key is that every producer and consumer with permissions to call PutRecord, PutRecords, or GetRecords inherits the right permissions over the “(Default) aws/kinesis” key automatically.

However, this is not necessarily the case with a custom CMK. Kinesis Streams producers and consumers do not need to be aware of encryption. However, if you enable encryption using a custom master key but a producer or consumer doesn’t have IAM permissions to use it, PutRecord, PutRecords, or GetRecords requests fail.

This is a great security feature. On the other hand, it can effectively lead to data loss if you inadvertently apply a custom master key that restricts producers and consumers from interacting with the Kinesis stream. Take precautions when applying a custom master key. For more information about the minimum IAM permissions required for producers and consumers interacting with an encrypted stream, see Using Server-Side Encryption.
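
As a rough sketch of the kind of IAM policy involved (see Using Server-Side Encryption for the authoritative minimum permissions), producers typically need kms:GenerateDataKey on the CMK and consumers need kms:Decrypt, in addition to their usual Kinesis permissions. The key ARN below is a placeholder:

import json

CMK_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"  # placeholder

producer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Producers generate per-record data keys under the CMK.
        {"Effect": "Allow", "Action": "kms:GenerateDataKey", "Resource": CMK_ARN}
    ],
}

consumer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Consumers decrypt the data keys to read records.
        {"Effect": "Allow", "Action": "kms:Decrypt", "Resource": CMK_ARN}
    ],
}

print(json.dumps(producer_policy, indent=2))
print(json.dumps(consumer_policy, indent=2))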

Costs

When you apply server-side encryption, you are subject to KMS API usage and key costs. Unlike custom KMS master keys, the “(Default) aws/kinesis” CMK is offered free of charge. However, you still need to pay for the API usage costs that Kinesis Streams incurs on your behalf.

API usage costs apply for every CMK, including custom ones. Kinesis Streams calls KMS approximately every 5 minutes when it is rotating the data key. In a 30-day month, the total cost of KMS API calls initiated by a Kinesis stream should be less than a few dollars.
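
As a rough sketch of that estimate: one rotation call every 5 minutes works out to roughly 12 × 24 × 30 = 8,640 KMS requests in a 30-day month, which, assuming KMS API pricing of $0.03 per 10,000 requests, comes to only a few cents for the rotation calls themselves.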

Performance

During testing, AWS discovered that there was a slight increase (typically 0.2 millisecond or less per record) in put and get record latencies due to the additional overhead of encryption.

If you have questions or suggestions, please comment below.

Amazon Aurora Update – More Cross Region & Cross Account Support, T2.Small DB Instances, Another Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-aurora-update-more-cross-region-cross-account-support-t2-small-db-instances-another-region/

I’m in catch-up mode again, and would like to tell you about some recent improvements that we have made to Amazon Aurora. As a reminder, Aurora is our high-performance MySQL-compatible (and soon PostgreSQL-compatible) enterprise-class database (read Now Available – Amazon Aurora and Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS for an introduction).

Here are the newest additions to Aurora:

  • Cross Region Snapshot Copy
  • Cross Region Replication for Encrypted Databases
  • Cross Account Encrypted Snapshot Sharing
  • Availability in the US West (Northern California) Region
  • T2.Small Instance Support

Let’s take a quick look at each one!

Cross Region Snapshot Copy
You can now copy Amazon Aurora snapshots (either automatic or manual) from one region to another. Select the snapshot, choose Copy Snapshot from the Snapshot Actions menu, pick a region, and enter a name for the new snapshot:

You can also choose to encrypt the snapshot as part of this operation. To learn more, read Copying a DB Snapshot or DB Cluster Snapshot.
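
A boto3 sketch of the same operation is shown below. Because Aurora snapshots are DB cluster snapshots, it uses the RDS copy_db_cluster_snapshot operation; the identifiers, regions, and KMS key are placeholders, and KmsKeyId is only needed when the copy is encrypted:

import boto3

# Create the client in the destination region; SourceRegion lets the SDK
# generate the pre-signed URL required for a cross-region copy.
rds = boto3.client("rds", region_name="us-west-2")

rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster-snapshot:my-aurora-snapshot"
    ),
    TargetDBClusterSnapshotIdentifier="my-aurora-snapshot-copy",
    KmsKeyId="alias/my-destination-key",  # placeholder key in the destination region
    SourceRegion="us-east-1",
)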

Cross Region Replication for Encrypted Databases
You can already enable encryption when you create a fresh Amazon Aurora DB Instance:

You can now create a read replica in another region with a couple of clicks. You can use this to build multi-region, highly available systems or to move the data closer to the user. To create a cross region read replica, simply select the existing DB Instance and choose Create Cross Region Read Replica from the menu:

Then you choose the destination region in the Network & Security settings, and click on Create:

The destination region must include a DB Subnet Group that encompasses 2 or more Availability Zones.

To learn more about this powerful new feature, read Replicating Amazon Aurora DB Clusters Across AWS Regions.

Cross Account Encrypted Snapshot Sharing
You already have the ability to configure periodic, automated snapshots when you create your Amazon Aurora DB Instance. You can also create snapshots at any desired time with a couple of clicks:

If the DB Instance is encrypted, the snapshot will be as well.

You can now share encrypted snapshots with other AWS accounts. In order to use this feature, the DB Instance (and therefore the snapshot) must be encrypted with a Master Key other than the default RDS key. Select the snapshot and choose Share Snapshot from the Snapshot Actions menu:

Then enter the target AWS Account ID(s), clicking Add after each one, and click on Save to share the snapshot:

You will also need to share the key that was used to encrypt the snapshot. To learn more about this feature, read Sharing a DB Snapshot or DB Cluster Snapshot.
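
For reference, a minimal boto3 sketch of sharing a manual cluster snapshot looks like the following; the snapshot name and account ID are placeholders, and sharing the KMS key itself is a separate step:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot",
    AttributeName="restore",        # the "restore" attribute controls sharing
    ValuesToAdd=["123456789012"],   # target AWS account ID (placeholder)
)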

Availability in the US West (Northern California) Region
You can now launch Amazon Aurora DB Instances in the US West (Northern California) Region. Here’s the full list of regions where Aurora is available:

  • US East (Northern Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (Northern California)
  • Canada (Central)
  • EU (Ireland)
  • EU (London)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • Asia Pacific (Mumbai)

See the Amazon Aurora Pricing page for pricing info in each region.

T2.Small Instance Support
You can now launch t2.small DB Instances:

These economical instances are a great fit for dev & test environments and for light production workloads. You can also use them to gain some experience with Amazon Aurora. These instances (along with six others, including the t2.medium that we launched last November) are available in all AWS regions where Aurora is available.

On-Demand pricing for t2.small DB Instances starts at $0.041 per hour in the US East (Northern Virginia) Region, dropping to $0.018 per hour for an All Upfront Reserved Instance with a 3 year term (see the Amazon Aurora Pricing page for more info).

Jeff;

 

crackle – Crack Bluetooth Smart Encryption (BLE)

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/zIaG6r8gh6g/

crackle is a tool to crack Bluetooth Smart Encryption (BLE), it exploits a flaw in the pairing mechanism that leaves all communications vulnerable to decryption by passive eavesdroppers. crackle can guess or very quickly brute force the TK (temporary key) used in the pairing modes supported by most devices (Just Works and 6-digit PIN). With…

Read the full post at darknet.org.uk

Announcing SECCURE

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/seccure.html

Yesterday my brother released his second Free Software package, the SECCURE Elliptic Curve Crypto Utility for Reliable Encryption. (Recursive acronyms, yay!)

The seccure toolset implements a selection of asymmetric algorithms based on elliptic curve cryptography (ECC). In particular, it offers public key encryption / decryption and signature generation / verification. ECC schemes offer a much better key size to security ratio than classical systems (RSA, DSA). Keys are short enough to make direct specification of keys on the command line possible (sometimes this is more convenient than the management of PGP-like key rings). seccure builds on this feature and therefore is the tool of choice whenever lightweight asymmetric cryptography — independent of key servers, revocation certificates, the Web of Trust, or even configuration files — is required.

Anyone willing to work on the Debian RFP?

(His first Free Software package is ssss, an implementation of Shamir’s secret sharing scheme.)