Tag Archives: Certificate Authority

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices, and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – A technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal, and Magento with Amazon EFS.

Sci-Hub ‘Pirate Bay For Science’ Security Certs Revoked by Comodo

Post Syndicated from Andy original https://torrentfreak.com/sci-hub-pirate-bay-for-science-security-certs-revoked-by-comodo-ca-180503/

Sci-Hub is often referred to as the “Pirate Bay of Science”. Like its namesake, it offers masses of unlicensed content for free, mostly against the wishes of copyright holders.

While The Pirate Bay will index almost anything, Sci-Hub is dedicated to distributing tens of millions of academic papers and articles, something which has made it a target for publishing giants like Elsevier.

Sci-Hub and its Kazakhstan-born founder Alexandra Elbakyan have been under sustained attack for several years but more recently have been fending off an unprecedented barrage of legal action initiated by the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry.

After winning a default judgment for $4.8 million in copyright infringement damages last year, ACS was further granted a broad injunction.

It required various third-party services (including domain registries, hosting companies and search engines) to stop facilitating access to the site. This plunged Sci-Hub into a game of domain whac-a-mole, one that continues to this day.

Determined to head Sci-Hub off at the pass, ACS obtained additional authority to tackle the evasive site and any new domains it may register in the future.

While Sci-Hub has been hopping around domains for a while, this week a new development appeared on the horizon. Visitors to some of the site’s domains were greeted with errors indicating that the domains’ security certificates had been revoked.

Tests conducted by TorrentFreak revealed clear revocations on Sci-Hub.hk and Sci-Hub.nz, both of which returned the error ‘NET::ERR_CERT_REVOKED’.

Certificate revoked

These certificates were first issued and then revoked by Comodo CA, the world’s largest certification authority. TF contacted the company, which confirmed that it had been forced to take action against Sci-Hub.

“In response to a court order against Sci-Hub, Comodo CA has revoked four certificates for the site,” Jonathan Skinner, Director, Global Channel Programs at Comodo CA informed TorrentFreak.

“By policy Comodo CA obeys court orders and the law to the full extent of its ability.”

Comodo refused to confirm any additional details, including whether these revocations were anything to do with the current ACS injunction. However, Susan R. Morrissey, Director of Communications at ACS, told TorrentFreak that the revocations were indeed part of ACS’ legal action against Sci-Hub.

“[T]he action is related to our continuing efforts to protect ACS’ intellectual property,” Morrissey confirmed.

Sci-Hub operates multiple domains (an up-to-date list is usually available on Wikipedia) that can be switched at any time. At the time of writing, the domain sci-hub.ga returns ‘ERR_SSL_VERSION_OR_CIPHER_MISMATCH’ while the .CN and .GS variants both have Comodo certificates that expired last year.

When TF first approached Comodo earlier this week, Sci-Hub’s certificates with the company hadn’t been completely wiped out. For example, the domain https://sci-hub.tw operated perfectly, with an active and non-revoked Comodo certificate.

Still in the game…but not for long

By Wednesday, however, the domain was returning the now-familiar “revoked” message.

These domain issues are the latest technical problems to hit Sci-Hub as a result of the ACS injunction. In February, Cloudflare terminated service to several of the site’s domains.

“Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours,” Cloudflare told Sci-Hub.

While ACS has certainly caused problems for Sci-Hub, the platform is extremely resilient and remains online.

The domains https://sci-hub.is and https://sci-hub.nu are fully operational with certificates issued by Let’s Encrypt, a free and open certificate authority supported by the likes of Mozilla, EFF, Chrome, Private Internet Access, and other prominent tech companies.

It’s unclear whether these certificates will be targeted in the future but Sci-Hub doesn’t appear to be in the mood to back down.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Securing messages published to Amazon SNS with AWS PrivateLink

Post Syndicated from Otavio Ferreira original https://aws.amazon.com/blogs/security/securing-messages-published-to-amazon-sns-with-aws-privatelink/

Amazon Simple Notification Service (SNS) now supports VPC Endpoints (VPCE) via AWS PrivateLink. You can use VPC Endpoints to privately publish messages to SNS topics, from an Amazon Virtual Private Cloud (VPC), without traversing the public internet. When you use AWS PrivateLink, you don’t need to set up an Internet Gateway (IGW), Network Address Translation (NAT) device, or Virtual Private Network (VPN) connection. You don’t need to use public IP addresses, either.

Using VPC Endpoints requires no code changes and can bring additional security to Pub/Sub Messaging use cases that rely on SNS. VPC Endpoints help promote data privacy and align with assurance programs, including the Health Insurance Portability and Accountability Act (HIPAA), FedRAMP, and others discussed below.

VPC Endpoints for SNS in action

Here’s how VPC Endpoints for SNS works. The following example is based on a banking system that processes mortgage applications. This banking system, which has been deployed to a VPC, publishes each mortgage application to an SNS topic. The SNS topic then fans out the mortgage application message to two subscribing AWS Lambda functions:

  • Save-Mortgage-Application stores the application in an Amazon DynamoDB table. As the mortgage application contains personally identifiable information (PII), the message must not traverse the public internet.
  • Save-Credit-Report checks the applicant’s credit history against an external Credit Reporting Agency (CRA), then stores the final credit report in an Amazon S3 bucket.

The following diagram depicts the underlying architecture for this banking system:
 
Diagram depicting the architecture for the example banking system
 
To protect applicants’ data, the financial institution responsible for developing this banking system needed a mechanism to prevent PII data from traversing the internet when publishing mortgage applications from their VPC to the SNS topic. Therefore, they created a VPC endpoint to enable their publisher Amazon EC2 instance to privately connect to the SNS API. As shown in the diagram, when the VPC endpoint is created, an Elastic Network Interface (ENI) is automatically placed in the same VPC subnet as the publisher EC2 instance. This ENI exposes a private IP address that is used as the entry point for traffic destined to SNS. This ensures that traffic between the VPC and SNS doesn’t leave the Amazon network.
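Because the SNS API is reached through its usual DNS name (assuming private DNS is enabled for the endpoint), the publisher’s code doesn’t change at all. As a rough sketch, publishing from the EC2 instance could look like the following, where the topic ARN and message file are placeholders:

# Publish a mortgage application to the SNS topic. With the VPC endpoint in
# place, this call resolves to the endpoint's private IP instead of the public
# SNS endpoint; no code or configuration changes are required.
aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:mortgage-applications \
    --message file://mortgage-application.json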

Set up VPC Endpoints for SNS

The process for creating a VPC endpoint to privately connect to SNS doesn’t require code changes: access the VPC Management Console, navigate to the Endpoints section, and create a new endpoint. Three attributes are required (for a CLI equivalent, see the sketch after the list):

  • The SNS service name.
  • The VPC and Availability Zones (AZs) from which you’ll publish your messages.
  • The Security Group (SG) to be associated with the endpoint network interface. The Security Group controls the traffic to the endpoint network interface from resources in your VPC. If you don’t specify a Security Group, the default Security Group for your VPC will be associated.
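If you’d rather script the setup, the same endpoint can be created with the AWS CLI. The following is a minimal sketch; the VPC, subnet, and security group IDs, as well as the region in the service name, are placeholders:

# Create an interface VPC endpoint for SNS (AWS PrivateLink).
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc1234 \
    --service-name com.amazonaws.us-east-1.sns \
    --subnet-ids subnet-0abc1234 \
    --security-group-ids sg-0abc1234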

Help ensure your security and compliance

SNS can support messaging use cases in regulated market segments, such as healthcare provider systems subject to the Health Insurance Portability and Accountability Act (HIPAA) and financial systems subject to the Payment Card Industry Data Security Standard (PCI DSS), and is also in scope for several AWS assurance programs.

The SNS API is served through HTTP Secure (HTTPS), and encrypts all messages in transit with Transport Layer Security (TLS) certificates issued by Amazon Trust Services (ATS). The certificates verify the identity of the SNS API server when encrypted connections are established. The certificates help establish proof that your SNS API client (SDK, CLI) is communicating securely with the SNS API server. A Certificate Authority (CA) issues the certificate to a specific domain. Hence, when a domain presents a certificate that’s issued by a trusted CA, the SNS API client knows it’s safe to make the connection.
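If you want to see this chain of trust for yourself, you can inspect the certificate that the SNS API presents. Here’s a quick sketch using openssl; the region in the endpoint name is only an example:

# Show the issuer and validity window of the certificate served by the SNS API.
openssl s_client -connect sns.us-east-1.amazonaws.com:443 \
    -servername sns.us-east-1.amazonaws.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -dates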

Summary

VPC Endpoints can increase the security of your pub/sub messaging use cases by allowing you to publish messages to SNS topics, from instances in your VPC, without traversing the internet. Setting up VPC Endpoints for SNS doesn’t require any code changes because the SNS API address remains the same.

VPC Endpoints for SNS is now available in all AWS Regions where AWS PrivateLink is available. For information on pricing and regional availability, visit the VPC pricing page.
For more information and on-boarding, see Publishing to Amazon SNS Topics from Amazon Virtual Private Cloud in the SNS documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Amazon SNS forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls while administrators have a central CA management console and fine-grained access control through IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA, so I’ll select that, use my super secure desktop as the root CA, and click Next. This isn’t what I would do in a production setting, but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of serving it from a custom domain. In this case I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later, I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet, though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.
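If you prefer scripting to the console wizard, the provisioning and CSR steps map onto two ACM PCA CLI calls. This is only a sketch; the subject, CRL bucket name, CRL validity, and CA ARN are placeholders to adjust for your environment:

# Provision a subordinate private CA with an RSA 2048 key and a CRL in S3.
aws acm-pca create-certificate-authority \
    --certificate-authority-type SUBORDINATE \
    --certificate-authority-configuration '{"KeyAlgorithm": "RSA_2048", "SigningAlgorithm": "SHA256WITHRSA", "Subject": {"CommonName": "secure.internal"}}' \
    --revocation-configuration '{"CrlConfiguration": {"Enabled": true, "ExpirationInDays": 7, "S3BucketName": "my-crl-bucket"}}'

# Retrieve the CSR to sign with your root CA.
aws acm-pca get-certificate-authority-csr \
    --certificate-authority-arn <ca-arn> --output text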

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, we can provision private certificates by hopping back to the ACM console and creating a new certificate. There, I’ll select the Request a private certificate radio button, and then click Request a certificate.

From there, the process is similar to provisioning a normal certificate in ACM.

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

E-Mailing Private HTTPS Keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/e-mailing_priva.html

I don’t know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec’s certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren’t followed.

I am croggled by the multiple layers of insecurity here.

BoingBoing post.

How to Delegate Administration of Your AWS Managed Microsoft AD Directory to Your On-Premises Active Directory Users

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-delegate-administration-of-your-aws-managed-microsoft-ad-directory-to-your-on-premises-active-directory-users/

You can now enable your on-premises users to administer your AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Using an Active Directory (AD) trust and the new AWS delegated AD security groups, you can grant administrative permissions to your on-premises users by managing group membership in your on-premises AD directory. This simplifies how you manage who can perform administration. It also makes it easier for your administrators because they can sign in to their existing workstation with their on-premises AD credentials to administer your AWS Managed Microsoft AD.

AWS created new domain local AD security groups (AWS delegated groups) in your AWS Managed Microsoft AD directory. Each AWS delegated group has unique AD administrative permissions. Users that are members of the new AWS delegated groups get permissions to perform administrative tasks, such as adding users, configuring fine-grained password policies, and enabling a Microsoft enterprise Certificate Authority. Because the AWS delegated groups are domain local in scope, you can use them through an AD trust to your on-premises AD. This eliminates the requirement to create and use separate identities to administer your AWS Managed Microsoft AD. Instead, by adding selected on-premises users to desired AWS delegated groups, you can grant your administrators some or all of the permissions. You can simplify this even further by adding on-premises AD security groups to the AWS delegated groups. This enables you to add and remove users from your on-premises AD security group so that they can manage administrative permissions in your AWS Managed Microsoft AD.

In this blog post, I will show you how to delegate permissions to your on-premises users to perform an administrative task–configuring fine-grained password policies–in your AWS Managed Microsoft AD directory. You can follow the steps in this post to delegate other administrative permissions, such as configuring group Managed Service Accounts and Kerberos constrained delegation, to your on-premises users.

Background

Until now, AWS Managed Microsoft AD delegated administrative permissions for your directory by creating AD security groups in your Organizational Unit (OU) and authorizing these AWS delegated groups for common administrative activities. The admin user in your directory created user accounts within your OU, and granted these users permissions to administer your directory by adding them to one or more of these AWS delegated groups.

However, if you used your AWS Managed Microsoft AD with a trust to an on-premises AD forest, you couldn’t add users from your on-premises directory to these AWS delegated groups. This is because AWS created the AWS delegated groups with global scope, which restricts adding users from another forest. This necessitated that you create different user accounts in AWS Managed Microsoft AD for the purpose of administration. As a result, AD administrators typically had to remember additional credentials for AWS Managed Microsoft AD.

To address this, AWS created new AWS delegated groups with domain local scope in a separate OU called AWS Delegated Groups. These new AWS delegated groups with domain local scope are more flexible and permit adding users and groups from other domains and forests. This allows your admin user to delegate your on-premises users and groups administrative permissions to your AWS Managed Microsoft AD directory.

Note: If you already have an existing AWS Managed Microsoft AD directory containing the original AWS delegated groups with global scope, AWS preserved the original AWS delegated groups in the event you are currently using them with identities in AWS Managed Microsoft AD. AWS recommends that you transition to use the new AWS delegated groups with domain local scope. All newly created AWS Managed Microsoft AD directories have the new AWS delegated groups with domain local scope only.

Now, I will show you the steps to delegate administrative permissions to your on-premises users and groups to configure fine-grained password policies in your AWS Managed Microsoft AD directory.

Prerequisites

For this post, I assume you are familiar with AD security groups and how security group scope rules work. I also assume you are familiar with AD trusts.

The instructions in this blog post require you to have the following components running:

Solution overview

I will now show you how to manage which on-premises users have delegated permissions to administer your directory by efficiently using on-premises AD security groups to manage these permissions. I will do this by:

  1. Adding on-premises groups to an AWS delegated group. In this step, you sign in to a management instance connected to your AWS Managed Microsoft AD directory as the admin user and add on-premises groups to AWS delegated groups.
  2. Administering your AWS Managed Microsoft AD directory as an on-premises user. In this step, you sign in to a workstation connected to your on-premises AD using your on-premises credentials and administer your AWS Managed Microsoft AD directory.

For the purpose of this blog, I already have an on-premises AD directory (in this case, on-premises.com). I also created an AWS Managed Microsoft AD directory (in this case, corp.example.com) that I use with Amazon RDS for SQL Server. To enable Integrated Windows Authentication to my on-premises.com domain, I established a one-way outgoing trust from my AWS Managed Microsoft AD directory to my on-premises AD directory. To administer my AWS Managed Microsoft AD, I created an Amazon EC2 for Windows Server instance (in this case, Cloud Management). I also have an on-premises workstation (in this case, On-premises Management), that is connected to my on-premises AD directory.

The following diagram represents the relationships between the on-premises AD and the AWS Managed Microsoft AD directory.

The left side represents the AWS Cloud containing AWS Managed Microsoft AD directory. I connected the directory to the on-premises AD directory via a 1-way forest trust relationship. When AWS created my AWS Managed Microsoft AD directory, AWS created a group called AWS Delegated Fine Grained Password Policy Administrators that has permissions to configure fine-grained password policies in AWS Managed Microsoft AD.

The right side of the diagram represents the on-premises AD directory. I created a global AD security group called On-premises fine grained password policy admins and I configured it so all members can manage fine grained password policies in my on-premises AD. I have two administrators in my company, John and Richard, who I added as members of On-premises fine grained password policy admins. I want to enable John and Richard to also manage fine grained password policies in my AWS Managed Microsoft AD.

While I could add John and Richard to the AWS Delegated Fine Grained Password Policy Administrators group individually, I want a more efficient way to delegate and remove permissions for on-premises users to manage fine grained password policies in my AWS Managed Microsoft AD. In fact, I want to assign permissions to the same people that manage password policies in my on-premises directory.

Diagram showing delegation of administrative permissions to on-premises users

To do this, I will:

  1. As the admin user, add the On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators security group from my Cloud Management machine.
  2. Manage who can administer password policies in my AWS Managed Microsoft AD directory by adding and removing users as members of the On-premises fine grained password policy admins group. Doing so enables me to perform all my delegation work in my on-premises directory without the need to use a remote desktop protocol (RDP) session to my Cloud Management instance. In this case, Richard, who is a member of the On-premises fine grained password policy admins group, can now administer the AWS Managed Microsoft AD directory from the On-premises Management workstation.

Although I’m showing a specific case using fine grained password policy delegation, you can do this with any of the new AWS delegated groups and your on-premises groups and users.

Let’s get started.

Step 1 – Add on-premises groups to AWS delegated groups

In this step, open an RDP session to the Cloud Management instance and sign in as the admin user in your AWS Managed Microsoft AD directory. Then, add your users and groups from your on-premises AD to AWS delegated groups in your AWS Managed Microsoft AD directory. In this example, I do the following (a PowerShell equivalent follows these steps):

  1. Sign in to the Cloud Management instance with the user name admin and the password that you set for the admin user when you created your directory.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view and navigate to corp.example.com > AWS Delegated Groups. Right-click AWS Delegated Fine Grained Password Policy Administrators and select Properties.
  4. In the AWS Delegated Fine Grained Password Policy window, switch to Members tab and choose Add.
  5. In the Select Users, Contacts, Computers, Service Accounts, or Groups window, choose Locations.
  6. In the Locations window, select on-premises.com domain and choose OK.
  7. In the Enter the object names to select box, enter on-premises fine grained password policy admins and choose Check Names.
  8. Because I have a 1-way trust from AWS Managed Microsoft AD to my on-premises AD, Windows prompts me to enter credentials for an on-premises user account that has permissions to complete the search. If I had a 2-way trust and the admin account in my AWS Managed Microsoft AD has permissions to read my on-premises directory, Windows will not prompt me. In the Windows Security window, enter the credentials for an account with permissions for on-premises.com and choose OK.
  9. Click OK to add the On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators group in your AWS Managed Microsoft AD directory.
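If you’d rather script this membership change than click through the console, a PowerShell sketch along the following lines should work from the Cloud Management instance. It assumes the Active Directory module for Windows PowerShell is installed, and the group and domain names match this example; adjust them for your environment:

# Look up the on-premises group across the trust, then add it to the
# AWS delegated group in the AWS Managed Microsoft AD domain.
$onPremGroup = Get-ADGroup -Identity "On-premises fine grained password policy admins" -Server "on-premises.com" -Credential (Get-Credential)
Add-ADGroupMember -Server "corp.example.com" -Identity "AWS Delegated Fine Grained Password Policy Administrators" -Members $onPremGroup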

At this point, any user that is a member of the On-premises fine grained password policy admins group has permissions to manage password policies in your AWS Managed Microsoft AD directory.

Step 2 – Administer your AWS Managed Microsoft AD as on-premises user

Any member of the on-premises group(s) that you added to an AWS delegated group inherits the permissions of the AWS delegated group.

In this example, Richard signs in to the On-premises Management instance. Because Richard inherited permissions from the AWS Delegated Fine Grained Password Policy Administrators group, he can now administer fine grained password policies in the AWS Managed Microsoft AD directory using on-premises credentials.

  1. Sign in to the On-premises Management instance as Richard.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view, right-click Active Directory Users and Computers, and then select Change Domain.
  4. In the Change Domain window, enter corp.example.com, and then choose OK.
  5. You’ll be connected to your AWS Managed Microsoft AD domain:

Richard can now administer the password policies. Because John is also a member of the AWS delegated group, John can also perform password policy administration the same way.

In the future, if Richard moves to another division within the company and you hire Judy as a replacement for Richard, you can simply remove Richard from the On-premises fine grained password policy admins group and add Judy to this group. Richard will no longer have administrative permissions, while Judy can now administer password policies for your AWS Managed Microsoft AD directory.

Summary

We’ve tried to make it easier for you to administer your AWS Managed Microsoft AD directory by creating AWS delegated groups with domain local scope. You can add your on-premises AD groups to the AWS delegated groups. You can then control who can administer your directory by managing group membership in your on-premises AD directory. Your administrators can sign in to their existing on-premises workstations using their on-premises credentials and administer your AWS Managed Microsoft AD directory. I encourage you to explore the new AWS delegated security groups by using Active Directory Users and Computers from the management instance for your AWS Managed Microsoft AD. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum. If you have comments about this post, submit them in the “Comments” section below.

About the Amazon Trust Services Migration

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/669-2/

Amazon Web Services is moving the certificates for our services—including Amazon SES—to use our own certificate authority, Amazon Trust Services. We have carefully planned this change to minimize the impact it will have on your workflow. Most customers will not have to take any action during this migration.

About the Certificates

The Amazon Trust Services Certificate Authority (CA) chains up to the Starfield Services root CA, which has been valid since 2005. The Amazon Trust Services certificates are available in most major operating systems released in the past 10 years, and are also trusted by all modern web browsers.

If you send email through the Amazon SES SMTP interface using a mail server that you operate, we recommend that you confirm that the appropriate certificates are installed. You can test whether your server trusts the Amazon Trust Services CAs by visiting the test URLs published in the Amazon Trust Services repository (for example, by using cURL).

If you see a message stating that the certificate issuer is not recognized, then you should install the appropriate root certificate. You can download individual certificates from https://www.amazontrust.com/repository. The process of adding a trusted certificate to your server varies depending on the operating system you use. For more information, see “Adding New Root Certificates,” below.
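As a sketch, a quick trust check with cURL might look like the following, with <test-url> standing in for one of the test URLs in the repository:

# A successful response confirms the root is in your server's trust store.
curl -I https://<test-url>/
# If the root is missing, curl fails with an error such as:
# curl: (60) SSL certificate problem: unable to get local issuer certificate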

AWS SDKs and CLI

Recent versions of the AWS SDKs and the AWS CLI are not impacted by this change. If you use an AWS SDK or a version of the AWS CLI released prior to February 5, 2015, you should upgrade to the latest version.

Potential Issues

If your system is configured to use a very restricted list of root CAs (for example, if you use certificate pinning), you may be impacted by this migration. In this situation, you must update your pinned certificates to include the Amazon Trust Services CAs.

Adding New Root Certificates

The following sections list the steps you can take to install the Amazon Root CA certificates on your systems if they are not already present.

macOS

To install a new certificate on a macOS server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. At the command prompt, type the following command to install the certificate: sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /path/to/certificatename.crt, replacing /path/to/certificatename.crt with the full path to the certificate file.

Windows Server

To install a new certificate on a Windows server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. At the command prompt, type the following command to install the certificate: certutil -addstore -f "ROOT" c:\path\to\certificatename.crt, replacing c:\path\to\certificatename.crt with the full path to the certificate file.

Ubuntu

To install a new certificate on an Ubuntu (or similar) server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. Copy the certificate file to the directory /usr/local/share/ca-certificates/
  4. At the command prompt, type the following command to update the certificate authority store: sudo update-ca-certificates

Red Hat Enterprise Linux/Fedora/CentOS

To install a new certificate on a Red Hat Enterprise Linux (or similar) server

  1. Download the .pem file for the certificate you want to install from https://www.amazontrust.com/repository.
  2. Change the file extension for the file you downloaded from .pem to .crt.
  3. Copy the certificate file to the directory /etc/pki/ca-trust/source/anchors/
  4. At the command line, type the following command to enable dynamic certificate authority configuration: sudo update-ca-trust force-enable
  5. At the command line, type the following command to update the certificate authority store: sudo update-ca-trust extract

To learn more about this migration, see How to Prepare for AWS’s Move to Its Own Certificate Authority on the AWS Security Blog.

SNIFFlab – Create Your Own MITM Test Environment

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/11/snifflab-create-mitm-test-environment/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

SNIFFlab is a set of scripts in Python that enable you to create your own MITM test environment for packet sniffing through a WiFi access point.

Essentially it’s a WiFi hotspot that is continually collecting all the packets transmitted across it. All connected clients’ HTTPS communications are subjected to a “Man-in-the-middle” attack, whereby they can later be decrypted for analysis.

What is SNIFFLab MITM Test Environment

In our environment, dubbed Snifflab, a researcher simply connects to the Snifflab WiFi network, is prompted to install a custom certificate authority on the device, and then can use their device as needed for the test.

Read the rest of SNIFFlab – Create Your Own MITM Test Environment now! Only available at Darknet.

How to Prepare for AWS’s Move to Its Own Certificate Authority

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/

Update from March 28, 2018: We updated the Amazon Trust Services table by replacing an out-of-date value with a new value.


Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.

An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.

In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root found in most browsers and which has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.

AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.

How to tell if the Amazon Trust Services CAs are in your trust store

The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the following table to verify that it works for you. When a Test URL does not work, it displays an error similar to this example.

Distinguished name | SHA-256 hash of subject public key information | Test URL
CN=Amazon Root CA 1,O=Amazon,C=US | fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2 | Test URL
CN=Amazon Root CA 2,O=Amazon,C=US | 7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461 | Test URL
CN=Amazon Root CA 3,O=Amazon,C=US | 36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9 | Test URL
CN=Amazon Root CA 4,O=Amazon,C=US | f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5 | Test URL
CN=Starfield Services Root Certificate Authority – G2,O=Starfield Technologies\, Inc.,L=Scottsdale,ST=Arizona,C=US | 2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92 | Test URL
Starfield Class 2 Certification Authority | 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4 | Not available*

* Note: Amazon doesn’t own this root and doesn’t have a test URL for it. The certificate can be downloaded from here.

You can calculate the SHA-256 hash of Subject Public Key Information as follows. With the PEM-encoded certificate stored in certificate.pem, run the following openssl commands:

openssl x509 -in certificate.pem -noout -pubkey | openssl asn1parse -noout -inform pem -out certificate.key
openssl dgst -sha256 certificate.key

As an example, with the Starfield Class 2 Certification Authority self-signed cert in a PEM encoded file sf-class2-root.crt, you can use the following openssl commands:

openssl x509 -in sf-class2-root.crt -noout -pubkey | openssl asn1parse -noout -inform pem -out sf-class2-root.key
openssl dgst -sha256 sf-class2-root.key
SHA256(sf-class2-root.key)= 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4

What to do if the Amazon Trust Services CAs are not in your trust store

If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.

You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):

  • Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
  • Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
  • Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
  • Ubuntu 8.10
  • Debian 5.0
  • Amazon Linux (all versions)
  • Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser. You can find update instructions on each browser’s website.

If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.

AWS SDKs and CLIs

Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before October 29, 2013, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.

Certificate pinning

If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.
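To make this concrete, here’s a sketch of how a pin value is typically computed: fetch the server’s certificate, extract its public key in DER form, and hash it. The hostname below is a placeholder; a pinning check compares the resulting hash against the value stored in the application.

# Compute the SHA-256 hash of a server's Subject Public Key Information.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256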

AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.

– Jonathan

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL technically refers to a predecessor of the TLS protocol. To keep things simple I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.
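You can watch SNI at work from the command line. openssl s_client sets the SNI extension with its -servername flag, and the certificate subject the server returns changes with the name you send. The hostname below is a placeholder for a domain behind your load balancer:

# Ask the server for the certificate matching a specific hostname via SNI.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject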

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-“) the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates.
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall

How to Enable LDAPS for Your AWS Microsoft AD Directory

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-enable-ldaps-for-your-aws-microsoft-ad-directory/

Starting today, you can encrypt the Lightweight Directory Access Protocol (LDAP) communications between your applications and AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. Many Windows and Linux applications use Active Directory’s (AD) LDAP service to read and write sensitive information about users and devices, including personally identifiable information (PII). Now, you can encrypt your AWS Microsoft AD LDAP communications end to end to protect this information by using LDAP Over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS. This helps you protect PII and other sensitive information exchanged with AWS Microsoft AD over untrusted networks.

To enable LDAPS, you need to add a Microsoft enterprise Certificate Authority (CA) server to your AWS Microsoft AD domain and configure certificate templates for your domain controllers. After you have enabled LDAPS, AWS Microsoft AD encrypts communications with LDAPS-enabled Windows applications, Linux computers that use Secure Shell (SSH) authentication, and applications such as Jira and Jenkins.

In this blog post, I show how to enable LDAPS for your AWS Microsoft AD directory in six steps: 1) Delegate permissions to CA administrators, 2) Add a Microsoft enterprise CA to your AWS Microsoft AD directory, 3) Create a certificate template, 4) Configure AWS security group rules, 5) AWS Microsoft AD enables LDAPS, and 6) Test LDAPS access using the LDP tool.

Assumptions

For this post, I assume you are familiar with the following:

Solution overview

Before going into specific deployment steps, I will provide a high-level overview of deploying LDAPS. I cover how you enable LDAPS on AWS Microsoft AD. In addition, I provide some general background about CA deployment models and explain how to apply these models when deploying Microsoft CA to enable LDAPS on AWS Microsoft AD.

How you enable LDAPS on AWS Microsoft AD

LDAP-aware applications (LDAP clients) typically access LDAP servers using Transmission Control Protocol (TCP) on port 389. By default, LDAP communications on port 389 are unencrypted. However, many LDAP clients use one of two standards to encrypt LDAP communications: LDAP over SSL on port 636, and LDAP with StartTLS on port 389. If an LDAP client uses port 636, the LDAP server encrypts all traffic unconditionally with SSL. If an LDAP client issues a StartTLS command when setting up the LDAP session on port 389, the LDAP server encrypts all traffic to that client with TLS. AWS Microsoft AD now supports both encryption standards when you enable LDAPS on your AWS Microsoft AD domain controllers.
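
To make the two modes concrete, here is a minimal client-side sketch using the third-party Python ldap3 library (the hostname, user, and password are placeholders; this is an illustration, not part of the setup steps that follow):

    from ldap3 import Server, Connection, Tls
    import ssl

    tls = Tls(validate=ssl.CERT_REQUIRED)

    # LDAP over SSL: the session is encrypted from the first byte (port 636).
    ldaps = Server('dc1.corp.example.com', port=636, use_ssl=True, tls=tls)
    conn = Connection(ldaps, user='[email protected]', password='<password>', auto_bind=True)

    # LDAP with StartTLS: the session starts in plaintext on port 389,
    # then upgrades to TLS before any credentials are sent.
    plain = Server('dc1.corp.example.com', port=389, tls=tls)
    conn2 = Connection(plain, user='[email protected]', password='<password>')
    conn2.open()
    conn2.start_tls()
    conn2.bind()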

You enable LDAPS on your AWS Microsoft AD domain controllers by installing a digital certificate that a CA issued. Though Windows servers have different methods for installing certificates, LDAPS with AWS Microsoft AD requires you to add a Microsoft CA to your AWS Microsoft AD domain and deploy the certificate through autoenrollment from the Microsoft CA. The installed certificate enables the LDAP service running on domain controllers to listen for and negotiate LDAP encryption on port 636 (LDAP over SSL) and port 389 (LDAP with StartTLS).

Background of CA deployment models

You can deploy CAs as part of a single-level or multi-level CA hierarchy. In a single-level hierarchy, all certificates come from the root of the hierarchy. In a multi-level hierarchy, you organize a collection of CAs in a hierarchy and the certificates sent to computers and users come from subordinate CAs in the hierarchy (not the root).

Certificates issued by a CA identify the hierarchy to which the CA belongs. When a computer sends its certificate to another computer for verification, the receiving computer must have the public certificate from the CAs in the same hierarchy as the sender. If the CA that issued the certificate is part of a single-level hierarchy, the receiver must obtain the public certificate of the CA that issued the certificate. If the CA that issued the certificate is part of a multi-level hierarchy, the receiver can obtain a public certificate for all the CAs that are in the same hierarchy as the CA that issued the certificate. If the receiver can verify that the certificate came from a CA that is in the hierarchy of the receiver’s “trusted” public CA certificates, the receiver trusts the sender. Otherwise, the receiver rejects the sender.

Deploying Microsoft CA to enable LDAPS on AWS Microsoft AD

Microsoft offers a standalone CA and an enterprise CA. Though you can configure either as single-level or multi-level hierarchies, only the enterprise CA integrates with AD and offers autoenrollment for certificate deployment. Because you cannot sign in to run commands on your AWS Microsoft AD domain controllers, an automatic certificate enrollment model is required. Therefore, AWS Microsoft AD requires the certificate to come from a Microsoft enterprise CA that you configure to work in your AD domain. When you install the Microsoft enterprise CA, you can configure it to be part of a single-level hierarchy or a multi-level hierarchy. As a best practice, AWS recommends a multi-level Microsoft CA trust hierarchy consisting of a root CA and a subordinate CA. I cover only a multi-level hierarchy in this post.

In a multi-level hierarchy, you configure your subordinate CA by importing a certificate from the root CA. You must issue a certificate from the root CA such that the certificate gives your subordinate CA the right to issue certificates on behalf of the root. This makes your subordinate CA part of the root CA hierarchy. You also deploy the root CA’s public certificate on all of your computers, which tells all your computers to trust certificates that your root CA issues and to trust certificates from any authorized subordinate CA.

In such a hierarchy, you typically leave your root CA offline (inaccessible to other computers in the network) to protect the root of your hierarchy. You leave the subordinate CA online so that it can issue certificates on behalf of the root CA. This multi-level hierarchy increases security because if someone compromises your subordinate CA, you can revoke all certificates it issued and set up a new subordinate CA from your offline root CA. To learn more about setting up a secure CA hierarchy, see Securing PKI: Planning a CA Hierarchy.

When a Microsoft CA is part of your AD domain, you can configure certificate templates that you publish. These templates become visible to client computers through AD. If a client’s profile matches a template, the client requests a certificate from the Microsoft CA that matches the template. Microsoft calls this process autoenrollment, and it simplifies certificate deployment. To enable LDAPS on your AWS Microsoft AD domain controllers, you create a certificate template in the Microsoft CA that generates SSL and TLS-compatible certificates. The domain controllers see the template and automatically import a certificate of that type from the Microsoft CA. The imported certificate enables LDAP encryption.

Steps to enable LDAPS for your AWS Microsoft AD directory

The rest of this post is composed of the steps for enabling LDAPS for your AWS Microsoft AD directory. First, though, I explain which components you must have running to deploy this solution successfully. I also explain how this solution works and include an architecture diagram.

Prerequisites

The instructions in this post assume that you already have the following components running:

  1. An active AWS Microsoft AD directory – To create a directory, follow the steps in Create an AWS Microsoft AD directory.
  2. An Amazon EC2 for Windows Server instance for managing users and groups in your directory – This instance needs to be joined to your AWS Microsoft AD domain and have Active Directory Administration Tools installed. Active Directory Administration Tools installs Active Directory Administrative Center and the LDP tool.
  3. An existing root Microsoft CA or a multi-level Microsoft CA hierarchy – You might already have a root CA or a multi-level CA hierarchy in your on-premises network. If you plan to use your on-premises CA hierarchy, you must have administrative permissions to issue certificates to subordinate CAs. If you do not have an existing Microsoft CA hierarchy, you can set up a new standalone Microsoft root CA by creating an Amazon EC2 for Windows Server instance and installing a standalone root certification authority. You also must create a local user account on this instance and add this user to the local administrator group so that the user has permissions to issue a certificate to a subordinate CA.

The solution setup

The following diagram illustrates the setup with the steps you need to follow to enable LDAPS for AWS Microsoft AD. You will learn how to set up a subordinate Microsoft enterprise CA (in this case, SubordinateCA) and join it to your AWS Microsoft AD domain (in this case, corp.example.com). You also will learn how to create a certificate template on SubordinateCA and configure AWS security group rules to enable LDAPS for your directory.

As a prerequisite, I already created a standalone Microsoft root CA (in this case RootCA) for creating SubordinateCA. RootCA also has a local user account called RootAdmin that has administrative permissions to issue certificates to SubordinateCA. Note that you may already have a root CA or a multi-level CA hierarchy in your on-premises network that you can use for creating SubordinateCA instead of creating a new root CA. If you choose to use your existing on-premises CA hierarchy, you must have administrative permissions on your on-premises CA to issue a certificate to SubordinateCA.

Lastly, I also already created an Amazon EC2 instance (in this case, Management) that I use to manage users, configure AWS security groups, and test the LDAPS connection. I join this instance to the AWS Microsoft AD directory domain.

Diagram showing the process discussed in this post

Here is how the process works:

  1. Delegate permissions to CA administrators (in this case, CAAdmin) so that they can join a Microsoft enterprise CA to your AWS Microsoft AD domain and configure it as a subordinate CA.
  2. Add a Microsoft enterprise CA to your AWS Microsoft AD domain (in this case, SubordinateCA) so that it can issue certificates to your directory domain controllers to enable LDAPS. This step includes joining SubordinateCA to your directory domain, installing the Microsoft enterprise CA, and obtaining a certificate from RootCA that grants SubordinateCA permissions to issue certificates.
  3. Create a certificate template (in this case, ServerAuthentication) with server authentication and autoenrollment enabled so that your AWS Microsoft AD directory domain controllers can obtain certificates through autoenrollment to enable LDAPS.
  4. Configure AWS security group rules so that AWS Microsoft AD directory domain controllers can connect to the subordinate CA to request certificates.
  5. AWS Microsoft AD enables LDAPS through the following process:
    1. AWS Microsoft AD domain controllers request a certificate from SubordinateCA.
    2. SubordinateCA issues a certificate to AWS Microsoft AD domain controllers.
    3. AWS Microsoft AD enables LDAPS for the directory by installing certificates on the directory domain controllers.
  6. Test LDAPS access by using the LDP tool.

I now will show you these steps in detail. I use the names of components—such as RootCA, SubordinateCA, and Management—and refer to users—such as Admin, RootAdmin, and CAAdmin—to illustrate who performs these steps. All component names and user names in this post are used for illustrative purposes only.

Deploy the solution

Step 1: Delegate permissions to CA administrators


In this step, you delegate permissions to your users who manage your CAs. Your users then can join a subordinate CA to your AWS Microsoft AD domain and create the certificate template in your CA.

To enable use with a Microsoft enterprise CA, AWS added a new built-in AD security group called AWS Delegated Enterprise Certificate Authority Administrators that has delegated permissions to install and administer a Microsoft enterprise CA. By default, your directory Admin is part of the new group and can add other users or groups in your AWS Microsoft AD directory to this security group. If you have trust with your on-premises AD directory, you can also delegate CA administrative permissions to your on-premises users by adding on-premises AD users or global groups to this new AD security group.

To create a new user (in this case CAAdmin) in your directory and add this user to the AWS Delegated Enterprise Certificate Authority Administrators security group, follow these steps:

  1. Sign in to the Management instance using RDP with the user name admin and the password that you set for the admin user when you created your directory.
  2. Launch the Microsoft Windows Server Manager on the Management instance and navigate to Tools > Active Directory Users and Computers.
    Screenshot of the menu including the "Active Directory Users and Computers" choice
  3. Switch to the tree view and navigate to corp.example.com > CORP > Users. Right-click Users and choose New > User.
    Screenshot of choosing New > User
  4. Add a new user with the First name CA, Last name Admin, and User logon name CAAdmin.
    Screenshot of completing the "New Object - User" boxes
  5. In the Active Directory Users and Computers tool, navigate to corp.example.com > AWS Delegated Groups. In the right pane, right-click AWS Delegated Enterprise Certificate Authority Administrators and choose Properties.
    Screenshot of navigating to AWS Delegated Enterprise Certificate Authority Administrators > Properties
  6. In the AWS Delegated Enterprise Certificate Authority Administrators window, switch to the Members tab and choose Add.
    Screenshot of the "Members" tab of the "AWS Delegate Enterprise Certificate Authority Administrators" window
  7. In the Enter the object names to select box, type CAAdmin and choose OK.
    Screenshot showing the "Enter the object names to select" box
  8. In the next window, choose OK to add CAAdmin to the AWS Delegated Enterprise Certificate Authority Administrators security group.
    Screenshot of adding "CA Admin" to the "AWS Delegated Enterprise Certificate Authority Administrators" security group
  9. Also add CAAdmin to the AWS Delegated Server Administrators security group so that CAAdmin can RDP in to the Microsoft enterprise CA machine.
    Screenshot of adding "CAAdmin" to the "AWS Delegated Server Administrators" security group also so that "CAAdmin" can RDP in to the Microsoft enterprise CA machine

 You have granted CAAdmin permissions to join a Microsoft enterprise CA to your AWS Microsoft AD directory domain.

Step 2: Add a Microsoft enterprise CA to your AWS Microsoft AD directory


In this step, you set up a subordinate Microsoft enterprise CA and join it to your AWS Microsoft AD directory domain. I will summarize the process first and then walk through the steps.

First, you create an Amazon EC2 for Windows Server instance called SubordinateCA and join it to the domain, corp.example.com. You then publish RootCA’s public certificate and certificate revocation list (CRL) to SubordinateCA’s local trusted store. You also publish RootCA’s public certificate to your directory domain. Doing so enables SubordinateCA and your directory domain controllers to trust RootCA. You then install the Microsoft enterprise CA service on SubordinateCA and request a certificate from RootCA to make SubordinateCA a subordinate Microsoft CA. After RootCA issues the certificate, SubordinateCA is ready to issue certificates to your directory domain controllers.

Note that you can use an Amazon S3 bucket to pass the certificates between RootCA and SubordinateCA.

In detail, here is how the process works, as illustrated in the preceding diagram:

  1. Set up an Amazon EC2 instance joined to your AWS Microsoft AD directory domain – Create an Amazon EC2 for Windows Server instance to use as a subordinate CA, and join it to your AWS Microsoft AD directory domain. For this example, the machine name is SubordinateCA and the domain is corp.example.com.
  2. Share RootCA’s public certificate with SubordinateCA – Log in to RootCA as RootAdmin and start Windows PowerShell with administrative privileges. Run the following commands to copy RootCA’s public certificate and CRL to the folder c:\rootcerts on RootCA.
    New-Item c:\rootcerts -type directory
    copy C:\Windows\system32\certsrv\certenroll\*.cr* c:\rootcerts

    Upload RootCA’s public certificate and CRL from c:\rootcerts to an S3 bucket by following the steps in How Do I Upload Files and Folders to an S3 Bucket.

The following screenshot shows RootCA’s public certificate and CRL uploaded to an S3 bucket.
Screenshot of RootCA’s public certificate and CRL uploaded to the S3 bucket
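
If you prefer the CLI for the transfer, an aws s3 cp sketch (the bucket name is a placeholder) would be:

    aws s3 cp c:\rootcerts\ s3://<your-bucket>/rootcerts/ --recursive

On SubordinateCA, reverse the arguments to download the files in the next step:

    aws s3 cp s3://<your-bucket>/rootcerts/ c:\rootcerts\ --recursive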

  3. Publish RootCA’s public certificate to your directory domain – Log in to SubordinateCA as the CAAdmin. Download RootCA’s public certificate and CRL from the S3 bucket by following the instructions in How Do I Download an Object from an S3 Bucket? Save the certificate and CRL to the C:\rootcerts folder on SubordinateCA. Add RootCA’s public certificate and the CRL to the local store of SubordinateCA and publish RootCA’s public certificate to your directory domain by running the following commands using Windows PowerShell with administrative privileges.
    certutil -addstore -f root <path to the RootCA public certificate file>
    certutil -addstore -f root <path to the RootCA CRL file>
    certutil -dspublish -f <path to the RootCA public certificate file> RootCA
  4. Install the subordinate Microsoft enterprise CA – Install the subordinate Microsoft enterprise CA on SubordinateCA by following the instructions in Install a Subordinate Certification Authority. Ensure that you choose Enterprise CA for Setup Type to install an enterprise CA.

For the CA Type, choose Subordinate CA.

  5. Request a certificate from RootCA – Next, copy the certificate request on SubordinateCA to a folder called c:\CARequest by running the following commands using Windows PowerShell with administrative privileges.
    New-Item c:\CARequest -type directory
    Copy c:\*.req C:\CARequest

    Upload the certificate request to the S3 bucket.
    Screenshot of uploading the certificate request to the S3 bucket

  6. Approve SubordinateCA’s certificate request – Log in to RootCA as RootAdmin and download the certificate request from the S3 bucket to a folder called CARequest. Submit the request by running the following command using Windows PowerShell with administrative privileges.
    certreq -submit <path to certificate request file>

    In the Certification Authority List window, choose OK.
    Screenshot of the Certification Authority List window

Navigate to Server Manager > Tools > Certification Authority on RootCA.
Screenshot of "Certification Authority" in the drop-down menu

In the Certification Authority window, expand the ROOTCA tree in the left pane and choose Pending Requests. In the right pane, note the value in the Request ID column. Right-click the request and choose All Tasks > Issue.
Screenshot of noting the value in the "Request ID" column

  7. Retrieve the SubordinateCA certificate – Retrieve the SubordinateCA certificate by running the following command using Windows PowerShell with administrative privileges. The command includes the <RequestId> that you noted in the previous step.
    certreq -retrieve <RequestId> <drive>:\subordinateCA.crt

    Upload SubordinateCA.crt to the S3 bucket.

  8. Install the SubordinateCA certificate – Log in to SubordinateCA as the CAAdmin and download SubordinateCA.crt from the S3 bucket. Install the certificate by running the following commands using Windows PowerShell with administrative privileges.
    certutil -installcert c:\subordinateCA.crt
    start-service certsvc
  9. Delete the content that you uploaded to S3 – As a security best practice, delete all the certificates and CRLs that you uploaded to the S3 bucket in the previous steps because you already have installed them on SubordinateCA.

You have finished setting up the subordinate Microsoft enterprise CA that is joined to your AWS Microsoft AD directory domain. Now you can use your subordinate Microsoft enterprise CA to create a certificate template so that your directory domain controllers can request a certificate to enable LDAPS for your directory.

Step 3: Create a certificate template


In this step, you create a certificate template with server authentication and autoenrollment enabled on SubordinateCA. You create this new template (in this case, ServerAuthentication) by duplicating an existing certificate template (in this case, Domain Controller template) and adding server authentication and autoenrollment to the template.

Follow these steps to create a certificate template:

  1. Log in to SubordinateCA as CAAdmin.
  2. Launch Microsoft Windows Server Manager. Select Tools > Certification Authority.
  3. In the Certificate Authority window, expand the SubordinateCA tree in the left pane. Right-click Certificate Templates, and choose Manage.
    Screenshot of choosing "Manage" under "Certificate Template"
  4. In the Certificate Templates Console window, right-click Domain Controller and choose Duplicate Template.
    Screenshot of the Certificate Templates Console window
  5. In the Properties of New Template window, switch to the General tab and change the Template display name to ServerAuthentication.
    Screenshot of the "Properties of New Template" window
  6. Switch to the Security tab, and choose Domain Controllers in the Group or user names section. Select the Allow check box for Autoenroll in the Permissions for Domain Controllers section.
    Screenshot of the "Permissions for Domain Controllers" section of the "Properties of New Template" window
  7. Switch to the Extensions tab, choose Application Policies in the Extensions included in this template section, and choose Edit.
    Screenshot of the "Extensions" tab of the "Properties of New Template" window
  8. In the Edit Application Policies Extension window, choose Client Authentication and choose Remove. Choose OK to create the ServerAuthentication certificate template. Close the Certificate Templates Console window.
    Screenshot of the "Edit Application Policies Extension" window
  9. In the Certificate Authority window, right-click Certificate Templates, and choose New > Certificate Template to Issue.
    Screenshot of choosing "New" > "Certificate Template to Issue"
  10. In the Enable Certificate Templates window, choose ServerAuthentication and choose OK.
    Screenshot of the "Enable Certificate Templates" window

You have finished creating a certificate template with server authentication and autoenrollment enabled on SubordinateCA. Your AWS Microsoft AD directory domain controllers can now obtain a certificate through autoenrollment to enable LDAPS.

Step 4: Configure AWS security group rules


In this step, you configure AWS security group rules so that your directory domain controllers can connect to the subordinate CA to request a certificate. To do this, you must add outbound rules to your directory’s AWS security group (in this case, sg-4ba7682d) to allow all outbound traffic to SubordinateCA’s AWS security group (in this case, sg-6fbe7109) so that your directory domain controllers can connect to SubordinateCA for requesting a certificate. You also must add inbound rules to SubordinateCA’s AWS security group to allow all incoming traffic from your directory’s AWS security group so that the subordinate CA can accept incoming traffic from your directory domain controllers.

Follow these steps to configure AWS security group rules:

  1. Log in to the Management instance as Admin.
  2. Navigate to the EC2 console.
  3. In the left pane, choose Network & Security > Security Groups.
  4. In the right pane, choose the AWS security group (in this case, sg-6fbe7109) of SubordinateCA.
  5. Switch to the Inbound tab and choose Edit.
  6. Choose Add Rule. Choose All traffic for Type and Custom for Source. Enter your directory’s AWS security group (in this case, sg-4ba7682d) in the Source box. Choose Save.
    Screenshot of adding an inbound rule
  7. Now choose the AWS security group (in this case, sg-4ba7682d) of your AWS Microsoft AD directory, switch to the Outbound tab, and choose Edit.
  8. Choose Add Rule. Choose All traffic for Type and Custom for Destination. Enter SubordinateCA’s AWS security group (in this case, sg-6fbe7109) in the Destination box. Choose Save.
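
For reference, a rough CLI equivalent of these two rules, using the example security group IDs above (adjust them to your own):

aws ec2 authorize-security-group-ingress --group-id sg-6fbe7109 --protocol all --source-group sg-4ba7682d
aws ec2 authorize-security-group-egress --group-id sg-4ba7682d --ip-permissions IpProtocol=-1,UserIdGroupPairs='[{GroupId=sg-6fbe7109}]'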

You have completed the configuration of AWS security group rules to allow traffic between your directory domain controllers and SubordinateCA.

Step 5: AWS Microsoft AD enables LDAPS


The AWS Microsoft AD domain controllers perform this step automatically by recognizing the published template and requesting a certificate from the subordinate Microsoft enterprise CA. The subordinate CA can take up to 180 minutes to issue certificates to the directory domain controllers. The directory imports these certificates into the directory domain controllers and enables LDAPS for your directory automatically. This completes the setup of LDAPS for the AWS Microsoft AD directory. The LDAP service on the directory is now ready to accept LDAPS connections!

Step 6: Test LDAPS access by using the LDP tool


In this step, you test the LDAPS connection to the AWS Microsoft AD directory by using the LDP tool. The LDP tool is available on the Management machine where you installed Active Directory Administration Tools. Before you test the LDAPS connection, you must wait up to 180 minutes for the subordinate CA to issue a certificate to your directory domain controllers.

To test LDAPS, you connect to one of the domain controllers using port 636. Here are the steps to test the LDAPS connection:

  1. Log in to Management as Admin.
  2. Launch the Microsoft Windows Server Manager on Management and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view and navigate to corp.example.com > CORP > Domain Controllers. In the right pane, right-click on one of the domain controllers and choose Properties. Copy the DNS name of the domain controller.
    Screenshot of copying the DNS name of the domain controller
  4. Launch the LDP.exe tool by opening Windows PowerShell and running the LDP.exe command.
  5. In the LDP tool, choose Connection > Connect.
    Screenshot of choosing "Connnection" > "Connect" in the LDP tool
  6. In the Server box, paste the DNS name you copied in the previous step. Type 636 in the Port box. Choose OK to test the LDAPS connection to port 636 of your directory.
    Screenshot of completing the boxes in the "Connect" window
  7. You should see a message confirming that your LDAPS connection is now open.
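
As an additional check, you can inspect the certificate the domain controller presents on port 636 from any machine with OpenSSL installed (the DNS name is the one you copied in step 3):

openssl s_client -connect <domain-controller-dns-name>:636 -showcerts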

You have completed the setup of LDAPS for your AWS Microsoft AD directory! You can now encrypt LDAP communications between your Windows and Linux applications and your AWS Microsoft AD directory using LDAPS.

Summary

In this blog post, I walked through the process of enabling LDAPS for your AWS Microsoft AD directory. Enabling LDAPS helps you protect PII and other sensitive information exchanged over untrusted networks between your Windows and Linux applications and your AWS Microsoft AD. To learn more about how to use AWS Microsoft AD, see the Directory Service documentation. For general information and pricing, see the Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum.

– Vijay

OVH Renews Platinum Sponsorship of Let's Encrypt

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2017/03/23/ovh-platinum-renewal.html

We’re pleased to announce that OVH (https://www.ovh.com/) has renewed their support for Let’s Encrypt as a Platinum sponsor (https://letsencrypt.org/sponsors/) for the next three years. OVH’s strong support for Let’s Encrypt will go a long way towards creating a more secure and privacy-respecting Web.

OVH initially got in touch with Let’s Encrypt to become a Platinum sponsor shortly after our public launch in December of 2015. It was clear that they understood the need for Let’s Encrypt and our potential impact on the Web.

“Over a year ago, when Let’s Encrypt came out of beta, it was an obvious choice for OVH to support this new certificate authority, and become a Platinum sponsor,” said Octave Klaba, Founder, CTO and Chairman. “We provided free Let’s Encrypt certificates to all our Web customers. At OVH today, over 2.2 million websites can be reached over a secure connection, and a total of 3.6 million certificates were created for our customers during the first year.”

In the past year, Let’s Encrypt has grown to provide 28 million certificates to more than 31 million websites (https://letsencrypt.org/stats/). The Web went from around 40% HTTPS page loads at the end of 2015 to 50% HTTPS page loads at the start of 2017. This is phenomenal growth for the Web, and Let’s Encrypt is proud to have been a driving force behind it.

Of course, it wouldn’t have been possible without major hosting providers like OVH making it easier for their customers to enable HTTPS with Let’s Encrypt. OVH was one of the first major hosting providers to make HTTPS available to a large number of their customers, and they are continuing to expand the scope of services that are secure by default.

“We then wanted to go one step further,” continues Octave Klaba. “We decided to launch SSL Gateway (https://www.ovh.com/ca/en/ssl-gateway/), powered by Let’s Encrypt. It’s an all-in-one front-end for your infrastructure with HTTPS encryption and anti-DDOS capability. It makes the Web even more secure and reliable. This service is now available to everyone, for free.”

Financial and product commitments like these from OVH are moving the Web toward our goal of 100% encryption. We depend on support from organizations like OVH to continue operating. If your company or organization would like to sponsor Let’s Encrypt, please email us at [email protected].

Some Ideas About Electronic Voting

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/3SzV7avcgtQ/blog-post_24.html

I constantly read hysteria against electronic voting, about how it supposedly cannot be secure, from representatives or fans of parties with a solid pensioner electorate, or of parties that rely on less literate citizens.

I think this is the most laughable argument the opponents of electronic voting have put forward ahead of the referendum. Whether electronic voting is secure or not depends solely on how it is implemented. That is a technical problem, and it has no bearing on whether electronic voting should exist. Electronic voting can be very secure; in fact, it can be significantly more secure than our current non-electronic, in-person voting. It can also be made very insecure, less secure than our current in-person voting.

There is no fundamental reason that makes remote voting insecure, other than the procedures chosen for conducting it. But this has no bearing on the referendum question of whether to have electronic voting. These are technical details that should be discussed openly and actively if the referendum passes and triggers a public discussion leading to legislative changes that allow electronic voting.

To me, the technology discussions at this stage are premature. They are equivalent to arguing that we should have no voting at all, because voting can be falsified. Would we be better off with one unfalsified tsar, or a dictatorship of the party? 🙂 There is no substantive difference between those arguments. And that is probably why the monarchist movements and the admirers of party dictatorship are among the main opponents of electronic voting.

This does not mean I avoid discussing technology; it is a passion of mine. In this text I want to describe one procedural model for how electronic voting can be implemented securely and anonymously. It is not a proposal for how it will be implemented, nor a claim that it will be implemented exactly this way. It is only an illustration that it can be done. The technology discussion belongs after the decision to have electronic voting is made. At that point we must be careful so that the state does not botch it. At the same time, we cannot assume that the state aims, or exists, only to produce botches, because then what would be the point of it? Better for another tsar and master to come and rule us, right? Ah yes, that explains yet another of the opponents’ positions 🙂


In fact, it does not matter whether we are talking about electronic or paper voting, in-person or remote: anything can be implemented securely or insecurely, and the essential element is never the technology itself but the algorithm, the procedure used for the implementation.

In that sense, let us set technology aside for a moment and talk about procedure.
My goal is not to claim that some remote-voting procedure guarantees absolute security. I only want to point out that, at every step, it is better and more secure than the current procedure for in-person voting.

The requirements for voting, whether in-person or remote, are that it be personal, secret, and guaranteed (that is, the vote cannot be substituted).

This is not a question of electronic voting as such. We can have remote electronic voting and in-person electronic voting (the latter has been trialled in our elections for years as so-called machine voting, because a machine assists); the technology is of little consequence. The main question is the procedure for applying it (including for applying the technology).


The procedure I discuss here applies to in-person and remote voting alike, by machine or with paper ballots. A similar procedure, in an appropriate legislative form, is already being developed by another group of enthusiasts, and the core ideas (even if I phrase some specific details differently) are the same.

In in-person voting, the “personal” part is typically guaranteed through the authentication mechanisms built into the identity card. If your photo matches the identity card, and you trust the card and its issuer, then you accept (without absolute guarantee) that the person is who they claim to be and has come to vote in person.

The secrecy of the vote is guaranteed by unmarked ballots and by voting in a room which, though public, third parties attest contains no one else (again without absolute guarantee).

The integrity of the vote is supposed to be guaranteed by the fact that no ballot can enter the box except from a voter, and a ballot cannot be swapped because it is guarded vigilantly by people presumed to be adversaries who watch one another (an electoral commission composed of representatives of different parties).


We know well, however, that more than 1 in 10,000 identity cards are forged.
We know about the “dead souls”: people who are entitled to vote and appear on the voter rolls but have died, whose entries are used to slip leftover ballots into the urns at the last moment without raising suspicion at the central count, since the number of ballots stays at or below the expected figure.
Just now, for these elections, over 500,000 dead souls were removed (because of the residency rule), and that is not all of them; it is an excellent indicator of how inaccurate the voter rolls really are. At earlier elections another 700,000 were removed. Ongoing urbanization (local elections carry a residency requirement), emigration, and mortality dynamics naturally generate dead souls, since the local lists physically cannot be perfectly current (even though centrally they could be).

We know that some people vote at two different physical locations at once; because in-person voting cannot support a real-time national check of whether you have already voted, the vote is accepted, and because ballots cannot be validated by origin, double votes cannot be removed afterwards. So although we know about the violation, it stays in the counted result. And so on.

Moreover, in some places local cartels of commission members add ballots at the last moment, and even when far more ballots are counted than the number of voters who signed the roll, all are counted, since there is no way to tell genuine from fake. Overall, the errors (invalid and duplicated ballots, dead souls) float between 5 and 15% per election and usually do not materially change the national result. But they matter a great deal in the allocation of local mandates and in local elections, where (because of fragmentation) a mandate or a mayor is often decided by fewer than 5,000 votes. These errors, like low turnout, also matter for which parties enter parliament or receive a subsidy. For example, at the previous parliamentary elections, a mere 50,000 votes of additional turnout (under 1% of all eligible voters) decided whether ABV and ATAKA would enter parliament. It did not even matter whom those people voted for, since the effect came from the fragmentation in mandate allocation; low turnout alone did it. Whether this is one of the reasons both parties are staunch opponents of every method for raising turnout (from advertising, through compulsory voting, to opposing electronic voting; ABV is officially in favor of electronic voting, but its representatives publicly speak against it), I do not know.

All these defects, and others, arise because there is no mechanism for two-way validation.
We have known about them for years, they are discussed at every election, and we keep amending the electoral code to attack one problem or another, so far not very effectively.

The security model of in-person voting relies excessively on threats (fines), on politically staffed electoral commissions (expected to watch one another out of natural political rivalry), and on election observers. Yet over the years the set of people and parties entitled to sit on commissions has been narrowed, down to parliamentary parties and their coalition partners, which tolerates natural cartel arrangements and significantly weakens in-person oversight. Independent election observers are being squeezed out as well (a complicated registration procedure, and a requirement to come mostly through political parties), while party workers and observers themselves commit abuses as voters (mostly duplicate voting; see Bulgaria Without Censorship, BSP, and DPS at earlier elections: when observers vote they can bypass the residency rule, which automatically opens the door to duplicate voting, double voting, and so on).


But let us imagine the following hypothetical situation.

We separate the components: personal authorization, acquiring the right to vote (a ballot), associating the ballot with a vote (voting), and counting the votes. Each element is authorized and verified by a separate, independent organization, while something else (mathematics, say) guarantees the association between adjacent links in the chain, but never an association that skips a link. Many distortions of the current model (though not all) would then be overcome automatically.

Let me explain in a little more detail. We have all of this today too, but imagine it fully separated into distinct organizations, with the processes running independently. I can identify the following separate, independent components:


  1. Authorization, in one place, that you are who you say you are (the equivalent of issuing an identity card), and receipt of a corresponding identifier (today this is the identity card issued by the Interior Ministry, but it could be an electronic signature or something else, issued by one organization).
  2. Authorization that you are entitled to receive a ballot to vote with (today the electoral commission does this, checking you against the voter roll and confirming your identity via the credential issued by organization 1; electronically, you could receive a randomly generated electronic credential, mathematically verifiable (it could be a hash function) and signed with this second organization’s certificate so it cannot be altered).
  3. Association of a vote with a ballot (today you do this before the electoral commission, which stamps your ballot to vouch for its validity to third parties; electronically, a third organization could do it, for example a site where you electronically sign your choice with the credential received from organization 2).
  4. Counting of the ballots (today the Central Election Commission (CIK) does this, typically subcontracting to Information Services; in the electronic world, a fourth organization counts and double-validates the electronic ballots).


Now imagine the system above implemented as follows (again, the technology is an example and is not what matters; the procedure and the principle are; a code sketch follows the list):
  1. You obtain an electronic signature that validates that you are you. It contains your unique certificate, validated by the issuer (today any bank branch does this, for 15 leva), an entity entitled to do so (the equivalent of the Interior Ministry). The electronic signature is only an example: it could be your electronic identity card (the ID card you will be able to obtain from 2017 onward), or some other One-Time-Password system. It matters little; the technology is not important at this stage.
  2. You receive an electronic ballot against your electronic signature. It could be, say, a hash value signed by your certificate (but carrying no identifying information), or produced by an alternative, more anonymous algorithm such as a modified DH scheme (or a unique random private key; either can again be delivered through your electronic signature). The resulting data is in turn signed with the certificate of the organization issuing electronic ballots (variations are possible here; you might, for example, be issued a unique, randomly generated public/private key pair).
  3. You go to the voting site, sign the ballot with the private key you received, and send it back; the site signs it with its own certificate (the certificates play the role of the commission’s stamps) and passes it, immediately or later, to the counting organization (or to an intermediate organization, which hands it over after the election day closes).
  4. The counting organization receives all the signed records. It holds the public keys of 1, 2 and 3 (but not the private ones), so it can read the data (which no one else can read beyond their own part) and perform the count and full validation. Separately, it has received from 2 the list of generated hashes, can validate them mathematically (as genuine), and accepts only records containing a correct hash.
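
To make the chain of signatures tangible, here is a toy sketch in Python using the third-party cryptography package. Everything in it (the names, the single-candidate payload, the two-signature flow) is illustrative only; in particular, a real deployment would add the anonymity layering described below, so that the counter never links a voter’s key to an identity:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import secrets

    # Organization 2: issues an anonymous ballot token and signs it.
    org2_key = Ed25519PrivateKey.generate()
    ballot_token = secrets.token_bytes(32)                  # anonymous identifier
    ballot = ballot_token + org2_key.sign(ballot_token)

    # Voter: holds a private key (e.g., inside an eID token) and signs locally.
    voter_key = Ed25519PrivateKey.generate()
    vote = b"candidate-17"
    signed_vote = vote + voter_key.sign(vote + ballot_token)

    # Organization 4: checks the ballot came from 2 and the vote is untampered.
    try:
        org2_key.public_key().verify(ballot[32:], ballot[:32])
        voter_key.public_key().verify(signed_vote[len(vote):], vote + ballot_token)
        print("ballot and vote verified")
    except InvalidSignature:
        print("rejected")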


What does this separation into independent, unconnected organizations achieve?

Forgeries become infeasible (the signing in steps 2 and 3 happens locally at the voter’s side; the sites only ever receive already-signed records over the network). To produce a forgery, someone would have to compromise 1, 2 and 3 simultaneously, which could happen only on the voter’s own computer, and even a compromised computer does not make it easy: try substituting the content being signed by the electronic signature token on your own machine, and call me if you succeed.
Even granting that this is possible on an individual voter’s computer, it cannot be done at scale (there is no way, within a limited period, to influence and control millions of computers and millions of credentials), which already makes it better than the current model (at every election the CIK publishes statistics showing at least some 100,000 invalid or duplicated ballots, and that is with the rather conservative estimation method they use). Substitution also requires interactive action at the very moment the user authorizes. I would be extremely curious to hear how that could be pulled off during a mass electronic vote, in real time, within election day. Even one or two successful substitutions would be far too few to affect the result, and electronic voting, uniquely, permits mechanisms for detecting and correcting them.


We also get a guarantee of ballot secrecy. Only 1 knows whether you may vote and who you are. Only 2 issues a ballot against the information from 1 (it need not know who you are, so long as it knows your credential is valid, which can be checked anonymously), and it does not know whether or how you voted. Only 3 might know (this is optional: 3 can be denied its own public key, making even this impossible) which vote a given electronic ballot was associated with, but not who cast it. Only 4 can count and validate the ballots, again without knowing who voted.
To break the secrecy of the vote you would have to compromise all of them at once. The state could in theory organize itself to do that, but it is unlikely: first, it is easy to detect (far easier than with physical voting, since here every record is kept everywhere); and second, nothing prevents the same today, so it is not something electronic voting would introduce. It is not a technological problem of in-person voting either; it is a fundamental problem of the state and its morals. If the problem exists, it is a problem of organization, the state, and culture.

Because the signatures are made locally, where the voter performs them, a hacker who breaks into 2, 3 or 4 individually cannot generate fake signatures. He can neither sign (he lacks the private keys and the client-side credentials from the preceding organization) nor create a valid record (which requires end-to-end participation). In theory he could mass-produce fake signatures by hacking 1, but the beauty of the scheme is that this is easily detected (by GRAO, in real time or after the fact during the check at 4), and every vote generated this way can be removed from the tally (invalidate the hashes from 2 or the credentials from 1 and the records fail validation at 4), even retroactively, while the voter’s anonymity is fully preserved.

Duplication is eliminated by the active-hash validation at organization 4 (more on duplication shortly). Every electronic voter ends up with exactly one valid vote.


How do we guarantee that the voting is personal? In my hypothetical, the electronic signature does this, but it can be reinforced (at 2) by a video or photo call, which provides the same form of validation an electoral commission offers (someone sees that you resemble your photo).

What about eavesdropping? Even from logs and IP addresses one could guess that someone voted, but not who exactly, and certainly not for whom. If the encrypted records have equal length, not even statistical methods can tell, from wiretapping, who voted or how many voted for whom. Only organization 4 can answer that.
The protection here is considerably better, even under full openness, than what in-person voting provides. A journalist staking out a polling station and filming who goes in and out (which we see on television at every election) obtains far more, and far more accurate, information than anyone wiretapping an electronic voting network.

How do we guarantee that signatures are not handed to other people? The guarantee that you have not given your signature (or your passport) to someone else is the same as in ordinary voting: none. But again, doing this at scale is hard and unlikely, and it can be validated with a video call (or a photo taken when the voting credential is issued).

Does it sound complicated? At first glance the technology looks complex and multi-step, so surely users will struggle to follow it? Actually, no. The idea of separation and multiple signings is neither new nor accidental. The electoral commission performs it today, in in-person voting (your identity card is a certificate/electronic signature issued by the Interior Ministry; the watermarked ballot is a certificate from the ballot issuer; the commission’s check of your identity card against the roll, and your signature on the voter list, are the equivalent of receiving the right to vote, my step 2; voting in the booth and the subsequent stamp on the ballot are step 3; the validation of stamps, watermarks and contents by the CIK and Information Services is step 4). This triple-signing technique is the classic approach of MIT’s KERBEROS authorization technology, which has never been broken and is in extremely wide use (Active Directory authorization in Windows is based on it). It is also used by the enormously popular OAUTH2 authorization scheme. To the user it looks like entering details in one place, but behind the scenes there is double (even triple) authorization, and no single organization holds the full set of the participant’s personal data. More importantly, users do not even notice how it happens and never have to log in to three sites at once. None of this weakens the security guarantees.


Compromised browsers, operating systems and trojans do not automatically compromise, say, the electronic signature (in fact, every hack we know of does not break the system as such, but lurks for the moment the user signs in real time and tries to hijack the session; even if we suspect one or two such cases, this cannot be executed at scale on election day in any way that would sway the result).

On duplicates: because there is a secondary anonymous check (at 4), a voter can vote any number of times anonymously (issuing themselves any number of ballots), and only the last vote counts. This is not merely harmless; it may be an advantage. Even someone coerced into “voting correctly” in front of another person can later vote again and invalidate the earlier vote. Coerced voting thus becomes much harder (the sold vote remains, but no technology can defeat that: it is a personal decision), because people have an alternative. Especially in small settlements, local pressure (a threat from the mayor at the polling station, as often happens) becomes much harder to sustain, since people can go and cast a fresh, valid vote elsewhere.

Since only the voter can be in a position to read the information in their signatures, the system can be built so that voters can check how they voted (if 4 offers such a service), discover forgeries and hacks, or invalidate their own vote. Moreover, no one could prevent them from doing so, or even learn that it happened, without holding absolutely all the information from 1, 2, 3 and 4 (and if even one voter complains, the problem is easy to uncover and the culprit easy to catch; with a well-functioning police force, people attempting to compromise the system would be caught quickly and precisely).

Note again: the technologies are not what matters most (I use public/private keys because they are a good, familiar illustration); the procedure and the separation are. We follow the same procedure today, but without the separation: everything is concentrated in one place and never verified a second time. So forgeries can be produced exactly where the concentration is (the commission), and for lack of a secondary check they cannot be filtered out. In today’s in-person voting we also use identifiers that are far easier to forge (booklet numbers, watermarks) than those of an electronic system.

Personally, not only do I see no greater drawbacks in the procedure described above for remote electronic voting than in our current physical model, I see it addressing some of the most important technical problems of today’s voting: raising turnout by making voting easier (which in turn addresses the distortions caused by low turnout); removing the possibility of duplicates being validly counted; and creating alternatives that preserve civil rights and personal choice, through which, with good communication with voters, we can directly attack the bought vote (at minimum by making it more expensive) and render the feudalized vote meaningless.


To date there is not a single case of a genuinely hacked electronic voting or electronic banking system. There is no known case of a hacked electronic signature (though we know of thousands of forged passports). There is no case of a broken authorization algorithm.
There are cases of (physically) stolen electronic signature tokens (but, as with a stolen passport, they can be invalidated, and unlike a passport the invalidation can be instantaneous). There are cases of stolen static certificates. There are cases of banking systems breached by other means (access to the software managing money transfers). There are classic Denial of Service attacks that block systems or slow them down. But none of these break users’ anonymity or the authorization scheme; they stem from flawed design in other parts of the software, so they cannot serve as evidence of insecurity. The security of end users’ identity and decisions is preserved; the incompetence of particular systems is a separate problem.

In the scheme I propose, no single organization, and no combination of two organizations, can generate a fake authorization, a fake ballot, or a fake vote association that would pass the check at number 4. They all merely record; only the user signs, and only locally.
We could have a (deliberately) false count at 4, but all the records/ballots remain and are open to secondary verification, so on any suspicion they can be checked and the fraud exposed. Considerably better than the current in-person model.


Separately, one of the beauties of the whole arrangement is that 1, 2, 3 and, partially, even 4 can be implemented privately, and more than once in parallel. You can have many issuers of authorization credentials (electronic signatures), many issuers of electronic ballots, many vote-association sites, many counters. Far from weakening security, this strengthens it. A voter with suspicions, or one who checks their own vote and sees a mismatch, can switch within the election period (day), vote again somewhere else, and thereby invalidate the error and route around the problem. The police, for their part, can take down fake sites.

Phishing attacks (fake voting sites) will not work, since they lack the certificates behind the public keys, both to capture the voter’s ballot and to pass it on validly to 3 or 4.

The only attack left is a DoS attack on the electronic voting endpoints, blocking them so that voters cannot vote and grow frustrated. But DoS attacks are straightforward to address with well-distributed, well-scaled systems. Even without an attack, a system can be hopelessly slow simply because it was designed badly (the Commercial Register, or the electronic results site at the previous elections, come to mind). That is not a reason to reject electronic voting; it is a reason to press state organizations to be far more serious about how they build software.


Postscript:
For the uninitiated, a few words about the basic cryptographic building blocks that can be used (and are in fact used in every system involving electronic signatures, online authorization and electronic banking):

Challenge algorithm – Imagine two parties communicating over an insecure medium. Party A wants to validate that party B is who it claims to be. Party A knows that B will validate itself with information A also knows (say, a password).
When B wants to identify itself, it asks A for a random number, and A sends one. B uses the random number to encrypt (or hash) its secret and sends the result to A. A knows the number and the encryption (hash) algorithm, performs B’s computation locally, and compares the outcome with what it received from B. If they match, B is who it claims to be.
An eavesdropper on the network learns that B is authorizing itself to A, but not B’s password, which never travels unencrypted. Nor can he reuse B’s response to impersonate B elsewhere, since a different random number will be used in that other authorization.
The result: active authorization that cannot be decrypted and cannot be replayed elsewhere, or here at another time.
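
A minimal sketch of that exchange in Python, using only the standard library (the shared password and names are illustrative):

    import hmac, hashlib, secrets

    shared_password = b"correct horse battery staple"       # known to both A and B

    challenge = secrets.token_bytes(16)                     # A issues a one-time challenge

    # B proves knowledge of the password without ever sending it.
    response = hmac.new(shared_password, challenge, hashlib.sha256).digest()

    # A recomputes locally and compares in constant time.
    expected = hmac.new(shared_password, challenge, hashlib.sha256).digest()
    print("B authenticated" if hmac.compare_digest(response, expected) else "rejected")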


Асиметрични алгоритми за криптиране – Това са алгоритми с публични частни ключове. Идеята е проста – имаме математически алгоритми, които ни гарантират, с изключително високо ниво на сигурност, че имаме два ключа, наричани условно публичен и частен. С частният ключ можем да криптираме информация, която може да се декриптира само с публичният ключ. Частният не може да я декриптира. А по частният ключ не можем да създадем публичен и обратно. Така ако вземем частен ключ и криптираме с него някакъв текст, то знаем, че той е криптиран и подписан от този, за който се представя отсрещната страна, ако можем да му го декриптираме с публичният ключ. Без да имаме публичният ключ, не можем да декриптираме, без да е криптиран с реципрочният частен ключ, не можем да декриптираме с публичният. Този алгоритъм е хитър, защото можете да раздадете на приятелите си вашият публичен ключ, и само те ще могат да четат криптираната информация, която им пращате. Но никой друг, освен вас, няма да може да им пише и изпраща информация.


Electronic certificate – A double (or deeper) signature with public and private keys. Example: imagine you generate your own public/private key pair. The private key always stays with you, but you want the public key to be available to a group of people, or to everyone. You go to the police (or a Certificate Authority) and they verify that you are who you say you are. They then use their own private key to encrypt a piece of text that says: yes, this is the person Pesho Peshev, and his public key is such-and-such (embedded inside). The resulting object is called a certificate, and you can publish it openly (or send it along with mail encrypted by your private key).
Anyone who receives your certificate and has the police's (CA's) public key can read it, and will know that the police, vouching with their private key, assert that you are who you claim to be and what your public key is. With that key the recipient can then decrypt your mail and know it was written by you, because only you hold your private key.
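Continuing the toy RSA example, a certificate can be pictured as the CA signing a statement that binds a name to a public key; all names and numbers below are illustrative:

```python
import hashlib

# The CA's own toy key pair (tiny, insecure, for illustration only)
p, q = 89, 97
n_ca = p * q
e_ca = 17
d_ca = pow(e_ca, -1, (p - 1) * (q - 1))

# The statement the CA vouches for after checking your identity
statement = b"Pesho Peshev, public key: (e=17, n=3233)"
fingerprint = int.from_bytes(hashlib.sha256(statement).digest(), "big") % n_ca

# The "certificate": the statement plus the CA's signature over it
certificate = (statement, pow(fingerprint, d_ca, n_ca))

# Anyone holding the CA's public key (e_ca, n_ca) can check the binding
stmt, sig = certificate
digest = int.from_bytes(hashlib.sha256(stmt).digest(), "big") % n_ca
assert pow(sig, e_ca, n_ca) == digest
```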


Diffie-Hellman algorithm – another form of asymmetric cryptography, in which the participants openly exchange public values from which each side can regenerate a shared secret key, while an eavesdropper who sees the whole exchange cannot.
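A toy finite-field Diffie-Hellman exchange; the prime below is 2**64 - 59, still far too small for real use:

```python
import secrets

p = 0xFFFFFFFFFFFFFFC5   # public prime modulus (2**64 - 59)
g = 5                    # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value, never sent
b = secrets.randbelow(p - 2) + 1   # Bob's private value, never sent

A = pow(g, a, p)   # Alice sends this publicly
B = pow(g, b, p)   # Bob sends this publicly

# Each side combines its own private value with the other's public value
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # the eavesdropper sees only p, g, A, B
```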


One-time password – a special case of the challenge algorithm in which the encrypted information exchanged for validation (the password) is extended with a component that depends on time (or on the sequence number of the event); for example, the current time is mixed into the password. Even if someone eavesdrops and tries a dictionary attack against the password, it will not help: after, say, a minute the value is brand new and the old one has been invalidated.
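A rough TOTP-style sketch of mixing the time window into the exchanged value; the 60-second period and the SHA-256 truncation are my illustrative choices:

```python
import hashlib
import hmac
import time

SECRET = b"shared secret"   # hypothetical, known to both sides

def one_time_code(secret: bytes, period: int = 60) -> str:
    # The current time window is mixed into the MAC, so the code changes
    # every `period` seconds and a captured value quickly goes stale.
    window = int(time.time()) // period
    return hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()[:8]

# The verifier computes the code for the current window (and usually the
# previous one, to allow clock drift) and compares; old codes no longer match.
```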


Electronic signature – a combination of an electronic certificate and a one-time password. What travels over the network is certificate information, modified and encrypted with a form of one-time password. It can be implemented more simply (the chips on credit cards, for instance, use a much more simplified form of authorization) or in more elaborate ways, but that is a choice of the implementation, not a limitation of the technology. The technology can be astonishingly secure even when all the public keys and public information are freely available on the network.
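One way to picture the combination, reusing the toy RSA key from the earlier sketch: sign a time-bound value with the private key, so the proof is both certified and single-use. Everything here is an illustrative sketch, not a description of any real e-signature standard:

```python
import hashlib
import time

e, n, d = 17, 3233, 2753   # the user's toy RSA key pair from the sketch above

def sign_login(period: int = 60) -> tuple[int, int]:
    window = int(time.time()) // period
    digest = int.from_bytes(hashlib.sha256(str(window).encode()).digest(), "big") % n
    return window, pow(digest, d, n)   # time-bound proof, signed with the private key

def verify_login(window: int, sig: int, period: int = 60) -> bool:
    if window != int(time.time()) // period:
        return False   # stale: the one-time component has expired
    digest = int.from_bytes(hashlib.sha256(str(window).encode()).digest(), "big") % n
    return pow(sig, e, n) == digest   # checked against the certified public key

assert verify_login(*sign_login())
```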

It is beyond dispute that something can be implemented insecurely. But it is equally beyond dispute that it can be implemented securely. That, however, is not part of the discussion about whether to have electronic voting; it is the discussion that comes after we decide to have it, about exactly how to build it. Saying "better not to have it, because it might be done badly" is like refusing to fly because planes occasionally crash. True, you will never die in a plane crash, but that does not mean you will live forever, or that a plane will not fall on your head. It only means you will always be slower than the people who fly.

PPS:
I could spell it out in more detail, but the core idea is separation. Imagine that associating a ballot with a vote is essentially the act of issuing a certificate, except that the private key is generated locally and the public key is bound to an anonymous hash, half of which goes, say, to the vote associator and half directly to the counter. Then the associator (3) can only validate the information, without knowing who voted or for what; the counter can validate, without knowing who voted. Only the user can check, read and (re)cast the vote, locally on their side (a rough sketch follows).
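A very rough sketch of that separation, under my own illustrative reading of the idea (this is not a complete protocol):

```python
import hashlib
import os

seed = os.urandom(32)                          # generated locally, never leaves the voter
anonymous_hash = hashlib.sha256(seed).hexdigest()

half_for_associator = anonymous_hash[:32]      # lets (3) validate the ballot
half_for_counter = anonymous_hash[32:]         # lets (4) validate the tally

# Neither party alone holds the full hash, so neither can link voter to vote.
# Only the voter, who keeps the seed, can recompute the whole value and thus
# check, read, or re-cast the vote bound to it.
```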
The mathematics permits this. Many electronic authentication systems already work this way. So it can be made secure. But the real question was never a technological one, was it?