Tag Archives: certificates

Critical Windows Vulnerability Discovered by NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/critical_window.html

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.
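
Once you’ve patched, Windows’ normal signature checks mean something again. If you want to see what that validation surfaces, you can inspect a file’s Authenticode signature from PowerShell; a minimal sketch, with the file path as a placeholder:

PS C:\>Get-AuthenticodeSignature -FilePath "C:\Users\Administrator\Downloads\installer.exe" |
    Select-Object Status, StatusMessage, @{ n = 'Signer'; e = { $_.SignerCertificate.Subject } }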

This is a zero-day vulnerability, meaning that it was not publicly known or exploited in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

The NSA Warns of TLS Inspection

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/the_nsa_warns_o.html

The NSA has released a security advisory warning of the dangers of TLS inspection:

Transport Layer Security Inspection (TLSI), also known as TLS break and inspect, is a security process that allows enterprises to decrypt traffic, inspect the decrypted content for threats, and then re-encrypt the traffic before it enters or leaves the network. Introducing this capability into an enterprise enhances visibility within boundary security products, but introduces new risks. These risks, while not inconsequential, do have mitigations.

[…]

The primary risk involved with TLSI’s embedded CA is the potential abuse of the CA to issue unauthorized certificates trusted by the TLS clients. Abuse of a trusted CA can allow an adversary to sign malicious code to bypass host IDS/IPSs or to deploy malicious services that impersonate legitimate enterprise services to the hosts.

[…]

A further risk of introducing TLSI is that an adversary can focus their exploitation efforts on a single device where potential traffic of interest is decrypted, rather than try to exploit each location where the data is stored. Setting a policy to enforce that traffic is decrypted and inspected only as authorized, and ensuring that decrypted traffic is contained in an out-of-band, isolated segment of the network prevents unauthorized access to the decrypted traffic.

[…]

To minimize the risks described above, breaking and inspecting TLS traffic should only be conducted once within the enterprise network. Redundant TLSI, wherein a client-server traffic flow is decrypted, inspected, and re-encrypted by one forward proxy and is then forwarded to a second forward proxy for more of the same, should not be performed. Inspecting multiple times can greatly complicate diagnosing network issues with TLS traffic. Also, multi-inspection further obscures certificates when trying to ascertain whether a server should be trusted. In this case, the “outermost” proxy makes the decisions on what server certificates or CAs should be trusted and is the only location where certificate pinning can be performed. Finally, a single TLSI implementation is sufficient for detecting encrypted traffic threats; additional TLSI will have access to the same traffic. If the first TLSI implementation detected a threat, killed the session, and dropped the traffic, then additional TLSI implementations would be rendered useless since they would not even receive the dropped traffic for further inspection. Redundant TLSI increases the risk surface, provides additional opportunities for adversaries to gain unauthorized access to decrypted traffic, and offers no additional benefits.

Nothing surprising or novel. No operational information about who might be implementing these attacks. No classified information revealed.
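
One practical corollary of the advisory: from any client on the network you can see whether a TLSI proxy sits in the path, because the certificate your machine receives for a public site will chain to the enterprise’s embedded CA rather than to a public one. A minimal PowerShell sketch, with the host name as a placeholder:

PS C:\>$tcp = [System.Net.Sockets.TcpClient]::new("example.com", 443)
PS C:\>$tls = [System.Net.Security.SslStream]::new($tcp.GetStream())
PS C:\>$tls.AuthenticateAsClient("example.com")
PS C:\>$tls.RemoteCertificate.Issuer    # an enterprise CA here suggests break-and-inspect
PS C:\>$tcp.Close()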

News article.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

New Reductor Nation-State Malware Compromises TLS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/new_reductor_na.html

Kaspersky has a detailed blog post about a new piece of sophisticated malware that it’s calling Reductor. The malware is able to compromise TLS traffic by substituting a hacked TLS engine on the infected computer on the fly, “marking” infected TLS handshakes by compromising the underlying random-number generator, and adding new digital certificates. The result is that the attacker can identify, intercept, and decrypt TLS traffic from the infected computer.

The Kaspersky Attribution Engine shows strong code similarities between this family and the COMPfun Trojan. Moreover, further research showed that the original COMpfun Trojan most probably is used as a downloader in one of the distribution schemes. Based on these similarities, we’re quite sure the new malware was developed by the COMPfun authors.

The COMpfun malware was initially documented by G-DATA in 2014. Although G-DATA didn’t identify which actor was using this malware, Kaspersky tentatively linked it to the Turla APT, based on the victimology. Our telemetry indicates that the current campaign using Reductor started at the end of April 2019 and remained active at the time of writing (August 2019). We identified targets in Russia and Belarus.

[…]

Turla has in the past shown many innovative ways to accomplish its goals, such as using hijacked satellite infrastructure. This time, if we’re right that Turla is the actor behind this new wave of attacks, then with Reductor it has implemented a very interesting way to mark a host’s encrypted TLS traffic by patching the browser without parsing network packets. The victimology for this new campaign aligns with previous Turla interests.

We didn’t observe any MitM functionality in the analyzed malware samples. However, Reductor is able to install digital certificates and mark the targets’ TLS traffic. It uses infected installers for initial infection through HTTP downloads from warez websites. The fact the original files on these sites are not infected also points to evidence of subsequent traffic manipulation.

The attribution chain from Reductor to COMPfun to Turla is thin. Speculation is that the attacker behind all of this is Russia.
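
Since part of Reductor’s footprint is certificates added to the local trust stores, auditing those stores is one check defenders can script. A minimal PowerShell sketch that lists user-trusted root certificates, newest first, for review:

PS C:\>Get-ChildItem Cert:\CurrentUser\Root |
    Sort-Object NotBefore -Descending |
    Select-Object NotBefore, Thumbprint, Subject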

How to migrate a digital signing workload to AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-a-digital-signing-workload-to-aws-cloudhsm/

Is your on-premises Hardware Security Module (HSM) at end-of-life? Does continued maintenance of your on-premises hardware take a lot of time and cost a lot of money? Do you want or need all of your workloads to be performed on AWS? By migrating these workloads to AWS CloudHSM, you receive automated backups, low-cost HSMs, managed maintenance, automatic recovery in the event of a hardware failure, integrated fault tolerance, and high availability. One such workload you might consider migrating is secret key material used for digital signing operations.

Enterprise certificate authority (CA) or public key infrastructure (PKI) applications use the private portion of an asymmetric key pair generated and stored in a hardware security module (HSM) to perform signing operations. Examples of such operations include creating digital certificates for web servers or IoT devices, signing files, and negotiating TLS sessions. Migrating this type of workload to AWS may save you time and money. If your HSM is at end of life and you need an alternative, you can migrate the digital signing workload to AWS CloudHSM in just a few steps.

This post will focus on a workload that allows you to create and use a digital certificate to digitally sign an arbitrary file. I’ll show you how to create a new asymmetric key pair and generate the corresponding certificate signing request (CSR) on AWS CloudHSM. This CSR, once signed by the appropriate issuing CA, allows your new key pair and the associated certificate to be trusted in the same way as the key pairs in your original HSM. You could then move traffic related to signing operations or issuing certificates to your AWS CloudHSM cluster.

Background

Before I walk you through the steps of migrating a certificate signing workload into CloudHSM, I’ll provide a little background information so you’ll know how CloudHSM, PKI, and CAs work together. Every certificate is associated with a key pair made up of a private (secret) key and a public key. The private key associated with a certificate needs to be kept confidential, so it typically resides on a hardware security module (HSM). The public portion of the key pair is not confidential, is included in the certificate, and can be shared with anyone who wants to verify a digital signature made with the corresponding private key.

In a PKI, a CA is the trusted entity that issues digital certificates on behalf of end-entities. At the top of the trust hierarchy is a root CA, which is implicitly trusted when it is established because it acts as the root of trust for intermediate CAs and end-entity certificates that may be issued underneath it. Intermediate CAs are trusted because their certificates are signed by the root CA. Intermediate CAs in turn sign end-entity certificates, which are used to authenticate identities of various actors across the data transfer process. A common use case for end-entity certificates is for web servers so that connecting clients can verify the server’s identity. Generally, end-entity certificates are valid for 1-3 years, intermediate CA certificates are valid for 5-10 years, and root CAs are valid for 30 years or more.
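
The chain of trust described above is exactly what a verifier walks at validation time. You can watch .NET build that chain for any certificate from PowerShell; a minimal sketch, with the file path as a placeholder:

PS C:\>$cert  = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new("C:\certs\endentity.cer")
PS C:\>$chain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
PS C:\>$null  = $chain.Build($cert)
PS C:\>$chain.ChainElements | ForEach-Object { $_.Certificate.Subject }    # end-entity, then intermediates, then root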

Beyond solving for the non-repudiation of objects signed by end-entity certificates to ensure the owner of the private key performed the signing operation, there is still the problem of trusting that the owner of the private key is the identity they claim to be. When evaluating trust in this way, there are generally two options: relying on public CAs or private CAs. Public CAs widely distribute the public keys of their root certificates into popular client trust stores (for example, browsers and operating systems). This allows users to verify that the identity of the end-entity has been attested to by a publicly trusted CA. This helps when the signer and the verifier of the digital asset don’t know each other and haven’t shared cryptographic material with each other in advance to perform future validations.

Private CAs are those for which there are no widely distributed copies of their associated public keys. The verifier has to retrieve the public key from the private CA and has to explicitly trust the certificate without any third-party attestation of the signer’s identity. This is appropriate for cases when signers and verifiers are in the same company or know each other. Examples of when to use a private CA include securing virtual private networks, data or file replication between internal servers, remote backups, file-sharing, email, or other personal accounts.

Regardless of the certificate trust model you need, AWS CloudHSM can be used to create the initial key pair and CSR for both public and private CA requests. Note that AWS offers some alternatives for certificate management that may simplify your workloads without having to use AWS CloudHSM directly. AWS Certificate Manager (ACM) automatically creates key pairs and issues public or private certificates to identify resources within your organization. For use cases that need capabilities not yet supported by ACM, or in unusual situations in which a single-tenant HSM under your control is required for compliance reasons, you can use AWS CloudHSM directly for key generation and signing operations.

Organizations currently using an on-premises HSM for the creation of asymmetric keys used in digital certificates often use a vendor-proprietary mechanism to replicate key material across multiple HSMs for resiliency. However, this method prevents the key material from ever being transferred to an HSM offered by a different vendor. Consider it “vendor lock-in” by design. So, the private keys corresponding to the certificates you use for signing and authentication are locked inside that HSM. But if they are locked inside, how do you move to AWS CloudHSM? The answer is that you don’t have to rely on these inaccessible keys: you can create a new key pair and use it within AWS CloudHSM to begin issuing end-entity certificates.

Solution overview

I will go over creating a new private key in AWS CloudHSM using the Windows client and using Microsoft certreq to generate a corresponding CSR. You provide this CSR to your private or public CA to receive a signed certificate in return. This certificate and its public key then need to be propagated to wherever your signatures are verified. At the end of this post, I will show you how to verify your digital signatures using Microsoft SignTool. SignTool is provided by Microsoft to allow Windows users to digitally sign files, verify file signatures, and timestamp files.

Figure 1: Procedural diagram

As shown in the diagram above, the steps followed in this post are:

  1. Create a new RSA private key using KSP/CNG through the AWS CloudHSM Windows client.
  2. Using Microsoft certreq, create your CSR.
  3. Provide the CSR to your CA for signing.
  4. Use Microsoft SignTool to sign files in your environment.

Note: You may have to register this new certificate with any partners that do not automatically verify the entire certificate chain. These could be third-party applications, vendors, or outside entities that use your certificates to determine trust.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) Windows-based instance with the AWS CloudHSM client, PowerShell, and Windows SDK with Microsoft SignTool installed. You must have a crypto user (CU) on the HSM to perform the steps in this post.

Deploying the solution

Step 1: Create a new private key using KSP/CNG using the AWS CloudHSM Windows client

On your Windows server where the AWS CloudHSM Windows client is installed, use a text editor to create a certificate request file named IISCertRequest.inf. For the purpose of this post, I have filled out an example file below.


[Version]
Signature = "$Windows NT$"
[NewRequest]
Subject = "CN=example.com,C=US,ST=Washington,L=Seattle,O=ExampleOrg,OU=WebServer"
HashAlgorithm = SHA256
KeyAlgorithm = RSA
KeyLength = 2048
ProviderName = "Cavium Key Storage Provider"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
MachineKeySet = True    
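
Before running certreq, it’s worth confirming that the key storage provider named in the file is registered on the instance; if the CloudHSM client isn’t installed correctly, the request will fail. One quick check:

PS C:\>certutil -csplist

The provider list that certutil prints should include the Cavium Key Storage Provider.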

Step 2: Using Microsoft certreq, create your CSR

On the same server, open PowerShell and, at the PowerShell prompt, create a CSR from the IISCertRequest.inf file by using the Windows certreq command. Here’s an example of the command. Remember to replace the example file names with your own.


PS C:\>certreq -new IISCertRequest.inf IISCertRequest.csr
	SDK Version: 2.03
CertReq: Request Created

If successful, you’ll see the “Request Created” message above, as well as the new file IISCertRequest.csr on your server. You’ll provide this CSR to your chosen public CA for certificate issuance; this step is completed manually, using your public CA’s preferred certificate request process.

Step 3: Provide the CSR to your CA for signing

The CA that has been signing your existing end-entity certificates with keys generated by your original HSM is the one you use to sign the new certificates with keys generated by AWS CloudHSM, as well. There are many CAs to choose from, such as DigiCert, Trustwave, GoDaddy, and so on. You will want to follow their steps for submitting your CSR to receive your certificate in return.

Step 4: Use Microsoft SignTool to sign files in your environment

When you receive your signed certificate back from your chosen CA, save a copy locally on your Windows server. Then, move the certificate file to the Personal Certificate Store in Windows so it can be used by other applications, such as Microsoft SignTool. Here’s an example of the command. Be sure to replace signedCertificate.cer with your actual certificate name.
PS C:\>certreq -accept signedCertificate.cer

Now, the certificate is ready for use, and I’ll show you how to use it to sign a file. First, you have to get the thumbprint of your certificate. To do this, open PowerShell as an Administrator (right-click the app and choose Run as Administrator). Type this command:
PS C:\>Get-ChildItem -path cert:\LocalMachine\My

If successful, you should see an output similar to this. Copy the thumbprint that is returned. You’ll need it when you perform the actual signing operation on a file.


Thumbprint                                  Subject
----------                                  -------
49DF7HDJT84723FDKCURLSXYRF9830568CXHSUB2    CN=WINDOWS-CA
VJFU57E6DI9DKMCHAKLDFJA8E73739Q04730QU7A    CN=www.example.com, OU=Certif…
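
If the store holds many certificates, you can also capture the thumbprint directly into a variable rather than copying it by hand; a small sketch, with the subject filter as an example:

PS C:\>$thumbprint = (Get-ChildItem -path cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*www.example.com*" }).Thumbprint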

To open the SignTool application, navigate to the app’s directory within PowerShell. By default, this is typically:
C:\Program Files (x86)\Windows Kits\<SDK Version>\bin\<version number>\<CPU architecture>

For example, if you had downloaded the Microsoft Windows SDK 10 version, the application would be stored in:

C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64

When you’ve located the directory, sign your file by running the command below. Remember to replace the placeholder values in angle brackets with your own. The test.exe file in this example can be any valid executable file in your directory.
PS C:\>.\signtool.exe sign /v /fd sha256 /sha1 <thumbprint> /sm /as C:\Users\Administrator\Desktop\<test.exe>

You should see a message like this:


Done Adding Additional Store
Successfully signed C:\User\Administrator\Desktop\<test.exe>

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

One last optional item you can do is verify the signature on the file using the command below. Again, replace the placeholder value with your own.
PS C:\>.\signtool.exe verify /v /pa C:\Users\Administrator\Desktop\<test.exe>

You’ve now successfully migrated your file signing workload to AWS CloudHSM. If your signing certificate was not issued by a publicly trusted CA but instead by a private CA, make sure to deploy a copy of the root CA certificate and any intermediate certificates from the private CA on any systems where you want to verify the integrity of your signed file.

Conclusion

In this post, I walked you through creating a new RSA asymmetric key pair and using it to create a CSR. After supplying the CSR to your chosen CA and receiving a signed certificate in return, I showed you how to use Microsoft SignTool with AWS CloudHSM to sign files in your environment. You can now use AWS CloudHSM to sign code, documents, or other certificates in the same way you did with your original HSMs.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

CAs Reissue Over One Million Weak Certificates

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/cas_reissue_ove.html

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but losing that one bit halves the number of possible serial numbers, leaving them one bit short of the required entropy. This really isn’t a security problem; the serial numbers are there to protect against attacks that exploit weak hash functions, and we don’t allow those weak hash functions anymore. Still, it’s a good thing that the CAs are reissuing the certificates. The point of a standard is that it’s to be followed.
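
If you’re curious, you can estimate the bit length of any certificate’s serial number yourself. A minimal PowerShell sketch, with the file path as a placeholder; note that the flaw above (63 random bits instead of 64) only shows up statistically across many certificates, not from a single one:

PS C:\>$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new("C:\certs\example.cer")
PS C:\>$hex  = $cert.SerialNumber.TrimStart('0')
PS C:\>$bits = ($hex.Length - 1) * 4 + [Convert]::ToString([Convert]::ToInt32($hex.Substring(0, 1), 16), 2).Length
PS C:\>"Serial number length: $bits bits"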

Detecting Phishing Sites with Machine Learning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/detecting_phish.html

Really interesting article:

A trained eye (or even a not-so-trained one) can discern when something phishy is going on with a domain or subdomain name. There are search tools, such as Censys.io, that allow humans to specifically search through the massive pile of certificate log entries for sites that spoof certain brands or functions common to identity-processing sites. But it’s not something humans can do in real time very well — which is where machine learning steps in.

StreamingPhish and the other tools apply a set of rules against the names within certificate log entries. In StreamingPhish’s case, these rules are the result of guided learning — a corpus of known good and bad domain names is processed and turned into a “classifier,” which (based on my anecdotal experience) can then fairly reliably identify potentially evil websites.
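
To make the idea concrete, here is a deliberately crude stand-in for such a classifier: a keyword score over candidate domain names, rather than the trained model StreamingPhish actually uses. The keyword list and sample domains are invented for illustration:

# Crude illustration only: score certificate-log domain names by phishing keywords.
$keywords = 'login', 'verify', 'secure', 'account', 'appleid', 'paypal'
$domains  = 'paypal-account-verify.example.ga', 'blog.example.org'

foreach ($d in $domains) {
    $score = @($keywords | Where-Object { $d -like "*$_*" }).Count
    '{0}  score={1}' -f $d, $score
}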

Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/756260/rss

Security updates have been issued by Debian (kernel, procps, and tiff), Fedora (ca-certificates, chromium, and git), Mageia (kernel, kernel-linus, kernel-tmb, and libvirt), openSUSE (chromium and xen), Oracle (procps, xmlrpc, and xmlrpc3), Red Hat (xmlrpc and xmlrpc3), Scientific Linux (procps, xmlrpc, and xmlrpc3), SUSE (HA kernel modules and kernel), and Ubuntu (libytnef and python-oslo.middleware).

Security updates for Thursday

Post Syndicated from ris original https://lwn.net/Articles/756164/rss

Security updates have been issued by CentOS (389-ds-base, corosync, firefox, java-1.7.0-openjdk, java-1.8.0-openjdk, kernel, librelp, libvirt, libvncserver, libvorbis, PackageKit, patch, pcs, and qemu-kvm), Fedora (asterisk, ca-certificates, gifsicle, ncurses, nodejs-base64-url, nodejs-mixin-deep, and wireshark), Mageia (thunderbird), Red Hat (procps), SUSE (curl, kvm, and libvirt), and Ubuntu (apport, haproxy, and tomcat7, tomcat8).

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that are using cloud services to process personal data will need to have a data processing agreement in place with their cloud services provider if they are to comply with GDPR. As early as April 2017, AWS announced that it had a GDPR-ready DPA available for its customers. In this way, we started offering our GDPR DPA to customers over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online service terms, no extra engagement is needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

AWS IoT 1-Click – Use Simple Devices to Trigger Lambda Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-1-click-use-simple-devices-to-trigger-lambda-functions/

We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.

I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:

Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.

Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.

Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!

Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.

All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:

Architects can dream up applications for inexpensive, low-powered devices.

Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.

Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.

Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.

I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.

Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.

The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.

The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and can be used in the United States.

We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at [email protected].

AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.

Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.

Placement Template – Describes a like-minded collection of devices to be deployed. Specifies the action to be performed and lists the names of custom attributes for each device.

Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).

Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”

Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:

If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):

The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:

Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:

I am now ready to put them to use. I click on Projects, and then Create a project:

I name and describe my project, and click Next to proceed:

Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):

The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:

I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:

I can inspect my project and see that everything looks good:

I click on the buttons and the SMS messages appear:

I can monitor device activity in the AWS IoT 1-Click Console:

And also in the Lambda Console:

The Lambda function itself is also accessible, and can be used as-is or customized:

As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message and {{Building}} (for example) to include a specific placement attribute.
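
If the built-in actions don’t fit, a custom action is just a Lambda function of your own. Here’s a hypothetical sketch using the PowerShell Lambda runtime and the Publish-SNSMessage cmdlet from AWS Tools for PowerShell; the event fields (placementInfo, deviceEvent) reflect the 1-Click event format as I understand it, and the attribute names are the ones defined for this project:

# Hypothetical custom AWS IoT 1-Click action (PowerShell Lambda runtime assumed).
#Requires -Modules AWS.Tools.SimpleNotificationService

# $LambdaInput is populated by the PowerShell Lambda runtime with the incoming event.
$attrs = $LambdaInput.placementInfo.attributes
$click = $LambdaInput.deviceEvent.buttonClicked.clickType    # SINGLE, DOUBLE, or LONG

$message = "Urgent need for coffee in $($attrs.Building), Floor $($attrs.Floor), Room $($attrs.Room) (click type: $click)"
Publish-SNSMessage -PhoneNumber $attrs.phoneNumber -Message $message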

Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!

Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.

To learn more, visit the AWS IoT 1-Click home page or read the AWS IoT 1-Click documentation.

Jeff;

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.

Sci-Hub ‘Pirate Bay For Science’ Security Certs Revoked by Comodo

Post Syndicated from Andy original https://torrentfreak.com/sci-hub-pirate-bay-for-science-security-certs-revoked-by-comodo-ca-180503/

Sci-Hub is often referred to as the “Pirate Bay of Science”. Like its namesake, it offers masses of unlicensed content for free, mostly against the wishes of copyright holders.

While The Pirate Bay will index almost anything, Sci-Hub is dedicated to distributing tens of millions of academic papers and articles, something which has turned it into a target for publishing giants like Elsevier.

Sci-Hub and its Kazakhstan-born founder Alexandra Elbakyan have been under sustained attack for several years but more recently have been fending off an unprecedented barrage of legal action initiated by the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry.

After winning a default judgment for $4.8 million in copyright infringement damages last year, ACS was further granted a broad injunction.

It required various third-party services (including domain registries, hosting companies and search engines) to stop facilitating access to the site. This plunged Sci-Hub into a game of domain whac-a-mole, one that continues to this day.

Determined to head Sci-Hub off at the pass, ACS obtained additional authority to tackle the evasive site and any new domains it may register in the future.

While Sci-Hub has been hopping around domains for a while, this week a new development appeared on the horizon. Visitors to some of the site’s domains were greeted with errors indicating that the domains’ security certificates had been revoked.

Tests conducted by TorrentFreak revealed clear revocations on Sci-Hub.hk and Sci-Hub.nz, both of which returned the error ‘NET::ERR_CERT_REVOKED’.

Certificate revoked

These certificates were first issued and then revoked by Comodo CA, the world’s largest certification authority. TF contacted the company who confirmed that it had been forced to take action against Sci-Hub.

“In response to a court order against Sci-Hub, Comodo CA has revoked four certificates for the site,” Jonathan Skinner, Director, Global Channel Programs at Comodo CA informed TorrentFreak.

“By policy Comodo CA obeys court orders and the law to the full extent of its ability.”

Comodo refused to confirm any additional details, including whether these revocations were anything to do with the current ACS injunction. However, Susan R. Morrissey, Director of Communications at ACS, told TorrentFreak that the revocations were indeed part of ACS’ legal action against Sci-Hub.

“[T]he action is related to our continuing efforts to protect ACS’ intellectual property,” Morrissey confirmed.

Sci-Hub operates multiple domains (an up-to-date list is usually available on Wikipedia) that can be switched at any time. At the time of writing, the domain sci-hub.ga returns ‘ERR_SSL_VERSION_OR_CIPHER_MISMATCH’ while the .CN and .GS variants both have Comodo certificates that expired last year.

When TF first approached Comodo earlier this week, Sci-Hub’s certificates with the company hadn’t been completely wiped out. For example, the domain https://sci-hub.tw operated perfectly, with an active and non-revoked Comodo certificate.

Still in the game…but not for long

By Wednesday, however, the domain was returning the now-familiar “revoked” message.

These domain issues are the latest technical problems to hit Sci-Hub as a result of the ACS injunction. In February, Cloudflare terminated service to several of the site’s domains.

“Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours,” Cloudflare told Sci-Hub.

While ACS has certainly caused problems for Sci-Hub, the platform is extremely resilient and remains online.

The domains https://sci-hub.is and https://sci-hub.nu are fully operational with certificates issued by Let’s Encrypt, a free and open certificate authority supported by the likes of Mozilla, EFF, Chrome, Private Internet Access, and other prominent tech companies.

It’s unclear whether these certificates will be targeted in the future but Sci-Hub doesn’t appear to be in the mood to back down.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

No, Ray Ozzie hasn’t solved crypto backdoors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html

According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn’t. He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

The vault doesn’t scale

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie’s solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie’s proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency. Attackers can create fraudulent SSL certificates with enough effort. We’ve got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it’s not catastrophic.

But with Ozzie’s scheme, equivalent attacks would be catastrophic, as it would lead to unlocking the phone and stealing all of somebody’s secrets.

In particular, consider what would happen if LetsEncrypt’s certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the “One Time Pad”. It can’t ever be broken, provably so with mathematics.

It’s also perfectly useless, as it’s not something humans can use. That’s why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather’s knee — he was a WW II codebreaker who broke German messages trying to futz with One Time Pads).

The same is true with Ozzie’s scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, and stealing millions. Most banks/brokers require additional verification before doing such transfers.

Let me repeat: Ozzie has only solved the part we already know how to solve. He hasn’t addressed these issues that confound us.

We still can’t secure security, much less secure backdoors

We already know how to decrypt iPhones: just wait a year or two for somebody to discover a vulnerability. FBI claims it’s “going dark”, but that’s only for timely decryption of phones. If they are willing to wait a year or two a vulnerability will eventually be found that allows decryption.

That’s what’s happened with the “GrayKey” device that’s been all over the news lately. Apple is fixing it so that it won’t work on new phones, but it works on old phones.

Ozzie’s solution is based on the assumption that iPhones are already secure against things like GrayKey. Like his assumption “if Apple already has a vault for private keys, then we have such vaults for backdoor keys”, Ozzie is saying “if Apple already had secure hardware/software to secure the phone, then we can use the same stuff to secure the backdoors”. But we don’t really have secure vaults and we don’t really have secure hardware/software to secure the phone.

Again, to stress this point, Ozzie is solving the part we already know how to solve, but ignoring the stuff we don’t know how to solve. His solution is insecure for the same reason phones are already insecure.

Locked phones aren’t the problem

Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption a crime, criminals will still use encryption.

That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.

In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them.

We know how to get Apple to give us a backdoor, just pass a law forcing them to. It may look like Ozzie’s scheme, it may be something more secure designed by Apple’s engineers. Sure, it will weaken security on the phone for everyone, but those who truly care will just install Signal. But again we are back to the problem that Ozzie’s solving the problem we know how to solve while ignoring the much larger problem, that of preventing people from installing their own encryption.

The FBI isn’t necessarily the problem

Ozzie phrases his solution in terms of U.S. law enforcement. Well, what about Europe? What about Russia? What about China? What about North Korea?

Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.

Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn’t see himself what he really is, the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living in oppressive governments around the world.

Conclusion

Ozzie pretends the problem is political, that he’s created a solution that appeases both sides. He hasn’t. He’s solved the problem we already know how to solve. He’s ignored all the problems we struggle with, the problems we claim make secure backdoors essentially impossible. I’ve listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he’s going to have to work harder to appease doubting cryptographers.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The phrase “bot” was the 4th most registered domain keyword within the .COM TLD in 2016 with more than 6000 domains per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhancing SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time and if your favorite bot framework isn’t supported feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex, so I’ll select that in the drop-down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else, I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot I need to give the Amazon Registry permission to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read-only permissions for our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it. Then I’ll click Create to create the role and be taken to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.
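
If you’d rather script this step than click through the console, a minimal boto3 sketch of the same role setup might look like the following. This is my own illustration rather than part of the registration flow, and the account number and external ID are placeholders for the values shown on the registration page.

import json

import boto3

iam = boto3.client("iam")

# Trust policy letting the Amazon Registry's account assume the role.
# The account ID and external ID are placeholders; use the values from
# the registration page.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

role = iam.create_role(
    RoleName="DotBotCrossAccountVerifyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Lets the Amazon Registry read my Lex bots for .BOT verification",
)

# Attach the AmazonLexReadOnly managed policy named on the registration page.
iam.attach_role_policy(
    RoleName="DotBotCrossAccountVerifyRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonLexReadOnly",
)

# This is the ARN to save for the next step.
print(role["Role"]["Arn"])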

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.

After I click continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration we should receive an email with some good news.

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use it to host my new domain. The first thing I’ll do is navigate to the Route 53 console and create a hosted zone with the same name as my domain.


Great! Now, I need to take the Name Server (NS) records that Route 53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.
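
If you prefer the SDK over the console, here’s a minimal boto3 sketch of the hosted-zone step; the domain is mine from above, so swap in your own.

import uuid

import boto3

route53 = boto3.client("route53")

# Create a public hosted zone for the new domain. CallerReference just
# needs to be a unique string per request.
zone = route53.create_hosted_zone(
    Name="whereml.bot",
    CallerReference=str(uuid.uuid4()),
)

# These are the nameservers to enter as authoritative in EnCirca's portal.
print(zone["DelegationSet"]["NameServers"])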

Next Steps

  • I could and should improve the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily, with AWS Certificate Manager (ACM) this is extremely straightforward, and I can get my subdomains and root domain verified in just a few clicks (see the sketch after this list).
  • I could create a CloudFront distribution to front an S3 static single-page application to host my entire chatbot and invoke Amazon Lex with an Amazon Cognito identity right from the browser.
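
As a rough sketch of that first next step, requesting a public certificate for the root domain and its subdomains through the ACM API with DNS validation might look like this; it’s an illustration rather than the exact console flow.

import boto3

acm = boto3.client("acm")

# Request a certificate covering the root domain and all subdomains,
# validated through DNS records in the Route 53 hosted zone.
cert = acm.request_certificate(
    DomainName="whereml.bot",
    SubjectAlternativeNames=["*.whereml.bot"],
    ValidationMethod="DNS",
)
print(cert["CertificateArn"])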

Randall

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We then extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to use any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating from RDS for Oracle instances, but only over non-secure (non-SSL) channels.

To overcome that limitation, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By using such wallets, you can extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES to send emails from your database instances, or with any other generic SMTPS provider that can be integrated this way

These are just a few high-level examples of the new use cases the whitepaper opens up. As a reminder, always make sure you follow security best practices when using Outbound Network Access (detailed in the whitepaper).
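
To illustrate the presigned-URL use case from the list above, here’s a minimal boto3 sketch that generates the HTTPS URL a database instance could then fetch with utl_http once its wallet trusts the S3 certificate chain; the bucket and key are placeholders.

import boto3

s3 = boto3.client("s3")

# Generate a time-limited HTTPS URL for a private object. The database
# instance can then download it over SSL/TLS with utl_http.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-data-bucket", "Key": "exports/report.csv"},
    ExpiresIn=3600,  # URL is valid for one hour
)
print(url)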

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

Securing messages published to Amazon SNS with AWS PrivateLink

Post Syndicated from Otavio Ferreira original https://aws.amazon.com/blogs/security/securing-messages-published-to-amazon-sns-with-aws-privatelink/

Amazon Simple Notification Service (SNS) now supports VPC Endpoints (VPCE) via AWS PrivateLink. You can use VPC Endpoints to privately publish messages to SNS topics, from an Amazon Virtual Private Cloud (VPC), without traversing the public internet. When you use AWS PrivateLink, you don’t need to set up an Internet Gateway (IGW), Network Address Translation (NAT) device, or Virtual Private Network (VPN) connection. You don’t need to use public IP addresses, either.

Using VPC Endpoints doesn’t require code changes, and it can bring additional security to pub/sub messaging use cases that rely on SNS. VPC Endpoints help promote data privacy and are aligned with assurance programs, including the Health Insurance Portability and Accountability Act (HIPAA), FedRAMP, and others discussed below.

VPC Endpoints for SNS in action

Here’s how VPC Endpoints for SNS works. The following example is based on a banking system that processes mortgage applications. This banking system, which has been deployed to a VPC, publishes each mortgage application to an SNS topic. The SNS topic then fans out the mortgage application message to two subscribing AWS Lambda functions:

  • Save-Mortgage-Application stores the application in an Amazon DynamoDB table. As the mortgage application contains personally identifiable information (PII), the message must not traverse the public internet.
  • Save-Credit-Report checks the applicant’s credit history against an external Credit Reporting Agency (CRA), then stores the final credit report in an Amazon S3 bucket.

The following diagram depicts the underlying architecture for this banking system:

Diagram depicting the architecture for the example banking system
To protect applicants’ data, the financial institution responsible for developing this banking system needed a mechanism to prevent PII data from traversing the internet when publishing mortgage applications from their VPC to the SNS topic. Therefore, they created a VPC endpoint to enable their publisher Amazon EC2 instance to privately connect to the SNS API. As shown in the diagram, when the VPC endpoint is created, an Elastic Network Interface (ENI) is automatically placed in the same VPC subnet as the publisher EC2 instance. This ENI exposes a private IP address that is used as the entry point for traffic destined to SNS. This ensures that traffic between the VPC and SNS doesn’t leave the Amazon network.

Set up VPC Endpoints for SNS

The process for creating a VPC endpoint to privately connect to SNS doesn’t require code changes: access the VPC Management Console, navigate to the Endpoints section, and create a new Endpoint. Three attributes are required:

  • The SNS service name.
  • The VPC and Availability Zones (AZs) from which you’ll publish your messages.
  • The Security Group (SG) to be associated with the endpoint network interface. The Security Group controls the traffic to the endpoint network interface from resources in your VPC. If you don’t specify a Security Group, the default Security Group for your VPC will be associated.
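
The same endpoint can also be created programmatically. Here’s a minimal boto3 sketch; all of the resource IDs are placeholders for your own VPC, subnet, and security group.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint for SNS. The ENI it places in the
# subnet becomes the private entry point for SNS traffic from this VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.sns",  # SNS service name in this region
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    PrivateDnsEnabled=True,  # resolve the public SNS hostname to the private IP
)
print(response["VpcEndpoint"]["VpcEndpointId"])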

Help ensure your security and compliance

SNS can support messaging use cases in regulated market segments, such as healthcare provider systems subject to the Health Insurance Portability and Accountability Act (HIPAA) and financial systems subject to the Payment Card Industry Data Security Standard (PCI DSS), and is also in scope for a number of assurance programs.

The SNS API is served through HTTP Secure (HTTPS), and encrypts all messages in transit with Transport Layer Security (TLS) certificates issued by Amazon Trust Services (ATS). The certificates verify the identity of the SNS API server when encrypted connections are established. The certificates help establish proof that your SNS API client (SDK, CLI) is communicating securely with the SNS API server. A Certificate Authority (CA) issues the certificate to a specific domain. Hence, when a domain presents a certificate that’s issued by a trusted CA, the SNS API client knows it’s safe to make the connection.

Summary

VPC Endpoints can increase the security of your pub/sub messaging use cases by allowing you to publish messages to SNS topics, from instances in your VPC, without traversing the internet. Setting up VPC Endpoints for SNS doesn’t require any code changes because the SNS API address remains the same.
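
To make that concrete, a publish call written with the AWS SDK looks exactly the same before and after the endpoint is created; only the network path changes. The topic ARN below is a placeholder.

import boto3

sns = boto3.client("sns")

# With the VPC endpoint (and private DNS) in place, this call resolves
# to the endpoint's private IP instead of traversing the public internet.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:mortgage-applications",
    Message='{"applicant_id": "12345", "status": "received"}',
)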

VPC Endpoints for SNS is now available in all AWS Regions where AWS PrivateLink is available. For information on pricing and regional availability, visit the VPC pricing page.
For more information and on-boarding, see Publishing to Amazon SNS Topics from Amazon Virtual Private Cloud in the SNS documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Amazon SNS forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM): Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates, with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls, while administrators get a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS-managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there, I’ll click Get started to start the CA wizard. For now, I only have the option to provision a subordinate CA, so we’ll select that, use my super secure desktop as the root CA, and click Next. This isn’t what I would do in a production setting, but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048-bit and click Next.

On the next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients when a certificate has been compromised before its expiration. ACM will maintain the revocation list for me, and I have the option of fronting my S3 bucket with a custom domain. In this case, I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later, I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet, though. I still need to activate my CA by creating a certificate signing request (CSR) and signing it with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).
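
If I’d rather pull the CSR with the SDK than copy it from the console, a minimal boto3 sketch looks like this; the CA ARN is a placeholder for the one shown in my console.

import boto3

acm_pca = boto3.client("acm-pca")

# Fetch the CSR for the newly provisioned CA so the root CA can sign it.
csr = acm_pca.get_certificate_authority_csr(
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/"
        "11111111-2222-3333-4444-555555555555"
    )
)["Csr"]

# Save it where the openssl step below expects to find it.
with open("csr/CSR.pem", "w") as f:
    f.write(csr)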

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that, I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.
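
The same import can also be done through the API. Here’s a sketch with boto3, assuming the signed certificate and chain produced by the openssl step above; the file names and CA ARN are placeholders.

import boto3

acm_pca = boto3.client("acm-pca")

# Import the signed subordinate certificate and its chain to activate the CA.
with open("certs/subordinate_cert.pem", "rb") as cert_file, \
        open("certs/ca_chain.pem", "rb") as chain_file:
    acm_pca.import_certificate_authority_certificate(
        CertificateAuthorityArn=(
            "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/"
            "11111111-2222-3333-4444-555555555555"
        ),
        Certificate=cert_file.read(),
        CertificateChain=chain_file.read(),
    )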

Now that I have a private CA, we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking Create a new certificate, I’ll select the Request a private certificate radio button and then click Request a certificate.

From there, the process is similar to provisioning a regular certificate in ACM.
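
Programmatically, requesting a private certificate is a single API call; the domain name and CA ARN below are placeholders.

import boto3

acm = boto3.client("acm")

# Request a private certificate issued by the new private CA.
cert = acm.request_certificate(
    DomainName="api.secure.internal",
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/"
        "11111111-2222-3333-4444-555555555555"
    ),
)
print(cert["CertificateArn"])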

Now I have a private certificate that I can bind to my ELBs, CloudFront distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM-managed environments.

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). Per-certificate pricing is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall