Tag Archives: encryption

The EARN-IT Act

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/the_earn-it_act.html

Prepare for another attack on encryption in the U.S. The EARN-IT Act purports to be about protecting children from predation, but it’s really about forcing the tech companies to break their encryption schemes:

The EARN IT Act would create a “National Commission on Online Child Sexual Exploitation Prevention” tasked with developing “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements: if a platform failed to adhere to them, it would lose essential legal protections for free speech.

It’s easy to predict how Attorney General William Barr would use that power: to break encryption. He’s said over and over that he thinks the “best practice” is to force encrypted messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face serious repercussions, including both civil and criminal liability. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their users’ security, making all of us more vulnerable to online criminals.

Matthew Green has a long explanation of the bill and its effects:

The new bill, out of Lindsey Graham’s Judiciary committee, is designed to force providers to either solve the encryption-while-scanning problem, or stop using encryption entirely. And given that we don’t yet know how to solve the problem — and the techniques to do it are basically at the research stage of R&D — it’s likely that “stop using encryption” is really the preferred goal.

EARN IT works by revoking a type of liability protection called Section 230 that makes it possible for providers to operate on the Internet, by preventing the provider from being held responsible for what their customers do on a platform like Facebook. The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they conduct “best practices” for scanning their systems for CSAM.

Since there are no “best practices” in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use. The specific nature of the committee is byzantine and described within the bill itself. Needless to say, the makeup of the committee, which can include as few as zero data security experts, ensures that end-to-end encryption will almost certainly not be considered a best practice.

So in short: this bill is a backdoor way to allow the government to ban encryption on commercial services. And even more beautifully: it doesn’t come out and actually ban the use of encryption, it just makes encryption commercially infeasible for major providers to deploy, ensuring that they’ll go bankrupt if they try to disobey this committee’s recommendations.

It’s the kind of bill you’d come up with if you knew the thing you wanted to do was unconstitutional and highly unpopular, and you basically didn’t care.

Another criticism of the bill. Commentary by EPIC. Kinder analysis.

Sign a petition against this act.

Let’s Encrypt Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/lets_encrypt_vu.html

The BBC is reporting a vulnerability in the Let’s Encrypt certificate service:

In a notification email to its clients, the organisation said: “We recently discovered a bug in the Let’s Encrypt certificate authority code.

“Unfortunately, this means we need to revoke the certificates that were affected by this bug, which includes one or more of your certificates. To avoid disruption, you’ll need to renew and replace your affected certificate(s) by Wednesday, March 4, 2020. We sincerely apologise for the issue.”

I am seeing nothing on the Let’s Encrypt website. And no other details anywhere. I’ll post more when I know more.

EDITED TO ADD: More from Ars Technica:

Let’s Encrypt uses Certificate Authority software called Boulder. Typically, a Web server that services many separate domain names and uses Let’s Encrypt to secure them receives a single LE certificate that covers all domain names used by the server rather than a separate cert for each individual domain.

The bug LE discovered is that, rather than checking each domain name separately for valid CAA records authorizing that domain to be renewed by that server, Boulder would check a single one of the domains on that server n times (where n is the number of LE-serviced domains on that server). Let’s Encrypt typically considers domain validation results good for 30 days from the time of validation, but CAA records specifically must be checked no more than eight hours prior to certificate issuance.

The upshot is that a 30-day window is presented in which certificates might be issued to a particular Web server by Let’s Encrypt despite the presence of CAA records in DNS that would prohibit that issuance.

Since Let’s Encrypt finds itself in the unenviable position of possibly having issued certificates that it should not have, it is revoking all current certificates that might not have had proper CAA record checking on Wednesday, March 4. Users whose certificates are scheduled to be revoked will need to manually force renewal before then.
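
To make the failure mode concrete, here is a minimal sketch (in Python, with a hypothetical check_caa helper; Boulder itself is written in Go) of the difference between correct per-domain CAA rechecking and the behavior described above:

    # Correct behavior: recheck CAA for every domain on the certificate.
    def recheck_caa_correct(domains, check_caa):
        return all(check_caa(domain) for domain in domains)

    # The reported bug: one domain is rechecked len(domains) times, so CAA
    # changes on the other domains are never noticed.
    def recheck_caa_buggy(domains, check_caa):
        return all(check_caa(domains[0]) for _ in domains)

    # Stub CAA checker that only authorizes issuance for "a.example":
    stub = lambda domain: domain == "a.example"
    print(recheck_caa_correct(["a.example", "b.example"], stub))  # False
    print(recheck_caa_buggy(["a.example", "b.example"], stub))    # True -- the bug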

And Let’s Encrypt has a blog post about it.

Wi-Fi Chip Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/wi-fi_chip_vuln.html

There’s a vulnerability in Wi-Fi hardware that breaks the encryption:

The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter a chipmaker whose Wi-Fi business Cypress acquired in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress’ and Broadcom’s FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.

Manufacturers have made patches available for most or all of the affected devices, but it’s not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.

That’s the real problem. Many of these devices won’t get patched — ever.

How to improve LDAP security in AWS Directory Service with client-side LDAPS

Post Syndicated from Dave Martinez original https://aws.amazon.com/blogs/security/how-to-improve-ldap-security-in-aws-directory-service-with-client-side-ldaps/

You can now better protect your organization’s identity data by encrypting Lightweight Directory Access Protocol (LDAP) communications between AWS Directory Service products (AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector) and self-managed Active Directory. Client-side secure LDAP (LDAPS) support enables applications that integrate with AWS Directory Service, such as Amazon WorkSpaces and AWS Single Sign-On, to connect to AD using Secure Sockets Layer/Transport Layer Security (SSL/TLS).

Note: In 2017, AWS Directory Service released server-side LDAPS support in AWS Managed Microsoft AD. This update adds client-side LDAPS support to both AWS Managed Microsoft AD and AD Connector.

In this post, I’ll step through configuring client-side LDAPS to enable encrypted communications between Amazon WorkSpaces and an Amazon Elastic Compute Cloud (Amazon EC2)-based self-managed AD.

Solution architecture

When you have completed the steps outlined in this post, your solution will look like Figure 1:

Figure 1: Solution architecture

To build the solution, you will follow a three step process:

  1. Prepare all prerequisites, including the setup of certificate-based security in the self-managed AD environment.
  2. Register your certificate authority (CA) certificate into AWS Directory Service and enable client-side LDAPS (purple arrow in diagram above).
  3. Test client-side LDAPS using Amazon WorkSpaces and AWS Directory Service (yellow arrows in diagram above).

Step one: Set up prerequisites

To follow the steps described in this blog, you will need:

  1. A self-managed AD deployment to store your user identities. You can find setup guidance in “Step 1: Set Up Your Environment for Trusts” of the Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  2. A server authentication certificate installed on your self-managed AD domain controller. Creating the certificate is typically done one of two ways:
    1. Using Active Directory Certificate Services (AD CS) in Windows Server to deploy an in-house CA for issuing server certificates. For help with setting up an AD CS deployment that supports LDAPS, see Microsoft’s LDAP over SSL (LDAPS) Certificate.
    2. Purchasing SSL certificates from a commercial CA like Verisign or AWS Certificate Manager. For help using commercial certificates with AD, see How to enable LDAP over SSL with a third-party certification authority.
  3. An AWS Directory Service directory, either AWS Managed Microsoft AD or AD Connector, to act as a bridge from AWS to your self-managed AD. See the documentation for AWS Managed Microsoft AD or AD Connector for detailed steps and tutorials. If you’re using AWS Managed Microsoft AD, also set up a two-way trust with your self-managed AD using Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  4. Amazon WorkSpaces connected to your AWS Directory Service directory to look up and authenticate users. See the WorkSpaces documentation for detailed steps on using AWS Managed Microsoft AD with a Trusted Domain or AD Connector.

The remainder of this post assumes you have:

  1. Created an AWS Managed Microsoft AD instance called corp.example.com
  2. Connected corp.example.com via two-way trust to an EC2-based self-managed AD called example.local
  3. Deployed an AD CS enterprise root certificate authority in example.local with the common name Example SelfManaged CA.

When you perform the steps described below, you should replace these names with the names you selected.

Step two: Configure client-side LDAPS in AWS Directory Service

Now, you’ll retrieve the CA certificate — which represents the issuing certificate authority — from your self-managed AD and use it to enable client-side LDAPS in AWS Directory Service. To review CA certificate requirements for AWS Directory Service, see the client-side LDAPS documentation for AWS Managed Microsoft AD or AD Connector.

  1. Export the CA certificate from the example.local CA:
    1. To open the Certification Authority MMC snap-in, on the example.local server hosting AD CS, right-click the Windows icon, select Run, type certsrv.msc, and select OK.
    2. Right-click the name of the CA (in this case, Example SelfManaged CA) and select Properties.
    3. In the Properties window, on the General tab, under CA certificates, select the CA certificate listed, and then select View Certificate.
       
      Figure 2: View the CA certificate

    4. In the Certificate window, on the Details tab, select Copy to File.
    5. In the Certificate Export Wizard, select Next.
    6. In the Export File Format screen, select Base-64 encoded X.509 (.CER), and then select Next. This saves the file in the format required by AWS.
       
      Figure 3: Select the base-64 encoded export file format

    7. Select Browse, and then select a file name and save location for the CA certificate.
    8. Select Save, and then click Next.
    9. Select Finish, then select OK to complete the export process.
    10. Copy the file to a location accessible by the machine where you will be performing the AWS Directory Service configuration.
  2. Register the example.local CA certificate in AWS Directory Service:
    1. In the AWS Management Console, select Directory Service, and then select the Directory ID link for the AWS Directory Service directory connected to example.local (in this case, corp.example.com).
       
      Figure 4: Select the Directory ID

    2. On the Directory details page, in the Networking & security tab, in the Client-side LDAPS section (shown in Figure 5), select the Actions menu, and then select Register certificate.
       
      Figure 5: Select “Register certificate”

    3. In the Register a CA certificate dialog box, select Browse, navigate to the location where you stored the CA certificate for your AD CS certificate authority, select Open, and then select Register certificate.
       
      Figure 6: Register a CA certificate

  3. Enable client-side LDAPS in AWS Directory Service:
    1. In the Client-side LDAPS section, once the Registration status field for the certificate reads Registered, select the Enable button. Click the Refresh button for updated status.
       
      Figure 7: Check the “Registration status” and then select “Enable”

    2. In the Enable client-side LDAPS dialog box, select Enable.
    3. In the Client-side LDAPS section, under Status, when the status field changes to Enabled, LDAPS is successfully configured. Click the Refresh button for updated status.
       
      Figure 8: LDAPS successfully configured

Step three: Test client-side LDAPS with Amazon WorkSpaces

The last step is to test client-side LDAPS with an AWS application. Now that client-side LDAPS has been configured, all LDAP traffic to the self-managed AD will be encrypted and travel over port 636.

Note: Ensure that AWS security group, network firewall, and Windows firewall settings applied to the AWS Directory Service directory (outbound) and self-managed AD (inbound) allow TCP communications on port 636.
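
Before testing from WorkSpaces, it can be useful to confirm that the domain controller is answering TLS on port 636 at all. Here's a quick check using only Python's standard library; the hostname dc1.example.local is a placeholder for your own domain controller:

    import socket
    import ssl

    host, port = "dc1.example.local", 636  # placeholder domain controller name

    context = ssl.create_default_context()
    # If the issuing CA isn't in the local trust store, load it explicitly:
    # context.load_verify_locations("example-selfmanaged-ca.cer")

    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS established with", host)
            print("Certificate subject:", tls.getpeercert().get("subject"))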

To test your client-side LDAPS configuration, perform a WorkSpaces user look up:

  1. In the AWS Management Console, choose WorkSpaces, and then click Launch WorkSpaces.
  2. On the Select a Directory screen, pick corp.example.com and then select Next Step.
  3. On the Identify Users screen, in the Select trust from forest menu, select example.local, and then select Show All Users (see Figure 9 for an example). This search will be executed over LDAPS.
     
    Figure 9: Searching users from a trusted domain with client-side LDAPS

Summary

In this post, we’ve explored how client-side LDAPS support in AWS Managed Microsoft AD and AD Connector improves LDAP security for AWS applications and services like Amazon WorkSpaces, AWS Single Sign-On, and Amazon QuickSight by encrypting sensitive network traffic between AWS and Active Directory.

To learn more about using AWS Managed Microsoft AD or AD Connector, visit the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page. If you have comments about this blog post, submit a comment in the Comments section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dave Martinez

Dave is a Senior Product Manager working on AWS Directory Service. Outside of work he enjoys Seattle sports and coaching his son’s Little League baseball team.

A New Clue for the Kryptos Sculpture

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/a_new_clue_for_.html

Jim Sanborn, who designed the Kryptos sculpture in a CIA courtyard, has released another clue to the still-unsolved part 4. I think he’s getting tired of waiting.

Did we mention Mr. Sanborn is 74?

Holding on to one of the world’s most enticing secrets can be stressful. Some would-be codebreakers have appeared at his home.

Many felt they had solved the puzzle, and wanted to check with Mr. Sanborn. Sometimes forcefully. Sometimes, in person.

Elonka Dunin, a game developer and consultant who has created a rich page of background information on the sculpture and oversees the best known online community of thousands of Kryptos fans, said that some who contact her (sometimes also at home) are obsessive and appear to have tipped into mental illness. “I am always gentle to them and do my best to listen to them,” she said.

Mr. Sanborn has set up systems to allow people to check their proposed solutions without having to contact him directly. The most recent incarnation is an email-based process with a fee of $50 to submit a potential solution. He receives regular inquiries, so far none of them successful.

The ongoing process is exhausting, he said, adding “It’s not something I thought I would be doing 30 years on.”

Another news article.

EDITED TO ADD (2/13): Another article.

Apple Abandoned Plans for Encrypted iCloud Backup after FBI Complained

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/apple_abandoned.html

This is new from Reuters:

More than two years ago, Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud, according to one current and three former FBI officials and one current and one former Apple employee.

Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order.

In private talks with Apple soon after, representatives of the FBI’s cyber crime agents and its operational technology division objected to the plan, arguing it would deny them the most effective means for gaining evidence against iPhone-using suspects, the government sources said.

When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.

Critical Windows Vulnerability Discovered by NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/critical_window.html

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

New SHA-1 Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/new_sha-1_attac.html

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.
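
The reason a hash collision lets an attacker transfer a certification from one key to another, as in the PGP attack above, is that signatures are computed over the digest of the data rather than the data itself. A toy sketch in Python (HMAC stands in for a real signature scheme, and the byte strings are placeholders, not actual colliding data):

    import hashlib
    import hmac

    SIGNING_KEY = b"stand-in for the certifier's private key"

    def sign(data: bytes) -> bytes:
        # Real schemes sign the hash of the message; that is all a collision needs.
        return hmac.new(SIGNING_KEY, hashlib.sha1(data).digest(), hashlib.sha256).digest()

    def verify(data: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(sign(data), signature)

    cert_bob = b"...Bob's legitimate certificate..."             # placeholder
    cert_alice_forged = b"...forged certificate, Alice's ID..."  # placeholder

    sig = sign(cert_bob)
    # Verification depends only on the SHA-1 digest, so whenever the digests
    # collide, a signature over one certificate also verifies for the other.
    if hashlib.sha1(cert_bob).digest() == hashlib.sha1(cert_alice_forged).digest():
        assert verify(cert_alice_forged, sig)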

Security Vulnerabilities in the RCS Texting Protocol

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/security_vulner_21.html

Interesting research:

SRLabs founder Karsten Nohl, a researcher with a track record of exposing security flaws in telephony systems, argues that RCS is in many ways no better than SS7, the decades-old phone system carriers still use for calling and texting, which has long been known to be vulnerable to interception and spoofing attacks. While using end-to-end encrypted internet-based tools like iMessage and WhatsApp obviates many of those SS7 issues, Nohl says that flawed implementations of RCS make it not much safer than the SMS system it hopes to replace.

Scaring People into Supporting Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/scaring_people_.html

Back in 1998, Tim May warned us of the “Four Horsemen of the Infocalypse”: “terrorists, pedophiles, drug dealers, and money launderers.” I tended to cast it slightly differently. This is me from 2005:

Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.

Which particular horseman is in vogue depends on time and circumstance. Since the terrorist attacks of 9/11, the US government has been pushing the terrorist scare story. Recently, it seems to have switched to pedophiles and child exploitation. It began in September, with a long New York Times story on child sex abuse, which included this dig at encryption:

And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

(That’s wrong, by the way. Facebook Messenger already has an encrypted option. It’s just not turned on by default, like it is in WhatsApp.)

That was followed up by a conference by the US Department of Justice: “Lawless Spaces: Warrant Proof Encryption and its Impact on Child Exploitation Cases.” US Attorney General William Barr gave a speech on the subject. Then came an open letter to Facebook from Barr and others from the UK and Australia, using “protecting children” as the basis for their demand that the company not implement strong end-to-end encryption. (I signed on to another open letter in response.) Then, the FBI tried to get Interpol to publish a statement denouncing end-to-end encryption.

This week, the Senate Judiciary Committee held a hearing on backdoors: “Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy.” Video, and written testimonies, are available at the link. Erik Neuenschwander from Apple was there to support strong encryption, but the other witnesses were all against it. New York District Attorney Cyrus Vance was true to form:

In fact, we were never able to view the contents of his phone because of this gift to sex traffickers that came, not from God, but from Apple.

It was a disturbing hearing. The Senators asked technical questions to people who couldn’t answer them. The result was that an adjunct law professor was able to frame the issue of strong encryption as an externality caused by corporate liability dumping, and another example of Silicon Valley’s anti-regulation stance.

Let me be clear. None of us who favor strong encryption is saying that child exploitation isn’t a serious crime, or a worldwide problem. We’re not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, law enforcement has other avenues for criminal investigation than eavesdropping on communications and stored devices. This is one example, where people unraveled a dark-web website and arrested hundreds by analyzing Bitcoin transactions. This is another, where police arrested members of a WhatsApp group.

So let’s have reasoned policy debates about encryption — debates that are informed by technology. And let’s stop it with the scare stories.

EDITED TO ADD (12/13): The DoD just said that strong encryption is essential for national security.

All DoD issued unclassified mobile devices are required to be password protected using strong passwords. The Department also requires that data-in-transit, on DoD issued mobile devices, be encrypted (e.g. VPN) to protect DoD information and resources. The importance of strong encryption and VPNs for our mobile workforce is imperative. Last October, the Department outlined its layered cybersecurity approach to protect DoD information and resources, including service men and women, when using mobile communications capabilities.

[…]

As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources. The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security.

RSA-240 Factored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/rsa-240_factore.html

This just in:

We are pleased to announce the factorization of RSA-240, from RSA’s challenge list, and the computation of a discrete logarithm of the same size (795 bits):

RSA-240 = 12462036678171878406583504460810659043482037465167880575481878888328
          966680118821085503603957027250874750986476843845862105486553797025393057189121
          768431828636284694840530161441643046806687569941524699318570418303051254959437
          1372159029236099
        = 509435952285839914555051023580843714132648382024111473186660296521821206469746
          700620316443478873837606252372049619334517
        * 244624208838318150567813139024002896653802092578931401452041221336558477095178
          155258218897735030590669041302045908071447

[…]

The previous records were RSA-768 (768 bits) in December 2009 [2], and a 768-bit prime discrete logarithm in June 2016 [3].

It is the first time that two records for integer factorization and discrete logarithm are broken together, moreover with the same hardware and software.

Both computations were performed with the Number Field Sieve algorithm, using the open-source CADO-NFS software [4].

The sum of the computation time for both records is roughly 4000 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz). A rough breakdown of the time spent in the main computation steps is as follows.

RSA-240 sieving: 800 physical core-years
RSA-240 matrix: 100 physical core-years
DLP-240 sieving: 2400 physical core-years
DLP-240 matrix: 700 physical core-years

The computation times above are well below the time that was spent with the previous 768-bit records. To measure how much of this can be attributed to Moore’s law, we ran our software on machines that are identical to those cited in the 768-bit DLP computation [3], and reach the conclusion that sieving for our new record size on these old machines would have taken 25% less time than the reported sieving time of the 768-bit DLP computation.
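
Verifying a claimed factorization is, of course, trivial compared with finding it. A quick sanity check in Python, with the digit groups copied verbatim from the announcement above:

    n = int(
        "12462036678171878406583504460810659043482037465167880575481878888328"
        "966680118821085503603957027250874750986476843845862105486553797025393057189121"
        "768431828636284694840530161441643046806687569941524699318570418303051254959437"
        "1372159029236099"
    )
    p = int(
        "509435952285839914555051023580843714132648382024111473186660296521821206469746"
        "700620316443478873837606252372049619334517"
    )
    q = int(
        "244624208838318150567813139024002896653802092578931401452041221336558477095178"
        "155258218897735030590669041302045908071447"
    )

    assert p * q == n
    print(n.bit_length())  # 795, per the announcement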

EDITED TO ADD (12/4): News article. Dan Goodin points out that the speed improvements were more due to improvements in the algorithms than from Moore’s Law.

The NSA Warns of TLS Inspection

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/the_nsa_warns_o.html

The NSA has released a security advisory warning of the dangers of TLS inspection:

Transport Layer Security Inspection (TLSI), also known as TLS break and inspect, is a security process that allows enterprises to decrypt traffic, inspect the decrypted content for threats, and then re-encrypt the traffic before it enters or leaves the network. Introducing this capability into an enterprise enhances visibility within boundary security products, but introduces new risks. These risks, while not inconsequential, do have mitigations.

[…]

The primary risk involved with TLSI’s embedded CA is the potential abuse of the CA to issue unauthorized certificates trusted by the TLS clients. Abuse of a trusted CA can allow an adversary to sign malicious code to bypass host IDS/IPSs or to deploy malicious services that impersonate legitimate enterprise services to the hosts.

[…]

A further risk of introducing TLSI is that an adversary can focus their exploitation efforts on a single device where potential traffic of interest is decrypted, rather than try to exploit each location where the data is stored. Setting a policy to enforce that traffic is decrypted and inspected only as authorized, and ensuring that decrypted traffic is contained in an out-of-band, isolated segment of the network prevents unauthorized access to the decrypted traffic.

[…]

To minimize the risks described above, breaking and inspecting TLS traffic should only be conducted once within the enterprise network. Redundant TLSI, wherein a client-server traffic flow is decrypted, inspected, and re-encrypted by one forward proxy and is then forwarded to a second forward proxy for more of the same, should not be performed. Inspecting multiple times can greatly complicate diagnosing network issues with TLS traffic. Also, multi-inspection further obscures certificates when trying to ascertain whether a server should be trusted. In this case, the “outermost” proxy makes the decisions on what server certificates or CAs should be trusted and is the only location where certificate pinning can be performed. Finally, a single TLSI implementation is sufficient for detecting encrypted traffic threats; additional TLSI will have access to the same traffic. If the first TLSI implementation detected a threat, killed the session, and dropped the traffic, then additional TLSI implementations would be rendered useless since they would not even receive the dropped traffic for further inspection. Redundant TLSI increases the risk surface, provides additional opportunities for adversaries to gain unauthorized access to decrypted traffic, and offers no additional benefits.

Nothing surprising or novel. No operational information about who might be implementing these attacks. No classified information revealed.

News article.

Eavesdropping on SMS Messages inside Telco Networks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/eavesdropping_o_8.html

Fireeye reports on a Chinese-sponsored espionage effort to eavesdrop on text messages:

FireEye Mandiant recently discovered a new malware family used by APT41 (a Chinese APT group) that is designed to monitor and save SMS traffic from specific phone numbers, IMSI numbers and keywords for subsequent theft. Named MESSAGETAP, the tool was deployed by APT41 in a telecommunications network provider in support of Chinese espionage efforts. APT41’s operations have included state-sponsored cyber espionage missions as well as financially-motivated intrusions. These operations have spanned from as early as 2012 to the present day. For an overview of APT41, see our August 2019 blog post or our full published report.

Yet another example that demonstrates why end-to-end message encryption is so important.

Post-quantum TLS now supported in AWS KMS

Post Syndicated from Andrew Hopkins original https://aws.amazon.com/blogs/security/post-quantum-tls-now-supported-in-aws-kms/

AWS Key Management Service (AWS KMS) now supports post-quantum hybrid key exchange for the Transport Layer Security (TLS) network encryption protocol that is used when connecting to KMS API endpoints. In this post, I’ll tell you what post-quantum TLS is, what hybrid key exchange is, why it’s important, how to take advantage of this new feature, and how to give us feedback.

What is post-quantum TLS?

Post-quantum TLS is a feature that adds new, post-quantum cipher suites to the protocol. AWS implements TLS using s2n, a streamlined open source implementation of TLS. In June 2019, AWS introduced post-quantum s2n, which implements two proposed post-quantum hybrid cipher suites specified in this IETF draft. The cipher suites specify a key exchange that provides the security protections of both the classical and post-quantum schemes.

Why is this important?

A large-scale quantum computer would break the current public key cryptography that is used for key exchange in every TLS connection. While a large-scale quantum computer is not available today, it’s still important to think about and plan for your long-term security needs. TLS traffic recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should consider a plan to migrate to post-quantum cryptography before a large-scale quantum computer is available for use by potential adversaries. AWS is working to prepare for this future, and we want you to be well-prepared, too.

We’re offering this feature now instead of waiting so you’ll have a way to measure the potential performance impact to your applications, and you’ll have the additional benefit of the protection afforded by the proposed post-quantum schemes today. While we believe the use of this feature raises the already high security bar for connecting to KMS endpoints, these new cipher suites will have an impact on bandwidth utilization, latency, and could also create issues for intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation so we can improve it over time.

Some background on post-quantum TLS

Today, all requests to AWS KMS use TLS with one of two key exchange schemes:

  • Elliptic Curve Diffie-Hellman Ephemeral (ECDHE)
  • Finite Field Diffie-Hellman Ephemeral (FFDHE)

FFDHE and ECDHE are industry standards for secure key exchange. KMS uses only ephemeral keys for TLS key negotiation; this ensures every connection uses a unique key and the compromise of one connection does not affect the security of another connection. They are secure today against known cryptanalysis techniques that use classical computers; however, they’re not secure against known attacks that use a large-scale quantum computer. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded session, and therefore gain access to the data inside. Protecting against a large-scale quantum computer requires using a post-quantum key exchange algorithm during the TLS handshake.

The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. The National Institute of Standards and Technology (NIST) has started the process of standardizing post-quantum cryptographic algorithms. AWS contributed to two NIST submissions:

  • BIKE (Bit Flipping Key Encapsulation)
  • SIKE (Supersingular Isogeny Key Encapsulation)

BIKE and SIKE are Key Encapsulation Mechanisms (KEMs); a KEM is a type of key exchange used to establish a shared symmetric key. Post-quantum s2n only uses ephemeral BIKE and SIKE keys.

The NIST standardization process isn’t expected to complete until 2024. Until then, there is a risk that the exclusive use of proposed algorithms like BIKE and SIKE could expose data in TLS connections to security vulnerabilities not yet discovered. To mitigate this risk and use these new post-quantum schemes safely today, we need a way to combine classical algorithms with the expected post-quantum security of the new algorithms submitted to NIST. The Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 IETF draft describes how to combine BIKE and SIKE with ECDHE to create two new cipher suites for TLS.

These two cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake and then cryptographically combines the keys into a single TLS session key. This strategy combines the high assurance of a classical key exchange with the security of the proposed post-quantum key exchanges.
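
The exact construction is specified in the IETF draft and implemented inside s2n; as a rough illustration of the idea only, the two shared secrets can be concatenated and fed through a key derivation function such as HKDF to produce the session key material:

    # Conceptual sketch of hybrid key derivation, not s2n's actual code.
    import hashlib
    import hmac
    import os

    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # Placeholders for the outputs of the two independent key exchanges.
    ecdhe_shared_secret = os.urandom(32)  # from the classical ECDHE exchange
    pq_shared_secret = os.urandom(32)     # from the BIKE or SIKE KEM

    prk = hkdf_extract(salt=b"", ikm=ecdhe_shared_secret + pq_shared_secret)
    session_key = hkdf_expand(prk, info=b"hybrid tls key material")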

The effect of hybrid post-quantum TLS on performance

Post-quantum cipher suites have different performance profiles and bandwidth requirements than traditional cipher suites. We measured the latency and bandwidth for a single handshake on an EC2 c5.2xlarge instance. This provides a baseline for what to expect when you connect to KMS with the SDK. Your exact results will depend on your hardware (CPU speed and number of cores), existing workloads (how often you call KMS and what other work your application performs), and your network (location and capacity).

BIKE and SIKE have different performance tradeoffs: BIKE has faster computations and larger keys, and SIKE has slower computations and smaller keys. The tables below show the results of the AWS measurements. ECDHE, a classic cryptographic key exchange algorithm, is included by itself for comparison.

Table 1

TLS Message         ECDHE (bytes)   ECDHE w/ BIKE (bytes)   ECDHE w/ SIKE (bytes)
ClientHello         139             147                     147
ServerKeyExchange   329             2,875                   711
ClientKeyExchange   66              2,610                   470

Table 1 shows the amount of data (in bytes) sent in each TLS message. The ClientHello message is larger for post-quantum cipher suites because they include a new ClientHello extension. The key exchange messages are larger because they include BIKE or SIKE messages.

Table 2

Item                     ECDHE (ms)   ECDHE w/ BIKE (ms)   ECDHE w/ SIKE (ms)
Server processing time   0.112        0.269                5.53
Client processing time   0.10         0.395                7.05
Total handshake time     1.19         25.58                155.08

Table 2 shows the time (in milliseconds) a client and server in the same region take to complete a handshake. Server processing time includes: key generation, signing the server key exchange message, and processing the client key exchange message. The client processing time includes: verifying the server’s certificate, processing the server key exchange message, and generating the client key exchange message. The total time was measured on the client from the start of the handshake to the end and includes network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. The BIKE test used the BIKE-1 Level 1 parameter and the SIKE test used the SIKEp503 parameter.

A TLS handshake is only performed once to set up a new connection. The SDK will reuse connections for multiple KMS requests when possible. This means that you don’t want to include measurements of subsequent round-trips under an existing TLS session, otherwise you will skew your performance data.

How to use hybrid post-quantum cipher suites

Note: The “AWS CRT HTTP Client” in the aws-crt-dev-preview branch of the aws-sdk-java-v2 repository is a beta release. This beta release and your use are subject to Section 1.10 (“Beta Service Participation”) of the AWS Service Terms.

To use the post-quantum cipher suites with AWS KMS, you’ll need the Developer Previews of the Java SDK 2.0 and the AWS Common Runtime. You’ll need to configure the AWS Common Runtime HTTP client to use s2n’s post-quantum hybrid cipher suites, and configure the AWS Java SDK 2.0 to use that HTTP client. This client can then be used when connecting to any KMS endpoint that is not using FIPS 140-2 validated crypto for the TLS termination. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.

To see a complete example of everything set up, check out the example application here.

Figure 1: GitHub and package layout

Figure 1 shows the GitHub and package layout. The steps below will walk you through building and configuring the SDK.

  1. Download the Java SDK v2 Common Runtime Developer Preview:
    
    $ git clone git@github.com:aws/aws-sdk-java-v2.git --branch aws-crt-dev-preview
    $ cd aws-sdk-java-v2
    

  2. Build the aws-crt-client JAR:
    
    $ mvn install -Pquick
    

  3. In your project add the AWS Common Runtime client to your Maven Dependencies:
    
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>2.10.7-SNAPSHOT</version>
    </dependency>
    

  4. Configure the new SDK and cipher suite in your application’s existing initialization code:
    
    // Fail fast if this platform's s2n/CRT build doesn't support the post-quantum suites.
    if(!TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06.isSupported()){
        throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
    }
    // Build an AWS Common Runtime HTTP client that prefers the hybrid
    // post-quantum cipher suites during the TLS handshake.
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06)
              .build();
    // Point the KMS async client at that HTTP client, then make a test call.
    KmsAsyncClient kms = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();
    ListKeysResponse response = kms.listKeys().get();
    

Now, all connections made to AWS KMS in supported regions will use the new hybrid post-quantum cipher suites.

Things to try

Here are some ideas about how to use this post-quantum-enabled client:

  • Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
  • Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your Security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic.

More info

If you’re interested in learning more about post-quantum cryptography, check out:

Conclusion

In this blog post, I introduced you to the topic of post-quantum security and covered what AWS and NIST are doing to address the issue. I also showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to KMS endpoints.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about how to configure the HTTP client or its interaction with KMS endpoints, please start a new thread on the AWS KMS discussion forum.

Former FBI General Counsel Jim Baker Chooses Encryption Over Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/former_fbi_gene.html

In an extraordinary essay, the former FBI general counsel Jim Baker makes the case for strong encryption over government-mandated backdoors:

In the face of congressional inaction, and in light of the magnitude of the threat, it is time for governmental authorities — including law enforcement — to embrace encryption because it is one of the few mechanisms that the United States and its allies can use to more effectively protect themselves from existential cybersecurity threats, particularly from China. This is true even though encryption will impose costs on society, especially victims of other types of crime.

[…]

I am unaware of a technical solution that will effectively and simultaneously reconcile all of the societal interests at stake in the encryption debate, such as public safety, cybersecurity and privacy as well as simultaneously fostering innovation and the economic competitiveness of American companies in a global marketplace.

[…]

All public safety officials should think of protecting the cybersecurity of the United States as an essential part of their core mission to protect the American people and uphold the Constitution. And they should be doing so even if there will be real and painful costs associated with such a cybersecurity-forward orientation. The stakes are too high and our current cybersecurity situation too grave to adopt a different approach.

Basically, he argues that the security value of strong encryption greatly outweighs the security value of encryption that can be bypassed. He endorses a “defense dominant” strategy for Internet security.

Keep in mind that Baker led the FBI’s legal case against Apple regarding the San Bernardino shooter’s encrypted iPhone. In writing this piece, Baker joins the growing list of former law enforcement and national security senior officials who have come out in favor of strong encryption over backdoors: Michael Hayden, Michael Chertoff, Richard Clarke, Ash Carter, William Lynn, and Mike McConnell.

Edward Snowden also agrees.

EDITED TO ADD: Good commentary from Cory Doctorow.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

Your new free online training courses for the autumn term

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/free-online-training-courses-autumn-19/

Over the autumn term, we’ll be launching three brand-new, free online courses on the FutureLearn platform. Wherever you are in the world, you can learn with us for free!

The course presenters are Pi Towers residents Mark, Janina, and Eirini

Design and Prototype Embedded Computer Systems

The first new course is Design and Prototype Embedded Computer Systems, which will start on 28 October. In this course, you will discover the product design life cycle as you design your own embedded system!

A diagram illustrating the iterative design life cycle with four stages: Analyse, design, build, test

You’ll investigate how the purpose of the system affects the design of the system, from choosing its components to the final product, and you’ll find out more about the design of an algorithm. You will also explore how embedded systems are used in the world around us. Book your place today!

Programming 103: Saving and Structuring Data

What else would you expect us to call the sequel to Programming 101 and Programming 102? That’s right — we’ve made Programming 103: Saving and Structuring Data! The course will begin on 4 November, and you can reserve your place now.

Illustration of a robot reading a book called 'human 2 binary phrase book'

Programming 103 explores how to use data across multiple runs of your program. You’ll learn how to save text and binary files, and how structuring data is necessary for programs to “understand” the data that they load. You’ll look at common types of structured files such as CSV and JSON files, as well as how you can connect to a SQL database to use it in your Python programs.
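
As a small taste of the kind of thing the course covers (our own illustration, not taken from the course materials), here is how a simple data structure can be saved to a JSON file and loaded back in Python:

    import json

    scores = [
        {"name": "Ada", "points": 42},
        {"name": "Grace", "points": 37},
    ]

    # Save the structure to disk...
    with open("scores.json", "w") as f:
        json.dump(scores, f, indent=2)

    # ...and load it back in a later run of the program.
    with open("scores.json") as f:
        loaded = json.load(f)

    print(loaded[0]["name"], "scored", loaded[0]["points"])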

Introduction to Encryption and Cryptography

The third course, Introduction to Encryption and Cryptography, is currently in development, and therefore coming soon. In this course, you’ll learn what encryption is and how it was used in the past, and you’ll use the Caesar and Vigenère ciphers.

The Caesar cipher is a type of substitution cipher
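
For a flavour of what that looks like in code, here is a minimal Caesar cipher in Python (our own sketch, not the course's code), shifting each letter by a fixed amount and leaving everything else untouched:

    def caesar(text, shift):
        result = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                result.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                result.append(ch)
        return "".join(result)

    print(caesar("HELLO", 3))   # KHOOR
    print(caesar("KHOOR", -3))  # HELLO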

You’ll also look at modern encryption and investigate both symmetric and asymmetric encryption schemes. The course also shows you the future of encryption, and it includes several practical encryption activities, which can be used in the classroom too.

National Centre for Computing Education

If you’re a secondary school teacher in England, note that all of the above courses count towards your Computer Science Accelerator Programme certificate.

The very first group of teachers who earned Computer Science Accelerator Programme certificates: they got to celebrate their graduation at Google HQ in London.

What’s been your favourite online course this year? Tell us about it in the comments.

The post Your new free online training courses for the autumn term appeared first on Raspberry Pi.

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/crown_sterling_.html

Earlier this month, I made fun of a company called Crown Sterling, for…for…for being a company that deserves being made fun of.

This morning, the company announced that they “decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer.” Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: “Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing.” They didn’t demonstrate it, but if they’re right they’ve matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.

Is anyone taking this company seriously anymore? I honestly wouldn’t be surprised if this was a hoax press release. It’s not currently on the company’s website. (And, if it is a hoax, I apologize to Crown Sterling. I’ll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: “Today’s decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys.”

People, this isn’t hard. Find an RSA Factoring Challenge number that hasn’t been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won’t have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

EDITED TO ADD (9/24): More commentary.

How Cloudflare and Wall Street Are Helping Encrypt the Internet Today

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/how-cloudflare-and-wall-street-are-helping-encrypt-the-internet-today/

Today has been a big day for Cloudflare, as we became a public company on the New York Stock Exchange (NYSE: NET). To mark the occasion, we decided to bring our favorite entropy machines to the floor of the NYSE. Footage of these lava lamps is being used as an additional seed to our entropy-generation system LavaRand — bolstering Internet encryption for over 20 million Internet properties worldwide.

(This is mostly for fun. But when’s the last time you saw a lava lamp on the trading floor of the New York Stock Exchange?)

A little context: generating truly random numbers using computers is impossible, because code is inherently deterministic (i.e. predictable). To compensate for this, engineers draw from pools of randomness created by entropy generators, which is a fancy term for “things that are truly unpredictable”.

It turns out that lava lamps are fantastic sources of entropy, as was first shown by Silicon Graphics in the 1990s. It’s a torch we’ve been proud to carry forward: today, Cloudflare uses lava lamps to generate entropy that helps make millions of Internet properties more secure.

Housed in our San Francisco headquarters is a wall filled with dozens of lava lamps, undulating with mesmerizing randomness. We capture these lava lamps on video via a camera mounted across the room, and feed the resulting footage into an algorithm — called LavaRand — that amplifies the pure randomness of these lava lamps to dizzying extremes (computers can’t create seeds of pure randomness, but they can massively amplify them).
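
The principle is easy to sketch (this is a toy illustration, not Cloudflare's implementation): hash the raw bytes of an unpredictable image and mix the digest with other entropy sources to derive seed material, which a real system would then feed into a proper cryptographically secure random number generator.

    import hashlib
    import os

    # Placeholder for a captured frame of the lava-lamp wall.
    frame_bytes = os.urandom(1024 * 768 * 3)

    # Mix the frame with other entropy and reduce it to a fixed-size seed.
    seed = hashlib.sha256(frame_bytes + os.urandom(32)).digest()
    print("256-bit seed:", seed.hex())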

Shortly before we rang the opening bell this morning, we recorded footage of our lava lamps in operation on the trading room floor of the New York Stock Exchange, and we’re ingesting the footage into our LavaRand system. The resulting entropy is mixed with the myriad additional sources of entropy that we leverage every day, creating a cryptographically-secure source of randomness — fortified by Wall Street.

We recently took our enthusiasm for randomness a step further by facilitating the League of Entropy, a consortium of global organizations and individual contributors, generating verifiable randomness via a globally distributed network. As one of the founding members of the League, LavaRand (pictured above) plays a key role in empowering developers worldwide with a pool of randomness with extreme entropy and high reliability.

And today, she’s enjoying the view from the podium!


One caveat: the lava lamps we run in our San Francisco headquarters are recorded in real-time, 24/7, giving us an ongoing stream of entropy. For reasons that are understandable, the NYSE doesn’t allow for live video feeds from the exchange floor while it is in operation. But this morning they did let us record footage of the lava lamps operating shortly before the opening bell. The video was recorded and we’re ingesting it into our LavaRand system (alongside many other entropy generators, including the lava lamps back in San Francisco).