
China Possibly Hacking US “Lawful Access” Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/10/china-possibly-hacking-us-lawful-access-backdoor.html

The Wall Street Journal is reporting that Chinese hackers (Salt Typhoon) penetrated the networks of US broadband providers, and might have accessed the backdoors that the federal government uses to execute court-authorized wiretap requests. Those backdoors have been mandated by law—CALEA—since 1994.

It’s a weird story. The first line of the article is: “A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers.” This implies that the attack wasn’t against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers.

For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers.

Other news stories.

How to identify inactive users of Amazon Q Developer

Post Syndicated from Brian Beach original https://aws.amazon.com/blogs/devops/how-to-identify-inactive-users-of-amazon-q-developer/

Generative AI is leading to many new features and capabilities. As a result, your employees may not know about all the new tools you are deploying. I was recently working with a customer that had deployed Amazon Q Developer for all of their software developers. However, many developers didn’t know they had access to the productivity companion. In this post, I will show you how to retrieve the list of users who have not yet activated their subscription, so you can reach out to them individually and remind them of the value a tool like Q Developer can bring to their daily work.

Amazon Q recently launched a feature that provides administrators more details about user subscriptions and usage. This capability provides insight into which users are adopting the service, their subscription status (e.g., active, pending, under free trial, canceled), and their corresponding associations. To get started, I will navigate to the Amazon Q console.

Note: I am navigating to the Amazon Q console, rather than Amazon Q Developer console. The Amazon Q console is used to manage subscriptions for both Amazon Q Business and Amazon Q Developer. The Amazon Q Developer console is used to configure features unique to Q Developer, such as customizations.

Once in the Amazon Q console, I select Subscriptions from the navigation options on the left. Then I select the Users tab. This view lists all the users that have access to Amazon Q. In the following example, I am viewing the organization instance. Therefore, the report includes users from all the accounts in my organization. Notice that the subscription status column tells me if a user is active, pending, or canceled. A pending user is one that has been invited, but has not yet activated a subscription. A user is active if they have configured the Amazon Q Developer extension or plugin in their integrated development environment (IDE).

Screenshot of the Amazon Q console showing the "Subscribed groups and users" page. The page displays a table of 120 users with columns for User name, Identity provider user ID, Subscription, and Subscription status. The table shows 10 users, some with "Active" status and others with "Pending" status for Amazon Q Developer Pro subscriptions. Options to download a total users report and search are visible above the table.

While I could filter the view using the search box, I prefer to click the Download the total users report button. This creates a comma-separated value (CSV) file that I will use in a mail merge. With the CSV file downloaded, I next create an email template used to send an email to all the pending users. Of course, I’ll use Generative AI to write the email. Amazon Q Business helped me create the following template that articulates the value proposition and includes a link to the Amazon Q Developer documentation to help the developer get started. You might prefer to include links to your internal wiki rather than the public documentation.

Subject: Activate Your Amazon Q Developer Subscription Today!

Dear Developer,

We hope this email finds you well. We noticed that you have an Amazon Q Developer subscription that hasn’t been activated yet. We wanted to remind you about this powerful tool and encourage you to start using it today!

Why Use Amazon Q Developer? Amazon Q Developer offers numerous benefits to streamline your development process:

  • AI-Powered Coding Assistance: Get real-time code suggestions and completions.
  • Intelligent Code Reviews: Receive automated feedback on your code quality and security.
  • Natural Language Query: Ask questions about your codebase in plain English.
  • Seamless Integration: Works with popular IDEs and the command line.

To get started, check out Installing the Amazon Q Developer extension. You will need the following AWS IAM Identity Center start URL and region.

  • Start URL: <insert start URL>.
  • Region: <insert region>.

Don’t miss out on the opportunity to enhance your development workflow and increase your productivity. Activate your Amazon Q Developer subscription today and experience the future of AI-assisted coding!

If you have any questions or need assistance, please don’t hesitate to reach out to our support team at <insert email address>.

Happy coding!

Best regards, The Cloud Center of Excellence Team

Now, I can run a simple mail merge to inform users that they have access to an Amazon Q Developer subscription. Before I close, I want to note that this post only briefly describes the reporting available in Amazon Q Developer. If you would like to learn more, you can read about the developer dashboard, Amazon CloudWatch metrics, and AWS CloudTrail telemetry events provided by Amazon Q Developer.
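For illustration, here is a minimal, hypothetical Java sketch of the filtering step: it reads the downloaded report and prints the users whose subscription status is still Pending, so the resulting list can be fed into whatever mail-merge tool you use. The file name and the column headers (“User name”, “Subscription status”) are assumptions based on the console view described above; check the header row of your actual CSV, and note that the naive comma split does not handle quoted fields.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch: list users from the downloaded report whose subscription
// status is still "Pending". File name and column headers are assumptions.
public class PendingUsers {

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("total_users_report.csv"));
        String[] header = lines.get(0).split(",");
        int nameCol = indexOf(header, "User name");              // assumed column header
        int statusCol = indexOf(header, "Subscription status");  // assumed column header

        for (String line : lines.subList(1, lines.size())) {
            String[] fields = line.split(",", -1); // naive split; does not handle quoted commas
            if (fields.length > statusCol && "Pending".equalsIgnoreCase(fields[statusCol].trim())) {
                System.out.println(fields[nameCol].trim()); // feed this list into your mail merge
            }
        }
    }

    private static int indexOf(String[] header, String name) {
        for (int i = 0; i < header.length; i++) {
            if (header[i].trim().equalsIgnoreCase(name)) {
                return i;
            }
        }
        throw new IllegalArgumentException("Column not found: " + name);
    }
}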

Conclusion

Your employees may not know about all the new tools you are deploying. Amazon Q gives you the power to discover which users have activated their subscription. In this post, I showed you how to download the list of users who are not yet actively using the productivity tool, so you can contact them and increase subscription activation. To learn how to activate Amazon Q Developer for your developers, read about managing subscriptions in the user guide.

Accelerate application upgrades with Amazon Q Developer agent for code transformation

Post Syndicated from Jonathan Vogel original https://aws.amazon.com/blogs/devops/accelerate-application-upgrades-with-amazon-q-developer-agent-for-code-transformation/

In this blog, we will explore how Amazon Q Developer Agent for code transformation accelerates Java application upgrades. We will examine the benefits of this Generative AI-powered agent and outline strategies to achieve maximal acceleration, drawing from real-world success stories and best practices.

Benefits of using Amazon Q Developer to upgrade your applications

Amazon Q Developer addresses a critical challenge for organizations managing numerous Java applications, particularly as they face the approaching end of long-term support (LTS) for older Java versions. Upgrading to Java 17 enhances security, resolves vulnerabilities, and improves performance while ensuring long-term compatibility and access to modern features. Currently, the Q Developer agent for code transformation supports upgrades from Java 8 and 11 to Java 17. Software developers can use Q Developer within their IDE (VS Code and JetBrains IDEs) to transform both single-module and multi-module applications. Q Developer generates a plan that identifies the necessary library upgrades and replacements for deprecated code in the application, proposing code changes with the goal of ensuring the transformed code compiles successfully in Java 17. Q Developer can significantly enhance the efficiency of your migration workflow, performing code transformations on applications in hours rather than weeks.

Customer success of using Q Developer to modernize legacy Java applications

Customers have used Q Developer to upgrade their Java applications successfully. Here is how two customers, as well as Amazon’s internal teams, have used Q Developer to accelerate the migration process.

A large insurance company in North America strategically approached their Java upgrade initiative by identifying applications with dependencies that Q Developer could upgrade effectively. They focused on applications that rely on frameworks like Spring Boot, which can be time-consuming to upgrade manually. After using Q Developer to transform four applications in a pilot, they estimated a 36% acceleration in their upgrade process, indicating that Q Developer automatically completed over a third of the work that would otherwise have been required manually. While the remaining portion still required manual intervention to ensure the code would build and run correctly, the acceleration was significant.

A major financial services firm’s experience with Q Developer proved equally compelling. In a focused two-day workshop, 20 developers successfully transformed 20 production applications using the Amazon Q Developer agent. This resulted in 42% time savings compared to a manual upgrade, saving on average 24 hours per application. The firm spent about three weeks preparing for the transformation workshop. They identified first-party (1P) dependencies—internal libraries that other production applications rely on. Q Developer does not guarantee upgrades of 1P dependencies, so, with a combination of Q Developer and manual work, the customer upgraded many of these common 1P dependencies leading up to the workshop. This step was crucial to gaining maximum acceleration while using Q Developer for the upgrades.

Amazon uses Q Developer internally to upgrade Java applications through company-wide campaigns. The central team that owns the campaigns provides detailed guidance on which Java applications can be upgraded with Q Developer most effectively. This team also manages Amazon’s internal build system and provides tooling to automate part of the manual effort. The savings have been significant: Amazon was able to upgrade more than 50% of its production applications in six months, and 79% of the auto-generated code reviews were applied without additional changes.

Use Q Developer to upgrade your applications

To ensure that Q Developer is properly applied to the specific characteristics of their codebases, customers create and follow a transformation approach. Teams and individuals who understand the scope of the upgrade run campaigns across the company to use Q Developer effectively. To maximize the acceleration from Q Developer, these teams classify the applications that need to be upgraded, identify which ones can be upgraded using Q Developer, and estimate the manual effort required, which provides a baseline for measuring the value added by the Q Developer agent for code transformation. The preparation phase is crucial before starting the execution phase of the upgrade. Each of the steps in the preparation phase plays an important role in maximizing the acceleration Amazon Q provides in the upgrade process.

  1. Classifying the applications to upgrade: Q Developer supports the upgrade of the 30 most common Java libraries. Its performance on less common and internal libraries is lower than on the common libraries; in those cases, you can use a combination of Q Developer and manual steps. It’s recommended to include both production applications and internal dependencies in this step. You should also classify your applications and internal libraries based on whether and how they are used by other applications, which will help you prioritize the applications to upgrade first in campaigns. Classifying applications by the libraries they use can help you identify the best upgrade approach using Q Developer (see the dependency-inventory sketch after this list).
  2. Defining baselines of efficiency: To measure the efficiency of the upgrade effort in your organization, it is crucial to establish baselines. Based on the classification of applications, use Q Developer in a pilot for each class to see which libraries are transformed correctly and which ones have to be handled manually. This helps you operationalize the process of using Q Developer along with the manual steps required, and understand how this procedure accelerates the upgrade of a certain class of applications. Some customers use the manual effort hours spent upgrading dependency versions and deprecated code as the baseline, and compare those hours with the time taken to complete the upgrade using Q Developer. For example, you can classify the applications based on the main frameworks used before upgrading them with Q Developer, then compare the time taken by Q Developer with the manual upgrade hours to understand which applications Q Developer can upgrade most effectively.
  3. Identifying applications for migration: Decide which applications to use Q Developer for, and prioritize the applications to upgrade in waves based on expected acceleration and business value. You can prioritize the applications that are most used by other applications and upgrade them in the initial campaign, then upgrade the rest in subsequent campaigns. Addressing the foundational components first streamlines the overall upgrade process. At Amazon, a centralized internal team defines migration waves and identifies which packages will be included in each upgrade campaign. This team also analyzes the applications to determine the likelihood of a successful upgrade with Q Developer and estimates the remaining engineering effort needed to complete it. The team uses this information to select applications and assigns the upgrade tasks to the owning teams through an Amazon-internal tool. While SDEs were free to run the upgrade on their own, following a campaign with a set deadline mobilized the application owner teams to complete the upgrade.
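As a hypothetical illustration of the classification step above (not part of the AWS tooling), the following Java sketch lists the Maven dependencies declared in an application’s pom.xml so you can group applications by the frameworks and libraries they use. The pom.xml path and the idea of grouping by groupId:artifactId coordinates are assumptions; adapt it for Gradle builds or multi-module projects.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical sketch for the classification step: list the declared Maven
// dependencies of one application so apps can be grouped by the libraries they use.
public class DependencyInventory {

    public static void main(String[] args) throws Exception {
        File pom = new File(args.length > 0 ? args[0] : "pom.xml"); // assumed default path
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(pom);

        // Note: this also picks up <dependency> entries under dependencyManagement
        // and plugins, which is fine for a coarse inventory.
        NodeList deps = doc.getElementsByTagName("dependency");
        List<String> coordinates = new ArrayList<>();
        for (int i = 0; i < deps.getLength(); i++) {
            Element dep = (Element) deps.item(i);
            coordinates.add(text(dep, "groupId") + ":" + text(dep, "artifactId"));
        }

        // Group or count these coordinates across your portfolio to classify apps,
        // e.g. Spring Boot apps vs. apps built on less common internal libraries.
        coordinates.forEach(System.out::println);
    }

    private static String text(Element parent, String tag) {
        NodeList nodes = parent.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent().trim() : "";
    }
}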

Use Q Developer to automate upgrade tasks

Once the preparation phase is completed, you can start the execution phase. Software developers can use Q Developer to accelerate many of the steps in the execution phase.

  1. Assess the components of the application to upgrade. When you use Q Developer to start a transformation, it generates a transformation plan at the beginning so you can see which dependencies and deprecated code will be upgraded.
  2. Research and update dependency versions compatible with the target version. Q Developer analyzes your app and attempts to update the dependencies to versions compatible with the target Java version, and in some cases to the latest version.
  3. Replace deprecated methods and API calls that are not compatible with the target version. Q Developer detects the deprecated code and attempts to update it to what is recommended in the target Java version (see the sketch after this list).
  4. Review the modified code and address any conflicts or issues that may arise. Q Developer returns code changes to you at the end of the transformation. If the transformation is successful, the app compiles in Java 17. If the transformation is partially successful, Q Developer was able to upgrade library versions and make code changes but could not compile the transformed app in Java 17. Check out this part of our documentation on how to handle partial transformations.
  5. Test the upgraded application thoroughly to ensure correct functionality. Q Developer runs the unit tests and integration tests in your app when compiling in the target version.
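To make the deprecated-code step concrete, here is a small, hypothetical before/after example (not taken from the post) of the kind of change such a transformation proposes when targeting Java 17. The LegacyHandler class exists only for illustration.

// Hypothetical before/after: two APIs deprecated since Java 9 and their
// Java 17-friendly replacements. Not actual output from the Q Developer agent.
public class UpgradeExample {

    // Hypothetical class used only for illustration.
    static class LegacyHandler {
        public LegacyHandler() {}
    }

    // Java 8-era code: both calls are deprecated in newer JDKs.
    @SuppressWarnings("deprecation")
    static LegacyHandler before() throws Exception {
        Integer retries = new Integer(3);             // boxing constructor, deprecated since Java 9
        System.out.println("before: retries=" + retries);
        return LegacyHandler.class.newInstance();     // Class.newInstance(), deprecated since Java 9
    }

    // The equivalents a transformation would typically propose for Java 17.
    static LegacyHandler after() throws Exception {
        Integer retries = Integer.valueOf(3);
        System.out.println("after: retries=" + retries);
        return LegacyHandler.class.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        before();
        after();
    }
}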

Conclusion

As organizations face the pressing need to modernize their Java applications, Amazon Q Developer emerges as a powerful ally in this complex journey. The customer success stories demonstrate the tangible benefits of leveraging AI-assisted code transformation: significant time savings, reduced manual effort, and accelerated upgrade processes.

Q Developer not only addresses the technical challenges of Java upgrades, but also enables organizations to approach these initiatives strategically. By classifying applications, establishing baselines, and prioritizing upgrades, teams can maximize the efficiency of their modernization efforts. While Q Developer streamlines much of the upgrade process, it is important to note that some challenges may still arise. For a comprehensive understanding of potential challenges and detailed guidance on getting started with Q Developer, we encourage you to explore our public documentation.

The journey to Java 17 and beyond doesn’t have to be daunting. With Amazon Q Developer, you have a powerful tool at your disposal to accelerate your upgrade process, reduce costs, and ensure your applications remain secure, performant, and future-ready.

Take the first step towards modernizing your Java ecosystem today. Explore Amazon Q Developer and discover how it can transform your upgrade strategy. See Getting Started with Amazon Q Developer agent for code transformation for a how-to guide on using Q Developer to transform Java applications.

About the authors

Jonathan Vogel

Jonathan is a Developer Advocate at AWS. He was a DevOps Specialist Solutions Architect at AWS for two years prior to taking on the Developer Advocate role. Prior to AWS, he practiced professional software development for over a decade. Jonathan enjoys music, birding and climbing rocks.

Yiyi Guo

Yiyi is a Senior Product Manager at AWS working on the Amazon Q Developer agent for code transformation. She focuses on leveraging generative AI to accelerate enterprise application modernization.

Weird Zimbra Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/10/weird-zimbra-vulnerability.html

Hackers can execute commands on a remote computer by sending malformed emails to a Zimbra mail server. It’s critical, but difficult to exploit.

In an email sent Wednesday afternoon, Proofpoint researcher Greg Lesnewich seemed to largely concur that the attacks weren’t likely to lead to mass infections that could install ransomware or espionage malware. The researcher provided the following details:

  • While the exploitation attempts we have observed were indiscriminate in targeting, we haven’t seen a large volume of exploitation attempts
  • Based on what we have researched and observed, exploitation of this vulnerability is very easy, but we do not have any information about how reliable the exploitation is
  • Exploitation has remained about the same since we first spotted it on Sept. 28th
  • There is a PoC available, and the exploit attempts appear opportunistic
  • Exploitation is geographically diverse and appears indiscriminate
  • The fact that the attacker is using the same server to send the exploit emails and host second-stage payloads indicates the actor does not have a distributed set of infrastructure to send exploit emails and handle infections after successful exploitation. We would expect the email server and payload servers to be different entities in a more mature operation.
  • Defenders protecting Zimbra appliances should look out for odd CC or To addresses that look malformed or contain suspicious strings, as well as logs from the Zimbra server indicating outbound connections to remote IP addresses.

California AI Safety Bill Vetoed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/10/california-ai-safety-bill-vetoed.html

Governor Newsom has vetoed the state’s AI safety bill.

I have mixed feelings about the bill. There’s a lot to like about it, and I want governments to regulate in this space. But, for now, it’s all EU.

(Related, the Council of Europe treaty on AI is ready for signature. It’ll be legally binding when signed, and it’s a big deal.)

Hacking ChatGPT by Planting False Memories into Its Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/10/hacking-chatgpt-by-planting-false-memories-into-its-data.html

This vulnerability hacks a feature that allows ChatGPT to have long-term memory, where it uses information from past conversations to inform future conversations with that same user. A researcher found that he could use that feature to plant “false memories” into that context window that could subvert the model.

A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker’s website.

AI and the 2024 US Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/ai-and-the-2024-us-elections.html

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would resolve the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of what technology it is conducted with. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authorities. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the Biden ad in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe with each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires that the pervasive influence of tech companies and their billionaire investors be limited through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceiving videos harm our democratic process, whether they are created by AI or actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This essay was written with Nathan Sanders, and originally appeared in The Atlantic.

NIST Recommends Some Common-Sense Password Rules

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/nist-recommends-some-common-sense-password-rules.html

NIST’s second draft of its “SP 800-63-4”—its digital identity guidelines—finally contains some really good rules about passwords:

The following requirements apply to passwords:

  1. Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
  2. Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
  3. Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
  4. Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
  5. Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
  6. Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
  7. Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
  8. Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.
  9. Verifiers SHALL verify the entire submitted password (i.e., not truncate it).

Hooray.
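For illustration only (not part of the NIST text), here is a minimal Java sketch of how a verifier might apply the length-related rules above: an eight-character minimum, Unicode code points counted as single characters, and no composition or periodic-expiry checks. The 64-code-point cap shown is an assumption about where a particular verifier might draw the line; the guidance only requires that any cap be at least 64.

// Minimal sketch of the length rules only; storage, throttling, and breached-
// password checks from the guidelines are out of scope here.
public final class PasswordPolicy {

    private static final int MIN_CODE_POINTS = 8;   // SHALL: at least eight characters
    private static final int MAX_CODE_POINTS = 64;  // SHOULD permit at least 64; this exact cap is an assumption

    // Returns true if the password satisfies the length rules. Deliberately no
    // character-class (composition) checks and no expiry logic, per the guidance.
    public static boolean isAcceptable(String password) {
        // Count Unicode code points rather than UTF-16 chars, so characters
        // outside the Basic Multilingual Plane (e.g., emoji) count once.
        int length = password.codePointCount(0, password.length());
        return length >= MIN_CODE_POINTS && length <= MAX_CODE_POINTS;
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("correct horse battery staple")); // true
        System.out.println(isAcceptable("short"));                        // false
    }
}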

News article. Slashdot thread.

New Windows Malware Locks Computer in Kiosk Mode

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/new-windows-malware-locks-computer-in-kiosk-mode.html

Clever:

A malware campaign uses the unusual method of locking users in their browser’s kiosk mode to annoy them into entering their Google credentials, which are then stolen by information-stealing malware.

Specifically, the malware “locks” the user’s browser on Google’s login page with no obvious way to close the window, as the malware also blocks the “ESC” and “F11” keyboard keys. The goal is to frustrate the user enough that they enter and save their Google credentials in the browser to “unlock” the computer.

Once credentials are saved, the StealC information-stealing malware steals them from the credential store and sends them back to the attacker.

I’m sure this works often enough to be a useful ploy.

Israel’s Pager Attacks and Supply Chain Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/israels-pager-attacks.html

Israel’s brazen attacks on Hezbollah last week, in which hundreds of pagers and two-way radios exploded and killed at least 37 people, graphically illustrated a threat that cybersecurity experts have been warning about for years: Our international supply chains for computerized equipment leave us vulnerable. And we have no good means to defend ourselves.

Though the deadly operations were stunning, none of the elements used to carry them out were particularly new. The tactics employed by Israel, which has neither confirmed nor denied any role, to hijack an international supply chain and embed plastic explosives in Hezbollah devices have been used for years. What’s new is that Israel put them together in such a devastating and extravagantly public fashion, bringing into stark relief what the future of great power competition will look like—in peacetime, wartime and the ever expanding gray zone in between.

The targets won’t be just terrorists. Our computers are vulnerable, and increasingly so are our cars, our refrigerators, our home thermostats and many other useful things in our orbits. Targets are everywhere.

The core component of the operation, implanting plastic explosives in pagers and radios, has been a terrorist risk since Richard Reid, the so-called shoe bomber, tried to ignite some on an airplane in 2001. That’s what all of those airport scanners are designed to detect—both the ones you see at security checkpoints and the ones that later scan your luggage. Even a small amount can do an impressive degree of damage.

The second component, assassination by personal device, isn’t new, either. Israel used this tactic against a Hamas bomb maker in 1996 and a Fatah activist in 2000. Both were killed by remotely detonated booby-trapped cellphones.

The final and more logistically complex piece of Israel’s plan, attacking an international supply chain to compromise equipment at scale, is something that the United States has done, though for different purposes. The National Security Agency has intercepted communications equipment in transit and modified it not for destructive purposes but for eavesdropping. We know from an Edward Snowden document that the agency did this to a Cisco router destined for a Syrian telecommunications company. Presumably, this wasn’t the agency’s only operation of this type.

Creating a front company to fool victims isn’t even a new twist. Israel reportedly created a shell company to produce and sell explosive-laden devices to Hezbollah. In 2019 the FBI created a company that sold supposedly secure cellphones to criminals—not to assassinate them but to eavesdrop on and then arrest them.

The bottom line: Our supply chains are vulnerable, which means that we are vulnerable. Any individual, country or group that interacts with a high-tech supply chain can subvert the equipment passing through it. It can be subverted to eavesdrop. It can be subverted to degrade or fail on command. And although it’s harder, it can be subverted to kill.

Personal devices connected to the internet—and countries where they are in high use, such as the United States—are especially at risk. In 2007 the Idaho National Laboratory demonstrated that a cyberattack could cause a high-voltage generator to explode. In 2010 a computer virus believed to have been developed by the United States and Israel destroyed centrifuges at an Iranian nuclear facility. A 2017 dump of CIA documents included statements about the possibility of remotely hacking cars, which WikiLeaks asserted could be used to carry out “nearly undetectable assassinations.” This isn’t just theoretical: In 2015 a Wired reporter allowed hackers to remotely take over his car while he was driving it. They disabled the engine while he was on a highway.

The world has already begun to adjust to this threat. Many countries are increasingly wary of buying communications equipment from countries they don’t trust. The United States and others are banning large routers from the Chinese company Huawei because we fear that they could be used for eavesdropping and—even worse—disabled remotely in a time of escalating hostilities. In 2019 there was a minor panic over Chinese-made subway cars that could have been modified to eavesdrop on their riders.

It’s not just finished equipment that is under the scanner. More than a decade ago, the US military investigated the security risks of using Chinese parts in its equipment. In 2018 a Bloomberg report revealed US investigators had accused China of modifying computer chips to steal information.

It’s not obvious how to defend against these and similar attacks. Our high-tech supply chains are complex and international. It didn’t raise any red flags to Hezbollah that the group’s pagers came from a Hungary-based company that sourced them from Taiwan, because that sort of thing is perfectly normal. Most of the electronics Americans buy come from overseas, including our iPhones, whose parts come from dozens of countries before being pieced together primarily in China.

That’s a hard problem to fix. We can’t imagine Washington passing a law requiring iPhones to be made entirely in the United States. Labor costs are too high, and our country doesn’t have the domestic capacity to make these things. Our supply chains are deeply, inexorably international, and changing that would require bringing global economies back to the 1980s.

So what happens now? As for Hezbollah, its leaders and operatives will no longer be able to trust equipment connected to a network—very likely one of the primary goals of the attacks. And the world will have to wait to see if there are any long-term effects of this attack and how the group will respond.

But now that the line has been crossed, other countries will almost certainly start to consider this sort of tactic as within bounds. It could be deployed against a military during a war or against civilians in the run-up to a war. And developed countries like the United States will be especially vulnerable, simply because of the sheer number of vulnerable devices we have.

This essay originally appeared in The New York Times.

Hacking the “Bike Angels” System for Moving Bikeshares

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/hacking-the-bike-angels-system-for-moving-bikeshares.html

I always like a good hack. And this story delivers. Basically, the New York City bikeshare program has a system to reward people who move bicycles from full stations to empty ones. By deliberately moving bikes to create artificial problems, and exploiting exactly how the system calculates rewards, some people are making a lot of money.

At 10 a.m. on a Tuesday last month, seven Bike Angels descended on the docking station at Broadway and 53rd Street, across from the Ed Sullivan Theater. Each rider used his own special blue key—a reward from Citi Bike—to unlock a bike. He rode it one block east, to Seventh Avenue. He docked, ran back to Broadway, unlocked another bike and made the trip again.

By 10:14, the crew had created an algorithmically perfect situation: One station 100 percent full, a short block from another station 100 percent empty. The timing was crucial, because every 15 minutes, Lyft’s algorithm resets, assigning new point values to every bike move.

The clock struck 10:15. The algorithm, mistaking this manufactured setup for a true emergency, offered the maximum incentive: $4.80 for every bike returned to the Ed Sullivan Theater. The men switched direction, running east and pedaling west.

Nicely done, people.

Now it’s Lyft’s turn to modify its system to prevent this hack. Thinking aloud, it could try to detect this sort of behavior in the Bike Angels data—and then ban people who are deliberately trying to game the system. The detection doesn’t have to be perfect, just good enough to catch bad actors most of the time. The detection needs to be tuned to minimize false positives, but that feels straightforward.
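As a purely hypothetical sketch of that kind of detection (not anything Lyft actually does), the following Java snippet flags riders who shuttle bikes back and forth between the same pair of stations many times within one reward window. The trip-record fields and the thresholds are invented for illustration.

import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical detection heuristic: flag riders who make many very short trips
// between the same station pair inside one time window. Fields and thresholds
// are illustrative assumptions, not Lyft's actual data model or policy.
public class AngelAbuseDetector {

    record Trip(String riderId, String startStation, String endStation, Instant start, Instant end) {}

    static final int MAX_SHUTTLE_TRIPS = 4;                        // more than this looks suspicious
    static final Duration WINDOW = Duration.ofMinutes(15);         // matches the reward-reset cadence
    static final Duration MAX_TRIP_LENGTH = Duration.ofMinutes(3); // "one block" trips

    // Returns rider IDs that exceed the shuttle-trip threshold in the window ending at 'now'.
    static List<String> flagSuspiciousRiders(List<Trip> trips, Instant now) {
        Map<String, Integer> shuttleCounts = new HashMap<>();
        for (Trip t : trips) {
            boolean recent = !t.end().isBefore(now.minus(WINDOW));
            boolean veryShort = Duration.between(t.start(), t.end()).compareTo(MAX_TRIP_LENGTH) <= 0;
            if (recent && veryShort) {
                shuttleCounts.merge(t.riderId() + "|" + t.startStation() + "->" + t.endStation(), 1, Integer::sum);
            }
        }
        return shuttleCounts.entrySet().stream()
                .filter(e -> e.getValue() > MAX_SHUTTLE_TRIPS)
                .map(e -> e.getKey().split("\\|")[0]) // recover the rider ID
                .distinct()
                .toList();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<Trip> sample = List.of(
                new Trip("r1", "Broadway", "SeventhAve", now.minusSeconds(600), now.minusSeconds(540)),
                new Trip("r1", "Broadway", "SeventhAve", now.minusSeconds(500), now.minusSeconds(440)),
                new Trip("r1", "Broadway", "SeventhAve", now.minusSeconds(400), now.minusSeconds(340)),
                new Trip("r1", "Broadway", "SeventhAve", now.minusSeconds(300), now.minusSeconds(240)),
                new Trip("r1", "Broadway", "SeventhAve", now.minusSeconds(200), now.minusSeconds(140)));
        System.out.println(flagSuspiciousRiders(sample, now)); // prints [r1]
    }
}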

FBI Shuts Down Chinese Botnet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/fbi-shuts-down-chinese-botnet.html

The FBI has shut down a botnet run by Chinese hackers:

The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations…. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.

The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were “ultimately unsuccessful,” according to the law enforcement agency.

Remotely Exploding Pagers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/09/remotely-exploding-pagers.html

Wow.

It seems they all exploded simultaneously, which means they were triggered.

Were they each tampered with physically, or did someone figure out how to trigger a thermal runaway remotely? Supply chain attack? Malicious code update, or natural vulnerability?

I have no idea, but I expect we will all learn over the next few days.

EDITED TO ADD: I’m reading nine killed and 2,800 injured. That’s a lot of collateral damage. (I haven’t seen a good number as to the number of pagers yet.)

EDITED TO ADD: Reuters writes: “The pagers that detonated were the latest model brought in by Hezbollah in recent months, three security sources said.” That implies supply chain attack. And it seems to be a large detonation for an overloaded battery.

This reminds me of the 1996 assassination of Yahya Ayyash using a booby trapped cellphone.

EDITED TO ADD: I am deleting political comments. On this blog, let’s stick to the tech and the security ramifications of the threat.

EDITED TO ADD (9/18): More explosions today, this time radios. Good New York Times explainer. And a Wall Street Journal article. Clearly a physical supply chain attack.

EDITED TO ADD (9/18): Four more good articles.