Tag Archives: google

New Image/Video Prompt Injection Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/new-image-video-prompt-injection-attacks.html

Simon Willison has been playing with the video processing capabilities of the new Gemini Pro 1.5 model from Google, and it’s really impressive.

Which means a lot of scary new video prompt injection attacks. And remember, given the current state of technology, prompt injection attacks are impossible to prevent in general.

NVIDIA Shows Intel Gaudi2 is 4x Better Performance Per Dollar than its H100

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/nvidia-shows-intel-gaudi2-is-4x-better-performance-per-dollar-than-its-h100/

In a stunning twist, NVIDIA's own MLPerf Training results show the Intel Gaudi2 delivering roughly 4x better performance per dollar than the H100.


MLPerf Inference v3.1 Shows NVIDIA Grace Hopper and a Cool AMD TPU v5e Win

Post Syndicated from Cliff Robinson original https://www.servethehome.com/mlperf-inference-v3-1-shows-nvidia-grace-hopper-and-a-cool-amd-tpu-v5e-win/

The MLPerf Inference v3.1 results are out. Two standouts were NVIDIA setting the stage to jettison x86 and AMD having a big win at Google.


Google Details TPUv4 and its Crazy Optically Reconfigurable AI Network

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/google-details-tpuv4-and-its-crazy-optically-reconfigurable-ai-network/

Google detailed how its TPUv4 pods use optically reconfigurable networks to support efficient, large-scale AI workloads.


Google Reportedly Disconnecting Employees from the Internet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/google-reportedly-disconnecting-employees-from-the-internet.html

Supposedly Google is starting a pilot program of disabling Internet connectivity from employee computers:

The company will disable internet access on the select desktops, with the exception of internal web-based tools and Google-owned websites like Google Drive and Gmail. Some workers who need the internet to do their job will get exceptions, the company stated in materials.

Google has not confirmed this story.

More news articles.

Google Is Using Its Vast Data Stores to Train AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/google-is-using-its-vast-data-stores-to-train-ai.html

No surprise, but Google just changed its privacy policy to reflect broader uses of all the surveillance data it has captured over the years:

Research and development: Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public. For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.

(I quote the privacy policy as of today. The Mastodon link quotes the privacy policy from ten days ago. So things are changing fast.)

Google Is Not Deleting Old YouTube Videos

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/google-is-not-deleting-old-youtube-videos.html

Google has backtracked on its plan to delete inactive YouTube videos—at least for now. Of course, it could change its mind anytime it wants.

It would be nice if this would get people to think about the vulnerabilities inherent in letting a for-profit monopoly decide what of human creativity is worth saving.

Malware Delivered through Google Search

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/malware-delivered-through-google-search.html

Criminals using Google search ads to deliver malware isn’t new, but Ars Technica declared that the problem has become much worse recently.

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

[…]

It’s clear that despite all the progress Google has made filtering malicious sites out of returned ads and search results over the past couple decades, criminals have found ways to strike back. These criminals excel at finding the latest techniques to counter the filtering. As soon as Google devises a way to block them, the criminals figure out new ways to circumvent those protections.

Real-Time Risk Mitigation in Google Cloud Platform

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/10/12/real-time-risk-mitigation-in-google-cloud-platform/


With Google Cloud Next happening this week, there’s been some recent water cooler talk – okay, informal, ad hoc Zoom calls – about what makes Google Cloud Platform (GCP) unique when it comes to security. A few specific differences have popped up here and there (default data encryption, the way IAM is handled, etc.), but, generally speaking, many of the same principles that apply to other cloud providers apply to GCP environments as well.

For one, due to the speed and scale of these environments, it’s simultaneously very difficult and extremely critical to maintain an up-to-date inventory of the state of all resources in your environment. This means constantly monitoring your environment for resources being created, deleted, or modified in as close to real time as possible.

And in an effort to avoid ambiguity and marketing buzz terms: when I refer to “real time” here, I’m talking about sub-5-minute intervals based on activity happening in the environment. This is not to be confused with the “near real time” approaches some vendors tout, which in reality still only pull in data once or twice a day on a static schedule.

In GCP, like in AWS, Azure, and all other cloud environments, simply getting a snapshot once a day to identify misconfigurations, vulnerabilities, or suspicious behaviors like you might with an on-prem data center just isn’t a scalable strategy. It’s a common cliche, but the ephemeral nature and rate of change in public cloud environments makes that kind of scanning strategy extremely ineffective when it comes to monitoring, analyzing, and eliminating actual risk in a cloud environment.

Let me lay out a couple examples where this kind of real-time monitoring can provide significant, potentially necessary, value to security teams working to make their cloud risk management programs more effective.

Identification of high-risk resources

As an example, say a developer is in a GCP project associated with your company’s revenue-generating application and they spin up a Cloud Storage bucket that is, whether mistakenly or maliciously, open to the public internet.
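For concreteness, the check a monitoring tool runs against a bucket like that is roughly the following. This is a minimal sketch using the google-cloud-storage client; the bucket name is a placeholder, and a real tool would evaluate far more signals than public IAM bindings:

    # Minimal sketch: flag a Cloud Storage bucket whose IAM policy grants access
    # to the public internet ("allUsers") or to any signed-in Google account
    # ("allAuthenticatedUsers"). The bucket name below is hypothetical.
    from google.cloud import storage

    PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

    def bucket_is_public(bucket_name: str) -> bool:
        client = storage.Client()  # uses application default credentials
        policy = client.bucket(bucket_name).get_iam_policy(requested_policy_version=3)
        for binding in policy.bindings:
            exposed = PUBLIC_MEMBERS & set(binding["members"])
            if exposed:
                print(f"{bucket_name}: role {binding['role']} granted to {sorted(exposed)}")
                return True
        return False

    if __name__ == "__main__":
        bucket_is_public("example-revenue-app-assets")  # hypothetical bucket name

The check itself is trivial; what separates the approaches discussed here is when it runs: on a fixed schedule hours after the fact, or the moment the bucket or its policy changes.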

If your security team is relying on a scan that happens 12 hours later to get visibility into this activity, your organization is constantly left open to significant risk. Strip away the hyperbole and assume it’s a much smaller risk or a compliance violation. Even then, your team is still working from behind and, presumably, almost always facing some level of stress about what issues are out there in the environment that they won’t know about for another 12-18 hours.

Worst of all, with this type of scanning you’re generally just getting a point-in-time snapshot of the environment and usually don’t know who made the change or how long ago it happened. This makes it much more difficult and time consuming for your team to actually assess the risk or get their hands on the right information to make an informed decision about how the situation should be addressed.

When a team is working with real-time data, however, they can be much more diligent and confident that they’re prioritizing the right issues at any given moment, with all the necessary context about who made the change and when it occurred. This not only helps teams stay ahead of issues and reduce the risk of a breach in their environment, but also helps keep individuals and teams feeling positive about the impact that the program is having on the organization.

Delayed remediation workflows

Building on the previous example: it’s not only that teams can’t respond to risk they haven’t been notified of; it’s also that any automated response workflows your team may have built out to be more efficient are significantly less effective when they’re triggered by hours-old data. A 12-hour delay in an automation workflow all but eliminates the value of the automation itself, and it can actually cause headaches and confusion that detract from your team’s efficiency, rather than improving it (more on this in the next example).

In contrast, if you’re able to detect risky changes to your environment as they happen, you can automatically respond to that issue as it happens. In the case of this all being a mistake caused by a developer working a little too quickly, you’re able to automatically notify them of their error within a matter of minutes, likely while they’re still working within that project. Giving your development team this kind of feedback in the moment, rather than forcing them to context switch and go back into the project to fix the error a day later, is an excellent way to build stronger relationships and rapport with that team.
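As an illustration, a lightweight version of that in-the-moment feedback loop can be built from GCP primitives alone: a Cloud Audit Logs sink routes IAM-change entries to a Pub/Sub topic, and a small function inspects each one as it arrives. This is a generic sketch rather than InsightCloudSec’s implementation; the function name and the notification step are placeholders, and the payload fields follow the Cloud Audit Logs format for Cloud Storage IAM changes, so verify them against your own log entries:

    # Generic sketch of an event-driven check (not a specific product's implementation):
    # an audit-log sink publishes storage.setIamPermissions entries to Pub/Sub, and this
    # Cloud Function inspects each one within seconds of the change being made.
    import base64
    import json

    PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

    def on_iam_change(event, context):
        """Pub/Sub-triggered entry point: event["data"] is a base64-encoded LogEntry."""
        entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        payload = entry.get("protoPayload", {})
        if payload.get("methodName") != "storage.setIamPermissions":
            return
        who = payload.get("authenticationInfo", {}).get("principalEmail", "unknown")
        resource = payload.get("resourceName", "unknown resource")
        deltas = (payload.get("serviceData", {})
                         .get("policyDelta", {})
                         .get("bindingDeltas", []))
        exposed = [d for d in deltas
                   if d.get("action") == "ADD" and d.get("member") in PUBLIC_MEMBERS]
        if exposed:
            # In practice this would open a ticket or ping the developer in chat;
            # printing stands in for the notification step here.
            print(f"ALERT: {who} just exposed {resource} publicly: {exposed}")

The log sink filter feeding the topic can be as narrow as something like protoPayload.methodName="storage.setIamPermissions", which keeps the event volume small and the feedback loop to the developer down to seconds or minutes.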

In the more rare case that this is indeed a malicious internal or external actor, enabling your automated remediation workflows to kick into gear within seconds and potentially stop the behavior could mean the difference between a minor incident and a breach requiring public disclosure from your organization.

Minimizing false positives and cross-team friction

Speaking of relationships with the development team (sorry, #DevSecOps), I can almost guarantee that working with data from scans or snapshots that occur every 12 or 24 hours in your cloud will cause friction between your two teams. Whether it’s tied to manual identification of risky resources or automated workflows notifying them of a non-compliant asset, working with stale data will inevitably lead to false positives that will both annoy and distract your already overburdened development team.

Take the example highlighted above, but instead, let’s say the developer actually spun up that Cloud Storage bucket for a short time in a dev project with no actual customer data as part of a testing exercise. By the time your team gets visibility into this and either reaches out manually or has some automated notification sent to the developer, that bucket could have already been deleted for hours. Now your team is looking at one set of old data and seeing an issue, while the developer is insisting that the bucket doesn’t even exist anymore. As mentioned above, this is going to cause headaches and frustration for both parties, and cause your team to lose credibility with the dev team.

At this point, you can probably guess where this is going next. With real-time monitoring in your environment, this situation can be avoided altogether: both teams are looking at the same up-to-date information, and yours can see that the bucket was shut down or removed from the project rather than spending time chasing down a false positive.

Earlier this month we released event-driven harvesting for GCP in InsightCloudSec. This agentless, real-time monitoring helps your security team achieve every one of the benefits outlined above while also avoiding API rate limiting. In addition, we’ve recently added GCP CIS Benchmarks v1.3.0, added GCP threat findings into our console, and added support for Google Directory to give visibility into IAM factors such as user last login, MFA status, group association and more.

If you want to learn more about how Rapid7 can help you secure Google Cloud Platform, or any other public cloud environment, sign up for our live bi-weekly demo of InsightCloudSec.

Vendors are Fixing Security Flaws Faster

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/vendors-are-fixing-security-flaws-faster.html

Google’s Project Zero is reporting that software vendors are patching their code faster.

tl;dr

  • In 2021, vendors took an average of 52 days to fix security vulnerabilities reported from Project Zero. This is a significant acceleration from an average of about 80 days 3 years ago.
  • In addition to the average now being well below the 90-day deadline, we have also seen a dropoff in vendors missing the deadline (or the additional 14-day grace period). In 2021, only one bug exceeded its fix deadline, though 14% of bugs required the grace period.
  • Differences in the amount of time it takes a vendor/product to ship a fix to users reflects their product design, development practices, update cadence, and general processes towards security reports. We hope that this comparison can showcase best practices, and encourage vendors to experiment with new policies.
  • This data aggregation and analysis is relatively new for Project Zero, but we hope to do it more in the future. We encourage all vendors to consider publishing aggregate data on their time-to-fix and time-to-patch for externally reported vulnerabilities, as well as more data sharing and transparency in general.

Finding Vulnerabilities in Open Source Projects

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/finding-vulnerabilities-in-open-source-projects.html

The Open Source Security Foundation announced $10 million in funding from a pool of tech and financial companies, including $5 million from Microsoft and Google, to find vulnerabilities in open source projects:

The “Alpha” side will emphasize vulnerability testing by hand in the most popular open-source projects, developing close working relationships with a handful of the top 200 projects for testing each year. “Omega” will look more at the broader landscape of open source, running automated testing on the top 10,000.

This is an excellent idea. This code ends up in all sorts of critical applications.

Log4j would be a prototypical vulnerability that the Alpha team might look for – an unknown problem in a high-impact project that automated tools would not be able to pick up before a human discovered it. The goal is not to use the personnel engaged with Alpha to replicate dependency analysis, for example.

People Are Increasingly Choosing Private Web Search

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/people-are-increasingly-choosing-private-web-search.html

DuckDuckGo has had a banner year:

And yet, DuckDuckGo. The privacy-oriented search engine netted more than 35 billion search queries in 2021, a 46.4% jump over 2020 (23.6 billion). That’s big. Even so, the company, which bills itself as the “Internet privacy company,” offering a search engine and other products designed to “empower you to seamlessly take control of your personal information online without any tradeoffs,” remains a rounding error compared to Google in search.

I use it. It’s not as good a search engine as Google. Or, at least, Google often gets me what I want faster than DuckDuckGo does. To solve that, I use the feature that allows me to use Google’s search engine through DuckDuckGo: prepend “!Google” to searches. Basically, DuckDuckGo launders my search.

EDITED TO ADD (1/12): I was wrong. DuckDuckGo does not provide privacy protections when searching using Google.

Google Shuts Down Glupteba Botnet, Sues Operators

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/google-shuts-down-glupteba-botnet-sues-operators.html

Google took steps to shut down the Glupteba botnet, at least for now. (The botnet uses the bitcoin blockchain as a backup command-and-control mechanism, making it hard to get rid of it permanently.) So Google is also suing the botnet’s operators.

It’s an interesting strategy. Let’s see if it’s successful.

MacOS Zero-Day Used against Hong Kong Activists

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/11/macos-zero-day-used-against-hong-kong-activists.html

Google researchers discovered a MacOS zero-day exploit being used against Hong Kong activists. It was a “watering hole” attack, which means the malware was hidden in a legitimate website. Users visiting that website would get infected.

From an article:

Google’s researchers were able to trigger the exploits and study them by visiting the websites compromised by the hackers. The sites served both iOS and MacOS exploit chains, but the researchers were only able to retrieve the MacOS one. The zero-day exploit was similar to another in-the-wild vulnerability analyzed by another Google researcher in the past, according to the report.

In addition, the zero-day exploit used in this hacking campaign is “identical” to an exploit previously found by cybersecurity research group Pangu Lab, Huntley said. Pangu Lab’s researchers presented the exploit at a security conference in China in April of this year, a few months before hackers used it against Hong Kong users.

The exploit was discovered in August. Apple patched the vulnerability in September. China is, of course, the obvious suspect, given the victims.

EDITED TO ADD (11/15): Another story.

Storing Encrypted Photos in Google’s Cloud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/storing-encrypted-photos-in-googles-cloud.html

New paper: “Encrypted Cloud Photo Storage Using Google Photos”:

Abstract: Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user’s credentials gives attackers unfettered access to all of the user’s photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard format image like JPEG that is compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP’s key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, but using the cloud photo service as a QR code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use and provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service.
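The key-management step is the easiest piece to sketch in isolation. The following is a conceptual toy, not ESP’s actual protocol or formats: a new device publishes its public key as a QR-code image through the shared photo library, and an already-trusted device answers with the wrapped collection key, also as a QR image. The library choices (cryptography, qrcode, pillow, pyzbar) and every name in it are illustrative assumptions.

    # Conceptual sketch only -- NOT the ESP paper's protocol. It illustrates the idea of
    # authorizing a new device by passing key material through the photo service as
    # ordinary-looking images (QR codes). Assumed libraries: cryptography, qrcode,
    # pillow, pyzbar.
    import os

    import qrcode
    from PIL import Image
    from pyzbar.pyzbar import decode
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def wrapping_key(shared_secret: bytes) -> bytes:
        """Derive a symmetric wrapping key from an X25519 shared secret."""
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"esp-sketch-device-pairing").derive(shared_secret)

    # New device: generate a keypair and "upload" its public key as a QR photo.
    new_priv = X25519PrivateKey.generate()
    new_pub = new_priv.public_key().public_bytes(*RAW)
    qrcode.make(new_pub.hex()).save("pairing_request.png")  # goes into the shared album

    # Trusted device: read the request photo and wrap the photo-collection key.
    collection_key = os.urandom(32)  # the key that actually encrypts photos
    scanned = bytes.fromhex(decode(Image.open("pairing_request.png"))[0].data.decode())
    eph_priv = X25519PrivateKey.generate()
    secret = eph_priv.exchange(X25519PublicKey.from_public_bytes(scanned))
    nonce = os.urandom(12)
    wrapped = AESGCM(wrapping_key(secret)).encrypt(nonce, collection_key, None)
    reply = (eph_priv.public_key().public_bytes(*RAW) + nonce + wrapped).hex()
    qrcode.make(reply).save("pairing_reply.png")  # also travels via the album

    # New device: read the reply photo and recover the collection key.
    blob = bytes.fromhex(decode(Image.open("pairing_reply.png"))[0].data.decode())
    eph_pub, nonce2, ciphertext = blob[:32], blob[32:44], blob[44:]
    secret2 = new_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub))
    assert AESGCM(wrapping_key(secret2)).decrypt(nonce2, ciphertext, None) == collection_key

A real system also has to authenticate the pairing request, so that an attacker who compromises the cloud account cannot simply enroll their own device; that is exactly the kind of detail the paper’s usable key management has to get right.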