
How To Hunt For UEFI Malware Using Velociraptor

Post Syndicated from Matthew Green original https://blog.rapid7.com/2024/02/29/how-to-hunt-for-uefi-malware-using-velociraptor/


UEFI threats have historically been limited in number and mostly implemented by nation state actors as stealthy persistence. However, the recent proliferation of BlackLotus on the dark web, TrickBot's UEFI enumeration module (late 2022), and Glupteba's UEFI bootkit (November 2023) indicate that this historical trend may be changing.

With this context, it is becoming important for security practitioners to understand visibility and collection capabilities for UEFI threats. This post covers some of these areas and presents several recent Velociraptor artifacts that can be used in the field. Rapid7 has also released a white paper providing detailed information about how UEFI malware works and some of the most common types.

Background

Unified Extensible Firmware Interface, or UEFI, is the interface between a system’s hardware and its operating system (OS). The technology can be viewed as an updated replacement for the legacy BIOS, improving and adding security to the boot process.

The two main types of UEFI persistence are:

  1. Serial Peripheral Interface (SPI) based
  • Firmware payload implant that is resilient to even a hard disk format.
  • Difficult to implement — there are risks of bricking a machine if mistakes are made with the firmware.
  • Difficult to detect at scale — defenders need to extract the firmware, which typically requires a signed driver, then run tools for analysis.
  • Typically an analyst would dump the firmware, then extract variables and other interesting files such as PEs for deep-dive analysis.

  2. EFI System Partition (ESP) based
  • A special FAT partition that stores bootloaders and sits late in the EFI boot process.
  • Much easier to implement, requiring only root privileges and a Secure Boot bypass.
  • Does not survive a machine format.

EFI Secure Variables API visibility

EFI secure variables (also known as NVRAM variables) are how the firmware exposes configuration and other components to the system during boot. From an analysis point of view, while dumping the firmware is difficult and requires a manual workflow, all operating systems provide some visibility from user space. This blog will discuss the Windows API; for reference, Linux and macOS provide similar data.


GetFirmwareEnvironmentVariable (Windows) can collect the name, namespace GUID and value of EFI secure variables. This collection can be used to check the current state, including the key/signature databases and revocation list.

Some of the data points it can extract are:

  • Platform Key (PK) — top level key.
  • Key Exchange Key (KEK)  — used to sign Signatures Database and Forbidden Signatures Database updates.
  • Signature database (db) — contains keys and/or hashes of allowed EFI binaries.
  • Forbidden signatures database (dbx) — contains keys and/or hashes of denylisted EFI binaries.
  • Other boot configuration settings.

It’s worth noting that this technique relies on the Windows API and could be subverted by capable malware, but the visibility can provide leads for an analyst around boot configuration or signatures. There are also “boot only” NVRAM variables that cannot be accessed outside of boot, so those would require a manual chip dump to collect.
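To make the API concrete, below is a minimal Python sketch of reading the Secure Boot variables via ctypes. The namespace GUIDs come from the UEFI specification (PK/KEK live under EFI_GLOBAL_VARIABLE, db/dbx under the image security database GUID); the fixed buffer size and lack of error handling are simplifying assumptions, and in practice the read requires administrator rights plus the SeSystemEnvironmentPrivilege.

```python
import ctypes
import platform

# Namespace GUIDs from the UEFI specification.
EFI_GLOBAL_VARIABLE = "{8BE4DF61-93CA-11D2-AA0D-00E098032B8C}"
EFI_IMAGE_SECURITY_DATABASE = "{D719B2CB-3D3A-4596-A3BC-DAD00E67656F}"

SECURE_BOOT_VARIABLES = {
    "PK": EFI_GLOBAL_VARIABLE,
    "KEK": EFI_GLOBAL_VARIABLE,
    "db": EFI_IMAGE_SECURITY_DATABASE,
    "dbx": EFI_IMAGE_SECURITY_DATABASE,
}

def read_efi_variable(name: str, guid: str, max_size: int = 1 << 16) -> bytes:
    """Read one EFI variable via the Win32 API; returns raw bytes."""
    buf = ctypes.create_string_buffer(max_size)
    n = ctypes.windll.kernel32.GetFirmwareEnvironmentVariableW(
        name, guid, buf, max_size)
    return buf.raw[:n]  # n == 0 means the read failed (or empty variable)

if platform.system() == "Windows":
    for name, guid in SECURE_BOOT_VARIABLES.items():
        print(name, len(read_efi_variable(name, guid)), "bytes")
```

The raw values returned for db/dbx are EFI signature lists, which is exactly the data Generic.System.EfiSignatures parses and stacks.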

[Screenshot: Example of extracting EFI secure variables]

Velociraptor has a community-contributed capability: Generic.System.EfiSignatures. This artifact collects EFI signature information from the client to check for unknown certificates and revoked hashes. It is a great artifact for data stacking across machines and is built by parsing data values from the efivariables() plugin.


EFI System Partition (ESP) visibility

The ESP is a FAT-formatted file system that contains boot loaders and other critical files used during the boot process, which do not change regularly. As such, it can be a relatively simple task to find abnormalities using forensics.

For example, by parsing the File Allocation Table we can review metadata around path, timestamps, and deleted status that may provide leads for analysis.
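Those FAT timestamps are stored as packed 16-bit date and time fields that a parser must decode before any comparison. A minimal decoding sketch (the example values are illustrative):

```python
from datetime import datetime

def decode_fat_timestamp(fat_date: int, fat_time: int) -> datetime:
    """Decode the packed 16-bit date/time fields of a FAT directory entry."""
    year = 1980 + ((fat_date >> 9) & 0x7F)   # bits 15-9: years since 1980
    month = (fat_date >> 5) & 0x0F           # bits 8-5
    day = fat_date & 0x1F                    # bits 4-0
    hour = (fat_time >> 11) & 0x1F           # bits 15-11
    minute = (fat_time >> 5) & 0x3F          # bits 10-5
    second = (fat_time & 0x1F) * 2           # bits 4-0, 2-second resolution
    return datetime(year, month, day, hour, minute, second)

# e.g. decode_fat_timestamp(0x530A, 0x7820) -> 2021-08-10 15:01:00
```

Entries whose decoded timestamps fall outside the OS-install window are exactly the kind of outlier discussed below.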

[Screenshot: Viewing FAT metadata on *.EFI files]

In the screenshot above we observe several EFI bootloader files with timestamps out of alignment. We would typically expect these files to have the same timestamps around operating system install. We can also observe deleted files and the existence of a System32 folder in the temporal range of these entries.

The EFI/ folder should be the only folder in the ESP root, so querying for any paths that do not begin with EFI/ is a great hunt that detects our lead above. In the screenshot below, filtering for this use case bubbles the BlackLotus staging to the top.
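The hunt logic itself is a one-line filter. A sketch, using hypothetical paths standing in for real FAT parser output:

```python
# Hypothetical file listing as a FAT parser over the ESP might return it.
esp_files = [
    "EFI/Microsoft/Boot/bootmgfw.efi",
    "EFI/Boot/bootx64.efi",
    "system32/bootload.efi",   # suspicious: folder outside EFI/
    "grubx64_real.efi",        # suspicious: loose file in the ESP root
]

def suspicious_esp_paths(paths):
    """Anything in the ESP that is not under EFI/ deserves a closer look."""
    return [p for p in paths if not p.upper().startswith("EFI/")]

print(suspicious_esp_paths(esp_files))
```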

[Screenshot: BlackLotus staging – non-EFI/ files]

Interestingly, BlackLotus was known to use the Baton Drop exploit, so we can compare with the publicly available Baton Drop and observe similarities to the deleted files on the ESP.

[Screenshot: Publicly available Baton Drop ISO contents on GitHub]

The final component of ESP-based visibility is checking the bytes of file contents. We can run YARA to look for known malware traits, or obtain additional file type metadata that can provide leads for analysis. The screenshot below highlights the well-known BlackLotus certificate information and PE header timestamp.

[Screenshot: BlackLotus PE header, suspicious Authenticode]
[Screenshot: BlackLotus YARA hit in ESP]

The Velociraptor artifacts available for this visibility of the ESP are:

  1. Windows.Forensics.UEFI — This artifact enables disk analysis over an EFI System Partition (ESP). It queries the specified physical disk and parses the partition table to target the ESP’s File Allocation Table (FAT). The artifact returns file information and PE enrichment, as typical EFI files are in the PE format.
  2. Windows.Detection.Yara.UEFI — This artifact expands on basic enumeration of the ESP and enables running YARA over the EFI system partition.

Measured Boot log visibility

Bootkit security has always been a “race to the bottom.” If the malware could load prior to security tools, a defender would need to assume they may be defeated. Since Windows 8, Measured Boot is a feature implemented to help protect machines from early boot malware. Measured Boot checks each startup component — from firmware to boot drivers — and stores this information in the Trusted Platform Module (TPM). A binary log is then made available to verify the boot state of the machine. The default Measured Boot log location is C:\Windows\Logs\MeasuredBoot\*.log and a new file is recorded for each boot.

Windows.Forensics.UEFI.BootApplication parses Windows Measured Boot TCGLogs to extract the PathName of events, which can assist detection of potential ESP-based persistence (EV_EFI_Boot_Services_Application). The artifact leverages Velociraptor tools to deploy and execute Matt Graeber’s excellent PowerShell module TCGLogTools to parse TCGLogs on disk and in memory.


We can see when running on an infected machine that the boot application path has clearly changed from the default: \EFI\Microsoft\Boot\bootmgfw.efi. Therefore, Boot Application is a field that is stackable across the network.
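Stacking here just means counting field values across the fleet and sorting least-common first, so rare boot applications float to the top. A sketch with hypothetical collected values:

```python
from collections import Counter

# Hypothetical BootApplication values collected across a fleet.
boot_apps = (
    [r"\EFI\Microsoft\Boot\bootmgfw.efi"] * 4998
    + [r"\EFI\system32\bootload.efi"] * 2   # rare outlier worth investigating
)

# Least-common values first: the outlier surfaces immediately.
for path, count in sorted(Counter(boot_apps).items(), key=lambda kv: kv[1]):
    print(count, path)
```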

We can also output extended values, including digest hashes for verification.


Other forensic artifacts

There are many other generic forensic artifacts analysts could focus on to assist detection of a UEFI threat, from malware network activity to unexpected errors in the event log associated with antivirus/security tools on the machine.

For example: BlackLotus made an effort to evade detection by changing Windows Defender access tokens to SE_PRIVILEGE_REMOVED. This technique keeps the Defender service running but effectively disables it. While Velociraptor may not have protected process privileges to check tokens directly, we can check for other indicators such as errors associated with use.


Similarly, Memory integrity (HVCI) is a feature of virtualization-based security (VBS) in Windows. It provides a stronger virtualization environment via isolation of kernel memory allocations. The feature is related to Secure Boot and can be disabled by malware that needs a lower-integrity environment to run. Disabling it requires setting the following registry value to 0.

HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity\Value

  • 0 – disabled
  • 1 – enabled

Windows.Registry.HVCI, available on the artifact exchange, can be used to query for this key value.
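As a local-scripting sketch of the same check, assuming the value name shown in the path above (the winreg read only runs on Windows):

```python
import sys

def hvci_status(value: int) -> str:
    """Interpret the HypervisorEnforcedCodeIntegrity setting."""
    return {0: "disabled", 1: "enabled"}.get(value, "unknown")

if sys.platform == "win32":
    import winreg  # Windows-only standard library module
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios"
        r"\HypervisorEnforcedCodeIntegrity",
    )
    # Value name taken from the registry path quoted above.
    data, _regtype = winreg.QueryValueEx(key, "Value")
    print("HVCI:", hvci_status(data))
```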


Conclusion

Despite UEFI threats possessing intimidating capabilities, security practitioners can deploy some visibility with current tools for remote investigation. Forensically parsing disk and not relying on the Windows API, or reviewing other systemic indicators that may signal compromise, is a practical way to detect components of these threats. Knowing collection capabilities, the gaps, and how to mitigate these is just as important as knowing the threat.

In this post we have covered some of Velociraptor’s visibility for UEFI threats, and we have only scratched the surface for those who know their environment and can query it effectively. Rapid7 supports the open-source Velociraptor project, providing the community with features unavailable even in some paid tools.

References:

  1. ESET, Martin Smolar – BlackLotus UEFI bootkit: Myth confirmed
  2. Microsoft Incident Response – Guidance for investigating attacks using CVE-2022-21894: The BlackLotus campaign
  3. Trellix Insights – TrickBot offers new TrickBoot
  4. Palo Alto Unit 42 – Diving Into Glupteba’s UEFI Bootkit
  5. SentinelOne – Moving From Common Sense Knowledge About UEFI To Actually Dumping UEFI Firmware

Velociraptor 0.7.1 Release: Sigma Support, ETW Multiplexing, Local Encrypted Storage and New VQL Capabilities Highlight the Last Release of 2023

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/12/29/velociraptor-0-7-1-release-sigma-support-etw-multiplexing-local-encrypted-storage-and-new-vql-capabilities-highlight-the-last-release-of-2023/


Written by Dr. Michael Cohen

Rapid7 is excited to announce that version 0.7.1 of Velociraptor is live and available for download. There are several new features and capabilities that add to the power and efficiency of this open-source digital forensics and incident response (DFIR) platform.

In this post, Rapid7 Digital Paleontologist, Dr. Mike Cohen discusses some of the exciting new features.

GUI improvements

The GUI was updated in this release to improve user workflow and accessibility.

Notebook improvements

Velociraptor uses notebooks extensively to facilitate collaboration and post-processing. There are currently three types of notebooks:

  1. Global notebooks – these are available from the GUI sidebar and can be shared with other users for a collaborative workflow.
  2. Collection notebooks – these are attached to specific collections and allow post-processing of the collection results.
  3. Hunt notebooks – these are attached to a hunt and allow post-processing of the collection data from a hunt.

This release further develops the Global notebooks workflow as a central place for collecting and sharing analysis results.

Templated notebooks

Many users rely on notebooks heavily to organize their investigations and guide users on what to collect. While Collection and Hunt notebooks can already include templates, there was no way to customize the default Global notebook.

In this release, we define a new artifact type, NOTEBOOK, which allows a user to define a template for global notebooks.

In this example I will create such a template to help users gather server information about clients. I click on the artifact editor in the sidebar, then select Notebook Templates from the search screen. I then edit the built-in Notebooks.Default artifact.

[Screenshot: Adding a new notebook template]

I can define multiple cells in the notebook. Cells can be of type vql, markdown or vql_suggestion. I usually use markdown cells to write instructions for users on how to use my notebook, while vql cells can run queries such as scheduling collections or preset hunts.

Next I select the Global notebooks in the sidebar and click the New Notebook button. This brings up a wizard that allows me to create a new global notebook. After filling in the name of the notebook and selecting which users to share it with, I can choose the template for this notebook.

[Screenshot: Adding a new notebook template]

I can see my newly added notebook template and select it.

[Screenshot: Viewing the notebook from template]

Copying notebook cells

In this release, Velociraptor allows copying of a cell from any notebook to the Global notebooks. This facilitates a workflow where users may filter, post-process and identify interesting artifacts in various hunt notebooks or specific collection notebooks, but then copy the post processed cell into a central Global notebook for collaboration.

For the next example, I collect the server artifact Server.Information.Clients and post-process the results in the notebook to count the different clients by OS.

[Screenshot: Post-processing the results of a collection]

Now that I am happy with this query, I want to copy the cell to my Admin Notebook which I created earlier.

[Screenshot: Copying a cell to a global notebook]

I can then select which Global notebook to copy the cell into.

[Screenshot: The copied cell still refers to the old collection]

Velociraptor will copy the cell to the target notebook and add VQL statements to still refer to the original collection. This allows users of the global notebook to further refine the query if needed.

This workflow allows better collaboration between users.

VFS Downloads

Velociraptor’s VFS view is an interactive view of the endpoint’s filesystem. Users can navigate the remote filesystem using a familiar tree-based navigation and interactively fetch various files from the endpoint.

Before the 0.7.1 release, the user was able to download and preview individual files in the GUI but it was difficult to retrieve multiple files downloaded into the VFS.

In the 0.7.1 release, there is a new GUI button to initiate a collection from the VFS itself. This allows the user to download all or only some of the files they had previously interactively downloaded into the VFS.

For example, consider the following screenshot that shows a few files downloaded into the VFS.

[Screenshot: Viewing the VFS]

I can initiate a collection from the VFS. This is a server artifact (similar to the usual File Finder artifacts) that simply traverses the VFS with a glob, uploading all files into a single collection.

[Screenshot: Initiating a VFS collection]

Using the glob, I can choose to retrieve files with a particular filename pattern (e.g. only executables) or all files.

[Screenshot: Inspecting the VFS collection]

Finally, the GUI shows a link to the collected flow where I can inspect the files or prepare a download zip just like any other collection.

New VQL plugins and capabilities

This release introduces an exciting new capability: Built-in Sigma Support.

Built-in Sigma Support

Sigma is fast emerging as a popular standard for writing and distributing detections. Sigma was originally designed as a portable notation for multiple backend SIEM products: Detections expressed in Sigma rules can be converted (compiled) into a target SIEM query language (for example into Elastic queries) to run on the target SIEM.

Velociraptor is not really a SIEM in the sense that we do not usually forward all events to a central storage location where large queries can run on it. Instead, Velociraptor’s philosophy is to bring the query to the endpoint itself.

In Velociraptor, Sigma rules can directly be used on the endpoint, without the need to forward all the events off the system first! This makes Sigma a powerful tool for initial triage:

  • Apply a large number of Sigma rules on the local event log files.
  • Those rules that trigger immediately surface potentially malicious activity for further scrutiny.

This can be done quickly and at scale to narrow down on potentially interesting hosts during an IR. A great demonstration of this approach can be seen in the video Live Incident Response with Velociraptor, where Eric Capuano uses the Hayabusa tool deployed via Velociraptor to quickly identify the attack techniques evident on the endpoint.
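As a toy illustration of the idea (not Velociraptor's actual implementation), a Sigma-style rule is just declarative field matching applied to each event, so it can run wherever the events live. The rule and events below are made up:

```python
# Hypothetical rule in the spirit of Sigma: declarative field selection.
rule = {
    "title": "Defender service terminated unexpectedly",
    "selection": {"Channel": "System", "EventID": 7023},
}

def matches(rule, event):
    """An event matches when every selection field equals the event's value."""
    return all(event.get(k) == v for k, v in rule["selection"].items())

events = [
    {"Channel": "System", "EventID": 7036, "Message": "service running"},
    {"Channel": "System", "EventID": 7023, "Message": "service terminated"},
]
hits = [e for e in events if matches(rule, e)]
print(rule["title"], "->", len(hits), "hit(s)")
```

Velociraptor's native engine handles the full Sigma condition grammar, modifiers, and log-source mapping; the point here is only that matching happens event-by-event on the endpoint, with no central SIEM required.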

Previously, we could only apply Sigma rules in Velociraptor by bundling the Hayabusa tool, which ships a curated set of Sigma rules but runs as a separate binary. In this release, Sigma matching is done natively in Velociraptor, so the Velociraptor Sigma project simply curates the same rules that Hayabusa curates without requiring the Hayabusa binary itself.

You can read the full Sigma In Velociraptor blog post that describes this feature in great detail, but here I will quickly show how it can be used to great effect.

First I will import the set of curated Sigma rules from the Velociraptor Sigma project by collecting the Server.Import.CuratedSigma server artifact.

[Screenshot: Getting the curated Sigma rules]

This will import a new artifact to my system with up-to-date Sigma rules, divided by status, rule level, etc. For this example I will select the Stable rules at a Critical level.

[Screenshot: Collecting Sigma rules from the endpoint]

After launching the collection, the artifact will return all the matching rules and their relevant events. This is a quick artifact, taking less than a minute on my test system, and I immediately see interesting hits.

[Screenshot: Detecting critical level rules]

Using Sigma rules for live monitoring

Sigma rules can be used on more than just log files. The Velociraptor Sigma project also provides monitoring rules that can be used on live systems for real time monitoring.

The Velociraptor Hayabusa Live Detection option in the Curated import artifact will import an event monitoring version of the same curated Sigma rules. After adding the rule to the client’s monitoring rules with the GUI, I can receive interesting events for matching rules:

[Screenshot: Live detection of Sigma rules]

Other Improvements

SSH/SCP accessor

Velociraptor normally runs on the endpoint and can directly collect evidence from it. However, many devices on the network cannot install an endpoint agent – either because the operating system is not supported (for example, embedded versions of Linux) or due to policy.

When we need to investigate such systems we often can only access them by Secure Shell (SSH). In the 0.7.1 release, Velociraptor has an ssh accessor which allows all plugins that normally use the filesystem to transparently use SSH instead.

For example, consider the glob() plugin which searches for files.

[Screenshot: Globbing for files over SSH]

We can specify that glob() use the ssh accessor to access the remote system. By setting the SSH_CONFIG VQL variable, the accessor can use a locally stored private key to authenticate with the remote system and access remote files.

We can combine this new accessor with the remapping feature to reconfigure the VQL engine to substitute the auto accessor with the ssh accessor when any plugin attempts to access files. This allows us to use the same artifacts that would normally access files locally, but transparently access those files over SSH:

[Screenshot: Remapping the auto accessor with ssh]

This example shows how to use the SSH accessor to investigate a Debian system and collect the Linux.Debian.Packages artifact from it over SSH.

Distributed notebook processing

While Velociraptor is very efficient and fast, and can support a large number of endpoints connected to the server, many users told us that on busy servers, running notebook queries can affect server performance. This is because a notebook query can be quite intense (e.g. Sorting or Grouping a large data set) and in the default configuration the same server is collecting data from clients, performing hunts, and also running the notebook queries.

This release allows notebook processors to run in another process. In multi-frontend configurations (also called Master/Minion configurations), the Minion nodes will now offer to perform notebook queries, taking that work off the master node. This allows the sudden workload to be distributed to other nodes in the cluster, improving server and GUI performance.

ETW Multiplexing

Previous support for Event Tracing For Windows (ETW) was rudimentary. Each query that called the watch_etw() plugin to receive the event stream from a particular provider created a new ETW session. Since the total number of ETW sessions on the system is limited to 64, this used precious resources.

In 0.7.1 the ETW subsystem was overhauled with the ability to multiplex many ETW watchers on top of the same session. The ETW sessions are created and destroyed on demand. This allows us to more efficiently track many more ETW providers with minimal impact on the system.

Additionally, the etw_sessions() plugin can show statistics for all sessions currently running, including the number of dropped events.
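The multiplexing pattern itself is easy to sketch (this is illustrative, not Velociraptor's actual code): a shared session is created on the first subscription, fans each event out to every watcher, and is torn down when the last watcher leaves.

```python
class MultiplexedSession:
    """Many watchers share one underlying session, created/destroyed on demand."""

    def __init__(self, provider: str):
        self.provider = provider
        self.watchers = []
        self.active = False

    def subscribe(self, callback):
        if not self.watchers:
            self.active = True       # the real ETW session would start here
        self.watchers.append(callback)

    def unsubscribe(self, callback):
        self.watchers.remove(callback)
        if not self.watchers:
            self.active = False      # session destroyed when unused

    def dispatch(self, event):
        for cb in self.watchers:     # one event fans out to all watchers
            cb(event)

session = MultiplexedSession("Microsoft-Windows-DNS-Client")
seen = []
session.subscribe(seen.append)
session.subscribe(seen.append)
session.dispatch({"query": "example.com"})
print(len(seen), session.active)
```

With this shape, dozens of watch_etw() queries against the same provider consume a single slot out of the system's 64 ETW sessions.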

Artifacts can be hidden in the GUI

Velociraptor comes with a large number of built-in artifacts. This can be confusing for new users, so admins may want to hide some artifacts in the GUI.

You can now hide an artifact from the GUI using the artifact_set_metadata() VQL function. For example the following query will hide all artifacts which do not have Linux in their name.


Only Linux-related artifacts will now be visible in the GUI.

[Screenshot: Hiding artifacts from the GUI]

Local encrypted storage for clients

It is sometimes useful to write data locally on endpoints instead of transferring it to the server; for example, when the client is not connected to the internet for long periods. It is also useful to write data locally in case we want to recover it later during an investigation.

The downside of writing data locally on the endpoint is that this data may be accessed if the endpoint is later compromised. If the data contains sensitive information, it can be used by an attacker. This is also the primary reason that Velociraptor does not write a log file on the endpoint; unfortunately, that makes it difficult to debug issues.

The 0.7.1 release introduces a secure local log file format. This allows the Velociraptor client to write to the local disk in a secure way. Once written the data can only be decrypted by the server.

While any data can be written to the encrypted local file, the Generic.Client.LocalLogs artifact allows Velociraptor client logs to be written at runtime.

[Screenshot: Writing local logs]

To read these locally stored logs, I can fetch them using the Generic.Client.LocalLogsRetrieve artifact to retrieve the encrypted local file. The file is encrypted using the server’s public key and can only be decrypted on the server.

[Screenshot: Inspecting the uploaded encrypted local file]

Once on the server, I can decrypt the file using the collection’s notebook which automatically decrypts the uploaded file.

[Screenshot: Decrypting the encrypted local file]

Conclusions

There are many more new features and bug fixes in the 0.7.1 release. If you’re interested in any of these new features, we welcome you to take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting our web and social media channels.

CVE-2023-5950 Rapid7 Velociraptor Reflected XSS

Post Syndicated from Dr. Mike Cohen original https://blog.rapid7.com/2023/11/10/cve-2023-5950-rapid7-velociraptor-reflected-xss/


This advisory covers a specific issue identified in Velociraptor and disclosed via a security code review. We want to thank Mathias Kujala for working with the Velociraptor team to identify and rectify this issue. It has been fixed as of version 0.7.0-4, released November 6, 2023.

CVSS · HIGH · 8.6/10 · CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:L

  • Scoring scenario: GENERAL
  • attackVector: NETWORK
  • attackComplexity: LOW
  • privilegesRequired: NONE
  • userInteraction: NONE
  • scope: UNCHANGED
  • confidentialityImpact: HIGH
  • integrityImpact: LOW
  • availabilityImpact: LOW


Rapid7 Velociraptor versions prior to 0.7.0-4 suffer from a reflected cross site scripting vulnerability. This vulnerability allows attackers to inject JS into the error path, potentially leading to unauthorized execution of scripts within a user’s web browser. This vulnerability is fixed in version 0.7.0-4 and a patch is available to download. Patches are also available for version 0.6.9 (0.6.9-1). This issue affects the server only.

Problem

CWE-79 Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’)

Remediation

To remediate this vulnerability, Velociraptor users should upgrade their servers.

Product Status

Product affected: Rapid7 Velociraptor prior to 0.7.0-4

Credits

Mathias Kujala

References

docs.velociraptor.app/blog/2023/2023-07-27-release-notes-0.7.0/

Timeline

  • 2023-11-02 – Notification of the issue
  • 2023-11-06 – Release 0.7.0-4 made available on Github

Little Crumbs Can Lead To Giants

Post Syndicated from Christiaan Beek original https://blog.rapid7.com/2023/10/05/little-crumbs-can-lead-to-giants/


This week is the Virus Bulletin Conference in London. Part of the conference is the Cyber Threat Alliance summit, where CTA members like Rapid7 showcase their research into all kinds of cyber threats and techniques.

Traditionally, when we investigate a campaign, the focus is mostly on the code of the file, the inner workings of the malware, and communications towards threat actor-controlled infrastructure. Having a background in forensics, and in particular data forensics, I’m always interested in new ways of looking at and investigating data. New techniques can help proactively track, detect, and hunt for artifacts.

In this blog, which highlights my presentation at the conference, I will dive into the world of Shell Link (LNK) files and Virtual Hard Disk (VHD) files. As part of this research, Rapid7 is releasing a new feature in Velociraptor that can parse LNK files, available with the posting of this blog.

VHD files

VHD and its successor VHDX are formats representing a virtual hard disk. They can contain contents usually found on a physical hard drive, such as disk partitions and files. They are typically used as the hard disk of a virtual machine, are built into modern versions of Windows, and are the native file format for Microsoft’s hypervisor, Hyper-V. The format was created by Connectix for their Virtual PC product, known as Microsoft Virtual PC since Microsoft acquired Connectix in 2003. As we will see later, the cookie “conectix” is still part of the footer of a VHD file.

Why would threat actors use VHD files in their campaigns? Microsoft has a security technology called “Mark of the Web” (MOTW). When files are downloaded from the internet using Windows, they are tagged with a Zone.Identifier NTFS Alternate Data Stream (ADS) containing a particular value called the MOTW. MOTW-tagged files are restricted and unable to carry out specific operations. Windows Defender SmartScreen, which compares files with an allowlist of well-known executables, will process executables marked with the MOTW. SmartScreen will stop the execution of the file if it is unknown or untrusted and will alert the user not to run it. Since VHD files are virtual hard disks, they can contain files and folders. When files are inside a VHD container, they do not receive the MOTW and thus bypass these security restrictions.
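For illustration, the Zone.Identifier ADS itself is just a small INI-style fragment; a minimal sketch of parsing it (the sample content and URL are made up):

```python
# Zone IDs as defined by the Windows URL security zones model.
ZONE_NAMES = {0: "Local machine", 1: "Intranet", 2: "Trusted",
              3: "Internet", 4: "Restricted"}

def parse_zone_identifier(ads_text: str):
    """Extract the ZoneId from Zone.Identifier ADS contents, if present."""
    for line in ads_text.splitlines():
        if line.strip().startswith("ZoneId="):
            zone = int(line.split("=", 1)[1])
            return zone, ZONE_NAMES.get(zone, "unknown")
    return None

sample = "[ZoneTransfer]\nZoneId=3\nHostUrl=https://example.com/payload.vhd\n"
print(parse_zone_identifier(sample))
```

On Windows the stream can be read directly by opening `path + ":Zone.Identifier"`; the downloaded VHD carries ZoneId=3, but the files mounted from inside it carry no stream at all, which is the bypass described above.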

Depending on the underlying operating system, the file system inside the VHD can be FAT or NTFS. The great thing about that is that traditional file system forensics can be applied: think Master File Table (MFT) analysis, header/footer analysis, and data carving, to name a few.

Example case:

In the past we investigated a case where a threat actor was using a VHD file as part of their campaign. The flow of the campaign demonstrates how this attack worked:

[Diagram: flow of the campaign, from "Little Crumbs Can Lead To Giants"]

After sending a spear-phishing email with a VHD file attached, the victim would open the VHD file, which auto-mounts in Windows. Next, the MOTW is bypassed and a PDF file with a backdoor is opened to download either the Sednit or Zebrocy malware. The backdoor would then establish a connection with the command-and-control (C2) server controlled by the threat actor.

After retrieving the VHD file, it is first mounted read-only so we cannot change anything about the digital evidence. Secondly, the Master File Table (MFT) is retrieved and analyzed:

[Screenshot: MFT analysis of the VHD file]

Besides valuable information like creation and last modification times (always take into consideration that these can be altered on purpose), the MFT shows that two of the files were copied from a system into the VHD file. Another interesting discovery is that the VHD disk contained a RECYCLE.BIN folder with deleted files. That’s great, since depending on the file size of the VHD (the bigger, the more chance that files are not overwritten), it is possible to retrieve these deleted files using a technique called “data carving.”

Using Photorec as the data carving tool, the VHD file is again mounted read-only and the tool is pointed at the mounted image to attempt to recover the deleted files.

[Screenshot: Photorec recovering deleted files]

After running for a short bit, the deleted files could be retrieved and used as part of the investigation. Since this is not relevant for this blog, we continue with the footer analysis.

Footer analysis of a VHD file

The footer, often referred to as the trailer, is a data structure appended to the end of a file that resembles a header.

Because a footer by definition comes after the file’s data, which is typically of variable length, it is not located at a fixed offset from the beginning of the file; instead, it sits at a fixed distance from the end of the file. Like headers, footers usually have a defined size, and a parsing application can use a footer’s identification field or magic number to distinguish it from other data structures in the file.

When we look at the footer of the VHD file, certain interesting fields can be observed:

[Screenshot: footer of the VHD file in a hex viewer]

These values are some of the data structures specified for the footer of a VHD file; other fields, like the type of disk, can also be valuable when comparing multiple campaigns by an actor.

From the screenshot, we can see that “conectix” is the magic number value of the VHD footer; you can compare it to a small fingerprint. From the other values, we can determine that the actor used a Windows operating system, and we can derive the creation time of the VHD file from the hex value.
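To make these fields concrete, here is a minimal Python sketch (illustrative only, not the tooling used in the case) that parses a few documented fields out of the 512-byte footer; offsets follow Microsoft’s published VHD specification, where the timestamp is seconds since January 1, 2000:

```python
import struct
from datetime import datetime, timedelta

def parse_vhd_footer(footer: bytes) -> dict:
    # The VHD footer is the last 512 bytes of the file; all integers
    # are big-endian per the specification.
    if footer[0:8] != b"conectix":
        raise ValueError("not a VHD footer: magic number missing")
    timestamp, = struct.unpack(">I", footer[24:28])  # seconds since 2000-01-01
    disk_type, = struct.unpack(">I", footer[60:64])  # 2=fixed, 3=dynamic, 4=differencing
    return {
        "magic": footer[0:8].decode(),
        "created": datetime(2000, 1, 1) + timedelta(seconds=timestamp),
        "creator_app": footer[28:32].decode(errors="replace"),
        "creator_host_os": footer[36:40].decode(errors="replace"),  # e.g. "Wi2k"
        "disk_type": disk_type,
    }

# Synthetic footer for illustration:
footer = bytearray(512)
footer[0:8] = b"conectix"
struct.pack_into(">I", footer, 24, 86400)   # one day after 2000-01-01
footer[28:32] = b"win "                      # creator application
footer[36:40] = b"Wi2k"                      # created on Windows
struct.pack_into(">I", footer, 60, 3)        # dynamic disk
info = parse_vhd_footer(bytes(footer))
print(info["created"])  # → 2000-01-02 00:00:00
```

The same fields (creator application, host OS, timestamp) are the ones that become useful pivots when comparing samples across campaigns.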

From a threat hunting or tracking perspective, these values can be very useful. In the example below, a Yara rule was written to, first, identify the file as a VHD file and, second, match the serial number of the hard drive used by the actor:

[Screenshot: Yara rule for the VHD file]

Shell link files (LNK), aka Shortcut files

A Shell link, also known as a shortcut, is a data object that contains information used to access another data object. Windows files with the “LNK” extension are in this format, known as the Shell Link Binary File Format. Shell links can also be used by programs that need to store a reference to a destination file, and are frequently used to facilitate application launching and linking scenarios, such as Object Linking and Embedding (OLE).

LNK files are massively abused in multiple cybercrime campaigns to download next-stage payloads or to hide code in certain data fields. The data structure specification of LNK files mentions that they store various information, including “optional data” in the “extra data” sections. That is an interesting area to focus on.

Below is a summarized overview of the Extra Data structure:

[Diagram: overview of the LNK Extra Data structure]

The LinkInfo structure contains interesting data on the type of drive used but, more importantly, it contains the SerialNumber of the hard drive used by the actor when creating the LNK file:

[Screenshot: LinkInfo structure showing the drive SerialNumber]
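As a rough illustration of where that value lives (a simplified sketch, not a full parser; offsets follow the MS-SHLLINK specification), the drive serial number can be pulled from the VolumeID inside a LinkInfo block:

```python
import struct

def volume_serial_from_linkinfo(link_info: bytes) -> str:
    # LinkInfo header: LinkInfoSize, LinkInfoHeaderSize, LinkInfoFlags,
    # VolumeIDOffset, ... (all 32-bit little-endian).
    volume_id_offset, = struct.unpack_from("<I", link_info, 12)
    # VolumeID structure: VolumeIDSize, DriveType, DriveSerialNumber, ...
    serial, = struct.unpack_from("<I", link_info, volume_id_offset + 8)
    return f"{serial:08X}"

# Synthetic LinkInfo blob with the VolumeID placed right after a
# 28-byte header and a made-up serial number:
blob = bytearray(64)
struct.pack_into("<I", blob, 12, 28)               # VolumeIDOffset
struct.pack_into("<I", blob, 28 + 8, 0x1234ABCD)   # DriveSerialNumber
print(volume_serial_from_linkinfo(bytes(blob)))  # → 1234ABCD
```

That serial number is exactly the kind of value that can be lifted into a Yara rule for tracking.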

Other interesting information can be found as well; for example, the value that records the icon location. In this file, it contains an interesting string:

[Screenshot: icon location string in the LNK file]

Combining that information again, a simple Yara rule can be written for this particular LNK file, which might have been used in multiple campaigns:

[Screenshot: Yara rule for the LNK file]

One last example is to look at the ‘Droids’ values in the Extra Data sections. Droids stands for Digital Record Object Identification. There are two values present in the example file:

[Screenshot: Droid values in the Extra Data sections]

The value in these fields translates to the MAC address of the attacker’s system… yes, you read this correctly and may close your open mouth now…
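The reason the MAC address is recoverable: the Droid values are version-1 UUIDs, which embed the generating host’s MAC address in their node field. A small Python sketch (illustrative only) of the extraction:

```python
import uuid

def mac_from_droid(droid_bytes: bytes):
    # LNK stores the Droid GUIDs little-endian; version-1 UUIDs carry
    # the host's MAC address in the final 6 bytes (the "node").
    u = uuid.UUID(bytes_le=droid_bytes)
    if u.version != 1:
        return None  # version-4 (random) UUIDs embed no MAC
    return ":".join(f"{b:02x}" for b in u.node.to_bytes(6, "big"))

# Demonstrate with a freshly generated version-1 UUID and a known node:
demo = uuid.uuid1(node=0x001122334455)
print(mac_from_droid(demo.bytes_le))  # → 00:11:22:33:44:55
```

Note that some tooling generates random (version-4) GUIDs here, in which case no MAC address is present to recover.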

[Screenshot: MAC address derived from the Droid values]

This can also be used to build upon the previous LNK Yara rule: you could replace the “.\\3.jpg” part with the MAC address value to hunt for LNK files that were created on the particular device with that MAC address.

In a recent campaign called “Raspberry Robin”, LNK files were used to distribute the malware. Analyzing the LNK files and using the above investigation technique, the following Yara rule was created:

[Screenshot: Yara rule for Raspberry Robin LNK files]

Velociraptor LNK parser

Based on our research into LNK files, an updated LNK parser was developed by Matt Green from Rapid7 for Velociraptor, our advanced open-source endpoint monitoring, digital forensics, and cyber response platform.

With the parser, multiple LNK files can be processed and information can be extracted to use as an input for Yara rules that can be pushed back into the platform to hunt.

[Screenshot: Velociraptor LNK parser output]

Windows.Forensics.Lnk parses LNK shortcut files using Velociraptor’s built-in binary parser. The artifact outputs fields aligned to Microsoft’s MS-SHLLINK protocol specification, plus some analysis hints to assist review or detection use cases. Users have the option to search for specific indicators in key fields with regex, or to control the definitions for suspicious items to bubble up during parsing.

Some of the default targeted suspicious attributes include:

  • Large size
  • Startup path location for auto execution
  • Environment variable script — environment variable with a common script configured to execute
  • No target with an environment variable only execution
  • Suspicious argument size — large arguments, over 250 characters by default
  • Arguments have ticks — ticks are common in malicious LNK files
  • Arguments have environment variables — environment variables are common in malicious LNKs
  • Arguments have rare characters — look for specific rare characters that may indicate obfuscation
  • Arguments that have leading spaces — malicious LNK files may have many leading spaces to obfuscate some tools
  • Arguments that have http strings — LNKs are regularly used as a download cradle
  • Suspicious arguments — some common malicious arguments observed in field
  • Suspicious trackerdata hostname
  • Hostname mismatch with trackerdata hostname

Due to the use of Velociraptor’s binary parser, the artifact is significantly faster than other analysis tools. It can be deployed as part of analysis or at scale as a hunting function using the IOCRegex and/or SuspiciousOnly flag.

Summary

It is worth investigating the characteristics of file types we tend to skip in threat actor campaigns. In this blog I provided a few examples of how artifacts can be retrieved from VHD and LNK files and then used for the creation of hunting logic. As a result of this research, Rapid7 is happy to release a new LNK parser feature in Velociraptor and we welcome any feedback.

What’s New in Rapid7 Detection & Response: Q3 2023 in Review

Post Syndicated from Margaret Wei original https://blog.rapid7.com/2023/10/05/whats-new-in-rapid7-detection-response-q3-2023-in-review/

What’s New in Rapid7 Detection & Response: Q3 2023 in Review

This post takes a look at some of the investments we’ve made throughout Q3 2023 to our Detection and Response offerings to provide advanced DFIR capabilities with Velociraptor, more flexibility with custom detection rules, enhancements to our dashboard and log search features, and more.

Stop attacks before they happen with Next-Gen Antivirus in Managed Threat Complete

As endpoint attacks become more elusive and frequent, we know security teams need reliable coverage to keep their organizations safe. To provide teams with protection from both known and unknown threats, we’ve released multilayered prevention with Next-Gen Antivirus in Managed Threat Complete. Available through the Insight Agent, you’ll get immediate coverage with no additional configurations or deployments. With Managed Next-Gen Antivirus you’ll be able to:

  • Block known and unknown threats early in the kill chain
  • Halt malware that’s built to bypass existing security controls
  • Maximize your security stack and ROI with existing Insight Agent
  • Leverage the expertise of our MDR team to triage and investigate these alerts

To see more on our Managed Next-Gen Antivirus offering, including a demo walkthrough, visit our Endpoint Hub Page here.

Achieve faster DFIR outcomes with Velociraptor now integrated into the Insight Platform

As security teams are facing more and more persistent threats on their endpoints, it’s crucial to have proactive security measures that can identify attacks early in the kill chain, and the ability to access detailed evidence to drive complete remediation. We’re excited to announce that InsightIDR Ultimate customers can now recognize the value of Velociraptor, Rapid7’s open-source DFIR framework, faster than ever with its new integration into the Insight Platform.

With no additional deployment or configurations required, InsightIDR customers can deploy Velociraptor through their existing Insight Agents for daily threat monitoring and hunting, swift threat response, and expanded threat detection capabilities. For more details, check out our recent blog post here.


A view of Velociraptor in InsightIDR

Tailor alerts to your unique needs with Custom Detection Rules

We know every organization has unique needs when it comes to detections and alerting on threats. While InsightIDR provides over 3,000 out-of-the-box detection rules to detect malicious behaviors, we’ve added additional capabilities with Custom Detection Rules to offer teams the ability to author rules tailored to their own individual needs. With Custom Detection Rules, you will be able to:

  • Build upon Rapid7’s library of expertly curated detection rules by creating rules that uniquely fit your organization’s security needs
  • Use LEQL to write rule logic against a variety of data sources
  • Add grouping and threshold conditions to refine your rule logic over specific periods of time to decrease unnecessary noise
  • Assess the rule’s activity before it starts to trigger alerts for downstream teams
  • Group alerts by specific keys such as by user or by asset within investigations to reduce triage time
  • Create exceptions and view modification history as you would with out-of-the-box ABA detection rules
  • Attach InsightConnect automation workflows to your custom rules to mitigate manual tasks such as containing assets and enriching data, or set up notifications when detections occur

Creating a Custom Detection Rule in InsightIDR

Enhanced Attacker Behavior Analytics (ABA) alert details in Investigations

Easily view information about your ABA alerts that are part of an investigation with our updated Evidence panel. With these updates, you’ll see more information on alerts, including the source event data and the detection rule logic that generated them. Additionally, the Evidence button has been renamed to Alert Details to more accurately reflect its function.

New alert details include:

  • A brief description of the alert and a recommendation for triage
  • The detection rule logic that generated the alert and the corresponding key-value payload from your environment
  • The process tree, which displays details about the process that occurred when the alert was generated and the processes that occurred before and after (only for MDR customers)

Dashboard Improvements: Revamped card builder and a new heat map visualization

Our recently released revamped card builder provides more functionality to make it faster and easier to build dashboard cards. For a look at what’s new, check out the demo below.

The new calendar heat map visualization allows you to more easily visualize trends in your data over time so you can quickly spot trends and anomalies. To see this new visualization in action, check out the demo below.

Export data locally with new Log Search option

You now have more flexibility when exporting your log search data, making it easier to perform additional searching, share results with others in your organization, or gather evidence associated with incidents.

With this update you can now:

  • Use edit key selection to define which columns to export to CSV
  • Export results from a groupby/calculate query to a CSV file

New event sources

  • Microsoft Internet Information Services (IIS): A web server that is used to exchange web content with internet users. Read the documentation
  • Amazon Security Lake: A security data lake service that allows customers to aggregate & manage security-related logs. Read the documentation
  • Salesforce Threat Detection: Uses machine learning to detect threats within a Salesforce organization. Read the documentation

A growing library of actionable detections

In Q3 2023 we added 530 new ABA detection rules. See them in-product or visit the Detection Library for actionable descriptions and recommendations.

Stay tuned!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in detection and response at Rapid7.

Velociraptor 0.7.0 Release: Dig Deeper With Enhanced Client Search, Server Improvements and Expanded VQL Library

Post Syndicated from Mike Cohen original https://blog.rapid7.com/2023/08/31/untitled-7/

Velociraptor 0.7.0 Release: Dig Deeper With Enhanced Client Search, Server Improvements and Expanded VQL Library

Carlos Canto contributed to this article.

Rapid7 is thrilled to announce version 0.7.0 of Velociraptor is now LIVE and available for download.  The focus of this release was on improving user efficiency while also expanding and strengthening the library of VQL plug-ins and artifacts.

Let’s take a look at some of the interesting new features in detail.

GUI improvements

The GUI was updated in this release to improve user workflow and accessibility.

In previous versions, client information was written to the datastore in individual files (one file per client record). This works well as long as the number of clients is not too large and the filesystem is fast, but it has become a bottleneck as users deploy Velociraptor at larger sizes, often in excess of 50,000 endpoints.

In this release, the client index was rewritten to store all client records in a single snapshot file that is managed in memory. This approach keeps client searching extremely quick even for deployments well over 100,000 clients.

Additionally, it is now possible to display the total number of hits in each search, giving a more complete indication of the number of matching clients.
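The design change can be sketched in a few lines (a simplified illustration of the idea, not Velociraptor's actual implementation): instead of one file per client record, all records live in one snapshot that is loaded once and searched in memory:

```python
import json, os, tempfile

class ClientIndex:
    """All client records in a single snapshot file, searched in memory."""
    def __init__(self, snapshot_path):
        self.path = snapshot_path
        try:
            with open(snapshot_path) as f:
                self.records = json.load(f)
        except FileNotFoundError:
            self.records = {}

    def add(self, client_id, hostname):
        self.records[client_id] = {"hostname": hostname}

    def search(self, term):
        # Pure in-memory scan; no per-client file I/O on the search path.
        return [cid for cid, r in self.records.items() if term in r["hostname"]]

    def flush(self):
        # One write for the whole index instead of one file per client.
        with open(self.path, "w") as f:
            json.dump(self.records, f)

path = os.path.join(tempfile.mkdtemp(), "clients.json")
idx = ClientIndex(path)
for i in range(1000):
    idx.add(f"C.{i:04d}", f"host-{i:04d}")
idx.flush()
print(len(idx.search("host-09")))  # hosts 0900-0999 match
```

The search cost is bounded by memory scan speed rather than filesystem latency, which is why the approach scales to very large deployments.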

[Screenshot: client search results showing the total number of hits]

Paged table in Flows list

Velociraptor’s collections view shows the list of collections from the endpoint (or the server). Previously, the GUI limited this view to the 100 most recent collections, which meant that for heavily collected clients it was impossible to view older collections (without custom VQL).

In this release, the GUI was updated to include a paged table (with suitable filtering and sorting capabilities) so all collections can be accessed.

VQL Plugins and artifacts

Chrome artifacts

Version 0.7.0 added a leveldb parser and several artifacts around Chrome Session Storage. This allows analyzing data that is stored by Chrome locally for various web apps.

Lnk forensics

This release added a more comprehensive Lnk parser covering all known Lnk file features. You can access the Lnk file analysis using the `Windows.Forensics.Lnk` artifact.

Direct S3 accessor

Velociraptor’s accessors provide a way to apply the many plugins that operate on files to other domains. In particular, the glob() plugin allows searching the accessors for filename patterns.

In this release, Velociraptor adds an Amazon S3 accessor. This allows plugins to directly operate on S3 buckets. In particular the glob() plugin can be used to query bucket contents and read files from various buckets. This capability opens the door for sophisticated automation around S3 buckets.

Volume Shadow Copies analysis

Windows Volume Shadow Service (VSS) is used to create snapshots of the drive at a specific point in time. Forensically, this can be very helpful as it captures a point-in-time view of the previous disk state (if the snapshot is still around when we perform our analysis).

Velociraptor provides access to the different VSS volumes via the ntfs accessor, and many artifacts previously provided the ability to report files that differed between VSS snapshots.

In the 0.7.0 release, Velociraptor adds the ntfs_vss accessor. This accessor automatically considers different snapshots and deduplicates files that are identical in different snapshots. This makes it much easier to incorporate VSS analysis into your artifacts.

The SQLiteHunter project

Many artifacts consist of parsing SQLite files. For example, major browsers use SQLite files heavily. This release incorporates the SQLiteHunter artifact.

SQLiteHunter is a one-stop shop for finding and analyzing SQLite files such as browser artifacts and OS internal files. Although the project started with SQLite files, it now also automates artifacts such as WebCacheV01 parsing and the Windows Search Service, aka Windows.edb (which use ESE-based parsers).

This one artifact combines and makes obsolete many distinct older artifacts.

More info can be found at https://github.com/Velocidex/SQLiteHunter.

Glob plugin improvements

The glob() plugin may be the most used plugin in VQL, as it allows for the efficient search of filenames in the filesystem. While the glob() plugin can accept a list of glob expressions so the filesystem walk can be optimized as much as possible, it was previously difficult to know why a particular reported file was chosen.

In this release, the glob() plugin reports the list of glob expressions that caused the match to be reported. This allows callers to more easily combine several file searches into the same plugin call.

URL style paths

In very old versions of Velociraptor, nested paths could be represented as URL objects. Until now, a backwards-compatible layer was used to continue supporting this behavior. In the latest release, URL-style paths are no longer supported; instead, use the pathspec() function to build proper OSPath objects.

Server improvements

Velociraptor offers automatic use of Let’s Encrypt certificates. However, Let’s Encrypt can only issue certificates for port 443. This means that the frontend service (which is used to communicate with clients) has to share the same port as the GUI port (which is used to serve the GUI application). This makes it hard to create firewall rules to filter access to the frontend and not to the GUI when used in this configuration.

In the 0.7.0 release, Velociraptor offers the GUI.allowed_cidr option. If specified, this list of CIDR ranges determines the source IPs the server will accept for connections to the GUI application (for example 192.168.1.0/24).

This filtering only applies to the GUI and forms an additional layer of security protecting the GUI application (in addition to the usual authentication methods).

Better handling of out of disk errors

Velociraptor can collect data very quickly and sometimes this results in a full disk. Previously, a full disk error could cause file corruption and data loss. In this release, the server monitors its free disk level and disables file writing when the disk is too full. This avoids data corruption when the disk fills up. When space is freed the server will automatically start writing again.
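The behavior described above amounts to a simple guard around writes. A hedged sketch of the idea (the threshold value here is an assumption for illustration, not Velociraptor's actual setting):

```python
import shutil

MIN_FREE_BYTES = 50 * 1024 ** 2  # assumed threshold for this sketch

def can_write(datastore_path="."):
    # Disable file writing when free space drops below the threshold;
    # writing resumes automatically once space is freed, avoiding the
    # partial writes that corrupt files on a full disk.
    return shutil.disk_usage(datastore_path).free >= MIN_FREE_BYTES

if can_write():
    pass  # safe to flush collected data to disk
```

Checking before each write (rather than reacting to a failed write) is what prevents half-written files from ever reaching the datastore.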

The offline collector

The offline collector is a pre-configured binary which can be used to automatically collect any artifacts into a ZIP file and optionally upload the file to a remote system like a cloud bucket or SMB share.

Previously, Velociraptor would embed the configuration file into the binary so it only needed to be executed (e.g. double clicked). While this method is still supported on Windows, it turned out that on macOS this is no longer possible, as binaries cannot be modified after build. Even on Windows, embedding the configuration invalidates the signature.

In this release, we added a generic collector:

[Screenshot: creating a generic collector]

This collector will embed the configuration into a shell script instead of the Velociraptor binary. Users can then launch the offline collector using the unmodified official binary by specifying the --embedded_config flag:

velociraptor-v0.7.0-windows-amd64.exe -- --embedded_config Collector_velociraptor-collector

[Screenshot: running the offline collector with the embedded configuration]

While this method is required for macOS, it can also be used on Windows in order to preserve the binary signature.

Conclusions

There are many more new features and bug fixes in the 0.7.0 release. If you’re interested in any of these new features, we welcome you to take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:

Finally, don’t forget to register for VeloCON 2023, taking place on Wednesday September 13, 2023.  VeloCON is a one-day virtual event which includes fascinating discussions, tech talks and the opportunity to get to know real members of the Velociraptor community.  It’s a forum to share experiences in using and developing Velociraptor to address the needs of the wider DFIR landscape and an opportunity to take a look ahead at the future of our platform.

Click here for more details and to register for the event.

Join us for VeloCON 2023: Digging Deeper Together!

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2023/08/17/join-us-for-velocon-2023-digging-deeper-together/

September 13, 2023 at 9 am ET

Join us for VeloCON 2023: Digging Deeper Together!

Rapid7 is thrilled to announce that the 2nd annual VeloCON: Digging Deeper Together virtual summit will be held this September 13th at 9 am ET. Once again, the conference will be online and completely free!

VeloCON is a one-day event focused on the Velociraptor community. It’s a place to share experiences in using and developing Velociraptor to address the needs of the wider DFIR community and an opportunity to take a look ahead at the future of our platform.

This year’s event calls for even more of the stimulating and informative content that made last year’s VeloCON so much fun. Don’t miss your chance at being a part of the marquee event of the open-source DFIR calendar.

Registration is now OPEN!  Click here to register and get event updates and start time reminders.

Last year’s event was a tremendous success, with over 500 unique participants enjoying fascinating discussions, tech talks and the opportunity to get to know real members of our own community.

Leading Edge Panel

Rapid7 and the Velociraptor team have invited industry leading DFIR professionals, community advocates and thought leaders to host an exciting presentation panel.  Proposals underwent a thorough review process to select presentations of maximum interest to VeloCON attendees and the wider Velociraptor community.

VeloCON focuses on work that pushes the envelope of what is currently possible using Velociraptor. Potential topics to be addressed by the panel include, but are not limited to:

  • Use cases of Velociraptor in real investigations
  • Novel deployment modes to cater for specific requirements
  • Contributions to Velociraptor to address new capabilities
  • Potential future ideas and features for Velociraptor
  • Integration of Velociraptor with other tools/frameworks
  • Analysis and acquisition of novel forensic artifacts

Register Today

Please register for VeloCON 2023 by following this link.  You’ll be able to preview panelist bios as well as receive email confirmations and reminders as we get closer to the event.

Learn more about Velociraptor by visiting any of our web and social media channels below:

Velociraptor 0.6.9 Release: Digging Even Deeper with SMB Support, Azure Storage and Lockdown Server Mode

Post Syndicated from Mike Cohen original https://blog.rapid7.com/2023/06/07/velociraptor-0-6-9-release-digging-even-deeper-with-smb-support-azure-storage-and-lockdown-server-mode/

Velociraptor 0.6.9 Release: Digging Even Deeper with SMB Support, Azure Storage and Lockdown Server Mode

Carlos Canto contributed to this article.

Rapid7 is very excited to announce version 0.6.9 of Velociraptor is now LIVE and available for download.  Much of what went into this release was about expanding capabilities and improving workflows.

We’ll now explore some of the interesting new features in detail.

GUI Improvements

The GUI was updated in this release to improve user workflow and accessibility.

Table Filtering and Sorting
Previously, table filtering and sorting required a separate dialog. In this release, the filtering controls were moved to the header of each column making it more natural to use.

Filtering tables

VFS GUI Improvements

The VFS GUI allows the user to collect files from the endpoint in a familiar tree-based user interface. In previous versions, it was only possible to schedule a single download at a time. This proved problematic when the client was offline or transferring a large file, because the user had no way to kick off the next download until the first file was fully fetched.

In this release, the GUI was revamped to support multiple file downloads at the same time. Additionally it is now possible to schedule a file download by right clicking the download column in the file table and selecting “Download from client”.

Initiating file download in the VFS. Note multiple files can be scheduled at the same time, and the bottom details pane can be closed

Hex Viewer and File Previewer GUI

In release 0.6.9, a new hex viewer was introduced. This viewer makes it possible to quickly triage uploaded files from the GUI itself, implementing some common features:

  1. The file can be viewed as a hex dump or a strings-style output.
  2. The viewer can go to an arbitrary offset within the file, or page forward or backwards.
  3. The viewer can search forward or backwards in the file for a Regular Expression, String, or a Hex String.

The hex viewer is available for artifacts that define a column of type preview_uploads, including the File Upload table within the flow GUI.
The hex viewer UI can be used to quickly inspect an uploaded file

Artifact Pack Import GUI Improvements

Velociraptor allows uploading an artifact pack – a simple Zip file containing artifact definitions. For example, the artifact exchange is simply a zip file with artifact definitions.

Previously, artifact packs could only be uploaded in their entirety and always had an “Exchange” prefix prepended. However, in this release the GUI was revamped to allow only some artifacts to be imported from the pack and to customize the prefix.

It is now possible to import specific artifacts in a pack

Direct SMB Support

Windows file sharing is implemented over the SMB protocol. Within the OS, accessing remote file shares happens transparently, for example by mapping the remote share to a drive using the net use command or by accessing a file name starting with a UNC path (e.g. \\ServerName\Share\File.exe).

While Velociraptor can technically also access UNC shares by using the usual file APIs and providing a UNC path, in reality this does not work because Velociraptor runs as the local System user. The System user normally does not have network credentials, so it cannot map remote shares.

This limitation is problematic because sometimes we need to access remote shares (e.g. to verify hashes, perform YARA scans, etc.). Until this release, the only workaround was to run the Velociraptor service as a domain user account with credentials.

As of the 0.6.9 release, SMB is supported directly within the Velociraptor binary as an accessor. This means that all plugins that normally operate on files can also operate on a remote SMB share transparently.

Velociraptor does not rely on the OS to provide credentials to the remote share, instead credentials can be passed directly to the smb accessor to access the relevant smb server.

The new accessor can be used in any VQL that needs to use a file, but to make it easier there is a new artifact called Windows.Search.SMBFileFinder that allows for flexible file searches on an SMB share.

Searching a remote SMB share

Using SMB For Distributing Tools

Velociraptor can manage third-party tools within its collected artifacts by instructing the endpoint to download the tool from an external server or from the Velociraptor server itself.

It is sometimes convenient to download external tools from an external server (e.g. a cloud bucket) due to bandwidth considerations.

Previously, this server could only be a HTTP server, but in many deployments it is actually simpler to download external tools from an SMB share.

In this release, Velociraptor accepts an SMB URL as the Serve URL parameter within the tool configuration screen.

You can configure the remote share with read-only permissions (read these instructions for more details on configuring SMB).

Serving a third-party tool from an SMB server

The Offline Collector

The offline collector is a popular mode of running Velociraptor. In this mode, the artifacts to collect are pre-programmed into the collector, which stores the results in a zip file. The offline collector can be pre-configured to encrypt and upload the collection automatically to a remote server without user interaction, making it ideal for using remote agents or people to manually run the collector without needing further training.

In this release, the Velociraptor offline collector adds two more upload targets. It is now possible to upload to an SMB server and to Azure Blob Storage.

SMB Server Uploads
Because the offline collector is typically used to collect large volumes of data, it is beneficial to upload the data to a networked server close to the collected machine. This avoids cloud network costs and bandwidth limitations, and it also works well in air-gapped networks.

You can now simply create a write-only share on any machine: add a local Windows user with password credentials, export a directory as a share, and adjust the upload user's permissions so it can only write to the share, not read from it. It is then safe to embed these credentials in the offline collector, which can upload data but cannot read or delete other data.

Read the full instructions of how to configure the offline collector for SMB uploads.

Azure Blob Storage Service
Velociraptor has long been able to upload collections to an Amazon S3 or Google Cloud Storage bucket. Many users requested direct support for Azure Blob Storage, which is now included in 0.6.9.

Read about how to configure Azure for safe uploads. Similar to the other methods, credentials embedded in the offline collector can only be used to upload data and not read or delete data in the storage account.

Debugging VQL Queries

One of the points of feedback we received from our annual user survey was that although VQL is an extremely powerful language, users struggle with debugging and understanding how the query proceeds.

Unlike a more traditional programming language (e.g. Python), there is no debugger that allows users to pause execution and inspect variables, or add print statements to see what data is passed between parts of the query.

We took this feedback to heart and in release 0.6.9 the EXPLAIN keyword was introduced. The EXPLAIN keyword can be added before any SELECT in the VQL statement to place that SELECT statement into tracing mode.

As a recap, the general syntax of the VQL statement is:

SELECT vql_fun(X=1, Y=2), Foo, Bar
FROM plugin(A=1, B=2)
WHERE X = 1 

When a query is in tracing mode:

  1. All rows emitted from the plugin are logged with their types
  2. All parameters into any function are also logged
  3. When a row is filtered because it did not pass the WHERE clause this is also logged

This additional tracing information can be used to understand how data flows throughout the query.

Explaining a query reveals detailed information on how the VQL engine handles data flows

You can use the EXPLAIN statement in a notebook or within an artifact collected from the endpoint (although be aware that it can lead to extremely verbose logging).

Inspect the details by clicking on the logs button

For example, in the above query we can see:

  1. The clients() plugin generates a row
  2. The timestamp() function received the last_seen_at value
  3. The WHERE condition rejected the row because the last_seen_at time was more than 60 seconds ago

Locking Down The Server

Another concern raised in our survey was the perceived risk of having Velociraptor permanently installed due to its high privilege and efficient scaling.

While this risk is no higher than that of any other domain-wide administration tool, in some deployment scenarios Velociraptor does not need this level of access all the time. In an incident response situation, however, it must be possible to raise Velociraptor's level of access easily.

In the 0.6.9 release, Velociraptor has introduced lockdown mode. When a server is locked down, certain permissions are removed (even from administrators). The lockdown is set in the config file, helping to mitigate the risk of a compromised Velociraptor server admin account.

After initial deployment and configuration, the administrator can set the server in lockdown by adding the following configuration directive to the server.config.yaml and restarting the server:

lockdown: true

After the server is restarted the following permissions will be denied:

  • ARTIFACT_WRITER
  • SERVER_ARTIFACT_WRITER
  • COLLECT_CLIENT
  • COLLECT_SERVER
  • EXECVE
  • SERVER_ADMIN
  • FILESYSTEM_WRITE
  • FILESYSTEM_READ
  • MACHINE_STATE

It will therefore still be possible to read existing collections and continue collecting client monitoring data, but it will not be possible to edit artifacts or start new hunts or collections.

During an active IR, the server may be taken out of lockdown by removing the directive from the configuration file and restarting the service. Usually the configuration file is only writable by root, while the Velociraptor server process runs as a low-privilege account that cannot write to the config file. This combination makes it difficult for a compromised Velociraptor administrator account to remove the lockdown and use Velociraptor as a lateral movement vehicle.

Audit Events

Velociraptor maintains a number of log files over its operation, normally stored in the <filestore>/logs directory. While the logs are rotated and separated into different levels, the most important log type is the audit log, which records auditable events. Within Velociraptor, auditable events are security-critical events such as:

  • Starting a new collection from a client
  • Creating a new hunt
  • Modifying an artifact
  • Updating the client monitoring configuration

Previous versions of Velociraptor simply wrote those events to the logging directory. However, the logging directory can be deleted if the server becomes compromised.

In 0.6.9 there are two ways to forward auditable events off the server:

  1. Using remote syslog services
  2. Uploading to external log management systems (e.g. OpenSearch/Elastic) using the Elastic.Events.Upload artifact.

Additionally, auditable events are now emitted as part of the Server.Audit.Logs artifact so they can be viewed or searched in the GUI by any user.
The server’s audit log is linked from the Welcome page
Inspecting user activity through the audit log

Because audit events are available now as part of the server monitoring artifact, it is possible for users to develop custom VQL server monitoring artifacts to forward or respond to auditable events just like any other event on the client or the server. This makes it possible to forward events (e.g. to Slack or Discord) as demonstrated by the `Elastic.Events.Upload` artifact above.

Tool Definitions Can Now Specify An Expected Hash

Velociraptor supports pushing tools to external endpoints. A Velociraptor artifact can define an external tool, allowing the server to automatically fetch the tool and upload it to the endpoint.

Previously, the artifact could only specify the URL from which the tool should be downloaded. In this release, however, it is also possible to declare the expected hash of the tool. This effectively prevents potential substitution attacks by pinning the third-party binary's hash.

While sometimes the upstream file may legitimately change (e.g. due to a patch), Velociraptor will not automatically accept the new file when the hash does not match the expected hash.
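The verification logic amounts to pinning a digest and refusing any file that does not match. A minimal sketch of the idea in Python (illustrative only, not Velociraptor's internal code; the function name and choice of SHA-256 are assumptions):

```python
import hashlib

def verify_tool(data: bytes, expected_sha256: str) -> bytes:
    """Accept a downloaded tool only if its hash matches the pinned value."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256.lower():
        # Refuse the file and surface the mismatch so an administrator
        # can decide whether the new hash should be accepted.
        raise ValueError(f"hash mismatch: got {digest}, expected {expected_sha256}")
    return data
```

Accepting a changed hash then becomes an explicit administrative decision rather than a silent update.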

Mismatched hash

In the above example we modified the expected hash to be slightly different from the real tool hash. Velociraptor refuses to import the binary but provides a button allowing the user to accept this new hash instead. This should only be performed if the administrator is convinced the tool hash was legitimately updated.

Conclusions

There are many more new features and bug fixes in the 0.6.9 release. If you’re interested in any of these new features, we welcome you to take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:

If you want to master Velociraptor, consider joining us for a week-long Velociraptor training course, held this year at the BlackHat USA 2023 Conference and delivered by the Velociraptor developers themselves. Details are here: https://docs.velociraptor.app/announcements/2023-trainings/

The Velociraptor 2023 Annual Community Survey

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/05/10/the-velociraptor-2023-annual-community-survey/


By Dr. Mike Cohen & Carlos Canto

Velociraptor is an open-source project led and shaped by the community. Over the years, Velociraptor has become a real force in the field of DFIR, making it an obvious choice for many operational situations. Rapid7 is committed to continue making Velociraptor the premier open-source DFIR and security tool.

To learn more about how the tool is used in the community and what the community's expectations are with regard to capabilities, features, and use cases, the Velociraptor team distributed our first community survey in early 2023. We are using this information to shape future development direction, set priorities, and develop our road map. We are grateful to the community members who took the time to respond.

As an open-source project, we depend on our community to contribute. There are many ways contributors can help the project, from developing code, to filing bugs, to improving documentation. One of the most important ways users can contribute is by providing valuable feedback through channels such as this survey, which helps to shape the future road map and new features.

We’re excited to share some of the responses we received in this blog post.

Who is the Velociraptor community?

Of the 213 survey respondents, the majority were analysts (57%) and managers (26%), indicating that most of the respondents are people who know and use Velociraptor frequently.

We also wanted to get a feel for the type of companies using Velociraptor. Users fell pretty evenly into company sizes, with about 30% of responses from small companies (less than 100 employees) and 20% of responses from very large companies of 10,000 employees or more.

These companies also came from a wide range of industries. While many were primarily in the information security fields such as managed security service providers (MSSPs), consultants, and cybersecurity businesses, we also saw a large number of responses from the government sector, the aerospace industries, education, banking/finance, healthcare, etc.

With such a wide range of users, we were interested in how often they use Velociraptor. About a third said they use Velociraptor frequently, another third use it occasionally, and the final third are in the process of evaluating and learning about the tool.

Velociraptor use cases

Velociraptor is a powerful tool with a wide feature set. We wanted to glimpse an idea of what features were most popular and how users prioritize these features. Specifically, we asked about the following main use cases:

Client monitoring and alerts (detection)
Velociraptor can collect client event queries focused on detection. This allows the client to autonomously monitor the endpoint and send back prioritized alerts when certain conditions are met.

→ 12% of users were actively using this feature to monitor endpoints.

Proactively hunting for indicators (threat intelligence)
Velociraptor’s unique ability to collect artifacts at scale from many systems can be combined with threat-intelligence information (such as hashes, etc.) to proactively hunt for compromises by known actors. This question was specifically related to hunting for threat-feed indicators, such as hashes, IP addresses, etc.

→ 16% of users were utilizing this feature.

Ongoing forwarding of events to another system
Velociraptor’s client monitoring queries can be used to simply forward events (such as ETW feeds).

→ 6% of users were utilizing this feature.

Collecting bulk files for analysis on another system (digital forensics)
Velociraptor can be used to collect bulk files from the endpoint for later analysis by other tools (for example, using the Windows.Collection.KapeFiles artifact).

→ 20% of users were using this feature regularly.

Parsing for indicators on the endpoint (digital forensics)
Velociraptor’s artifacts are used to directly parse files on the endpoint, quickly returning actionable high-value information without the need for lengthy post processing.

→ 21% of users use these types of queries.

Proactive hunting for indicators across many systems (incident response)
Velociraptor can hunt for artifacts from many endpoints at once.

→ 21% of users benefit from this capability.

We further asked for the relative importance of these features. Users most valued the ability to collect bulk files and hunt for artifacts across many systems, followed by the ability to directly parse artifacts on the endpoints.

Backwards compatibility

Some users deployed Velociraptor for limited-time engagements so they did not need backwards compatibility for stored data, as they wouldn’t be upgrading to major versions within the same deployment.

Other users required more stable data migration but were generally happy with removing backwards data compatibility, if necessary. For example, one response stated “I would rather you prioritize improvements over compatibility even if it breaks things.”

Another user explained: “In a typical Incident Response scenario, Digital Forensics data has a shelf life of a few weeks or months at best and I am comfortable with the convertibility and portability of much of the data that Velociraptor collects such that archival data can still be worked with even if newer versions of the server no longer support a deprecated format/archive. I think there will be workarounds if this becomes an issue for folks with mountains of legacy data that hasn’t been exported somewhere more meaningful for longer term storage and historical data analytic/intelligence purposes.”

Generally, most users indicated they rarely or never needed to go back to archived data and reanalyze.

Version compatibility

The Velociraptor support policy officially only supports clients and servers on the same release version. However, in reality it usually takes longer to upgrade clients than servers. While some users are able to upgrade clients promptly, many users estimate between 10-50% of deployed clients are a version (or more) older than the server. Therefore, the Velociraptor team needs to maintain some compatibility with older clients to allow time for users to upgrade their endpoints.

The offline collector

The offline collector gives users a way to use Velociraptor’s artifacts without needing to deploy a server. This feature is used exclusively by about 10% of users, while a further 30% of users employ it frequently.

Most users of the offline collector deploy it manually (50%). Deploying via another EDR tool or via Group Policy are also robust options. Some users have created custom wrappers to deploy the offline collector in the field. The offline collector supports uploading the collection directly to a cloud server using a number of methods.

The most popular upload method is to an AWS S3 bucket (30%) while the SFTP connector in the cloud or a custom SFTP server on a VM are also popular options (20% and 23%, respectively). Uploading directly to Google Cloud Storage is the least popular option at about 5%.

Manual copy methods were also popular, ranging from EDR-based copying to Zoom file copy.

Azure blob storage was a common request that Velociraptor currently does not support. Many responses indicate that SFTP is currently a workaround to the lack of direct Azure support. The Velociraptor team should prioritize supporting Azure blob storage.

Data analysis

Velociraptor supports collecting raw files (e.g. Event log files, $MFT etc.) for analysis in other tools. Alternatively, Velociraptor already contains extensive parsers for most forensic artifacts that can be used directly on the endpoint.

Most users do use the built-in forensic parsing and analysis artifacts (55%) but many users also collect raw files (e.g. via the Windows.Collection.KapeFiles artifact).

VQL artifacts

Velociraptor uses the Velociraptor Query Language to perform collections and analysis. The VQL is usually shared with the community via an artifact. Most users utilize the built-in artifacts as well as the artifact exchange. However, over 60% of users report they develop their own artifacts, as well. For those users who develop their own artifacts, we asked about limitations and difficulties in this process.

A common theme that arose was around debugging artifacts and the lack of a VQL debugger and better error reporting. Training and documentation were also pointed out as needing improvement. A suggestion was made to enhance documentation with more examples of how each VQL plugin can be used in practice.

In a related note, the Velociraptor team is running a training course at BlackHat 2023. Developers will impart detailed information on how to deploy Velociraptor and write effective custom VQL.

Role-based access controls

Velociraptor has a role-based access control (RBAC) mechanism where users can be assigned roles ranging from administrator, to investigator, to the read-only access provided by the reader role. Users generally found this feature useful: 40% found it “moderately useful,” 20% “very useful,” and 15% “extremely useful.” The main suggestions for improvements include:

  • Easier management through the GUI (as of version 0.6.8 all user ACLs are managed through the GUI)
  • Custom roles with more granular permissions
  • Better logging and auditing
  • The ability to allow a specific role to only run a pre-approved subset of artifacts
  • A way to only run signed/hashed VQL / prevent a malicious artifact being dropped on the server
  • Making it clearer what each permission grants the user

Multi-tenant support

Velociraptor offers a fully multi-tenanted mode, where organizations can be created or decommissioned quickly with minimal resource overhead. This feature is used by 25% of respondents, who are mainly consultants and service providers using it to support multiple customers. Some companies use multi-tenancy to separate different divisions or subsidiaries of the business.

Client monitoring and alerting

Velociraptor can run event queries on a client. These VQL queries run continuously and stream results to the server when certain conditions are met. Common use cases for these are to generate alerts and enhanced detection.

Some users deploy client monitoring artifacts frequently while others see it as an alternative to EDR tools, when these are available. The primary use-case breakdown was:

  • Detection (e.g. alert when an anomalous event occurs): 27% of users
  • Collection of client events (e.g. forward process event logs to an external system): 18% of users
  • Remediation (e.g. quarantine or remove files automatically): 15% of users

→ 30% of users do not use client monitoring at all.

The most common pain point with client monitoring is the lack of integrated alerting capability (an issue currently being worked on). Some useful feedback on this feature included:

  • Better support for integration with business tools (e.g., Teams, Slack, etc.)
  • Easier to manage event data
  • Not having to build a server side artifact for each client_event artifact
  • A dashboard that lists all alerts
  • An easier way to forward alerts based on severity
  • Lack of pre-built detection rules/packs—in other words, it would be easier to tune down, than to build up

The Quarantine feature

Velociraptor can quarantine an endpoint by collecting the Windows.Remediation.Quarantine artifact. This artifact tunes the firewall rules on the endpoint to block all external network communication while maintaining connectivity to the Velociraptor host. This allows for an endpoint to be isolated during investigation.

The feature is fairly popular—it was “sometimes used” by about 30% of users and “always used” by another 12%.

How is Velociraptor deployed?

Velociraptor is a very lightweight solution, typically taking a few minutes to provision a new deployment. For many of our users, Velociraptor is used in an incident response context on an as-needed basis (46%). Other users prefer a more permanent deployment (25%).

For larger environments, Velociraptor also supports multi-server configuration (13% of users), as well as the more traditional single-server deployment option (70% of users). While some users leverage very short-lived deployments of several days or less (13%), most users keep their deployment for several weeks (27%) to months or permanently (44%).

Velociraptor is designed to work efficiently with many endpoints. We recommend a maximum of 15-20k endpoints on a single server before switching to a multi-server architecture (although users reported success with larger deployment sizes on a single server). This level of performance is adequate in practice for the majority of users.

Many users run deployments of less than 250 endpoints (44%) while a further 40% of users deploy to less than 5,000 endpoints.

Approximately 10% of users have deployment sizes larger than 25,000 endpoints, with 2% of users over 100,000 endpoints.

Popular operating systems

Among Velociraptor’s supported operating systems, Windows 64-bit is the most popular (82% of users ranked it the most-deployed OS type), while Linux is the next most popular deployed endpoint OS. Mac is the third most popular choice for Velociraptor’s users. Finally, 32-bit Windows systems are still prevalent as well.

Resources and references

Velociraptor’s website at https://docs.velociraptor.app/ contains a wealth of reference material, training courses, and presentations. We also have an active YouTube channel with many instructional videos.

While some users ranked the website as “extremely useful” (25%), there is clearly room for improvement: 42% of users rated it as only “very useful” and 28% as “moderately useful.” Suggestions for improvements included:

  • More in-depth YouTube videos breaking down the tool’s features with workflows
  • More detailed “how to” with practical examples
  • Improved documentation about functions and plugins, with a slightly more detailed explanation and a small example
  • Updates to the documentation to reflect the new versions and features

Testimonials

Finally, I wanted to share with you some of the testimonials that users wrote in the survey. We are humbled by the encouraging and positive words we read, and are excited to be making an impact on the DFIR field:

  • "I have to congratulate you and thank you for developing such an amazing tool. It’s the future of DFIR."
  • "Awesome product, can’t wait to use it in prod!"
  • "This is a game-changer for the DFIR industry. Keep up the great work."
  • "Keep the file system based backend, its simplicity makes chain of custody/court submissions possible."
  • "I thoroughly love Velociraptor. The team and community are absolutely fantastic. I would go as far as to say that Mike and Matthew Green are my favorite infosec gentlemen in the industry."
  • "Y’all are awesome. I feel like I was pretty critical, but that’s because this is an amazing software, and I want to see it continue to grow and improve."
  • "We have been deploying Velociraptor to client environments almost since it was released. Our DFIR business model is entirely centered around it and it works very well for us. It is a great solution that just keeps getting better and better."

Conclusions

This is our first Velociraptor community survey, and it has proven to be extremely useful. Since Velociraptor is a community-led, open-source project, we need an open feedback loop to our users. This helps us understand where things need improvement and which features should be prioritized.

At the same time, since Velociraptor is an open-source project, I hope this survey will inspire contributions from the community. We value all contributions, from code to documentation, testing, and bug reports.

Finally, for all of our US-based users, we hope to see you all in person this year at BlackHat 2023! Join us for an in-depth Velociraptor training and to geek out with VQL for 4 days, learning practical, actionable skills and supporting this open-source project.

Keep Digging!

Automating Qakbot Detection at Scale With Velociraptor

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/04/18/automating-qakbot-detection-at-scale-with/


By Matt Green, Principal Software Engineer

In this blog, you will learn a practical methodology to extract configuration data from recent Qakbot samples. I will provide some background on Qakbot, then walk through decode themes in an easy to visualize manner. Additionally, I’ll share a Velociraptor artifact to detect and automate the decode process at scale.

Qakbot, or QBot, is modular malware first observed in 2007 that has historically been known as a banking Trojan. It is used to steal credentials and financial or other endpoint data, and in recent years it has regularly acted as a loader for other malware, leading to hands-on-keyboard ransomware.

Malicious emails typically include a zipped attachment, an LNK file, JavaScript, documents, or an embedded executable. The example shown in this post was delivered by an email with an attached PDF file:

An example Qakbot infection chain

Qakbot has some notable defense evasion capabilities including:

  1. Checking for Windows Defender sandbox and terminating on discovery.
  2. Checking for the presence of running anti-virus or analysis tools, then modifying its later stage behavior for evasion.
  3. Dynamic corruption of payload on startup and rewrite on system shutdown.

Due to the commodity nature of Qakbot delivery, capabilities, and end game, it is worth extracting configuration from observed samples to scope the impact of a given campaign. Hunting enterprise-wide and finding a previously missed machine or discovering an ineffective control can prevent a domain-wide ransomware event or similar cyber attack.

Configuration

Qakbot has an RC4-encoded configuration located inside two resources of the unpacked payload binary. The decryption process has not changed significantly in recent times, apart from some minor key changes. It uses the SHA1 of a hard-coded key that can typically be extracted as an encoded string in the .data section of the payload binary. This key often remains static across campaigns, which can speed up analysis if we maintain a recent key list.

Current samples undergo two rounds of RC4 decryption with validation built in. The validation bytes are dropped from the data before the second round.

After the first round:

  • The first 20 bytes are a validation value, compared with the SHA1 of the remaining decoded data.
  • Bytes [20:40] are the key used for the second round of decoding.
  • The data to decode is bytes [40:] onward.

The same validation process occurs for the second round of decoded data:

  • Verification = data[:20]
  • DecodedData = data[20:]
First round of Qakbot decode and verification
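The two decode rounds described above can be sketched in Python. This is a simplified reimplementation of the published decode logic, not the Velociraptor artifact's VQL; RC4 is written out inline since the Python standard library does not provide it:

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for c in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decode_config(resource: bytes, key_string: bytes) -> bytes:
    """Two-round RC4 decode with SHA1 validation, as described above."""
    # Round 1: the key is the SHA1 of the hard-coded key string from .data
    round1 = rc4(hashlib.sha1(key_string).digest(), resource)
    if round1[:20] != hashlib.sha1(round1[20:]).digest():
        raise ValueError("round 1 validation failed")
    # Round 2: bytes [20:40] are the key, the data to decode starts at byte 40
    round2 = rc4(round1[20:40], round1[40:])
    if round2[:20] != hashlib.sha1(round2[20:]).digest():
        raise ValueError("round 2 validation failed")
    return round2[20:]
```

If either SHA1 check fails, the wrong key (or a wrong resource) was used, which makes the decode self-verifying.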

Campaign information is located inside the smaller resource where, after this decoding and verification process, data is clear text.


The larger resource stores Command and Control configuration. This is typically stored in netaddress format with varying separators. A common technique for finding the correct method is searching for common ports and separator patterns in the decoded data.

Easy to spot C2 patterns: port 443
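That port-and-separator search can be sketched as a small parser for one commonly observed layout (the 7-byte record shape — a 1-byte separator, a 4-byte IPv4 address, and a 2-byte big-endian port — and the port list are assumptions that vary by sample):

```python
import ipaddress
import struct

# Commonly observed Qakbot C2 ports (an assumption; varies by campaign)
COMMON_PORTS = {80, 443, 995, 2222, 32101}

def extract_c2(blob: bytes, record_size: int = 7) -> list[tuple[str, int]]:
    """Scan a decoded resource for separator + IPv4 + big-endian port records."""
    results = []
    for off in range(0, len(blob) - record_size + 1, record_size):
        rec = blob[off:off + record_size]
        ip = str(ipaddress.IPv4Address(rec[1:5]))  # 4 packed bytes -> dotted quad
        (port,) = struct.unpack(">H", rec[5:7])
        if port in COMMON_PORTS:
            results.append((ip, port))
    return results
```

If a candidate layout yields mostly well-known ports, the record size and separator were likely guessed correctly.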

Encoded strings

Qakbot stores blobs of XOR-encoded strings inside the .data section of its payload binary. The current methodology is to extract key and data blobs from the referenced key offset, which similarly is reused across samples.

Current samples start at offset 0x50 with an XOR key, followed by a 0x0000 separator before the encoded data. In recent samples, we have observed more than one string blob, and these occur in the same format after the separator.

Encoded strings .data

The next steps are splitting on the separators, decoding the expected key/data blob pairs, and dropping anything non-printable. Results are fairly obvious when decoding is successful, as Qakbot produces clean strings. I have typically seen two well-defined groups, with strings aligning to Qakbot capabilities.

Decoded strings: RC4 key highlighted‌‌
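That workflow — seek to the key offset, split on the separator, XOR-decode, keep printable results — is easy to sketch in Python (the 0x50 offset and 0x0000 separator are the values observed above and may change between builds):

```python
def decode_strings(blob: bytes, offset: int = 0x50) -> list[str]:
    """XOR-decode a Qakbot string blob from the .data section."""
    # The key runs from the offset up to a 0x0000 separator; encoded data follows
    region = blob[offset:]
    key, _, data = region.partition(b"\x00\x00")
    decoded = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    # Decoded strings are NUL-separated; keep only fully printable results
    return [s.decode() for s in decoded.split(b"\x00")
            if s and all(0x20 <= c < 0x7f for c in s)]
```

Clean, readable output (process names, registry paths, and so on) is a strong signal the offset and key were correct.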

Payload

Qakbot samples are typically packed and need execution or manual unpacking to retrieve the payload for analysis. It is very difficult to obtain this payload remotely at scale; in practice, the easiest way is to execute the sample in a VM or sandbox that enables extracting the payload with correct PE offsets.

When executed locally, Qakbot typically injects its payload into a Windows process; it can be detected with YARA by targeting the process for an unbacked section with `PAGE_EXECUTE_READWRITE` protections.

Below is an example of running the PE-sieve / Hollows Hunter tool from Hasherezade. This helpful tool enables detection of several types of process injection, and the dumping of injected sections with appropriately aligned headers. In this case, the injected process is `wermgr.exe`, but it is worth noting that the injected process may vary depending on the variant and process footprint.

Dumping Qakbot payload using pe-sieve

Automation at scale

Now that I have explained the decode process, it is time to enable both detection and decode automation in Velociraptor.

I have recently released Windows.Carving.Qakbot, which leverages the PE dump capability added in Velociraptor 0.6.8 to enable live memory analysis. The goal of the artifact was to automate my decoding workflow into a generic Qakbot parser and save time on a common analysis task. I also wanted an easy-to-update parser, so that additional keys or decode nuances can be added when changes are discovered.

Windows.Carving.Qakbot: parameters

This artifact uses Yara to detect an injected Qakbot payload, then attempts to parse the payload configuration and strings. Some of the features in the artifact cover changes observed in the past in the decryption process to allow a simplified extraction workflow:

  • Automatic PE extraction and offset alignment for memory detections.
  • StringOffset: the offset of the string xor key and encoded strings is reused regularly.
  • PE resource type: the RC4-encoded configuration is typically inside two resources; I have observed BITMAP and RCDATA types.
  • Unescaped key string: this field is typically reused over samples.
  • Type of encoding: single or double, double being the more recent.
  • Hidden TargetBytes parameter to enable piping payload in for analysis.
  • Worker threads: for bulk analysis / research use cases.
Windows.Carving.Qakbot: live decode 

Research

The Qakbot parser can also be leveraged for research and bulk analysis. One caveat: the content requires payload files that have been dumped with offsets intact. This typically requires some post-collection filtering or PE offset realignment, but it enables the Velociraptor notebook to manipulate post-processed data.

Some techniques I have used to bulk collect samples:

  • Sandbox with PE dumping features: API-based collection
  • VirusTotal search: crowdsourced_yara_rule:0083a00b09|win_qakbot_auto AND tag:pedll AND NOT tag:corrupt (note: this will collect some broken payloads)
Bulk collection: IPs seen across multiple campaign names and ports

Some findings from a small data set of ~60 samples:

  • Named campaigns are typically short-lived, lasting no longer than a few samples over a few days.
  • IP addresses are regularly reused and shared across campaigns.
  • The most prevalent campaigns are “BB” and “obama” prefixed.
  • Minor campaigns observed: “azd”, “tok” and “rds”, with only one or two observed payload samples each.

Strings analysis can also provide insights into sample behavior over time to assist analysis. A great example is the addition of new process names to the anti-analysis check list.

Bulk collection: Strings highlighting anti-analysis check additions over time

Conclusion

PE dumping is a useful capability that is not available even in expensive paid tools, and it enables advanced analysis at enterprise scale. For widespread threats like Qakbot, this kind of content can significantly improve response for blue teams, or even provide insights into threats when analyzed in bulk. In the coming months, we will be publishing a series of similar blog posts, offering a sneak peek at some of the types of memory analysis enabled by Velociraptor and incorporated into our training courses.

I would also like to thank Jakob Denlinger and James Dunne for their assistance in writing this post.

References

  1. Malpedia, QakBot
  2. Elastic, QBOT Malware Analysis
  3. Hasherezade, Hollows Hunter: https://github.com/hasherezade/hollows_hunter

Automating Qakbot decode at scale

Post Syndicated from Matthew Green original https://blog.rapid7.com/2023/04/14/automating-qakbot-decode/


This is a technical post covering a practical methodology to extract configuration data from recent Qakbot samples. In this blog, I will provide some background on Qakbot, then walk through the decode themes in an easy-to-visualize manner. I will then share a Velociraptor artifact to detect and automate the decode process at scale.

Qak!

Qakbot, or QBot, is a modular malware first observed in 2007 that has historically been known as a banking Trojan. Qbot is used to steal credentials, financial, or other endpoint data, and in recent years has regularly been a loader for other malware leading to hands-on-keyboard ransomware.

Typical delivery includes malicious emails with a zipped attachment, LNK, JavaScript, documents, or an embedded executable. The example shown in this post was delivered by an email with an attached PDF file:

An example Qakbot infection chain

Qakbot has some notable defense evasion capabilities including:

  1. Checking for Windows Defender sandbox and terminating on discovery.
  2. Checking for the presence of running anti-virus or analysis tools, then modifying its later stage behavior for evasion.
  3. Dynamic corruption of payload on startup and rewrite on system shutdown.

Due to the commodity nature of its delivery, capabilities, and endgame, it is worth extracting the configuration from observed samples to scope the impact of a given campaign. Hunting enterprise-wide and finding a previously missed machine, or discovering an ineffective control, can be the difference in preventing a domain-wide ransomware event, or a similarly bad day.

Configuration

Qakbot has an RC4 encoded configuration, located inside two resources of the unpacked payload binary. The decryption process has not changed significantly in recent times, aside from some minor key changes. It uses the SHA1 of a hard-coded key that can typically be extracted as an encoded string in the .data section of the payload binary. This key often remains static across campaigns, which can speed up analysis with the maintenance of a recent key list.

Current samples undergo two rounds of RC4 decryption with validation built in. The validation bytes are dropped from the data before the second round.

After the first round:

  • The first 20 bytes are for validation and are compared with the SHA1 of the remaining decoded data
  • Bytes [20:40] are the key used for the second round of decoding
  • The data to decode is bytes [40:] onwards
  • The same validation process occurs for the second round of decoded data: Verification = data[:20], DecodedData = data[20:]
First round of Qakbot decode and verification
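
As an illustration of the steps above, here is a minimal Python sketch of the two-round RC4 decode with SHA1 validation. The layout follows the description in this post; the key material in any real sample will differ:

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    # Plain RC4 (KSA + PRGA); encryption and decryption are the same operation
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decode_round(key: bytes, blob: bytes):
    # One round: RC4 decrypt, then compare the leading 20 bytes with the
    # SHA1 of the remaining decoded data
    decoded = rc4(key, blob)
    verification, rest = decoded[:20], decoded[20:]
    if hashlib.sha1(rest).digest() != verification:
        return None  # wrong key or corrupt data
    return rest

def decode_config(key_string: bytes, blob: bytes):
    # Round 1: the RC4 key is the SHA1 of the hard-coded key string
    rest = decode_round(hashlib.sha1(key_string).digest(), blob)
    if rest is None:
        return None
    # After stripping the validation bytes, [0:20] of the remainder is the
    # round-2 key (i.e. bytes [20:40] of the full buffer) and the rest is
    # the data for the second round, validated the same way
    return decode_round(rest[:20], rest[20:])
```

With a recent key list, `decode_config` can simply be attempted with each known key until validation succeeds.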

Campaign information is located inside the smaller resource; after this decoding and verification process, the data is clear text.

Decoded campaign information

The larger resource stores the command and control configuration. This is typically stored in netaddress format with varying separators. A common technique for finding the correct method is searching for common ports and separator patterns in the decoded data.

Easy to spot C2 patterns: port 443
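
As a sketch, that heuristic can be implemented as follows; the seven-byte record layout (flag byte, 4-byte IPv4, 2-byte big-endian port) and the port list are assumptions based on commonly reported Qakbot C2 tables, not a definitive format:

```python
import ipaddress
import struct

COMMON_C2_PORTS = {443, 465, 993, 995, 2222, 32101}

def try_parse_c2(blob: bytes, record_size: int = 7):
    """Parse fixed-size records and keep the layout only if ports look real."""
    hosts = []
    for i in range(0, len(blob) - record_size + 1, record_size):
        # Assumed record layout: [flag][4-byte IPv4][2-byte big-endian port]
        ip = str(ipaddress.IPv4Address(blob[i + 1:i + 5]))
        (port,) = struct.unpack(">H", blob[i + 5:i + 7])
        hosts.append((ip, port))
    # Score the parse: accept only if most ports are plausible C2 ports
    hits = sum(1 for _, port in hosts if port in COMMON_C2_PORTS)
    return hosts if hosts and hits / len(hosts) > 0.5 else None
```

In practice one would try several candidate record sizes and separators, keeping the parse that produces the highest proportion of plausible ports.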

Encoded strings

Qakbot stores blobs of XOR-encoded strings inside the .data section of its payload binary. The current methodology is to extract blobs of key and data from the referenced key offset, which similarly is reused across samples.

Current samples start at offset 0x50 with an XOR key, followed by a separator of 0x0000 before the encoded data. In recent samples I have observed more than one string blob, and these have occurred in the same format after the separator.

Encoded strings .data

The next steps are splitting on separators, decoding the expected blob pairs, and dropping anything non-printable. Results are fairly obvious when decoding is successful, as Qakbot produces clean strings. I have typically seen two well defined groups, with strings aligning to Qakbot capabilities.
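
A minimal Python sketch of this workflow (it assumes the XOR key itself contains no 0x0000 sequence, which held for the samples described here but is not guaranteed):

```python
import string

PRINTABLE = set(string.printable.encode())

def xor_decode(key: bytes, data: bytes) -> bytes:
    # Repeating-key XOR
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def extract_strings(blob: bytes):
    # Split the key from the data on the 0x0000 separator
    key, _, data = blob.partition(b"\x00\x00")
    decoded = xor_decode(key, data)
    # Decoded strings are null-delimited; keep only fully printable entries
    return [
        s.decode()
        for s in decoded.split(b"\x00")
        if s and all(c in PRINTABLE for c in s)
    ]
```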

Decoded strings: RC4 key highlighted

Payload

Qakbot samples are typically packed and need execution or manual unpacking to retrieve the payload for analysis. It is very difficult to obtain this payload remotely at scale; in practice, the easiest way is to execute the sample in a VM or sandbox that enables extracting the payload with correct PE offsets.

When executed locally, Qakbot typically injects its payload into a Windows process, and it can be detected with YARA by targeting the process for an unbacked section with PAGE_EXECUTE_READWRITE protection.

Below is an example of running the PE-Sieve / Hollows Hunter tool from Hasherezade. This helpful tool enables detection of several types of process injection, and the dumping of injected sections with appropriately aligned headers. In this case, the injected process is wermgr.exe, but it's worth noting that, depending on variant and process footprint, your injected process may vary.

Dumping Qakbot payload using pe-sieve

Doing it at scale

Now that I have explained the decode process, it is time to enable both detection and decode automation in Velociraptor.

I have recently released Windows.Carving.Qakbot which leverages a PE dump capability in Velociraptor 0.6.8 to enable live memory analysis. The goal of the artifact was to automate my decoding workflow for a generic Qakbot parser and save time for a common analysis. I also wanted an easy to update parser to add additional keys or decode nuances when changes are discovered.

Windows.Carving.Qakbot: parameters

This artifact uses YARA to detect an injected Qakbot payload, then attempts to parse the payload configuration and strings. Some of the features in the artifact cover changes previously observed in the decryption process, allowing a simplified extraction workflow:

  • Automatic PE extraction and offset alignment for memory detections.
  • StringOffset: the offset of the string xor key and encoded strings is reused regularly.
  • PE resource type: the RC4 encoded configuration is typically inside two resources; I’ve observed BITMAP and RCDATA.
  • Unescaped key string: this field is typically reused over samples.
  • Type of encoding: single or double, double being the more recent.
  • Hidden TargetBytes parameter to enable piping payload in for analysis.
  • Worker threads: for bulk analysis / research use cases.
Windows.Carving.Qakbot: live decode

Research

The Qakbot parser can also be leveraged for research and bulk analysis. One caveat: the content requires payload files that have been dumped with offsets intact. This typically requires some post-collection filtering or PE offset realignment, but it enables the Velociraptor notebook to manipulate post-processed data.

Some techniques I have used to bulk collect samples:

  • Sandbox with PE dumping features: API-based collection
  • Virustotal search: crowdsourced_yara_rule:0083a00b09|win_qakbot_auto AND tag:pedll AND NOT tag:corrupt (note: this will collect some broken payloads)
Bulk collection: IPs seen across multiple campaign names and ports

Some findings from a small data set of ~60 samples:

  • Named campaigns are typically short-lived, lasting no longer than a few samples over a few days.
  • IP addresses are regularly reused and shared across campaigns.
  • The most prevalent campaigns are “BB” and “obama” prefixed.
  • Minor campaigns observed: “azd”, “tok” and “rds”, with only one or two observed payload samples each.

Strings analysis can also provide insights into sample behavior over time to assist analysis. A great example is the addition of new process names to the anti-analysis check list.

Bulk collection: Strings highlighting anti-analysis check additions over time

Conclusion

In this post I have explained the Qakbot decoding process and introduced an exciting new feature in Velociraptor. PE dumping is a useful capability that is not available even in expensive paid tools, and it enables advanced analysis at enterprise scale. For widespread threats like Qakbot, this kind of content can significantly improve response for the blue team, or even provide insights into threats when analyzed in bulk. In the coming months the Velociraptor team will be publishing a series of similar blog posts, offering a sneak peek at some of the types of memory analysis enabled by Velociraptor and incorporated into our training courses.

I would also like to thank some of Rapid7’s great analysts, Jakob Denlinger and James Dunne, for bouncing around some ideas when writing this post.

References

  1. Malpedia, Qakbot
  2. Elastic, QBOT Malware Analysis
  3. Hasherezade, Hollows Hunter
  4. Windows.Carving.Qakbot

Velociraptor Version 0.6.8 Available Now

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2023/03/30/velociraptor-version-0-6-8-available-now/

A New Client-Server Communication Protocol, VFS GUI, and More Performance Upgrades Make This The Fastest and Most Scalable Velociraptor Yet


Rapid7 is excited to announce the release of version 0.6.8 of Velociraptor—an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. This release has been in development and testing for several months and features significant contributions from our community. We are thrilled to share its powerful new features and improvements here today.

Performance Improvements

A big theme of the 0.6.8 release was performance improvement: making Velociraptor faster, more efficient, and more scalable (even more so than it currently is!).

New Client-Server Communication Protocol

When collecting artifacts from endpoints, Velociraptor maintains a collection state (e.g. how many bytes were transferred, how many rows were returned, whether the collection was successful, etc.). Previously, tracking the collection was the task of the server, but this extra processing limited the total number of collections it could process.

In the 0.6.8 release, a new communication protocol was added to offload a lot of the collection tracking to the client itself. This reduces the amount of work on the server and allows more collections to be processed at the same time.

To maintain compatibility with older clients, the server continues to use the older communication protocol with them, but the greatest improvement in performance will be achieved once the newer clients are deployed.

New Virtual File System GUI

The VFS feature in Velociraptor allows users to interactively inspect directories and files on the endpoint in a familiar tree-style user interface. The previous VFS view would store the entire directory listing in a single table for each directory. For very large directories like C:\Windows or C:\Windows\System32 (which typically contain thousands of files), this would strain the browser, leading to an unusable UI.

In the latest release, the VFS GUI uses the familiar paged table and syncs the directory listing in a more efficient way. This improves performance significantly: for example, it is now possible and reasonable to perform a recursive directory sync on C:\Windows; on my system this syncs over 250k files in less than 90 seconds.

Inspecting a large directory is faster with paging tables.


Since the VFS now uses the familiar paging table UI, it is also possible to filter and sort on any column using that same UI.

Faster Export Functionality

Velociraptor hunts and collections can be exported to a ZIP file for easy consumption in other tools. The 0.6.8 release improved the export code to make it much faster. Additionally, the GUI was improved to show how many files were exported into the zip, and other statistics.

Exporting collections is much faster!


Tracing Capability On Client Collections

We often get questions about what happened to a collection that seems to be hung. It is difficult to know why a collection is unresponsive or stopped – it could mean the client was killed for some reason (e.g. due to excessive memory use or a timeout).

Previously, the only way to gather client-side information was to collect a Generic.Client.Profile collection. This required running it at just the right time and did not guarantee helpful insight into what the query and the client binary were doing during the operation in question.

In the latest release, it is now possible to specify a trace on any collection to automatically collect client-side state as the collection progresses.

Enabling trace on every collection increases visibility


Trace files contain debugging information


VQL Improvement – Disk Based Materialize Operator

The VQL LET ... <= operator is called the materializing LET operator because it expands the following query into a memory array which can be accessed cheaply multiple times.

While this is useful for small queries, it has proved dangerous in some cases, because users inadvertently attempted to materialize a very large query (e.g. a large glob() operation) dramatically increasing memory use. For example, the following query could cause problems in earlier versions.

LET X <= SELECT * FROM glob(globs=specs.Glob, accessor=Accessor)

In the latest release, the VQL engine was improved to support a temp-file-based materialize operator. If the materialized query exceeds a reasonable size (by default 1000 rows), the engine will automatically switch away from memory-based storage to file-backed storage. Although file-based storage is slower, memory usage is more controlled.

Ideally the VQL is rewritten to avoid this type of operation, but sometimes it is unavoidable, and in this case, file based materialize operations are essential to maintain stability and acceptable memory consumption.
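
The spill-to-disk behaviour can be illustrated with a short Python sketch; the threshold and pickle-based serialization here are illustrative, not Velociraptor's actual implementation:

```python
import pickle
import tempfile

class Materialized:
    """Row store that spills to a temp file past a threshold (sketch only)."""

    def __init__(self, rows, max_in_memory=1000):
        self.rows, self.spill = [], None
        for row in rows:
            if self.spill is not None:
                pickle.dump(row, self.spill)  # already file-backed
            else:
                self.rows.append(row)
                if len(self.rows) > max_in_memory:
                    # Threshold exceeded: switch to file-backed storage
                    self.spill = tempfile.TemporaryFile()
                    for buffered in self.rows:
                        pickle.dump(buffered, self.spill)
                    self.rows = []

    def __iter__(self):
        if self.spill is None:
            yield from self.rows
        else:
            self.spill.seek(0)  # re-iterable: replay rows from disk
            while True:
                try:
                    yield pickle.load(self.spill)
                except EOFError:
                    return
```

Small results stay in memory and iterate cheaply; large results trade speed for bounded memory, which is exactly the stability trade-off described above.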

New MSI Deployment Option

On Windows the recommended way to install Velociraptor is via an MSI package. The MSI package allows the software to be properly installed and uninstalled and it is also compatible with standard Windows software management procedures.

Previously, however, building the MSI required using the WIX toolkit – a Windows-only MSI builder which is difficult to run on other platforms. Operationally, building with WIX complicates deployment procedures and requires using a complex release platform.

In the 0.6.8 release, a new method for repacking the official MSI package is now recommended. This can be done on any operating system and does not require WIX to be installed. Simply embed the client configuration file in the officially distributed MSI packages using the following command:

velociraptor-v0.6.8-rc1-linux-amd64 config repack --msi velociraptor-v0.6.8-rc1-windows-amd64.msi client.config.yaml output.msi

Repacking an MSI for Windows distribution


Conclusion

If you’re interested in any of these new features, we welcome you to take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.
As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:

Velociraptor Version 0.6.7: Better Offline Collection, Encryption, and an Improved NTFS Parser Dig Deeper Than Ever

Post Syndicated from Mike Cohen original https://blog.rapid7.com/2022/12/02/velociraptor-version-0-6-7-better-offline-collection-encryption-and-an-improved-ntfs-parser-dig-deeper-than-ever/


By Mike Cohen and Carlos Canto

Rapid7 is excited to announce the release of version 0.6.7 of Velociraptor – an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. This release has been in development and testing for several months and features significant contributions from our community.  We are thrilled to share its powerful new features and improvements.

NTFS Parser changes

In this release, the NTFS parser was improved significantly. The main areas of development focused on better support for NTFS compressed and sparse files as well as improved path reconstruction.

In NTFS, there is a Master File Table (MFT) containing a record for each file on the filesystem. The MFT entry describes a file by attaching several attributes to it. Some of these are $FILE_NAME attributes representing the names of the file.

In NTFS, a file may have multiple names. Normally, files have a long file name and a short filename. Each $FILE_NAME record also contains a reference to the parent MFT entry of its directory.

When Velociraptor parses the MFT, it attempts to reconstruct the full path of each entry by traversing the parent MFT entry, recovering its name, and so on. Previously, Velociraptor used one of the $FILE_NAME records (usually the long file name) to determine the parent MFT entry. However, this is not strictly correct, as each $FILE_NAME record can point to a different parent directory. This surprising property of NTFS is called hard links.

You can play with this property using the fsutil program. The following adds a hard link, in a different directory, to the file at c:\Users\Administrator\downloads\X.txt:

C:\> fsutil hardlink create c:\Users\Administrator\Y.txt c:\Users\Administrator\downloads\X.txt
Hardlink created for c:\Users\Administrator\Y.txt <<===>> c:\Users\Administrator\downloads\X.txt

The same file in NTFS can exist in multiple directories at the same time by use of hard links. The filesystem simply adds a new $FILE_NAME entry to the MFT entry for the file pointing at another parent directory MFT entry.

Therefore, when scanning the MFT, Velociraptor needs to report all possible directories in which each MFT entry can exist – there can be many such directories, since each directory can have its own hard links.

As a rule, an MFT Entry can represent many files in different directories!

An example of the notepad MFT entry with its many hard links

Reassembling paths from MFT entries

When Velociraptor attempts to reassemble the path from an unallocated MFT entry, it might encounter an error where the parent MFT entry indicated has already been used for some other file or directory.

In previous versions, Velociraptor simply reported these parents as potential parts of the full path, since – for unallocated entries – the path reconstruction is best effort. This led to confusion among users with often nonsensical paths reported for unallocated entries.

In the latest release, Velociraptor is stricter in reporting parents of unallocated MFT entries, also ensuring that the MFT sequence numbers match. If the parent MFT entry’s sequence number does not match, Velociraptor’s path reconstruction indicates this with an error path.

Unallocated MFT entries may have errors reconstructing a full path

In the above example, the parent’s MFT entry has a sequence number of 5, but we need a sequence number of 4 to match it. Therefore, the parent’s MFT entry is rejected and instead we report the error as the path.
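
A simplified sketch of this strict parent check; the record fields and the <Err> placeholder are illustrative rather than Velociraptor's actual structures:

```python
from dataclasses import dataclass

@dataclass
class MftEntry:
    # Simplified, hypothetical view of an MFT record
    name: str        # name from a $FILE_NAME attribute
    parent_ref: int  # parent MFT entry number recorded in $FILE_NAME
    parent_seq: int  # parent sequence number recorded in $FILE_NAME
    seq: int         # this entry's own sequence number

ROOT = 5  # MFT entry 5 is the root directory

def reconstruct_path(mft, entry_num):
    parts = []
    entry = mft[entry_num]
    while entry_num != ROOT:
        parts.append(entry.name)
        parent = mft.get(entry.parent_ref)
        # Strict check: the parent must still carry the sequence number this
        # entry recorded; otherwise the parent slot was reused and the
        # reconstructed path would be nonsense.
        if parent is None or parent.seq != entry.parent_seq:
            parts.append("<Err>")
            break
        entry_num, entry = entry.parent_ref, parent
    return "\\".join(reversed(parts))
```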

The offline collection and encryption

Velociraptor’s offline collector is a pre-configured Velociraptor binary, which is designed to be a single shot acquisition tool. You can build an Offline Collector by following the documentation. The Offline Collector does not require access to the server, instead simply collecting the specified artifacts into a zip file (which can subsequently be uploaded to the cloud or simply shared with the DFIR experts for further analysis).

Previously, Velociraptor only supported encrypting the zip archive using a password. This is problematic because the password had to be embedded inside the collector configuration and so could be viewed by anyone with access to the binary.

In the latest release, Velociraptor supports asymmetric encryption to protect the acquisition zip file. There are two asymmetric schemes: X509 encryption and PGP encryption. Having asymmetric encryption improves security greatly because only the public key needs to be included in the collector configuration. Dumping the configuration from the collection is not sufficient to be able to decrypt the collected data – the corresponding private key is also required!

This is extremely important for forensic collections since these will often contain sensitive and PII information.

Using this new feature is also extremely easy: One simply selects the X509 encryption scheme during the configuration of the offline collector in the GUI.

Configuring the offline collector for encryption

You can specify any X509 certificate here, but if you do not specify any, Velociraptor will use the server’s X509 certificate instead.

Velociraptor will generate a random password to encrypt the zip file, and then encrypt this password using the X509 certificate.

The resulting encrypted container

Since the ZIP standard does not encrypt file names, Velociraptor embeds a second zip, called data.zip, inside the container. The image above shows the encrypted data.zip file and the metadata file that describes the encrypted password.

Because the password used to encrypt the container is not known up front and needs to be derived from the X509 private key, we must use Velociraptor itself to decrypt the container (i.e. we cannot use something like 7zip).
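
The container nesting can be mocked with Python's zipfile module; the member names and metadata below are invented for illustration, and in a real container data.zip is of course encrypted:

```python
import io
import zipfile

# Inner archive: holds the real collection (this is what Velociraptor encrypts)
inner = io.BytesIO()
with zipfile.ZipFile(inner, "w") as z:
    z.writestr("results/collection.json", "{}")  # invented member name

# Outer container: only data.zip plus metadata are visible in its listing,
# so the collected file names are not exposed by the unencrypted outer zip
outer = io.BytesIO()
with zipfile.ZipFile(outer, "w") as z:
    z.writestr("data.zip", inner.getvalue())
    z.writestr("metadata.json", '{"scheme": "x509"}')  # invented metadata

with zipfile.ZipFile(outer) as z:
    visible = z.namelist()  # only data.zip and metadata.json are listed
```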

Decrypting encrypted containers with the server’s private key

Importing offline collections

Originally, the offline collector feature was designed as a way to collect the exact same VQL artifacts that Velociraptor allows in the usual client-server model in situations where installing the Velociraptor client was not possible. The same artifacts can be collected into a zip file.

As Velociraptor’s post processing capabilities improved (using notebooks and server side VQL to enrich the analysis), people naturally wanted to use Velociraptor to post process offline collections too.

Previously, Velociraptor did have the Server.Utils.ImportCollection artifact to allow an offline collection to be imported into Velociraptor. But this did not work well because the offline collector simply did not include enough information in the zip file to sufficiently emulate the GUI’s collection views.

In the recent release, the offline collector was updated to add more detailed information to the collection zip, allowing it to be easily imported.

Exported zip archives now contain more information

Exporting and importing collections

Velociraptor has previously had the ability to export collections and hunts from the GUI directly, mainly so they can be processed by external tools.

But there was no way to import those collections back into the GUI. We just never imagined this would be a useful feature!

Recently, Eric Capuano from ReconInfosec shared some data from an exercise using Velociraptor. People wanted to import it into their own Velociraptor installations so they could run notebook post-processing on the data themselves.

The OpenSoc challenge: https://twitter.com/eric_capuano/status/1559190056736378880

Our community has spoken though! This is a useful feature!

In the latest release, files exported from the GUI use the same container format as the offline collector, and therefore can be seamlessly imported into a different Velociraptor installation.

Handling of sparse files

When collecting files from the endpoint using the NTFS accessor, we quite often encounter sparse files. These are files with large unallocated holes in them. The most extreme sparse file is the USN Journal.

Acquiring the USN journal

In the above example, the USN journal size is reported to be 1.3 GB but in reality only about 40 MB is occupied on disk. When collecting this file, Velociraptor only collects the real data and marks the file as sparse. The zip file will contain an index file which specifies how to reassemble the file into its original form.

While Velociraptor stores the file internally in an efficient way, when exporting the file for use by other tools, they might expect the file to be properly padded out (so that file offsets are correct).

Velociraptor now allows the user the choice of exporting an individual file in a padded form (with sparse regions padded). This can also be applied to the entire zip export in the GUI.

For very large sparse files, it makes no sense to pad out so much data – some USN journal files are in the TB range. So, Velociraptor implements a limit on the padding of very sparse files.
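
A sketch of padded export with a cap on sparse regions; the run representation and cap size here are illustrative, not Velociraptor's internal format:

```python
def pad_sparse(runs, total_size, max_pad=64 * 1024 * 1024):
    """Reassemble a sparse file from (offset, data) runs, zero-padding holes.

    Holes (and the tail) are padded with zeros so file offsets line up for
    other tools, but each padded region is capped so extremely sparse files
    (e.g. multi-TB USN journals) do not explode in size.
    """
    out = bytearray()
    cursor = 0
    for offset, data in sorted(runs):
        out += b"\x00" * min(offset - cursor, max_pad)  # pad the hole
        out += data
        cursor = offset + len(data)
    out += b"\x00" * min(total_size - cursor, max_pad)  # pad the tail
    return bytes(out)
```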

Parsing user registry hives

Many Velociraptor artifacts simply parse keys and values from the registry to detect indicators. Velociraptor offers two methods of accessing the registry:

  1. Using the Windows APIs
  2. Employing the built-in raw registry parser to parse the hive files

While the first method is very intuitive and easy to use, it is often problematic. Using the APIs requires the user hive to be mounted, and normally a user hive is only mounted when the user logs in. Therefore, querying registry keys in user hives will only cover users that are logged in at the time of the check, and will miss all other users (whose hives are not mounted).

To illustrate this problem consider the Windows.Registry.Sysinternals.Eulacheck artifact which checks the keys in HKEY_USERS\*\Software\Sysinternals\* for the Sysinternals EULA value.

In previous versions of Velociraptor, this artifact simply used the Windows API to check these keys and values, and completely missed any users that were not logged in.

While this issue is known, users previously had to employ complex VQL to customize the query so it could search the raw NTUSER.DAT files in each user registry. This is more difficult to maintain since it requires two separate types of artifact for the same indicator.

With the advent of Velociraptor’s dead disk capabilities, it is possible to run a VQL query in a “virtualized” context consisting of a remapped environment. The end result is that the same VQL query can be used to run on raw registry hives. It is now trivial to apply the same generic registry artifact to a raw registry parse.

Remapping the raw registry hive to a regular registry artifact

All that is required to add raw registry capabilities to any registry artifact is:

  1. Import the Windows.Registry.NTUser artifact
  2. Use the MapRawRegistryHives helper function from that artifact to set up the mappings automatically
  3. Call the original registry query using the registry accessor; in the background, this will be remapped to the raw registry accessor automatically

Conclusion

If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:

What’s New in InsightIDR: Q3 2022 in Review

Post Syndicated from KJ McCann original https://blog.rapid7.com/2022/10/05/whats-new-in-insightidr-q3-2022-in-review/


This Q3 2022 recap post takes a look at some of the latest investments we’ve made to InsightIDR to drive detection and response forward for your organization.

360-degree XDR and attack surface coverage with Rapid7

The Rapid7 XDR suite — flagship InsightIDR, alongside InsightConnect (SOAR) and Threat Command (Threat Intel) — unifies detection and response coverage across both your internal and external attack surface. Customers detect threats earlier and respond more quickly, shrinking the window for attackers to succeed.

With Threat Command alerts now ingested directly into InsightIDR, you receive a more holistic picture of your threat landscape, beyond the traditional network perimeter. By unifying these detections and related workflows in one place, customers can:

  • Manage and tune external Threat Command detections from InsightIDR console
  • Investigate external threats alongside context and detections of their broader internal environment
  • Activate automated response workflows for Threat Command alerts – powered by InsightConnect – from InsightIDR to extinguish threats faster

Rapid7 products have helped us close the gap on detecting and resolving security incidents to the greatest effect. This has resulted in a safer environment for our workloads and has created a culture of secure business practices.

— Manager, Security or IT, Medium Enterprise Computer Software Company via Techvalidate

Eliminate manual tasks with expanded automation

Reduce mean time to respond (MTTR) to threats and increase confidence in your response actions with the expanded integration between InsightConnect and InsightIDR. Easily create and map InsightConnect workflows to any attack behavior analytics (ABA), user behavior analytics (UBA), or custom detection rule, so tailored response actions can be initiated as soon as an alert fires. Quarantine assets, enrich investigations with more evidence, kick off ticketing workflows, and more – all with just a click.

Preview the impact of exceptions on detection rules

Building on our intuitive detection tuning experience, it’s now easier to anticipate how exceptions will impact your alert volume. Preview exceptions in InsightIDR to confirm your logic and ensure that tuning will yield relevant, high-fidelity alerts. Exception previews allow you to confidently refine the behavior of ABA detection rules for specific users, assets, IP addresses, and more to fit your unique environments and circumstances.


Streamline investigations and collaboration with comments and attachments

With teams more distributed than ever, the ability to collaborate virtually around investigations is paramount. Our overhauled notes system now empowers your team to create comments and upload/download rich attachments through Investigation Details in InsightIDR, as well as through the API. This new capability ensures your team has continuity, documentation, and all relevant information at their fingertips as different analysts collaborate on an investigation.

[Screenshot: Quickly and easily add comments and upload and download attachments to add relevant context gathered from other tools and stay connected to your team during an investigation.]

New vCenter deployment option for the Insight Network Sensor

As a security practitioner looking to minimize your attack surface, you need to know the types of data on your network and how much of it is moving: two critical areas that could indicate malicious activity in your environment.

With our new vCenter deployment option, you can now use distributed port mirroring to monitor internal east-west traffic and traffic across multiple ESX servers using just a single virtual Insight Network Sensor. When using the vCenter deployment method, choose the GRETAP option via the sensor management page.

First annual VeloCON brings DFIR experts from around the globe together

Rapid7 brought DFIR experts and enthusiasts from around the world together this September to share experiences in using and developing Velociraptor to address the needs of the wider DFIR community.


Velociraptor’s advanced open-source endpoint monitoring, digital forensics, and cyber response platform gives you the ability to respond more effectively to a wide range of digital forensic and cyber incident response investigations and data breaches.

Watch VeloCON on-demand to see security experts delve into new ideas, workflows, and features that will take Velociraptor to the next level of endpoint management, detection, and response.

A growing library of actionable detections

In Q3, we added 385 new ABA detection rules to InsightIDR. See them in-product or visit the Detection Library for actionable descriptions and recommendations.

Stay tuned!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in detection and response at Rapid7.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Velociraptor Version 0.6.6: Multi-Tenant Mode and More Let You Dig Deeper at Scale Like Never Before

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2022/10/04/velociraptor-version-0-6-6-multi-tenant-mode-and-more-let-you-dig-deeper-at-scale-like-never-before/

Velociraptor Version 0.6.6: Multi-Tenant Mode and More Let You Dig Deeper at Scale Like Never Before

Rapid7 is excited to announce the release of version 0.6.6 of Velociraptor – an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. After several months of development and testing, we are excited to share its powerful new features and improvements.

Multi-tenant mode

The largest improvement in the 0.6.6 release by far is the introduction of organizational division within Velociraptor. Velociraptor is now a fully multi-tenanted application. Each organization is like a completely different Velociraptor installation, with unique hunts, notebooks, and clients. That means:

  1. Organizations can be created and deleted easily with no overheads.
  2. Users can seamlessly switch between organizations using the graphic user interface (GUI).
  3. Operations like hunting and post processing can occur across organizations.

When looking at the latest Velociraptor GUI you might notice the organizations selector in the User Setting page.

[Screenshot: The latest User Settings page]

This allows the user to switch between the different organizations they belong to.

Multi-tenanted example

Let’s go through a quick example of how to create a new organization and use this feature in practice.

Multi-tenancy is simply a layer of abstraction in the GUI separating Velociraptor objects (such as clients, hunts, notebooks, etc.) into different organizational units.

You do not need to do anything specific to prepare for a multi-tenant deployment. Every Velociraptor deployment can create a new organization at any time without affecting the current install base at all.

By default all Velociraptor installs (including upgraded ones) have a root organization which contains their current clients, hunts, notebooks, etc. (You can see this in the screenshot above.) If you choose to not use the multi-tenant feature, your Velociraptor install will continue working with the root organization without change.

Suppose a new customer is onboarded, but they do not have a large enough install base to warrant a new cloud deployment (with the associated infrastructure costs). We want to create a new organization for this customer in the current Velociraptor deployment.

Creating a new organization

To create a new organization, we simply run the Server.Orgs.NewOrg server artifact from the Server Artifacts screen.

[Screenshot: Creating a new organization]

All we need to do is give the organization a name.

[Screenshot: New organization is created with a new OrgId and an Admin User]

Velociraptor uses the OrgId internally to refer to the organization, but the organization name is used in the GUI to select the different organizations. The new organization is created with the current user being the new administrator of this org.

Deploying clients to the new organization

Since all Velociraptor agents connect to the same server, there has to be a way for the server to identify which organization each client belongs in. This is determined by the unique nonce inside the client’s configuration file. Therefore, each organization has a unique client configuration that should be deployed to that organization.
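
Concretely, the organization membership travels in the client configuration file. A minimal sketch of the relevant fields follows — the values are placeholders, and a real file also contains the CA certificate and other settings:

```yaml
Client:
  server_urls:
    - https://velociraptor.example.com:8000/   # same server for all orgs
  nonce: "aBcD1234ExampleOnly="                # unique per organization;
                                               # tells the server which org
                                               # this client belongs to
```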

We will list all the organizations on the server using the Server.Orgs.ListOrgs artifact. Note that we are checking the AlsoDownloadConfigFiles parameter to receive the relevant configuration files.

[Screenshot: Listing all the organizations on the server]

The artifact also uploads the configuration files.

[Screenshot: Viewing the organization’s configuration files]

Now, we go through the usual deployment process with these configuration files and prepare MSI, RPM, or Deb packages as normal.

Switching between organizations

We can now switch between organizations using the organization selector.

[Screenshot: Switching between orgs]

Now the interface is inside the new organization.

[Screenshot: Viewing an organization]

Note the organization name is shown in the user tile, and client IDs have the org ID appended to them to remind us that the client exists within the org.

The new organization is functionally equivalent to a brand-new deployed server! It has a clean data store with new hunts, clients, notebooks, etc. Any server artifacts will run on this organization only, and server monitoring queries will also only apply to this organization.

Adding other users to the new organization

By default, the user which created the organization is given the administrator role within that organization. Users can be assigned arbitrary roles within the organization – so, for example, a user may be an administrator in one organization but a reader in another organization.

You can add new users or change a user’s roles using the Server.Utils.AddUser artifact. When using basic authentication, this artifact will create a user with a random password. The password will then be stored in the server’s metadata, where it can be shared with the user. We normally recommend that Velociraptor be used with single sign-on (SSO), such as OAuth2 or SAML, rather than passwords to manage access.

[Screenshot: Adding a new user into the org]

View the user’s password in the server metadata screen. (You can remove this entry when done with it or ask the user to change their password.)

[Screenshot: View the new user password in the server metadata screen]

You can view all users in all orgs by collecting the Server.Utils.ListUsers artifact within the root org context.

[Screenshot: Viewing all the users on the system]

Although Velociraptor respects the assigned roles of users within an organization, at this stage this should not be considered an adequate security control. This is because there are obvious escalation paths between roles on the same server. For example, currently an administrator role by design has the ability to write arbitrary files on the server and run arbitrary commands (primarily this functionality allows for post processing flows with external tools).

This is currently also the case in different organizations, so an organization administrator can easily add themselves to another organization (or indeed to the root organization) or change their own role.

Velociraptor is not designed to contain untrusted users to their own organization unit at this stage – instead, it gives administrators flexibility and power.

GUI improvements

The 0.6.6 release introduces a number of other GUI improvements.

Updating user’s passwords

Usually Velociraptor is deployed in production using SSO such as Google’s OAuth2, and in this case, users manage their passwords using the provider’s own infrastructure.

However, it is sometimes convenient to deploy Velociraptor in Basic authentication mode (for example, for on-premises or air-gapped deployment). Velociraptor now lets users change their own passwords within the GUI.

[Screenshot: Users may update their passwords in the GUI]

Allow notebook GUI to set notebooks to public

Previously, notebooks could be shared with specific other users, but this proved unwieldy for larger installs with many users. In this release, Velociraptor allows a notebook to be made public – this means the notebook will be shared with all users within the org.

[Screenshot: Sharing a notebook with all users]

More improvements to the process tracker

The experimental process tracker is described in more detail here, but you can already begin using it by enabling the Windows.Events.TrackProcessesBasic client event artifact and using artifacts such as Generic.System.Pstree, Windows.System.Pslist, and many others.

Context menu

A new context menu is now available to allow sending any table cell data to an external service.

[Screenshot: Sending a cell’s content to an external service]

This allows for quick lookups using VirusTotal or a quick CyberChef analysis. You can also add your own “send to” items in the configuration files.

Conclusion

If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:


VeloCON 2022: Digging Deeper Together!

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2022/09/08/velocon-2022-digging-deeper-together/

VeloCON 2022: Digging Deeper Together!

September 15, 2022  |  Live at 9 am EDT  |  Virtual and Free


Join the open-source digital forensics and incident response (DFIR) community for a day-long, virtual summit as we DIG DEEPER TOGETHER!

Have you ever wanted to share your passion and interest in Velociraptor with the rest of the community? VeloCON is your chance! Come together with other DFIR experts and enthusiasts from around the world on September 15th as we delve into new ideas, workflows, and features that will take Velociraptor to the next level of endpoint management, detection, and response.

The first annual VeloCON summit will be held Thursday Sept 15th, 2022 at 9 am EDT. It is a 1-day event focused on the Velociraptor community – a forum to share experiences in using and developing Velociraptor to address the needs of the wider DFIR community. This year, the conference will be online and completely free! User-created presentations will be streamed live via Zoom webinar and on the Velociraptor YouTube channel, and will be archived on our Velociraptor website.

Registration is completely free. Here is the speaker list and agenda at a glance:

[Image: VeloCON 2022 speaker list and agenda]

We look forward to seeing you at VeloCON. If you can’t make the event live, be sure to catch a replay of the event, which we’ll have posted to our website and YouTube channel.

Register for VeloCON today! Learn more about Velociraptor by visiting any of our web and social media channels below:


CVE-2022-35629..35632 Velociraptor Multiple Vulnerabilities (FIXED)

Post Syndicated from Mike Cohen original https://blog.rapid7.com/2022/07/26/cve-2022-35629-35632-velociraptor-multiple-vulnerabilities-fixed/

CVE-2022-35629..35632 Velociraptor Multiple Vulnerabilities (FIXED)

This advisory covers a number of issues identified in Velociraptor and disclosed by a security code review performed by Tim Goddard from CyberCX. We also thank Rhys Jenkins for working with the Velociraptor team to identify and rectify these issues. All of these identified issues have been fixed as of Version 0.6.5-2, released July 26, 2022.

CVE-2022-35629: Velociraptor client ID spoofing

Velociraptor uses client IDs to identify each client uniquely. The client IDs are derived from the client’s own cryptographic key and so usually require this key to be compromised in order to spoof another client.

Due to a bug in the handling of the communication between the client and server, it was possible for one client, already registered with their own client ID, to send messages to the server claiming to come from another client ID. This may allow a malicious client to attribute messages to another victim client ID (for example, claiming the other client contained some indicator or other data).

The impact of this issue is low because a successful exploitation would require:

  1. The malicious client to identify a specific host’s client ID – since client IDs are random, it is unlikely that an attacker could guess a valid client ID. Client IDs are also not present in network communications, so without access to the Velociraptor server, or indeed the host’s Velociraptor client writeback file, it is difficult to discover the client ID.
  2. Each collection of new artifacts from the client contains a unique random “flow ID.” In order to insert new data into a valid collection, the malicious client will need to guess the flow ID for a valid current flow. Therefore, this issue is most likely to affect client event monitoring feeds, which do not contain random flow IDs.

CVE-2022-35630: Unsafe HTML injection in artifact collection report

Velociraptor allows the user to export a “collection report” in HTML. This is a standalone HTML file containing a summary of the collection. The server will generate the HTML file, and the user’s browser will download it. Users then open the HTML file from their local disk.

A cross-site scripting (XSS) issue in generating this report made it possible for malicious clients to inject JavaScript code into the static HTML file.

The impact of this issue is considered low because the file is served locally (i.e. from a file:// URL) and so does not have access to server cookies or other information (although it may facilitate phishing attacks). This feature is also not used very often.

CVE-2022-35631: Filesystem race on temporary files

The Velociraptor client uses a local buffer file to store data it is unable to deliver to the server quickly enough. Although the file is created with restricted permissions, the filename is predictable (and stored in the client’s configuration file).

On MacOS and Linux, it may be possible to perform a symlink attack by replacing this predictable file name with a symlink to another file and have the Velociraptor client overwrite the other file.

This issue can be mitigated by using the in-memory buffer mechanism instead, or by having the buffer file created in a directory writable only by root: set Client.local_buffer.filename_linux to an empty string, or to a path in a root-only directory.
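
For example, in the client configuration file (field names as described in the advisory; the memory size value here is illustrative):

```yaml
Client:
  local_buffer:
    memory_size: 52428800   # 50 MB in-memory buffer
    filename_linux: ""      # empty string: no on-disk buffer file on Linux,
                            # avoiding the predictable-filename symlink attack
```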

By default, on Windows, the buffer file is stored in C:\Program Files\Velociraptor\Tools, which is created with restricted permissions only writable by Administrators. Therefore, Windows clients in the default configuration are not affected by this issue.

CVE-2022-35632: XSS in user interface

The Velociraptor GUI contains an editor suggestion feature that can be used to offer help on various functions. It can also display the description field of a VQL function, plugin or artifact. This field was not properly sanitized and can lead to cross-site scripting (XSS).

Prior to the 0.6.5 release, the artifact description was also sent to this function, but after 0.6.5, this is no longer the case for performance reasons.

On servers older than 0.6.5, an authenticated attacker with the ARTIFACT_WRITER permission (usually only given to administrators) could create an artifact with raw HTML in the description field and trigger this XSS. Servers with version 0.6.5 or newer are not affected by this issue.

Remediation

To remediate these vulnerabilities, Velociraptor users should upgrade their servers.

Disclosure timeline

July 2022: Issues discovered by Tim Goddard from CyberCX

July 11, 2022: Vulnerabilities disclosed by CyberCX

July 12, 2022: Validated by Rapid7/Velocidex

July 26, 2022: Fixes released in version 0.6.5-2

July 26, 2022: Rapid7 publishes this advisory


Velociraptor Version 0.6.5: Table Transformations, Multi-Lingual Support, and Better VQL Error-Handling Let You Dig Deeper Than Ever

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2022/06/24/velociraptor-version-0-6-5-table-transformations-multi-lingual-support-and-better-vql-error-handling-let-you-dig-deeper-than-ever/

Velociraptor Version 0.6.5: Table Transformations, Multi-Lingual Support, and Better VQL Error-Handling Let You Dig Deeper Than Ever

Rapid7 is pleased to announce the release of Velociraptor version 0.6.5 – an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. This release has been in development and testing for several months now, and we are excited to share its new features and improvements.

Table transformations

Velociraptor collections or hunts are usually post-processed or filtered in Notebooks. This allows users to refine and post-process the data in complex ways. For example, to view only the Velociraptor service from a hunt collecting all services (Windows.System.Services), one would click on the Notebook tab and modify the query by adding a WHERE statement.
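
As a minimal sketch, such a notebook query might look like this (the column to filter on depends on the artifact’s output; Name is assumed here):

```sql
SELECT * FROM source(artifact="Windows.System.Services")
WHERE Name =~ "velociraptor"
```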

[Screenshot: Filtering rows with VQL]

In our experience, this ability to quickly filter or sort a table is very common, and sometimes we don’t really need the full power of VQL. In 0.6.5, we introduced table transformations — simple filtering/sorting operations on every table in the GUI.

[Screenshot: Setting simple table transformations]

Multi-lingual support

Velociraptor’s community of DFIR professionals is global! We have users from all over the world, and although most users are fluent in English, we wanted to acknowledge our truly international user base by adding internationalization into the GUI. You can now select from a number of popular languages. (Don’t see your language here? We would love additional contributions!)

[Screenshot: Select from a number of popular languages]

Here is a screenshot showing our German translations:

[Screenshot: The Velociraptor interface in German]

New interface themes

The 0.6.5 release expanded our previous offering of 3 themes into 7, with a selection of light and dark themes. We even have a retro-feel ncurses theme that looks like a familiar terminal…

[Screenshot: A stunning retro ‘ncurses’ theme]

Error-handling in VQL

Velociraptor is simply a VQL engine – users write VQL artifacts and run these queries on the endpoint.

Previously, it was difficult to tell when VQL encountered an error. Sometimes a missing file is expected, and other times it means something went wrong. From Velociraptor’s point of view, as long as the VQL query ran successfully on the endpoint, the collection was a success. The VQL query can generate logs to provide more information, but the user had to actually look at the logs to determine if there was a problem.

For example, in a hunt parsing a file on the endpoints, it was difficult to tell which of the thousands of machines failed to parse a file. Previously, Velociraptor marked the collection as successful if the VQL query ran – even if it returned no rows because the file failed to parse.

In 0.6.5, there is a mechanism for VQL authors to convey more nuanced information to the user by way of error levels. The VQL log() function was expanded to take a level parameter. When the level is ERROR, the collection will be marked as failed in the GUI.
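
For example, an artifact author might signal a parse failure along these lines (a sketch — the message text and surrounding query logic are illustrative):

```sql
-- Emit an ERROR-level log entry; the GUI will then mark this
-- collection as failed instead of silently returning no rows.
SELECT log(message="Failed to parse the target file", level="ERROR")
FROM scope()
```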

[Screenshot: A failed VQL query]

[Screenshot: Query log messages have their own log level]

Custom time zone support

Timestamps are a central part of most DFIR work. Although it is best practice to always work in UTC, it can be a real pain to convert from UTC to local time in your head! Velociraptor always uses RFC 3339 to represent times unambiguously, but for human consumption it is convenient to display these times in a local time zone.

You can now select a more convenient time zone in the GUI by clicking your user preferences and setting the relevant timezone.

[Screenshot: Selecting a custom timezone]

The preferred time will be shown in most instances in the UI:

[Screenshot: Time zone selection influences how times are shown]

A new MUSL build target

On Linux, Go binaries are mostly static but still link to Glibc, which is shipped with the Linux distribution. This means that traditionally Velociraptor had problems running on very old Linux machines (prior to Ubuntu 18.04). We used to build a more compatible version on an old CentOS VM, but this was a manual process and did not support the latest Go compiler.

In 0.6.5, we added a new build target using MUSL – a lightweight Glibc replacement. The produced binary is completely static and should run on a much wider range of Linux versions. This is still considered experimental but should improve the experience on older Linux machines.

Try it out!

If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:


Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2022/04/25/velociraptor-version-0-6-4-dead-disk-forensics-and-better-path-handling-let-you-dig-deeper-2/

Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper

Rapid7 is pleased to announce the release of Velociraptor version 0.6.4 – an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. This release has been in development and testing for several months now and has a lot of new features and improvements.

The main focus of this release is in improving path handling in VQL to allow for more efficient path manipulation. This leads to the ability to analyze dead disk images, which depends on accurate path handling.

Path handling

A path is a simple concept – it’s a string similar to /bin/ls that can be used to pass to an OS API and have it operate on the file in the filesystem (e.g. read/write it).

However, it turns out that paths are much more complex than they first seem. For one thing, paths have an OS-dependent separator (usually / or \). Some filesystems even support path separators inside a filename! To read about the details, check out Paths and Filesystem Accessors, but one of the most interesting aspects of the new handling is that stacking filesystem accessors is now possible. For example, it’s possible to open a docx file inside a zip file inside an ntfs drive inside a partition.
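
The docx-in-zip-in-NTFS example can be expressed as nested pathspecs, roughly as follows. Treat this as a sketch: the file paths, the partition offset, and even the accessor names are illustrative — check the Paths and Filesystem Accessors documentation for the exact accessor names in your version.

```sql
-- Read report.docx from archive.zip, which lives on an NTFS partition
-- starting 65536 bytes into a raw disk image. Each pathspec layer
-- delegates to the accessor below it.
SELECT read_file(accessor="zip",
    filename=pathspec(
      Path="/report.docx",               -- member inside the zip archive
      DelegateAccessor="ntfs",
      DelegatePath=pathspec(
        Path="/Users/bob/archive.zip",   -- file on the NTFS volume
        DelegateAccessor="offset",
        DelegatePath=pathspec(
          Path="/65536",                 -- partition offset in the image
          DelegateAccessor="file",
          DelegatePath="/tmp/disk.img"))))
FROM scope()
```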

Dead disk analysis

Velociraptor offers top-notch forensic analysis capability, but it’s been primarily used as a live response agent. Many users have asked if Velociraptor can be used on dead disk images. Although dead disk images are rarely used in practice, sometimes we do encounter these in the field (e.g. in cloud investigations).

Previously, Velociraptor couldn’t be used easily on dead disk images without having to carefully tailor and modify each artifact. In the 0.6.4 release, we now have the ability to emulate a live client from dead disk images. We can use this feature to run the exact same VQL artifacts that we normally do on live systems, but against a dead disk image. If you’d like to read more about this new feature, check out Dead Disk Forensics.

Resource control

When collecting artifacts from endpoints, we need to be mindful of the overall load that collection will place on them. For performance-sensitive servers, our collection can cause operational disruption. For example, running a Yara scan over the entire disk would use a lot of IO operations and may use a lot of CPU. Velociraptor will then compete for these resources with the legitimate server functionality and may cause degraded performance.

Previously, Velociraptor had a setting called Ops Per Second, which could be used to run the collection “low and slow” by limiting the rate at which notional “ops” were utilized. In reality, this setting was only ever used for Yara scans because it was hard to calculate an appropriate setting: Notional ops didn’t correspond to anything measurable like CPU utilization.

In 0.6.4, we’ve implemented a feedback-based throttler that can control VQL queries to a target average CPU utilization. Since CPU utilization is easy to measure, it’s a more meaningful control. The throttler actively measures the Velociraptor process’s CPU utilization, and when the simple moving average (SMA) rises above the limit, the query is paused until the SMA drops below the limit.

[Screenshot: Selecting resource controls for collections]

The above screenshot shows the latest resource controls dialog. You can now set a target CPU utilization between 0 and 100%. The image below shows how that looks in the Windows task manager.

[Screenshot: CPU control keeps Velociraptor at 15%]

By reducing the allowed CPU utilization, Velociraptor will be slowed down, so collections will take longer. You may need to increase the collection timeout to correspond with the extra time it takes.

Note that the CPU limit refers to a percentage of the total CPU resources available on the endpoint. For example, on a 2-core cloud instance, a 50% limit corresponds to 1 full core, but on a 32-core server, a 50% limit allows the use of 16 cores!

IOPS limits

On some cloud resources, IO operations per second (IOPS) matter more than CPU load, since cloud platforms tend to rate limit IOPS. If Velociraptor consumes many IOPS (e.g. during Yara scanning), it may affect the legitimate workload.

Velociraptor now offers limits on IOPS which may be useful for some scenarios. See for example here and here for a discussion of these limits.
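The same limits can be applied when scheduling collections from VQL on the server. A minimal sketch, assuming the collect_client() function accepts cpu_limit and iops_limit arguments (check the VQL function reference for your version; the client id below is hypothetical):

```vql
-- Schedule a collection while capping the client at roughly 15% CPU
-- and 100 IO operations per second. The cpu_limit/iops_limit argument
-- names should be verified against your version's documentation.
SELECT collect_client(
    client_id="C.1234567890abcdef",        -- hypothetical client id
    artifacts="Windows.Detection.Yara.Glob",
    cpu_limit=15,
    iops_limit=100)
FROM scope()
```

Lowering either limit trades collection speed for a lighter footprint on the endpoint, so remember to adjust the collection timeout accordingly.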

The offline collector resource controls

Many people use the Velociraptor offline collector to collect artifacts from endpoints that they’re unable to install a proper client/server architecture on. In previous versions, there was no resource control or time limit imposed on the offline collector, because it was assumed that it would be used interactively by a user.

However, experience shows that many users use automated tools to push the offline collector to the endpoint (e.g. an EDR or another endpoint agent), and therefore it would be useful to provide resource controls and timeouts to control Velociraptor acquisitions. The below screenshot shows the new resource control page in the offline collector wizard.

Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper
Configuring offline collector resource controls

GUI changes

Version 0.6.4 brings a lot of useful GUI improvements.

Notebook suggestions

Notebooks are an excellent tool for post processing and analyzing the collected results from various artifacts. Most of the time, similar post processing queries are used for the same artifacts, so it makes sense to allow notebook templates to be defined in the artifact definition. In this release, you can define an optional suggestion in the artifact YAML, allowing a user to include certain cells when needed.
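For example, a custom artifact might ship a default post-processing cell along these lines (a hedged sketch: the suggestions field layout shown here is based on our reading of the artifact format and may differ in your version; the artifact name and queries are illustrative):

```yaml
name: Custom.Windows.Example
sources:
- query: |
    SELECT * FROM info()
suggestions:
- name: Show errors only
  # Template VQL offered as a notebook cell when the user asks for it.
  template: |
    SELECT * FROM source()
    WHERE Level = "ERROR"
```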

The following screenshot shows the default suggestion for all hunt notebooks: Hunt Progress. This cell queries all clients in a hunt and shows which are still running, which have completed, and which returned errors.

Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper
Hunt notebooks offer a hunt status cell


Multiple OAuth2 authenticators

Velociraptor has always had SSO support to allow strong two-factor authentication for access to the GUI. Previously, however, Velociraptor only supported one OAuth2 provider at a time. Users had to choose between Google, Github, Azure, or OIDC (e.g. Okta) for the authentication provider.

This limitation is problematic for some organizations that need to share access to the Velociraptor console with third parties (e.g. consultants need to provide read-only access to customers).

In 0.6.4, Velociraptor can be configured to support multiple SSO providers at the same time. So an organization can provide access through Okta for their own team members at the same time as Azure or Google for their customers.
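A hedged sketch of what such a configuration might look like in the server config file (the multi authenticator with sub_authenticators shown here follows the documented pattern, but verify the key names against the deployment docs for your version; all issuer URLs, client IDs, and secrets are placeholders):

```yaml
GUI:
  authenticator:
    type: multi
    sub_authenticators:
    - type: oidc
      oidc_issuer: https://example.okta.com/   # placeholder issuer
      oauth_client_id: okta-client-id          # placeholder
      oauth_client_secret: okta-secret         # placeholder
    - type: azure
      oauth_client_id: azure-client-id         # placeholder
      oauth_client_secret: azure-secret        # placeholder
```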

Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper
The Velociraptor login screen supports multiple providers

The Velociraptor knowledge base

Velociraptor is a very powerful tool. Its flexibility means that it can do things that you might have never realized it can! For a while now, we’ve been thinking about ways to make this knowledge more discoverable and easily available.

Many people ask questions on the Discord channel and learn new capabilities in Velociraptor. We want to try a similar format to help people discover what Velociraptor can do.

The Velociraptor Knowledge Base is a new area on the documentation site that allows anyone to submit small (1-2 paragraphs) tips about how to do a particular task. Knowledge base tips are phrased as questions to help people search for them. Provided tips and solutions are short, but they may refer users to more detailed information.

If you learned something about Velociraptor that you didn’t know before and would like to share your experience to make the next user’s journey a little bit easier, please feel free to contribute a small note to the knowledge base.

Importing previous artifacts

Updating the VQL path handling in 0.6.4 introduces a new column called OSPath (replacing the old FullPath column), which wasn’t present in previous versions. While we attempt to ensure that older artifacts continue to work on 0.6.4 clients, the new VQL artifacts built into 0.6.4 may not work correctly on older clients.

To make migration easier, 0.6.4 comes built in with the Server.Import.PreviousReleases artifact. This server artifact will load all the artifacts from a previous release into the server, allowing you to use those older versions with older clients.

Velociraptor Version 0.6.4: Dead Disk Forensics and Better Path Handling Let You Dig Deeper
Importing previous versions of core artifacts

Try it out!

If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our discord server.

Learn more about Velociraptor by visiting any of our web and social media channels below:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability

Post Syndicated from Carlos Canto original https://blog.rapid7.com/2022/02/03/velociraptor-version-0-6-3-dig-deeper-with-more-speed-and-scalability/

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability

Rapid7 is very excited to announce the latest Velociraptor release 0.6.3. This release has been in the making for a few months now and has several exciting new features.

Scalability and speed have been the main focus of development since our previous release. Working with some of our larger partners on scaling Velociraptor to a large number of endpoints, we’ve addressed a number of challenges that we believe have improved Velociraptor for everyone at any level of scale.

Performance running on EFS

Running on a distributed filesystem such as EFS presents many advantages, not the least of which is removing the risk that disk space will run out. Many users previously faced disk full errors when running large hunts and accidentally collecting too much data from endpoints. Since Velociraptor is so fast, it’s quite easy to do a hunt collecting a large number of files, but before you know it, the disk may be full.

Using EFS removes this risk, since storage is essentially infinite (though not free). So there is a definite advantage to running the data store on EFS even when not running multiple frontends. When scaling to multiple frontends, EFS use is essential to facilitate a shared distributed filesystem among all the servers.

However, EFS presents some challenges. Although conceptually EFS behaves as a transparent filesystem, in reality the added network latency of EFS IO has caused unacceptable performance issues.

In this release, we employed a number of strategies to improve performance on EFS — and potentially other distributed filesystems, such as NFS. You can read all about the new changes here, but the gist is that added caching and delayed writing strategies help isolate the GUI performance from the underlying EFS latency, making the GUI snappy and quick even with slow filesystems.

We encourage everyone to test the new release on an EFS backend, to assess the performance on this setup — there are many advantages to this configuration. While this configuration is still considered experimental, it’s running successfully in a number of environments.

Searching and indexing

Partly as a side effect of the EFS work, Velociraptor 0.6.3 moves the client index into memory. This means that searching for clients by DNS name or labels is almost instant, significantly improving the performance of these operations over previous versions.

VQL queries that walk over all clients are now very fast as well. For example, the following query iterates over all clients (maybe thousands!) and checks if their last IP came from a particular subnet:

SELECT * , split(sep=":", string=last_ip)[0] AS LastIp
FROM clients()
WHERE cidr_contains(ip=LastIp, ranges="192.168.0.0/16")

This query will complete in a few seconds even with a large number of clients.

The GUI search bar can now search for IP addresses (e.g. ip:192.168*), and the online only filter is much faster as a result.

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability
Searching is much faster

Another benefit of rapid index searching is that we can now quickly estimate how many hosts will be affected by a hunt (calculated based on how many hosts are included and how many are excluded from the hunt). When users have multiple label groups, this helps to quickly understand how targeted a specific hunt is.

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability
Estimating hunt scope

Regular expressions and Yara rules

Velociraptor artifacts are just a way of wrapping a VQL query inside a YAML file for ease of use. Artifacts accept parameters that are passed to the VQL itself, controlling how it runs.

Velociraptor artifacts accept a number of parameters of different types. Sometimes they accept a Windows path — for example, the Windows.EventLogs.EvtxHunter artifact accepts a Windows glob path like %SystemRoot%\System32\Winevt\Logs\*.evtx. The same artifact can also accept a PathRegex, which is a regular expression.

A regular expression is not the same thing as a path at all. In fact, when users mix these up and provide something like C:\Windows\System32 in a regular expression field, the result is an invalid expression — backslashes have a specific meaning in regular expressions.

In 0.6.3, there are now dedicated GUI elements for regular expression inputs. Special regex patterns, such as backslash sequences, are visually distinct. Additionally, the GUI verifies that the regex is syntactically correct and offers suggestions: users can type ? to receive regular expression snippets that help them build their regex.

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability
Entering regex in the GUI

To receive a RegEx GUI selector in your custom artifacts, simply denote the parameter’s type as regex.
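For example, a parameter declared like this in a custom artifact (a minimal sketch; the parameter name, description, and default are illustrative) will render with the regex-aware input element:

```yaml
parameters:
- name: PathRegex
  description: Only report paths matching this regex.
  type: regex
  default: "\\.evtx$"
```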

Similarly, other artifacts require the user to enter a Yara rule to use the yara() VQL plugin. The Yara domain specific language (DSL) is rather verbose, so even for very simple search terms (e.g. a simple keyword search) a full rule needs to be constructed.

To help with this task, the GUI now presents a dedicated Yara GUI element. Users can press ? to automatically fill in a skeleton Yara rule suitable for a simple keyword match. Additionally, syntax highlighting gives visual feedback on the validity of the Yara syntax.

Velociraptor Version 0.6.3: Dig Deeper With More Speed and Scalability
Entering Yara Rules in the GUI

Some artifacts allow file upload as a parameter to the artifact. This allows users to upload larger inputs, for example a large Yara rule-set. The content of the file will be made available to the VQL running on the client transparently.

To receive a Yara GUI selector in your custom artifacts, simply denote the parameter’s type as yara. To allow uploads in your artifact parameters, denote the parameter as an upload type. Within the VQL, the content of the uploaded file will be available as that parameter.
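Sketching both parameter types together (the parameter names, rule, and description are illustrative):

```yaml
parameters:
- name: YaraRule
  type: yara
  # A simple keyword rule like the skeleton the GUI generates.
  default: |
    rule Keyword {
      strings:
        $a = "mimikatz" nocase
      condition:
        any of them
    }
- name: RuleFile
  description: Optionally upload a larger Yara rule-set.
  type: upload
```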

Overriding Generic.Client.Info

When a new client connects to the Velociraptor server, the server performs an Interrogation flow by scheduling the Generic.Client.Info artifact on it. This artifact collects basic metadata about the client, such as its operating system, hostname, and Velociraptor version. This information is used to feed the search index and is also displayed in the “VQL drilldown” page of the Host Information screen.

In the latest release, it’s possible to customize the Generic.Client.Info artifact, and Velociraptor will use the customized version instead to interrogate new clients. This allows users to add more deployment specific collections to the interrogate flow and customize the “VQL drilldown” page. Simply search for Generic.Client.Info in the View Artifact screen, and customize as needed.

Root certificates are now embedded

By default, Golang searches for root certificates from the running system so it can verify TLS connections. This behavior caused problems when running Velociraptor on very old unpatched systems that did not receive the latest Let’s Encrypt Root Certificate update. We decided it was safer to just include the root certs in the binary so we don’t need to rely on the OS itself.

Additionally, Velociraptor will now accept additional root certs embedded in its config file — just add all the certs in PEM format under the Client.Crypto.root_certs key in the config file. This helps deployments that must use a MITM proxy or traffic inspection proxies.

When you add a root certificate to the configuration file, Velociraptor treats that certificate as part of the public PKI roots, so you’ll need to set Client.use_self_signed_ssl to false.
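A sketch of the relevant config fragment (the certificate body is a placeholder for your proxy’s root certificate):

```yaml
Client:
  use_self_signed_ssl: false
  Crypto:
    # Additional root certs trusted for TLS verification, in PEM format.
    root_certs: |
      -----BEGIN CERTIFICATE-----
      ... MITM proxy root certificate in PEM format ...
      -----END CERTIFICATE-----
```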

This allows Velociraptor to trust the TLS connection — however, bear in mind that Velociraptor’s internal encryption channel is still present. The MITM proxy won’t be able to actually decode the data or interfere with the communications by injecting or modifying data. Only the outer layer of TLS encryption can be stripped by the MITM proxy.

VQL changes

Glob plugin improvements

The glob plugin now has a new option: recursion_callback. This allows much finer control over which directories to visit, making file searches much more efficient and targeted. To learn more about it, read our previous Velociraptor blog post “Searching for Files.”
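For example, a search that skips descending into certain directories might look like this (a sketch: FullPath was the path column in this release, and the lambda-style callback string follows the glob() plugin documentation, but verify against your version):

```vql
-- Find .exe files under C:/Users, but do not recurse into any
-- directory whose name matches "AppData".
SELECT FullPath, Size
FROM glob(
    globs="C:/Users/**/*.exe",
    recursion_callback="x => NOT x.Name =~ 'AppData'")
```

The callback is evaluated for each directory encountered during the walk; returning false prunes that entire subtree, which can dramatically reduce IO on large filesystems.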

Notable new artifacts

Many people use Velociraptor to collect and hunt for data from endpoints. Once the data is inspected and analyzed, often the data is no longer needed.

To help with the task of expiring old data, the latest release incorporates the Server.Utils.DeleteManyFlows and Server.Utils.DeleteMonitoringData artifacts that allow users to remove older collections. This helps manage disk usage and reduce ongoing costs.

Try it out!

If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open source license.

As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our discord server.
