Part two of Confronting Security Fears to Control Cyber Risk was presented live on March 9th for EMEA and will be delivered on March 16th for APAC. The 40-minute session focuses on the importance of developing cybersecurity elasticity.
In the session, Jason Hart, Rapid7’s Chief Technology Officer, EMEA, will discuss how organisations can develop the ability to adapt while being able to quickly revert to their original structure after times of great stress and impact. To do so, organisations must first address some common cybersecurity challenges:
Alignment of ownership and accountability: Cybersecurity should be decentralised across the business, not treated solely as an IT security function
Scope: Knowing where to focus, because not all risks are equal and risk can compound based on business needs and transformation
Translation: The requirement to translate cybersecurity needs and requirements across the whole of an organisation
To accomplish these goals, Hart recommends focusing on:
Culture: Enable a culture that makes cybersecurity part of the business process and creates a culture of ownership and accountability
Measurement: Translating cybersecurity data to allow all organisational stakeholders and personas to understand the context and need
Direction: The creation of a North Star (i.e., a cybersecurity strategy) that is clearly communicated and has clearly defined objectives and outcomes
For many organizations, that strategy comes in the form of a Protection Level Agreement (PLA).
Cybersecurity Elasticity
A PLA is an agreement between two or more parties, where one is the business (stakeholders), and the others are protection provider(s) (Product Management, IT, 3rd Party Development). Both parties should be equally involved in creating and implementing the PLA, ensuring that expectations are realistic, needs are met, and all parties are bought into the agreement.
In this session, Hart will detail how executives can create a PLA between the security department and senior leadership team, ensuring everyone works to a common timeline and goals. A well-designed PLA ensures teams are focused and efficient in responding to cybersecurity events. So, clearly defining who owns and is accountable for PLA responsibilities is essential.
Measuring success and identifying weaknesses in a PLA is also key. Cybersecurity tools that automate reporting on a wide variety of KPIs can help security teams communicate effectiveness to leadership.
To learn more, register here:
Confronting Security Fears to Control Cyber Risk: Part Two
Earlier this month, Rapid7 presented part one of a webinar called “Confronting Security Fears to Control Cyber Risk”. The webinar, available on demand, focused on cybersecurity simplicity and why everyone associated with your organization must develop a cybersecurity mindset. To do so, CISOs must decentralize cybersecurity and instil accountability and ownership across a business. If you haven’t already seen it, you can watch it below:
Last week, the Biden administration released a new National Cybersecurity Strategy (summary here). There is lots of good commentary out there. It’s basically a smart strategy, but the hard parts are always the implementation details. It’s one thing to say that we need to secure our cloud infrastructure, and another to detail what that means technically, who pays for it, and who verifies that it’s been done.
One of the provisions getting the most attention is a move to shift liability to software vendors, something I’ve been advocating for since at least 2003.
Last week, Rapid7 presented part one of a webinar called “Confronting Security Fears to Control Cyber Risk”. The webinar, which is available on demand, focused on cybersecurity simplicity and why everyone associated with your organization must develop a cybersecurity mindset. To do so, CISOs must decentralize cybersecurity and instil accountability and ownership across a business.
In the session, which you can view below, Jason Hart, Rapid7’s Chief Technology Officer, EMEA, shared his experiences to help executives enhance their cyber mission and vision statements to create a positive cybersecurity culture that permeates the business.
Cybersecurity effectiveness
Historically, cybersecurity was seen as a very technical discipline, and as a result, it was siloed as a department. Today, cybersecurity has become a responsibility of the entire organization, and as a result, mindsets within organizations need to change to reflect this shift.
Additionally, many organizations have good ideas and intentions when it comes to cybersecurity, but poor execution results in under-utilized security stacks. Stakeholders and other executives assume CISOs know what they are doing and trust them to get on with it. Meanwhile, CISOs, who often come from a very technical background, may lack business transformation experience and struggle to communicate their vision. This must change to encourage cybersecurity effectiveness.
“As an industry, we have an amazing ability to overcomplicate cybersecurity,” Hart said. “With this presentation, I want to enable organizations to execute an effective cyber security target operating model that reduces risk.”
Operating model for cybersecurity
Organizations need an operating model that works with their technology platform to decentralize cybersecurity. The operating model should translate the technical aspects of cybersecurity into something more digestible for stakeholders.
It is critical that the operating model takes a top-down approach. To be effective, accountability for security measures should be led by teams at the top. It doesn’t stop there, however. Roles and responsibilities must be defined across the entire organization – every single individual needs to be part of the cybersecurity process. A successful operating model for cybersecurity empowers everyone within the business to think about security. By involving every individual, organizations can increase their cybersecurity effectiveness and share accountability across the business.
Additionally, the operating model should include tools to measure outcomes and effectiveness, so organizations can understand which processes are working. This ensures technology is fully utilized to deliver the best possible outcomes and ROI. You can watch part one of our presentation below that discusses these points in greater detail:
Part two of Confronting Security Fears to Control Cyber Risk will be presented live on March 9th for EMEA and March 16th for APAC.
In this session, you’ll learn why modern organizations need to develop the ability to adapt while being able to quickly revert to their original structure after times of great stress and impact. Hart will also detail how executives can create a Protection Level Agreement (PLA) with the security department, ensuring everyone works to a common timeline and goals.
Confronting Security Fears to Control Cyber Risk: Part Two
What will it take for policy makers to take cybersecurity seriously? Not minimal-change seriously. Not here-and-there seriously. But really seriously. What will it take for policy makers to take cybersecurity seriously enough to enact substantive legislative changes that would address the problems? It’s not enough for the average person to be afraid of cyberattacks. They need to know that there are engineering fixes—and that’s something we can provide.
For decades, I have been waiting for the “big enough” incident that would finally do it. In 2015, Chinese military hackers hacked the Office of Personnel Management and made off with the highly personal information of about 22 million Americans who had security clearances. In 2016, the Mirai botnet leveraged millions of Internet-of-Things devices with default admin passwords to launch a denial-of-service attack that disabled major Internet platforms and services in both North America and Europe. In 2017, hackers—years later we learned that it was the Chinese military—hacked the credit bureau Equifax and stole the personal information of 147 million Americans. In recent years, ransomware attacks have knocked hospitals offline, and many articles have been written about Russia inside the U.S. power grid. And last year, the Russian SVR hacked thousands of sensitive networks inside civilian critical infrastructure worldwide in what we’re now calling Sunburst (and used to call SolarWinds).
Those are all major incidents to security people, but think about them from the perspective of the average person. Even the most spectacular failures don’t affect 99.9% of the country. Why should anyone care if the Chinese have his or her credit records? Or if the Russians are stealing data from some government network? Few of us have been directly affected by ransomware, and a temporary Internet outage is just temporary.
Cybersecurity has never been a campaign issue. It isn’t a topic that shows up in political debates. (There was one question in a 2016 Clinton–Trump debate, but the response was predictably unsubstantive.) This just isn’t an issue that most people prioritize, or even have an opinion on.
So, what will it take? Many of my colleagues believe that it will have to be something with extreme emotional intensity—sensational, vivid, salient—that results in large-scale loss of life or property damage. A successful attack that actually poisons a water supply, as someone tried to do in January by raising the levels of lye at a Florida water-treatment plant. (That one was caught early.) Or an attack that disables Internet-connected cars at speed, something that was demonstrated by researchers in 2014. Or an attack on the power grid, similar to what Russia did to Ukraine in 2015 and 2016. Will it take gas tanks exploding and planes falling out of the sky for the average person to read about the casualties and think “that could have been me”?
Here’s the real problem. For the average nonexpert—and in this category I include every lawmaker—to push for change, they not only need to believe that the present situation is intolerable, they also need to believe that an alternative is possible. Real legislative change requires a belief that the never-ending stream of hacks and attacks is not inevitable, that we can do better. And that will require creating working examples of secure, dependable, resilient systems.
Providing alternatives is how engineers help facilitate social change. We could never have eliminated sales of tungsten-filament household light bulbs if fluorescent and LED replacements hadn’t become available. Reducing the use of fossil fuel for electricity generation requires working wind turbines and cost-effective solar cells.
We need to demonstrate that it’s possible to build systems that can defend themselves against hackers, criminals, and national intelligence agencies; secure Internet-of-Things systems; and systems that can reestablish security after a breach. We need to prove that hacks aren’t inevitable, and that our vulnerability is a choice. Only then can someone decide to choose differently. When people die in a cyberattack and everyone asks “What can be done?” we need to have something to tell them.
We don’t yet have the technology to build a truly safe, secure, and resilient Internet and the computers that connect to it. Yes, we have lots of security technologies. We have older secure systems—anyone still remember Apollo’s DomainOS and MULTICS?—that lost out in a market that didn’t reward security. We have newer research ideas and products that aren’t successful because the market still doesn’t reward security. We have even newer research ideas that won’t be deployed, again, because the market still prefers convenience over security.
What I am proposing is something more holistic, an engineering research task on a par with the Internet itself. The Internet was designed and built to answer this question: Can we build a reliable network out of unreliable parts in an unreliable world? It turned out the answer was yes, and the Internet was the result. I am asking a similar research question: Can we build a secure network out of insecure parts in an insecure world? The answer isn’t obviously yes, but it isn’t obviously no, either.
While any successful demonstration will include many of the security technologies we know and wish would see wider use, it’s much more than that. Creating a secure Internet ecosystem goes beyond old-school engineering to encompass the social sciences. It will include significant economic, institutional, and psychological considerations that just weren’t present in the first few decades of Internet research.
Cybersecurity isn’t going to get better until the economic incentives change, and that’s not going to change until the political incentives change. The political incentives won’t change until there is political liability that comes from voter demands. Those demands aren’t going to be solely the results of insecurity. They will also be the result of believing that there’s a better alternative. It is our task to research, design, build, test, and field that better alternative—even though the market couldn’t care less right now.
This essay originally appeared in the May/June 2021 issue of IEEE Security & Privacy. I forgot to publish it here.
The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems.
We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a gun. Or adding a few stickers that look like smudges to a stop sign so that it is recognized by a state-of-the-art system as a 45 mi/h speed limit sign. But what if, instead, somebody hacked into the system and just switched the labels for “gun” and “turtle” or swapped “stop” and “45 mi/h”? Systems can only match images with human-provided labels, so the software would never notice the switch. That is far easier and will remain a problem even if systems are developed that are robust to those adversarial attacks.
At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. And while there are new risks inherent in the ML model, all of that complexity still runs in software. Training data are still stored in memory somewhere. And all of that is on a computer, on a network, and attached to the Internet. Like everything else, these systems will be hacked through vulnerabilities in those more conventional parts of the system.
This shouldn’t come as a surprise to anyone who has been working with Internet security. Cryptography has similar vulnerabilities. There is a robust field of cryptanalysis: the mathematics of code breaking. Over the last few decades, we in the academic world have developed a variety of cryptanalytic techniques. We have broken ciphers we previously thought secure. This research has, in turn, informed the design of cryptographic algorithms. The classified world of the NSA and its foreign counterparts have been doing the same thing for far longer. But aside from some special cases and unique circumstances, that’s not how encryption systems are exploited in practice. Outside of academic papers, cryptosystems are largely bypassed because everything around the cryptography is much less secure.
The problem is that encryption is just a bunch of math, and math has no agency. To turn that encryption math into something that can actually provide some security for you, it has to be written in computer code. And that code needs to run on a computer: one with hardware, an operating system, and other software. And that computer needs to be operated by a person and be on a network. All of those things will invariably introduce vulnerabilities that undermine the perfection of the mathematics…
This remains true even for pretty weak cryptography. It is much easier to find an exploitable software vulnerability than it is to find a cryptographic weakness. Even cryptographic algorithms that we in the academic community regard as “broken”—meaning there are attacks that are more efficient than brute force—are usable in the real world because the difficulty of breaking the mathematics repeatedly and at scale is much greater than the difficulty of breaking the computer system that the math is running on.
ML systems are similar. Systems that are vulnerable to model stealing through the careful construction of queries are more vulnerable to model stealing by hacking into the computers they’re stored in. Systems that are vulnerable to model inversion—this is where attackers recover the training data through carefully constructed queries—are much more vulnerable to attacks that take advantage of unpatched vulnerabilities.
But while security is only as strong as the weakest link, this doesn’t mean we can ignore either cryptography or ML security. Here, our experience with cryptography can serve as a guide. Cryptographic attacks have different characteristics than software and network attacks, something largely shared with ML attacks. Cryptographic attacks can be passive. That is, attackers who can recover the plaintext from nothing other than the ciphertext can eavesdrop on the communications channel, collect all of the encrypted traffic, and decrypt it on their own systems at their own pace, perhaps in a giant server farm in Utah. This is bulk surveillance and can easily operate on this massive scale.
On the other hand, computer hacking has to be conducted one target computer at a time. Sure, you can develop tools that can be used again and again. But you still need the time and expertise to deploy those tools against your targets, and you have to do so individually. This means that any attacker has to prioritize. So while the NSA has the expertise necessary to hack into everyone’s computer, it doesn’t have the budget to do so. Most of us are simply too low on its priorities list to ever get hacked. And that’s the real point of strong cryptography: it forces attackers like the NSA to prioritize.
This analogy only goes so far. ML is not anywhere near as mathematically sound as cryptography. Right now, it is a sloppy misunderstood mess: hack after hack, kludge after kludge, built on top of each other with some data dependency thrown in. Directly attacking an ML system with a model inversion attack or a perturbation attack isn’t as passive as eavesdropping on an encrypted communications channel, but it’s using the ML system as intended, albeit for unintended purposes. It’s much safer than actively hacking the network and the computer that the ML system is running on. And while it doesn’t scale as well as cryptanalytic attacks can—and there likely will be a far greater variety of ML systems than encryption algorithms—it has the potential to scale better than one-at-a-time computer hacking does. So here again, good ML security denies attackers all of those attack vectors.
We’re still in the early days of studying ML security, and we don’t yet know the contours of ML security techniques. There are really smart people working on this and making impressive progress, and it’ll be years before we fully understand it. Attacks come easy, and defensive techniques are regularly broken soon after they’re made public. It was the same with cryptography in the 1990s, but eventually the science settled down as people better understood the interplay between attack and defense. So while Google, Amazon, Microsoft, and Tesla have all faced adversarial ML attacks on their production systems in the last three years, that’s not going to be the norm going forward.
All of this also means that our security for ML systems depends largely on the same conventional computer security techniques we’ve been using for decades. This includes writing vulnerability-free software, designing user interfaces that help resist social engineering, and building computer networks that aren’t full of holes. It’s the same risk-mitigation techniques that we’ve been living with for decades. That we’re still mediocre at it is cause for concern, with regard to both ML systems and computing in general.
I love cryptography and cryptanalysis. I love the elegance of the mathematics and the thrill of discovering a flaw—or even of reading and understanding a flaw that someone else discovered—in the mathematics. It feels like security in its purest form. Similarly, I am starting to love adversarial ML and ML security, and its tricks and techniques, for the same reasons.
I am not advocating that we stop developing new adversarial ML attacks. It teaches us about the systems being attacked and how they actually work. They are, in a sense, mechanisms for algorithmic understandability. Building secure ML systems is important research and something we in the security community should continue to do.
There is no such thing as a pure ML system. Every ML system is a hybrid of ML software and traditional software. And while ML systems bring new risks that we haven’t previously encountered, we need to recognize that the majority of attacks against these systems aren’t going to target the ML part. Security is only as strong as the weakest link. As bad as ML security is right now, it will improve as the science improves. And from then on, as in cryptography, the weakest link will be in the software surrounding the ML system.
This essay originally appeared in the May 2020 issue of IEEE Computer. I forgot to reprint it here.
Author: Thomas Elkins
Contributors: Andrew Iwamaye, Matt Green, James Dunne, and Hernan Diaz
Rapid7 routinely conducts research into the wide range of techniques that threat actors use to conduct malicious activity. One objective of this research is to discover new techniques being used in the wild, so we can develop new detection and response capabilities.
Recently, we (Rapid7) observed malicious actors using OneNote files to deliver malicious code. We identified a specific technique that used OneNote files containing batch scripts, which upon execution started an instance of a renamed PowerShell process to decrypt and execute a base64 encoded binary. The base64 encoded binary subsequently decrypted a final payload, which we have identified to be either Redline Infostealer or AsyncRat.
This blog post walks through analysis of a OneNote file that delivered a Redline Infostealer payload.
Analysis of OneNote File
The attack vector began when a user was sent a OneNote file via a phishing email. Once the OneNote file was opened, the user was presented with the option to “Double Click to View File” as seen in Figure 1.
Figure 1 – OneNote file "Remittance" displaying the button “Double Click to View File”
We determined that the button “Double Click to View File” was moveable. Hidden underneath the button, we observed five shortcuts to a batch script, nudm1.bat. The hidden placement of the shortcuts ensured that the user double-clicked on one of the shortcuts when interacting with the “Double Click to View File” button.
Figure 2 – Copy of Batch script nudm1.bat revealed after moving “Double Click to View File” button
Once the user double-clicked the button “Double Click to View File”, the batch script nudm1.bat executed in the background without the user’s knowledge.
Analysis of Batch Script
In a controlled environment, we analyzed the batch script nudm1.bat and observed variables storing values.
Figure 3 – Beginning contents of nudm1.bat
Near the middle of the script, we observed a large section of base64 encoded data, suggesting that, at some point, the data would be decoded by the batch script.
Figure 4 – Base64 encoded data contained within nudm1.bat
At the bottom of the batch script, we observed the declared variables being concatenated. To easily determine what the script was doing, we placed echo commands in front of the concatenations. The addition of the echo commands allowed for the batch script to deobfuscate itself for us upon execution.
Figure 5 – echo command placed in front of concatenated variables
We executed the batch file and piped the deobfuscated result to a text file. The text file contained a PowerShell script that was executed with a renamed PowerShell binary, nudm1.bat.exe.
Figure 6 – Output after using echo reveals PowerShell script
We determined the script performed the following:
Base64 decoded the data stored after :: within nudm1.bat, shown in Figure 4
AES Decrypted the base64 decoded data using the base64 Key 4O2hMB9pMchU0WZqwOxI/4wg3/QsmYElktiAnwD4Lqw= and base64 IV of TFfxPAVmUJXw1j++dcSfsQ==
Decompressed the decrypted contents using gunzip
Reflectively loaded the decrypted and decompressed contents into memory
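The decode-decrypt-decompress chain above can be sketched in Python. This is an analyst's illustrative reconstruction, not code from the sample: the AES-CBC stage is represented only by a comment (it needs a third-party AES library, or CyberChef), and the stand-in payload bytes below are our own, chosen only to demonstrate the surrounding base64 and gunzip stages with the standard library.

```python
import base64
import gzip

def decode_blob(b64_blob: bytes) -> bytes:
    """Stage 1: base64-decode the data stored after '::' in the batch script."""
    return base64.b64decode(b64_blob)

# Stage 2 would be AES-CBC decryption using the base64-encoded key and IV
# recovered from the PowerShell script; that step is omitted from this
# stdlib-only sketch (we replicated it with CyberChef during analysis).

def decompress_payload(gz_blob: bytes) -> bytes:
    """Stage 3: gunzip the decrypted contents to recover the executable."""
    return gzip.decompress(gz_blob)

# Illustrative round trip with a stand-in payload (real PE files begin 'MZ'):
fake_pe = b"MZ" + b"\x00" * 64
staged = base64.b64encode(gzip.compress(fake_pe))    # what an analyst extracts
recovered = decompress_payload(decode_blob(staged))  # stages 1 and 3
assert recovered.startswith(b"MZ")
```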
Using CyberChef, we replicated the identified decryption method to obtain a decrypted executable file.
Figure 7 – AES decryption via CyberChef reveals MZ header
We determined the decrypted file was a 32-bit .NET executable and analyzed the executable using dnSpy.
Analysis of .NET 32-bit Executable
In dnSpy we observed the original file name was tmpFBF7. We also observed that the file contained a resource named payload.exe.
Figure 8 – dnSpy reveals name of original program tmpFBF7 and a payload.exe resource
We navigated to the entry point of the file and observed base64 encoded strings. The base64 encoded strings were passed through a function SRwvjAcHapOsRJfNBFxi. The function SRwvjAcHapOsRJfNBFxi utilized AES decryption to decrypt the data passed to it as an argument.
Figure 9 – AES Decrypt Function SRwvjAcHapOsRJfNBFxi
As seen in Figure 9, the function SRwvjAcHapOsRJfNBFxi took in 3 arguments: input, key and iv.
We replicated the decryption process from the function SRwvjAcHapOsRJfNBFxi using CyberChef to decrypt the values of the base64 encoded strings. Figure 10 shows an example of the decryption process of the base64 encoded string vYhBhJfROLULmQk1P9jbiqyIcg6RWlONx2FLYpdRzZA= from line 30 of Figure 9 to reveal a decoded and decrypted string of CheckRemoteDebuggerPresent.
Figure 10 – Using CyberChef to replicate decryption of function SRwvjAcHapOsRJfNBFxi
Repeating the decryption of the other base64 encoded strings revealed some anti-analysis and anti-AV checks performed by the executable:
EtwEventWrite /c choice /c y /n /d y /t 1 & attrib -h -s
After passing the anti-analysis and anti-AV checks, the executable called upon the payload.exe resource in line 94 of the code. We determined that the payload.exe resource was saved into the variable @string.
Figure 11 – @string storing payload.exe
On line 113, the variable @string was passed into a new function, aBTlNnlczOuWxksGYYqb, as well as the AES decryption function SRwvjAcHapOsRJfNBFxi.
Figure 12 – @string being passed through function hDMeRrMMQVtybxerYkHW
The function aBTlNnlczOuWxksGYYqb decompressed content passed to it using Gunzip.
Figure 13 – Function aBTlNnlczOuWxksGYYqb decompresses content using Gzip
Using CyberChef, we decrypted and decompressed the payload.exe resource to obtain another 32-bit .NET executable, which we named payload2.bin. Using Yara, we scanned payload2.bin and determined it was related to the Redline Infostealer malware family.
Figure 14 – Yara Signature identifying payload2.bin as Redline Infostealer
We also analyzed payload2.bin in dnSpy.
Analysis of Redline Infostealer
We observed that the original file name of payload2.bin was Footstools and that a class labeled Arguments contained the variables IP and Key. The variable IP stored a base64 encoded value GTwMCik+IV89NmBYISBRLSU7PlMZEiYJKwVVUg==.
Figure 15 – Global variable IP set as Base64 encoded string
The variable Key stored a UTF8 value of Those.
Figure 16 – Global variable Key set with value Those
We identified that the variable IP was passed into a function, WriteLine(), which in turn passed the variables IP and Key into a String.Decrypt function as arguments.
Figure 17 – String.Decrypt being passed arguments IP and Key
The function String.Decrypt was a simple function that XOR’ed input data with the value of Key.
Using CyberChef, we replicated the String.Decrypt function for the ‘IP’ variable by XORing the base64 value shown in Figure 15 with the value of Key shown in Figure 16 to obtain the decrypted value for the IP variable, 172.245.45[.]213:3235.
Figure 19 – Using XOR in CyberChef to reveal value of argument IP
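The String.Decrypt routine amounts to base64 decoding followed by a repeating-key XOR. A minimal Python equivalent is below; the function name and sample plaintext are ours, used only to demonstrate the round trip, and the ciphertext is generated locally rather than taken from the sample.

```python
import base64
from itertools import cycle

def xor_decrypt(b64_input: str, key: str) -> bytes:
    """Repeating-key XOR in the style of Redline's String.Decrypt:
    base64-decode the input, then XOR each byte against the cycled key."""
    data = base64.b64decode(b64_input)
    return bytes(b ^ k for b, k in zip(data, cycle(key.encode("utf-8"))))

# Because XOR is its own inverse, the same operation encrypts and decrypts.
plaintext = b"172.245.45.213:3235"  # illustrative C2 string
ciphertext = base64.b64encode(
    bytes(b ^ k for b, k in zip(plaintext, cycle(b"Those")))
).decode()
assert xor_decrypt(ciphertext, "Those") == plaintext
```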
Redline Infostealer has the capability to steal credentials related to cryptocurrency wallets, Discord data, as well as web browser data including cached cookies. Figure 20 shows functionality in Redline Infostealer that searches for known cryptocurrency wallets.
Figure 20 – Redline Infostealer parsing for known Cryptocurrency wallet locations
Rapid7 Protection
Rapid7 has existing rules that detect the behavior observed within customer environments using our Insight Agent, including:
Suspicious Process – Renamed PowerShell
OneNote Embedded File Parser
Rapid7 has also developed a OneNote file parser and detection artifact for Velociraptor. This artifact can be used to detect or extract malicious payloads like the one discussed in this post. https://docs.velociraptor.app/exchange/artifacts/pages/onenote/
Additional mitigation recommendations include:
Block .one attachments at the network perimeter or with an antiphishing solution if .one files are not business-critical
User awareness training
If possible, implement signatures to search for PowerShell scripts containing reverse strings such as gnirtS46esaBmorF
Watch out for OneNote as the parent process of cmd.exe executing a .bat file
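The reversed-string signature idea above can be prototyped in a few lines. The token list and function name here are illustrative sketches, not an existing Rapid7 rule:

```python
# Reversing API names (e.g. "FromBase64String" -> "gnirtS46esaBmorF") is a
# common PowerShell obfuscation trick. A simple scanner can flag script
# content containing reversed forms of sensitive tokens.
SUSPICIOUS_TOKENS = ["FromBase64String", "Invoke-Expression", "DownloadString"]

def find_reversed_tokens(script_text: str) -> list[str]:
    """Return the suspicious tokens whose reversed form appears in the script."""
    return [t for t in SUSPICIOUS_TOKENS if t[::-1] in script_text]

sample = '$p = "gnirtS46esaBmorF"'
assert find_reversed_tokens(sample) == ["FromBase64String"]
```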
NIST is planning a significant update of its Cybersecurity Framework. At this point, it’s asking for feedback and comments on its concept paper, including the following questions:
Do the proposed changes reflect the current cybersecurity landscape (standards, risks, and technologies)?
Are the proposed changes sufficient and appropriate? Are there other elements that should be considered under each area?
Do the proposed changes support different use cases in various sectors, types, and sizes of organizations (and with varied capabilities, resources, and technologies)?
Are there additional changes not covered here that should be considered?
For those using CSF 1.1, would the proposed changes affect continued adoption of the Framework, and how so?
For those not using the Framework, would the proposed changes affect the potential use of the Framework?
The NIST Cybersecurity Framework has turned out to be an excellent resource. If you use it at all, please help with version 2.0.
The head of both US Cyber Command and the NSA, Gen. Paul Nakasone, broadly discussed that first organization’s offensive cyber operations during the runup to the 2022 midterm elections. He didn’t name names, of course:
“We did conduct operations persistently to make sure that our foreign adversaries couldn’t utilize infrastructure to impact us,” said Nakasone. “We understood how foreign adversaries utilize infrastructure throughout the world. We had that mapped pretty well. And we wanted to make sure that we took it down at key times.”
Nakasone noted that Cybercom’s national mission force, aided by NSA, followed a “campaign plan” to deprive the hackers of their tools and networks. “Rest assured,” he said. “We were doing operations well before the midterms began, and we were doing operations likely on the day of the midterms.” And they continued until the elections were certified, he said.
We know Cybercom did similar things in 2018 and 2020, and presumably will again in two years.
AWS re:Invent returned to Las Vegas, Nevada, November 28 to December 2, 2022. After a virtual event in 2020 and a hybrid 2021 edition, spirits were high as over 51,000 in-person attendees returned to network and learn about the latest AWS innovations.
Now in its 11th year, the conference featured 5 keynotes, 22 leadership sessions, and more than 2,200 breakout sessions and hands-on labs at 6 venues over 5 days.
With well over 100 service and feature announcements—and innumerable best practices shared by AWS executives, customers, and partners—distilling highlights is a challenge. From a security perspective, three key themes emerged.
Turn data into actionable insights
Security teams are always looking for ways to increase visibility into their security posture and uncover patterns to make more informed decisions. However, as AWS Vice President of Data and Machine Learning, Swami Sivasubramanian, pointed out during his keynote, data often exists in silos; it isn’t always easy to analyze or visualize, which can make it hard to identify correlations that spark new ideas.
“Data is the genesis for modern invention.” – Swami Sivasubramanian, AWS VP of Data and Machine Learning
At AWS re:Invent, we launched new features and services that make it simpler for security teams to store and act on data. One such service is Amazon Security Lake, which brings together security data from cloud, on-premises, and custom sources in a purpose-built data lake stored in your account. The service, which is now in preview, automates the sourcing, aggregation, normalization, enrichment, and management of security-related data across an entire organization for more efficient storage and query performance. It empowers you to use the security analytics solutions of your choice, while retaining control and ownership of your security data.
Amazon Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), which AWS cofounded with a number of organizations in the cybersecurity industry. The OCSF helps standardize and combine security data from a wide range of security products and services, so that it can be shared and ingested by analytics tools. More than 37 AWS security partners have announced integrations with Amazon Security Lake, enhancing its ability to transform security data into a powerful engine that helps drive business decisions and reduce risk. With Amazon Security Lake, analysts and engineers can gain actionable insights from a broad range of security data and improve threat detection, investigation, and incident response processes.
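The value of OCSF is easiest to see in miniature: events from different tools get mapped into one shared shape so analytics can query them together. The sketch below is a loose approximation of that idea, assuming a hypothetical vendor log record; the field names are simplified stand-ins, not the full OCSF schema.

```python
# Illustrative sketch: normalizing a vendor-specific log record into a
# simplified OCSF-style event. Field names are a loose approximation of
# OCSF authentication-activity attributes, not the real schema.

def to_ocsf_style(raw: dict) -> dict:
    """Map a hypothetical login record into a flat, OCSF-inspired
    structure so events from different tools line up for analytics."""
    return {
        "class_name": "Authentication",   # OCSF groups events into classes
        "activity_name": "Logon",
        "time": raw["ts"],                # epoch milliseconds
        "severity": "High" if raw.get("failed_attempts", 0) > 5 else "Low",
        "actor": {"user": raw["user"]},
        "src_endpoint": {"ip": raw["source_ip"]},
        "status": "Failure" if raw.get("failed_attempts") else "Success",
    }

record = {"ts": 1669900000000, "user": "alice",
          "source_ip": "203.0.113.9", "failed_attempts": 7}
event = to_ocsf_style(record)
```

Once every source emits this shape, a single query can correlate failed logins across firewalls, identity providers, and databases alike.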
Strengthen security programs
According to Gartner, by 2026, at least 50% of C-Level executives will have performance requirements related to cybersecurity risk built into their employment contracts. Security is top of mind for organizations across the globe, and as AWS CISO CJ Moses emphasized during his leadership session, we are continuously building new capabilities to help our customers meet security, risk, and compliance goals.
In addition to Amazon Security Lake, several new AWS services announced during the conference are designed to make it simpler for builders and security teams to improve their security posture in multiple areas.
Identity and networking
Authorization is a key component of applications. Amazon Verified Permissions is a scalable, fine-grained permissions management and authorization service for custom applications that simplifies policy-based access for developers and centralizes access governance. The new service gives developers a simple-to-use policy and schema management system to define and manage authorization models. The policy-based authorization system that Amazon Verified Permissions offers can shorten development cycles by months, provide a consistent user experience across applications, and facilitate integrated auditing to support stringent compliance and regulatory requirements.
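The core idea behind policy-based access is that authorization rules live as data, separate from application code, and every decision defaults to deny. The toy policy format and helper below are invented for illustration; they are not the service's actual policy language.

```python
# Illustrative sketch of policy-based authorization: rules live as data,
# separate from application code. The policy format here is invented for
# the example, not the service's actual policy language.

POLICIES = [
    {"effect": "allow", "principal": "role:editor", "action": "document:edit"},
    {"effect": "allow", "principal": "role:viewer", "action": "document:read"},
]

def is_authorized(principal: str, action: str) -> bool:
    """Default-deny: permit only if an explicit allow policy matches."""
    return any(
        p["effect"] == "allow"
        and p["principal"] == principal
        and p["action"] == action
        for p in POLICIES
    )

decision = is_authorized("role:viewer", "document:edit")  # no matching allow
```

Because the rules are data, auditors can review access centrally and developers can change permissions without redeploying the application.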
Additional services that make it simpler to define authorization and service communication include Amazon VPC Lattice, an application-layer service that consistently connects, monitors, and secures communications between your services, and AWS Verified Access, which provides secure access to corporate applications without a virtual private network (VPN).
Threat detection and monitoring
Monitoring for malicious activity and anomalous behavior just got simpler. Amazon GuardDuty RDS Protection expands the threat detection capabilities of GuardDuty by using tailored machine learning (ML) models to detect suspicious logins to Amazon Aurora databases. You can enable the feature with a single click in the GuardDuty console, with no agents to manually deploy, no data sources to enable, and no permissions to configure. When RDS Protection detects a potentially suspicious or anomalous login attempt that indicates a threat to your database instance, GuardDuty generates a new finding with details about the potentially compromised database instance. You can view GuardDuty findings in AWS Security Hub, Amazon Detective (if enabled), and Amazon EventBridge, allowing for integration with existing security event management or workflow systems.
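Because findings flow through EventBridge, teams commonly route them by severity. The handler below is a hedged sketch of that pattern: the event shape is a simplified stand-in for the real GuardDuty finding format, and the severity thresholds are arbitrary choices for the example.

```python
# Hedged sketch: triaging GuardDuty-style findings delivered via
# EventBridge. The event shape and thresholds here are illustrative,
# not the exact GuardDuty finding format.

def triage_finding(event: dict) -> str:
    """Decide a response action from a finding's severity score.
    GuardDuty severities are numeric; the cutoffs below are example
    policy choices, not vendor recommendations."""
    severity = event.get("detail", {}).get("severity", 0.0)
    if severity >= 7.0:
        return "page-oncall"   # high: wake someone up
    if severity >= 4.0:
        return "open-ticket"   # medium: track and investigate
    return "log-only"          # low: keep for trend analysis

sample = {"detail": {"type": "AnomalousBehavior.SuccessfulLogin",
                     "severity": 5.0}}
action = triage_finding(sample)
```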
To bolster vulnerability management processes, Amazon Inspector now supports AWS Lambda functions, adding automated vulnerability assessments for serverless compute workloads. With this expanded capability, Amazon Inspector automatically discovers eligible Lambda functions and identifies software vulnerabilities in application package dependencies used in the Lambda function code. Actionable security findings are aggregated in the Amazon Inspector console, and pushed to Security Hub and EventBridge to automate workflows.
Data protection and privacy
The first step to protecting data is to find it. Amazon Macie now automatically discovers sensitive data, providing continual, cost-effective, organization-wide visibility into where sensitive data resides across your Amazon Simple Storage Service (Amazon S3) estate. With this new capability, Macie automatically and intelligently samples and analyzes objects across your S3 buckets, inspecting them for sensitive data such as personally identifiable information (PII), financial data, and AWS credentials. Macie then builds and maintains an interactive data map of your sensitive data in S3 across your accounts and Regions, and provides a sensitivity score for each bucket. This helps you identify and remediate data security risks without manual configuration and reduce monitoring and remediation costs.
Encryption is a critical tool for protecting data and building customer trust. The launch of the end-to-end encrypted enterprise communication service AWS Wickr offers advanced security and administrative controls that can help you protect sensitive messages and files from unauthorized access, while working to meet data retention requirements.
Management and governance
Maintaining compliance with regulatory, security, and operational best practices as you provision cloud resources is key. AWS Config rules, which evaluate the configuration of your resources, have now been extended to support proactive mode, so they can be incorporated into infrastructure-as-code continuous integration and continuous delivery (CI/CD) pipelines to help identify noncompliant resources before provisioning. This can significantly reduce time spent on remediation.
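The spirit of proactive evaluation is a pre-provisioning gate: check the intended configuration, and fail the pipeline before anything noncompliant exists. The sketch below captures that idea as a pure function; the property names are simplified stand-ins, not CloudFormation syntax or the actual Config API.

```python
# Illustrative pre-provisioning check in the spirit of proactive rules.
# Property names are simplified stand-ins, not CloudFormation syntax.

def check_bucket_template(template: dict) -> list[str]:
    """Flag noncompliant settings in a bucket-like template before it
    is provisioned; an empty list means the template passes."""
    problems = []
    if not template.get("encryption_enabled", False):
        problems.append("default encryption is disabled")
    if template.get("public_access", False):
        problems.append("public access is allowed")
    return problems

# A CI/CD step would fail the build if any problems are reported:
issues = check_bucket_template({"encryption_enabled": False,
                                "public_access": True})
```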
Managing the controls needed to meet your security objectives and comply with frameworks and standards can be challenging. To make it simpler, we launched comprehensive controls management with AWS Control Tower. You can use it to apply managed preventative, detective, and proactive controls to accounts and organizational units (OUs) by service, control objective, or compliance framework. You can also use AWS Control Tower to turn on Security Hub detective controls across accounts in an OU. This new set of features reduces the time that it takes to define and manage the controls required to meet specific objectives, such as supporting the principle of least privilege, restricting network access, and enforcing data encryption.
Do more with less
As organizations work through challenging macroeconomic conditions, security leaders are facing increased budgetary pressures. In his opening keynote, AWS CEO Adam Selipsky emphasized the effects of the pandemic, inflation, supply chain disruption, energy prices, and geopolitical events that continue to impact organizations.
Now more than ever, it is important to maintain your security posture despite resource constraints. Citing specific customer examples, Selipsky underscored how the AWS Cloud can help organizations move faster and more securely. By moving to the cloud, agricultural machinery manufacturer Agco reduced costs by 78% while increasing data retrieval speed, and multinational HVAC provider Carrier Global experienced a 40% reduction in the cost of running mission-critical ERP systems.
“If you’re looking to tighten your belt, the cloud is the place to do it.” – Adam Selipsky, AWS CEO
Security teams can do more with less by maximizing the value of existing controls, and bolstering security monitoring and analytics capabilities. Services and features announced during AWS re:Invent—including Amazon Security Lake, sensitive data discovery with Amazon Macie, support for Lambda functions in Amazon Inspector, Amazon GuardDuty RDS Protection, and more—can help you get more out of the cloud and address evolving challenges, no matter the economic climate.
Security is our top priority
AWS re:Invent featured many more highlights on a variety of topics, such as Amazon EventBridge Pipes and the pre-announcement of GuardDuty EKS Runtime protection, as well as Amazon CTO Dr. Werner Vogels’ keynote, and the security partnerships showcased on the Expo floor. It was a whirlwind week, but one thing is clear: AWS is working harder than ever to make our services better and to collaborate on solutions that ease the path to proactive security, so that you can focus on what matters most—your business.
Recog Release v3.0.3, which is available now, includes updated fingerprints for Zoho ManageEngine PAM360, Password Manager Pro, and Access Manager Plus; Atlassian Bitbucket Server; and Supervisord Supervisor. It also includes new fingerprints and a number of bug fixes, all of which are detailed below.
Recog is an open source recognition framework used to identify products, operating systems, and hardware through matching network probe data against its extensive fingerprint collection. Support for Recog is part of Rapid7’s ongoing commitment to open source initiatives.
Zoho ManageEngine PAM360, Password Manager Pro, and Access Manager Plus
Fingerprints for these three Zoho ManageEngine products were added shortly after Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2022-35405 to their Known Exploited Vulnerabilities (KEV) catalog on September 22nd, 2022. Favicon, HTML title, and HTTP server fingerprints were created for both PAM360 and Password Manager Pro, and favicon and HTML title fingerprints were created for Access Manager Plus. PAM360 version 5500 (and older) and Password Manager Pro version 12100 (and older) are both vulnerable to an unauthenticated remote code execution (RCE) vulnerability, and Access Manager Plus version 4302 (and older) is vulnerable to an authenticated remote code execution (RCE) vulnerability. In addition, Grant Willcox contributed the Metasploit Zoho Password Manager Pro XML-RPC Java Deserialization exploit module which is capable of exploiting the unauthenticated vulnerability via the XML-RPC interface in Password Manager Pro and PAM360 and attaining RCE as the NT AUTHORITY\SYSTEM user.
More recently, on January 4th, 2023, Zoho released details of a SQL injection vulnerability (CVE-2022-47523) in PAM360 version 5800 (and older), Password Manager Pro version 12200 (and older), and Access Manager Plus version 4308 (and older). From a quick analysis of internet scan data, there appear to be only about 76 Password Manager Pro and 21 PAM360 instances on the internet.
Atlassian Bitbucket Server
Favicon, HTML title, and HTTP cookie fingerprints for Atlassian Bitbucket Server were added shortly after our Emergent Threat Response for CVE-2022-36804 was published on September 20th, 2022 in response to the command injection vulnerability in multiple API endpoints of both Bitbucket Server and Data Center. An adversary with access to either a public repository or read permissions on a private repository can achieve remote code execution simply through a malicious HTTP request. Shelby Pace contributed the Metasploit Bitbucket Git Command Injection exploit module, which is capable of exploiting the unauthenticated command injection. Bitbucket Server and Data Center versions 7.6 prior to 7.6.17, 7.17 prior to 7.17.10, 7.21 prior to 7.21.4, 8.0 prior to 8.0.3, 8.1 prior to 8.1.3, 8.2 prior to 8.2.2, and 8.3 prior to 8.3.1 are vulnerable. From a quick analysis of internet scan data, there appear to be just under a thousand of these exposed on the internet.
Supervisord Supervisor
Favicon and HTML title fingerprints were added for anyone interested in locating unsupervised Supervisor instances on their networks. The web interface for the process control system allows users to restart or stop processes under the software’s control, and even tail the standard output and error streams. There might be some interesting information in those streams! From a quick analysis of internet scan data, there appear to be only about 165 instances on the internet.
It’s the holiday season when children all over the world cross their fingers in the hope that they don’t end up on a certain red-clad big man’s naughty list. Turns out, we at Rapid7 have a similar tradition, only we’re the ones making the list and there’s a whole lotta naughty going on (not like that, get your heads out of the gutter).
We’ve asked a few of our experts to share what in cybersecurity deserves to be on the naughty list, and what needs to be on the nice list. Some of these represent personal gripes, others are industry-wide, and still others are specific to certain aspects of what we do all day.
Obviously, we all lived through the many levels of Shell this year so we are taking that as the quintessential 2022 naughty entry. These are a few others that you may or may not have been tracking, but are worth thinking about as we put this year to bed.
Here, without further fanfare, is our non-exhaustive, thoroughly delightful, slightly deranged, 2022 Cybersecurity Naughty and Nice List. Enjoy.
The Naughty List
Virtual Private Nopes: I try, really hard, to take a charitable read on people’s motivations. So, normally, it takes a lot to get on my bad side. That said: I nominate the entire consumer VPN industry for this year’s Naughty List. This is based on a paper published by the University of Maryland titled, Investigating Influencer VPN Ads on YouTube, by Omer Akgul, Richard Roberts, Moses Namara, Dave Levin, and Michelle L. Mazurek.
Not to spoil the surprise, but the study shows that many consumer VPN influencer ads contain potentially misleading claims, including overpromises and exaggerations that could negatively influence viewers’ understanding of Internet safety. It also found that the ads’ presentation of information on complicated subjects of cryptography, networking, and cybersecurity in general is likely counterproductive and may make viewers resistant to learning true facts about these topics.
Naughty, naughty indeed. You can hear more about this on Security Nation, or if you’re feeling particularly ironic, on YouTube. – Tod Beardsley, Director of Research
When IoT Products Attack: There is a never-ending flood of cheap, white-labeled IoT goods available for consumers to purchase online. Many of these devices have little or no security. Worse, most of these products don’t even have vendors backing them when vulnerabilities are found. As a result, many of the issues will never be fixed.
As this pile of garbage continues to grow, we are left to wait for another Mirai-style botnet (or worse) to emerge and create havoc. – Deral Heiland, Principal Security Researcher, IoT
Ambulance Chasing in the Wake of the Uber breach: It is critical for cybersecurity vendors to react to cybersecurity events as quickly as possible and often in as close to real-time as we can get. From a marketing standpoint, this can be an opportunity to impart a timely, relevant message that showcases a security product in a positive light.
There’s nothing inherently wrong with that, but when vendors use it as an opportunity to tsk-tsk those who didn’t use their product they come off as unhelpful at best, and dangerously boastful at worst.
The Uber breach that hit headlines earlier this year is a good example: some of the most vocal vendors were themselves shown to be unable to stop the breach. Everyone should be proud of their products and their capabilities, but let’s stick to being helpful to the community rather than resorting to ambulance chasing and Monday morning quarterbacking. – Ryan Blanchard, Product Marketing Manager, InsightCloudSec
The Nice List
U.S. Government Agencies Pass New Cybersecurity Legislation: During 2022, the U.S. took some significant steps—in the form of regulation and legislation—to ensure proper disclosure of major cybersecurity incidents.
In March, President Biden signed new cybersecurity legislation mandating critical infrastructure operators report hacks to the Department of Homeland Security within 72 hours and within 24 hours of ransomware payments.
Additionally, the SEC voted to propose two new cybersecurity rules for publicly-traded companies. The first mandates reporting of material cybersecurity incidents in an 8-K form within four business days of the incident. The second requires companies disclose their policies for managing cybersecurity risks, including updates on previously reported material cybersecurity incidents.
In July, the House of Representatives passed two cybersecurity bills. The first requires the Federal Trade Commission to report cross-border complaints involving ransomware and other cybersecurity incidents. The second directs the Department of Energy to establish an energy cybersecurity university leadership program. – Ryan Blanchard, Product Marketing Manager, InsightCloudSec
Consumer Protections for IoT Devices: In October, the White House hosted a meeting with IoT industry leaders to start the process of developing an IoT Labeling system for consumers to help them identify products that meet a standard level of security.
Although this project will take time to complete, and the use of the labels will be voluntary for vendors, I do expect many vendors will embrace this labeling solution to help promote their products above their competitors. This project will be a major step forward for consumers, which will help them to make sound security decisions on what products to deploy in their homes. – Deral Heiland, Principal Security Researcher, IoT
Adventures in TOTP Token Extraction: I let backups for my phone lapse … for the entire pandemic. Oops. So, when my phone gave up the ghost, I lost the primary authentication device for 2FA (in addition to countless photos of my wife and me playing board games during lockdown). Oh no!
I was using a cloud-based TOTP token manager and was still authenticated and logged in on my desktop. So, “no problem,” says I, “I can just use the web UI to export these tokens to the new phone!” Well, not so fast—it turns out that it is super hard to grab these tokens and port them around. Which is infuriating.
So, there you have it. A bit of naughty, a touch of nice, something about TOTP tokens, this blog post has it all. Thank you from the entire Rapid7 team for being with us throughout this wild year!
The Finnish Transport and Communications Agency (Traficom) Cyber Security Centre published PiTuKri, which consists of 52 criteria that provide guidance when assessing the security of cloud service providers. The criteria are organized into the following 11 subdivisions:
Framework conditions
Security management
Personnel security
Physical security
Communications security
Identity and access management
Information system security
Encryption
Operations security
Transferability and compatibility
Change management and system development
It is our pleasure to announce the addition of 16 new services and two new Regions to our PiTuKri attestation scope. A few examples of the new security services included are:
AWS CloudShell – A browser-based shell that makes it simple to manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools to be productive.
Amazon HealthLake – A HIPAA-eligible service that offers healthcare and life sciences companies a chronological view of individual or patient population health data for query and analytics at scale.
AWS IoT SiteWise – A managed service that simplifies collecting, organizing, and analyzing industrial equipment data.
Amazon DevOps Guru – A service that uses machine learning to detect abnormal operating patterns to help you identify operational issues before they impact your customers.
The latest report covers the period from October 1, 2021 to September 30, 2022. It was issued by an independent third-party audit firm to assure customers that the AWS control environment is appropriately designed and implemented in accordance with PiTuKri requirements. This attestation demonstrates the AWS commitment to meet security expectations for cloud service providers set by Traficom.
Customers can find the full PiTuKri ISAE 3000 report on AWS Artifact. To learn more about the complete list of certified services and Regions, customers can also refer to AWS Compliance Programs and AWS Services in Scope for PiTuKri.
AWS strives to continuously bring new services into scope of its compliance programs to help customers meet their architectural and regulatory needs. Please reach out to your AWS account team for any questions about the PiTuKri report.
If you have feedback about this post, please submit it in the Comments section below. Want more AWS Security news? Follow us on Twitter.
Amazon Web Services (AWS) is pleased to announce the third issuance of the Swiss Financial Market Supervisory Authority (FINMA) International Standard on Assurance Engagements (ISAE) 3000 Type II attestation report. The scope of the report covers a total of 154 services and 24 global AWS Regions.
The latest FINMA ISAE 3000 Type II report covers the period from October 1, 2021, to September 30, 2022. AWS continues to assure Swiss financial industry customers that our control environment is capable of effectively addressing key operational, outsourcing, and business continuity management risks.
FINMA circulars
The report covers the core FINMA circulars regarding outsourcing arrangements to the cloud. FINMA circulars help Swiss-regulated financial institutions to understand the approaches FINMA takes when implementing due diligence, third-party management, and key technical and organizational controls for cloud outsourcing arrangements, particularly for material workloads.
The scope of the report covers the following requirements of the FINMA circulars:
2018/03 Outsourcing – Banks, insurance companies and selected financial institutions under FinIA
2008/21 Operational Risks – Banks – Appendix 3 Handling of Electronic Client Identifying Data (31.10.2019)
2013/03 Auditing – Information Technology (04.11.2020)
2008/10 Self-regulation as a minimum standard – Minimum Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013)
It is our pleasure to announce the addition of 16 services and two Regions to the FINMA ISAE 3000 Type II attestation scope. The following are a few examples of the additional security services in scope:
AWS CloudShell – A browser-based shell that makes it simple to manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools to be productive.
Amazon HealthLake – A HIPAA-eligible service that offers healthcare and life sciences companies a chronological view of individual or patient population health data for query and analytics at scale.
AWS IoT SiteWise – A managed service that simplifies collecting, organizing, and analyzing industrial equipment data.
Amazon DevOps Guru – A service that uses machine learning to detect abnormal operating patterns to help you identify operational issues before they impact your customers.
Customers can continue to reference the FINMA workbooks, which include detailed control mappings for each FINMA circular covered under this audit report, through AWS Artifact. Customers can also find the entire FINMA report on AWS Artifact. To learn more about the list of certified services and Regions, see AWS Compliance Programs and AWS Services in Scope for FINMA.
As always, AWS is committed to adding new services into our future FINMA program scope based on your architectural and regulatory needs. If you have questions about the FINMA report, contact your AWS account team.
If you have feedback about this post, please submit it in the Comments section below. Want more AWS Security news? Follow us on Twitter.
Cybersecurity is acronym-heavy to say the least. If you’re reading this, you already know. From CVE to FTP, we in IT love our abbreviations, FR FR. Truthfully though, it can be a bit much, and even the nerdiest among us miss a few. So, In Case You Missed It, here are 10 cybersecurity acronyms you should know IRL, err in 2023.
HUMINT
Peppermint on a sticky day? How dare you. HUMINT is short for Human Intelligence. This abbreviation refers to information collected by threat researchers from sources across the clear, deep and dark web. Real people doing real things, you might say. These folks are out there hunting down potential threats and stopping them before they occur. Pretty cool stuff, TBH.
CSPM
Cloud Security Posture Management tools include use cases for compliance assessment, operational monitoring, DevOps integrations, incident response, risk identification, and risk visualization. Good posture: so hot RN.
IAM
Not the guy with the green eggs, this IAM stands for Identity and Access Management. CSO Online says IAM is a “set of processes, policies, and tools for defining and managing the roles and access privileges of individual network entities (users and devices) to a variety of cloud and on-premises applications”. Green Eggs and Ham didn’t age well IMO, Sam was kind of a bully. JK JK.
XDR
AKA Extended Detection and Response. Forrester calls XDR the “evolution of endpoint detection and response”. Gartner says it’s integrating “multiple security products into a cohesive security operations system”. Essentially, XDR is about taking a holistic approach to more efficient, effective detection and response. It’s definitely not an Xtreme Dude Ranch. That’s just absurd.
XSPM
According to Hacker News, “Extended Security Posture Management is a multilayered process combining the capabilities of Attack Surface Management (ASM), Breach and Attack Simulation (BAS), Continuous Automated Red Teaming (CART), and Purple Teaming to continuously evaluate and score the infrastructure’s overall cyber resiliency.” Yes, that definition includes three additional acronyms. Plus, one of them is CART, SMH.
RASP
Runtime application self-protection tools can block malicious activity while an application is in production. If RASP detects a security event such as an attempt to run a shell, open a file, or call a database, it will automatically attempt to terminate that action, NBD.
MDR
Managed Detection and Response providers deliver technology and human expertise to perform threat hunting, monitoring, and response. The main benefit of MDR is that it helps organizations limit the impact of threats without the need for additional staffing. In other words, they are free to TCB instead of worrying about security stuff.
MSSP
A Managed Security Service Provider provides outsourced monitoring and management of security devices and systems. MSSPs deliver managed firewall, intrusion detection, virtual private network, vulnerability scanning, and other services. Oh BTW, sometimes MSSPs partner with MDR vendors to deliver services to their customers.
DAST
Dynamic Application Security Testing is the process of analyzing a web application to find vulnerabilities through simulated attacks. DAST is all about finding vulnerabilities in web applications and correcting them before they can be exploited by threat actors. A dastardly deed conducted with no ill will … if you will.
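The "simulated attack" part is concrete: a DAST tool sends attack-shaped inputs and watches how the running app responds. The toy below scans a local handler function instead of a live web app; the payload and handler are illustrative, with the handler deliberately echoing input unescaped, which is the pattern behind reflected XSS.

```python
# Toy DAST-style probe: scan a local handler instead of a live app.
# The naive handler echoes user input into HTML without escaping,
# which is the pattern behind reflected XSS.

def vulnerable_search(query: str) -> str:
    return f"<h1>Results for {query}</h1>"   # no escaping!

def probe_for_reflection(handler,
                         marker: str = "<script>dast()</script>") -> bool:
    """Send an attack-shaped payload and report whether it comes back
    unescaped -- the simplest dynamic signal of an XSS-prone endpoint."""
    response = handler(marker)
    return marker in response

found = probe_for_reflection(vulnerable_search)
```

A handler that escaped its input (e.g. with `html.escape`) would not reflect the marker, and the probe would come back clean.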
WAF
A Web Application Firewall is a type of firewall that filters, monitors, and blocks HTTP traffic to and from a web service. It is designed to prevent attacks exploiting a web application’s known vulnerabilities, such as SQL injection, cross-site scripting, file inclusion, and improper system configuration. Proper WAF definition there, zero Cardi B jokes. Those are NSFW.
Chestnuts roasting on an open fire, scammers nipping at your bank account… that might not be the carol you were expecting, but unfortunately it’s the frosty truth.
Most everyone has tons of shopping to do in preparation for holidays, whether they’re buying gifts, decorations, or tickets to visit loved ones. And with so many of these transactions happening online, all these shopping sprees add up to a potential goldmine for scammers.
Don’t let those grinches get you down. Fraud might be out in full force, but some simple cyber hygiene is all it takes to stay safe. In the spirit of the holiday season, we’ve made you a list—check it twice, and you’ll find out which online deals are naughty or nice.
1. All They Want for Christmas is Venmo
Not all payment methods are created equal—and scammers know this all too well. So if a seller is insisting you pay for those stocking stuffers with Zelle, gift cards, Dogecoin, or wire transfer, you should probably steer clear.
Peer-to-peer payment apps like Venmo, Zelle, or Cash App are incredibly handy, but they’re designed for paying your friends for your share of brunch, not for sending money to unknown online sellers. These apps offer you little to no recourse in the event of fraud, so stick to using them with close friends and family. No reputable online retailer will request payment through these apps.
Same goes for wire transfers. Wire transfers of money are irreversible, and next to untraceable to boot. So, they’re a popular choice for cybercriminals, and should be a huge red flag for holiday shoppers. Cryptocurrency is the favorite payment method of hackers worldwide for the same reasons; by design, cryptocurrency transactions are anonymous, untrackable, and impossible to reverse.
Gift cards might seem more at home at a lackluster White Elephant party than in a fraudster’s arsenal, but they’re used in online scams with surprising frequency as well. Some scammers offer to accept gift cards as payment—you just need to send them the card number and PIN. But, like all of the other types of payment above, gift cards can’t be tracked and offer no protection to fraud victims, and the fake sellers can quickly and easily convert the gift card’s contents into cash or items.
The bottom line: Stick to credit cards or digital wallets for anything you buy online this December. And of course, be sure to keep a close eye on your statements, so you can alert your credit card company of any transactions you didn’t make.
2. There Might Have Been Some Malware in That New Top Hat You Found
Right about now, online retailers are out in full force advertising their wares over social media and email—and scammers are right there with them. That email you got about a deep discount on PS5s might not actually be from Amazon, and the Instagram ad offering Taylor Swift tickets should definitely be looked at with suspicion. Hackers know all too well that many people are in a hurry to finish up their holiday shopping, or are desperately hunting for a good deal on that perfect gift, and they’re all too ready to take advantage.
Scammers will frequently put up advertisements or send messages posing as companies you know and trust to get you to let your guard down. The goal, as in all phishing scams, is to get you to click on a link you shouldn’t. Just by clicking, you could be unknowingly downloading malware onto your computer.
Alternatively, these links may send you to a fake online storefront designed to look like a well-known legitimate retailer. These storefronts generally offer popular holiday items or travel fares at irresistible prices. When you make a purchase, the “retailer” might grab your credit card details or other personal information. Or, they might ask for payment in one of the insecure methods discussed above, and never deliver you the goods.
So, don’t let holiday stress (or an excess of eggnog) get in the way of your better judgment. Be sure to hover over links to check where they actually lead before clicking—or better yet, open up a new tab and navigate to the retailer’s site directly. Make sure you thoroughly vet any seller before making purchases, checking for reviews and feedback. And remember: Any deal that seems too good to be true probably is.
3. Last Christmas, I Gave You My SSN. The Very Next Day, You Stole My Identity
Even if you’ve made all your holiday purchases safely, you’re not out of the woods quite yet. There’s a new type of scam on the rise that you need to watch out for: fake delivery notifications.
At this time of year, just about everyone is waiting on one package or another, so some scammers send fake texts claiming that your package has been delayed, you missed its delivery, or something along those lines. And, of course, they’ll give you a link to click. Once you do, scammers will often ask for sensitive information—such as your credit card number, SSN, or even just login credentials to an online retailer—so that they can “find” your lost package. Alternatively, they may claim that you owe an extra fee before your package can be delivered.
Luckily, once you’re aware of this scam, it’s also fairly easy to avoid. Take note of tracking information for any online orders you make, so if you get any messages about problems with delivery, you can independently track your package and see what’s really going on. And know that delivery companies like FedEx or UPS will never ask you for sensitive personal information to track a package.
Cyber scams may be coming to town, but that doesn’t mean you have to be a victim. Just a few extra precautions—using safer payment methods, vetting sellers, and avoiding suspicious links—will keep you safe. Deck the halls with good cyber hygiene and make sure you know when those jingle bells should actually be alarm bells.
Rapid7 is excited to announce the release of version 0.6.7 of Velociraptor – an advanced, open-source digital forensics and incident response (DFIR) tool that enhances visibility into your organization’s endpoints. This release has been in development and testing for several months and features significant contributions from our community. We are thrilled to share its powerful new features and improvements.
NTFS Parser changes
In this release, the NTFS parser was improved significantly. The main areas of development focused on better support for NTFS compressed and sparse files as well as improved path reconstruction.
In NTFS, there is a Master File Table (MFT) containing a record for each file on the filesystem. The MFT entry describes a file by attaching several attributes to it. Some of these are $FILE_NAME attributes representing the names of the file.
In NTFS, a file may have multiple names. Normally, files have a long file name and a short filename. Each $FILE_NAME record also contains a reference to the parent MFT entry of its directory.
When Velociraptor parses the MFT, it attempts to reconstruct the full path of each entry by traversing to the parent MFT entry, recovering its name, and so on. Previously, Velociraptor used one of the $FILE_NAME records (usually the long file name) to determine the parent MFT entry. However, this is not strictly correct, as each $FILE_NAME record can reference a different parent directory. This surprising property of NTFS is known as hard links.
You can play with this property using the fsutil program. The following adds a hard link, Y.txt, to the file at C:\Users\Administrator\downloads\X.txt:
C:\> fsutil hardlink create c:\Users\Administrator\Y.txt c:\Users\Administrator\downloads\X.txt
Hardlink created for c:\Users\Administrator\Y.txt <<===>> c:\Users\Administrator\downloads\X.txt
The same file in NTFS can exist in multiple directories at the same time by use of hard links. The filesystem simply adds a new $FILE_NAME entry to the MFT entry for the file pointing at another parent directory MFT entry.
Therefore, when scanning the MFT, Velociraptor needs to report all possible directories in which each MFT entry can exist – there can be many such directories, since each directory can have its own hard links.
As a rule, an MFT Entry can represent many files in different directories!
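As a rough illustration, the traversal can be sketched in Python. This is a toy data model, not Velociraptor’s actual parser: each entry carries one $FILE_NAME record per name, and the path of an entry is the path of each possible parent plus that name.

```python
# Toy sketch (not Velociraptor internals): reconstruct every possible path for
# an MFT entry by following each $FILE_NAME attribute's parent reference.
# Because of hard links, one entry can resolve to several directories.

def all_paths(mft, entry_id):
    """Return every full path an MFT entry can be known by."""
    if entry_id == 5:  # MFT entry 5 is conventionally the volume root
        return [""]
    paths = []
    entry = mft[entry_id]
    for fn in entry["file_names"]:          # one $FILE_NAME per name/hard link
        for parent_path in all_paths(mft, fn["parent"]):
            paths.append(parent_path + "/" + fn["name"])
    return paths

# Toy MFT: X.txt has two $FILE_NAME attributes pointing at different parents.
mft = {
    5:  {"file_names": []},                                  # root directory
    40: {"file_names": [{"name": "Users", "parent": 5}]},
    41: {"file_names": [{"name": "Administrator", "parent": 40}]},
    42: {"file_names": [{"name": "downloads", "parent": 41}]},
    99: {"file_names": [{"name": "X.txt", "parent": 42},
                        {"name": "Y.txt", "parent": 41}]},   # hard link
}

# One MFT entry, two full paths: .../downloads/X.txt and .../Administrator/Y.txt
print(all_paths(mft, 99))
```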
An example of the notepad MFT entry with its many hard links
Reassembling paths from MFT entries
When Velociraptor attempts to reassemble the path from an unallocated MFT entry, it might encounter an error where the parent MFT entry indicated has already been used for some other file or directory.
In previous versions, Velociraptor simply reported these parents as potential parts of the full path, since – for unallocated entries – the path reconstruction is best effort. This led to confusion among users with often nonsensical paths reported for unallocated entries.
In the latest release, Velociraptor is more strict in reporting parents of unallocated MFT entries, also ensuring that the MFT sequence numbers match. If the parent’s MFT entry sequence number does not match, Velociraptor’s path reconstruction indicates this as an error path.
Unallocated MFT entries may have errors reconstructing a full path
In the above example, the parent’s MFT entry has a sequence number of 5, but we need a sequence number of 4 to match it. Therefore, the parent’s MFT entry is rejected and instead we report the error as the path.
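A minimal sketch of that check (assumed field names, for illustration only): a $FILE_NAME parent reference embeds the sequence number it expects, and a mismatch against the live parent entry means the parent was reused after this entry was deleted.

```python
# Illustrative sequence-number check, not Velociraptor's actual code or error
# format. A stale reference to a reused parent is rejected and reported as an
# error path instead of producing a nonsensical directory name.

def resolve_parent(mft, parent_ref):
    parent = mft[parent_ref["entry"]]
    if parent["sequence"] != parent_ref["sequence"]:
        # The parent entry was reused after this unallocated entry was deleted.
        return "<Err: need seq %d, parent has seq %d>" % (
            parent_ref["sequence"], parent["sequence"])
    return parent["name"]

stale_ref = {"entry": 30, "sequence": 4}            # expects sequence 4
mft = {30: {"name": "downloads", "sequence": 5}}    # entry reused: now 5
print(resolve_parent(mft, stale_ref))               # reports an error path
```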
The offline collector and encryption
Velociraptor’s offline collector is a pre-configured Velociraptor binary designed to be a single-shot acquisition tool. You can build an Offline Collector by following the documentation. The Offline Collector does not require access to the server; instead, it simply collects the specified artifacts into a zip file (which can subsequently be uploaded to the cloud or shared with DFIR experts for further analysis).
Previously, Velociraptor only supported encrypting the zip archive using a password. This is problematic because the password had to be embedded inside the collector configuration and so could be viewed by anyone with access to the binary.
In the latest release, Velociraptor supports asymmetric encryption to protect the acquisition zip file. There are two asymmetric schemes: X509 encryption and PGP encryption. Having asymmetric encryption improves security greatly because only the public key needs to be included in the collector configuration. Dumping the configuration from the collection is not sufficient to be able to decrypt the collected data – the corresponding private key is also required!
This is extremely important for forensic collections since these will often contain sensitive and PII information.
Using this new feature is also extremely easy: One simply selects the X509 encryption scheme during the configuration of the offline collector in the GUI.
Configuring the offline collector for encryption
You can specify any X509 certificate here, but if you do not specify any, Velociraptor will use the server’s X509 certificate instead.
Velociraptor will generate a random password to encrypt the zip file, and then encrypt this password using the X509 certificate.
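The hybrid scheme can be sketched with the third-party Python cryptography package. This only illustrates the idea; a throwaway self-signed certificate stands in for the server’s certificate, and Velociraptor’s real container format differs.

```python
# Sketch of hybrid (asymmetric) protection: a random symmetric password
# protects the data, and only that password is encrypted with the X509
# certificate's public key. Assumes the `cryptography` package is installed.
import datetime
import os

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Throwaway self-signed certificate standing in for the server certificate.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "VelociraptorServer")])
now = datetime.datetime.utcnow()
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256()))

# Encrypt a random zip password with the certificate's public key...
password = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted = cert.public_key().encrypt(password, oaep)

# ...which only the holder of the corresponding private key can recover.
assert key.decrypt(encrypted, oaep) == password
```

Dumping the embedded certificate from a collector binary yields only the public half, which is why possession of the collector is not enough to decrypt the data.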
The resulting encrypted container
Since the ZIP standard does not encrypt the file names, Velociraptor embeds a second zip called data.zip inside the container. The above illustrates the encrypted data zip file and the metadata file that describes the encrypted password.
Because the password used to encrypt the container is not known to the user and must be derived from the X509 private key, we must use Velociraptor itself to decrypt the container (i.e. we cannot use something like 7zip).
Decrypting encrypted containers with the server’s private key
Importing offline collections
Originally, the offline collector feature was designed as a way to collect the exact same VQL artifacts that Velociraptor allows in the usual client-server model in situations where installing the Velociraptor client was not possible. The same artifacts can be collected into a zip file.
As Velociraptor’s post processing capabilities improved (using notebooks and server side VQL to enrich the analysis), people naturally wanted to use Velociraptor to post process offline collections too.
Previously, Velociraptor did have the Server.Utils.ImportCollection artifact to allow an offline collection to be imported into Velociraptor. But this did not work well because the offline collector simply did not include enough information in the zip file to sufficiently emulate the GUI’s collection views.
In the recent release, the offline collector was updated to add more detailed information to the collection zip, allowing it to be easily imported.
Exported zip archives now contain more information
Exporting and importing collections
Velociraptor has previously had the ability to export collections and hunts from the GUI directly, mainly so they can be processed by external tools.
But there was no way to import those collections back into the GUI. We just never imagined this would be a useful feature!
Recently, Eric Capuano from ReconInfosec shared some data from an exercise using Velociraptor. People wanted to import into their own Velociraptor installations so they could run notebook post processing on the data themselves.
Our community has spoken though! This is a useful feature!
In the latest release, exported files from the GUI use the same container format as the offline collector, and therefore can be seamlessly imported into a different Velociraptor installation.
Handling of sparse files
When collecting files from the endpoint using the NTFS accessor, we quite often encounter sparse files. These are files with large unallocated holes in them. The most extreme sparse file is the USN Journal.
Acquiring the USN journal
In the above example, the USN journal size is reported to be 1.3 GB but in reality only about 40 MB is occupied on disk. When collecting this file, Velociraptor only collects the real data and marks the file as sparse. The zip file will contain an index file which specifies how to reassemble the file into its original form.
While Velociraptor stores the file internally in an efficient way, other tools may expect an exported file to be properly padded out (so that file offsets are correct).
Velociraptor now allows the user the choice of exporting an individual file in a padded form (with sparse regions padded). This can also be applied to the entire zip export in the GUI.
For very large sparse files, it makes no sense to pad so much data out – some USN journal files are in the TB region. So, Velociraptor implements a limit on padding of very sparse files.
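The reassembly on export can be sketched like so. The index layout and the size limit here are hypothetical; Velociraptor’s actual formats and thresholds differ.

```python
# Hypothetical sketch: the container stores only the occupied runs of a sparse
# file plus an index of their offsets; export pads the holes with zeroes, up
# to an assumed cap so extremely sparse files (e.g. a TB-range USN journal)
# are not blown up on disk.

MAX_PADDED_SIZE = 100 * 1024 ** 2  # illustrative limit only

def reassemble(runs, total_size):
    """runs: list of (offset, data) pairs for occupied regions."""
    if total_size > MAX_PADDED_SIZE:
        raise ValueError("file too sparse to pad out fully")
    out = bytearray(total_size)          # zero-filled, i.e. the sparse holes
    for offset, data in runs:
        out[offset:offset + len(data)] = data
    return bytes(out)

# A 1 KB "file" with only two small occupied runs.
runs = [(0, b"HEADER"), (512, b"RECORD")]
image = reassemble(runs, 1024)
assert image[0:6] == b"HEADER" and image[512:518] == b"RECORD"
assert image[6:512] == bytes(506)        # hole padded with zeroes
```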
Parsing user registry hives
Many Velociraptor artifacts simply parse keys and values from the registry to detect indicators. Velociraptor offers two methods of accessing the registry:
Using the Windows APIs
Employing the built-in raw registry parser to parse the hive files
While the first method is very intuitive and easy to use, it is often problematic. Using the APIs requires the user hive to be mounted, and normally a user hive is only mounted while that user is logged in. Therefore, querying registry keys in user hives only works for users who are logged in at the time of the check; users who are not logged in have no mounted hive and are missed entirely.
To illustrate this problem consider the Windows.Registry.Sysinternals.Eulacheck artifact which checks the keys in HKEY_USERS\*\Software\Sysinternals\* for the Sysinternals EULA value.
In previous versions of Velociraptor, this artifact simply used the Windows API to check these keys/values and completely missed any users that were not logged in.
While this issue is known, users previously had to employ complex VQL to customize the query so it could search the raw NTUSER.DAT files in each user registry. This is more difficult to maintain since it requires two separate types of artifact for the same indicator.
With the advent of Velociraptor’s dead disk capabilities, it is possible to run a VQL query in a “virtualized” context consisting of a remapped environment. The end result is that the same VQL query can be used to run on raw registry hives. It is now trivial to apply the same generic registry artifact to a raw registry parse.
Remapping the raw registry hive to a regular registry artifact
All that is required to add raw registry capabilities to any registry artifact is:
Import the Windows.Registry.NTUser artifact
Use the MapRawRegistryHives helper function from that artifact to set up the mappings automatically
Call the original registry query using the registry accessor. In the background, this will be remapped to the raw registry accessor automatically
Conclusion
If you’re interested in the new features, take Velociraptor for a spin by downloading it from our release page. It’s available for free on GitHub under an open-source license.
As always, please file bugs on the GitHub issue tracker or submit questions to our mailing list by emailing [email protected]. You can also chat with us directly on our Discord server.
Learn more about Velociraptor by visiting any of our web and social media channels below:
Is there too much to do with too little talent? If your SOC hasn’t been running smoothly in a while, there are likely multiple reasons why. As the popular slang phrase goes these days, it’s because of “all the reasons”: budget, talent churn, addressing alerts all over the place; you might also work in an extremely high-risk, high-attack-frequency industry like healthcare or media.
Because of “all these reasons” – and possibly a few more – you find yourself with a heavy load to secure. A load that possibly never seems to get lighter. Even when you land some truly talented security personnel and begin the onboarding process, it often seems like a huge question mark whether they’ll even be around in a year. And maybe the current cybersecurity skills gap is here to stay.
But that doesn’t mean there’s nothing you can do about it. It doesn’t mean you can’t be powerful in the face of that heavy load and attack frequency. By shoring up your current roster and strategizing how your talent could best partner with a managed detection and response (MDR) services provider, you might not have to simply settle for weathering the talent gap. You may find you’re saving money, creating new efficiencies, and activating a superpower that can help you lift the load like never before.
The hidden benefit
Let’s say retention isn’t a huge issue in your organization. As a manager, you try to stay upbeat, reinforce daily positivity, and show your gratitude for a job well done. If that’s truly the case, then more likely than not, people enjoy working for you and will probably stick around if they’re paid well and fairly relative to the industry average. So why not shore up that culture and confidence by:
Lightening the load: Remove the need to deal with most false positives and frequent alerts. If your people really do like working in your organization – even in the midst of a challenging talent gap – they enjoy their work/life balance. Challenging that balance by demanding longer hours to turn your employees into glorified button pushers will send the wrong message – and could send them packing to other jobs.
Preventing burnout: Cybersecurity professionals have to begin somewhere, and likely in an entry-level position they’ll be dealing with lots of alerts and repetitive tasks while they earn valuable experience. But when faced with the increasing stress of compounding and repetitive incidents – whether false or not – experienced workers are more likely to think about ditching their current gig for something they consider better. Nearly 30% of respondents in a recent ThreatConnect survey cited major stress as a top reason they would leave a job.
Creating space to innovate: Everyone must deal with tedious alerts in some fashion throughout a career. However, skilled individuals should have the space to take on larger and more creative challenges versus something that can most likely be automated or handled by a skilled services partner. That’s why an MDR partner can be a force multiplier, providing value to your security program by freeing your analysts to do more so they can better protect the business.
Retention just might be the reason
The last point above is one that’s more than fair to make. Freeing your individual team members to work on projects that drive the more macro view and mission of the security organization can be that force multiplier that drives high rates of retention. And that’s great!
The subsequent challenge, then, lies in finding that partner that can be an extension of your security team, a detection and response specialist that can field the alerts and focus on ridding your organization of repetitive tasks – increasing the retention rate and creating space to innovate. Ensuring a great connection between your team and your service-provider-of-choice is critical. The provider will essentially become part of your team, so that relationship is just as important as the interpersonal dynamics of your in-house teams.
A provider with a squad of in-house incident response experts can help to speed identification of alerts and remediation of vulnerabilities. If you can partner with a provider who handles breach response 100% in-house – as opposed to subcontracting it – this can help to form closer bonds between your in-house team and that of the provider so you can more powerfully contain and eradicate threats.
Resources to help
To learn more about the process of researching and choosing a potential MDR vendor, check out the new Rapid7 eBook, 13 Tips for Overcoming the Cybersecurity Talent Shortage. It’s a deeper dive into the current cybersecurity skills gap and features steps you can take to address your own talent shortages or better partner with a services provider/partner. You can also read the previous entry in this blog series here.
The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decisions to use cloud services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.
The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s Protected B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach). As of November 2022, 132 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the CCCS Medium cloud security profile. Meeting the CCCS Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and reassesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.
To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.
If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.
ENS is Spain’s National Security Framework. The ENS certification is regulated under the Spanish Royal Decree 3/2010 and is a compulsory requirement for central government customers in Spain. ENS establishes security standards that apply to government agencies and public organizations in Spain, and service providers on which Spanish public services depend. Updating and achieving this certification every year demonstrates our ongoing commitment to meeting the heightened expectations for cloud service providers set forth by the Spanish government.
We are happy to announce the addition of 17 services to the scope of our ENS High certification, for a new total of 166 services in scope. The certification now covers 25 Regions. Some of the additional security services in scope for ENS High include the following:
AWS CloudShell – a browser-based shell that makes it simpler to securely manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools for productivity.
AWS Cloud9 – a cloud-based integrated development environment (IDE) that you can use to write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.
Amazon DevOps Guru – a service that uses machine learning to detect abnormal operating patterns so that you can identify operational issues before they impact your customers.
Amazon HealthLake – a HIPAA-eligible service that offers healthcare and life sciences companies a complete view of individual or patient population health data for query and analytics at scale.
AWS IoT SiteWise – a managed service that simplifies collecting, organizing, and analyzing industrial equipment data.
AWS achievement of the ENS High certification is verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at its highest level.
As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.
If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.