Tag Archives: Security Strategy

ISO 27001 Certification: What it is and why it matters

Post Syndicated from Drew Burton original https://blog.rapid7.com/2022/12/06/iso-27001-certification-what-it-is-and-why-it-matters/

Did you know that Rapid7's information security management system (ISMS) is ISO 27001 certified? This certification validates that our security strategy and processes meet very high standards. It underscores our commitment to corporate and customer data security.

What is ISO 27001?

ISO 27001 is an internationally recognized standard for information security management published by the International Organization for Standardization (ISO). It details requirements for establishing, implementing, maintaining, and continually improving an ISMS.

ISO 27001 is focused on risk management and taking a holistic approach to security. Unlike some standards and frameworks, ISO 27001 does not require the implementation of specific technical controls. Instead, it provides a framework and checklist of controls that can be used to develop and maintain a comprehensive ISMS.

It is one of more than ten published standards in the ISO 27000 family, and the only one among them that an organization can be certified against.

To become ISO 27001 certified, an organization must:

  • Systematically examine its information security risks, taking account of the threats, vulnerabilities, and impacts.
  • Design and implement a coherent and comprehensive suite of information security controls and risk avoidance measures.
  • Adopt an overarching management process that ensures the information security controls continue to meet the organization’s information security needs over time.

Then, the ISMS must be audited by a third party. This rigorous process determines whether the organization has implemented the applicable best practices defined in the standard. Certified organizations must undergo annual audits to maintain compliance. Rapid7's ISMS was audited by Schellman.

Why does ISO 27001 certification matter?

Rapid7 is committed to helping our customers reduce risk to their organizations. ISO 27001 certification is one way that we demonstrate that commitment. Certification is not a legal requirement; rather, it is proof that an organization's security strategy and processes meet very high standards. Rapid7 believes that maintaining the highest standards of information security for ourselves and our clients is essential.

As noted above, ISO 27001 provides a framework to meet those standards. That framework is based on three guiding principles to help organizations build their security strategy and develop effective policies and controls: Confidentiality, Integrity, and Availability.

  • Confidentiality means that data should be kept private, secure, and accessible only by authorized individuals.
  • Integrity requires that organizations ensure consistent, accurate, reliable, and secure data.
  • Availability means systems, applications, and data are available and accessible to satisfy business needs.

Rapid7’s security strategy reflects these principles. Our platform and products are designed to fit securely into your environment and your data is accessible when you need it—with full visibility into where it lives, who has access to it, and how it is used. When you partner with Rapid7, your data stays safe. Period.

For more information about the policies and procedures Rapid7 has in place to keep our data, platform, and products secure, visit the Trust section of our website.

Grey Time: The Hidden Cost of Incident Response

Post Syndicated from Joshua Harr original https://blog.rapid7.com/2022/09/13/grey-time-the-hidden-cost-of-incident-response/

The time cost of incident response for security teams may be greater – and more complex – than we’ve been assuming. To see that in action, let’s look at a hypothetical scenario that should feel familiar to most cybersecurity analysts.

An everyday story

A security engineer, Casey, is tuning a SIEM to detect a specific threat that poses an increased risk to their organization. The project has been allotted a set amount of time for completion. The research and testing Casey must do to get the query and tuning correct, accurate, and effective are essential to the business, and this is only one of many projects on their plate. Casey is getting into the research and beginning to understand the attack well enough to start drafting the alert logic, and then…

An employee forwards an email that they believe to be phishy. Casey looks at the email and confirms it requires further investigation. First, though, the engineer must walk the user through sending the email as an attachment so its headers and other details can be examined for artifacts of a malicious email. After that, the engineer will do their assessment and respond appropriately to the event.

Now, 25 minutes have passed. Casey returns to focus on tuning the alert but needs to go back over the research a bit more to confirm where they left off. Another 10 minutes pass, and they are back where they were when the phishing alert came in. Now they are gathering the right information for the project and trying to get the right people involved, then…

An EDR alert comes in. It is from a director’s laptop. This begins to take priority, as the director needs this laptop for their presentation to a customer, and they leave for the airport in 3 hours. Casey steps away to analyze the alert, eradicate the malware, and begin a scan across the organization to determine if the malware hash value is seen elsewhere. 30 minutes go by, because an incident report needs to be added to the ticket. Casey sits back down and, for another 20 minutes, must recalibrate their thoughts to focus on the task at hand.

Grey time

Scenarios like this are happening in almost every organization today. High-risk security projects are delayed because fires pop up and need to be responded to. In the scenario we've just laid out, the engineer has lost one hour and 25 minutes of project work to incidents. Those incidents carry risk if not dealt with promptly, but the project the engineer keeps being pulled away from carries a high risk of impact if not completed.

Cal Newport, a computer science professor at Georgetown University, famously explained in his seminal book “Deep Work” that it takes each person a different amount of time to pivot from one task to another. It's how our brains work. I'm calling that pivot time “grey time.” Grey time is not normally added into the time it takes to respond to incidents, but we should change that.

Whether it takes 30 seconds, 5 minutes, or 15 minutes to respond to an incident, you have to add 5 to 25 minutes of grey time for pivoting back to the work previously being performed. The longer the break from the task, the longer it may take to get back into the project fully. Grey time is just as detrimental to an organization as not responding to the incidents, and there are quite a few statistics out there that help quantify the cost of distractions and interruptions.

Incidents can be distractions or interruptions. The fact is that some events security professionals respond to are benign: they never lead to actioning an incident response plan, yet they still prevent prioritized work from being completed.

Here is where Security Orchestration, Automation, and Response (SOAR) comes into play. Those manual tasks security professionals are doing that take time away from risk-informed projects to secure the business can be automated. If tasks cannot be automated fully, we can at least automate the process of pivoting from tool to tool. SOAR can eliminate the manual notation in a ticketing system and the documentation of an incident report. It can also reduce time to respond and help eliminate grey time.

Grey time reduction through SOAR

In an industry where alert fatigue and employee attrition are pervasive issues, the need is high for SOAR’s extensive automation capabilities. Think about the tasks in your organization that you would automate if you could, because they are taking up more time than necessary. We can do some quick math to find your organization’s annual cost of manual response for each of those tasks, including grey time.

  1. First, think of an action your team performs repeatedly.
  2. Assign a “task minutes” (tm) value, which is approximately how long it takes to do that task.
  3. Then, estimate the “task instances per week” (ti) value.
  4. Multiply by 52 to find your “task minutes per year.”
  5. Divide by 60 to find your “task hours per year.”
  6. Multiply by your average hourly employee rate for the team that works on that task to find your annual cost of manual response.

I encourage you to do this for each playbook or process you have.

  • Task minutes (tm) x task instances per week (ti) = total task minutes per week (ttw)
  • ttw x 52 = total task minutes per year (tty)
  • tty / 60 = total hours per year (ty)
  • ty x hourly employee rate (hr) = cost of manual response
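The formula above can be sketched as a small function. The numbers in the example call are illustrative placeholders, not figures from this post.

```python
# Annual cost of one manual task, following the formula above.

def annual_cost_of_manual_response(tm, ti, hr):
    """tm: task minutes, ti: task instances per week, hr: hourly rate."""
    ttw = tm * ti        # total task minutes per week
    tty = ttw * 52       # total task minutes per year
    ty = tty / 60        # total task hours per year
    return ty * hr       # annual cost of manual response

# A hypothetical 15-minute task performed 20 times a week at a $40/hour rate:
cost = annual_cost_of_manual_response(tm=15, ti=20, hr=40.0)
print(f"${cost:,.2f} per year")  # $10,400.00 per year
```

Run it once per playbook or process, then sum the results to see the total cost of manual response across your program.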

What we haven’t done here is add in the grey time. On average, it takes about 23 minutes and 15 seconds to regain focus on a task after a distraction. So, with that in mind, let’s round out this post by quantifying our story from earlier.

Let’s say that Casey, our engineer, takes 30 minutes for each phishing email, and malware compromises take 15 minutes to contain and eradicate. Both incident reports take about 20 minutes. Let’s also say that the organization sees about 16 phishing instances per week (ti), and phishing with the reporting takes 50 minutes. Let’s add in the grey time at 20 minutes to make it 70 minutes (tm).

  • 70 x 16 = 1,120 minutes (ttw)
  • 1,120 x 52 = 58,240 minutes (tty)
  • 58,240 / 60 = 970.7 hours (ty)

Using the national average salary of an entry-level incident and intrusion analyst at $88,226, we can break that down to an hourly rate of $42.41. From there, 970.7 (ty) x 42.41 (hr) = $41,167.39.

That’s just over $41K spent on manual responses to phishing each year. What about the malware? I’ll shorthand it because I believe you get the picture. Let’s say malware incidents happen about 10 times a week.

  • 25 min + 20 min = 45 min (tm)
  • 45 x 10 = 450 (ttw)
  • 450 x 52 = 23,400 (tty)
  • 23,400 / 60 = 390 (ty)
  • 390 x $42.41 = $16,539.90
  • $16,539.90 + $41,167.39 = $57,707.29
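For what it's worth, the two worked examples above can be replayed in a few lines. Rounding hours to one decimal before applying the hourly rate, as the arithmetic above does, reproduces the same figures.

```python
# Replaying the post's two worked examples.
RATE = 42.41  # hourly rate derived from the $88,226 salary

def cost(tm, ti):
    hours = round(tm * ti * 52 / 60, 1)  # task hours per year
    return round(hours * RATE, 2)

phishing = cost(70, 16)               # 970.7 hours -> $41,167.39
malware = cost(45, 10)                # 390.0 hours -> $16,539.90
total = round(phishing + malware, 2)  # $57,707.29
```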

That’s nearly a full-time employee salary for just two manual processes!

SOAR past grey time

SOAR is becoming increasingly needed within our information security programs. Not only are we wasting time on manual processes that could be automated, but we are adding grey time to our workday and decreasing the time we have to work on high-priority projects that are informed by business risk and necessary to protect revenue and business operations. With SOAR, you can refocus your efforts on risk-relevant tasks and limit manual task interruptions. You can also reduce grey time and increase the effectiveness of your security program. With SOAR, it’s all blue skies – and no grey time.


Building Cybersecurity KPIs for Business Leaders and Stakeholders

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/08/05/building-cybersecurity-kpis-for-business-leaders-and-stakeholders/

Building Cybersecurity KPIs for Business Leaders and Stakeholders

In the final part of our “Hackers ‘re Gonna Hack” series, we’re discussing how to bring parts one and two of operationalising cybersecurity together into an overall strategy for your organisation, measured by key performance indicators (KPIs).

In part one, we spoke about the problem, which is the increasing cost (and risk) of cybersecurity, and proposed some solutions for making your budget go further.

In part two, we spoke about the foundational components of a target operating model and what that could look like for your business. In the third installment of our webinar series, we summarise the foundational elements required to keep pace with the changing threat landscape. In this talk, Jason Hart, Rapid7’s Chief Technology Officer for EMEA, discussed how to move from your current operating model to a target operating model, one that is understood by all and underpinned by KPIs the entire business can follow.

First, determine your current operating model

With senior stakeholders looking to you to help them understand risk and exposure, now is the time to highlight what you’re trying to achieve through your cybersecurity efforts. However, the reality is that most organisations have no granular visibility of their current operating model or even their approach to cybersecurity. A significant amount of money is likely being spent on deployment of technology across the organisation, which in turn garners a large amount of complex data. Yet, for the most part, security leaders find it hard to translate that data into something meaningful for their business leaders to understand.

In creating cyber KPIs, it’s important they are formed as part of a continual assessment of cyber maturity within your organisation. That means determining what business functions would have the most significant impact if they were compromised. Once you have discovered these functions, you can identify your essential data and locations, creating and attaching KPIs to the core six foundations we spoke of in part two. This will allow you to assess your level of maturity to determine your current operating model and begin setting KPIs to understand where you need to go to reach your target operating model.

Focus on 3 priority foundations

However, we all know cybersecurity is a wide-ranging discipline, making it a complex challenge that requires a holistic approach. It’s not possible to simply focus on one aspect and expect to be successful. We advise that, to begin with, security leaders consider three priority foundations: culture, measurement, and accountability.

For cybersecurity to have a positive and successful impact, we need to change our stakeholders’ mindsets to make it part of organisational culture. Everyone needs to understand its importance and why it’s necessary. We can’t simply assume everyone knows what is essential and that they’ll act. Instead, we need to measure our progress towards improving cybersecurity and hold people accountable for their efforts.

Translate cybersecurity problems into business problems

Cybersecurity problems are fundamentally business problems. That’s why it’s essential to translate them into business terms by creating KPIs for measuring the effectiveness of your cyber initiatives.

These KPIs can help you and your stakeholders understand where your organisation needs improvements, so you can develop a plan everyone understands. The core components that drive the effectiveness of a KPI begin with defining the target, the owner, and accountability. The target is the business function or system that needs improvement. The owner is responsible for implementing the programme or meeting the KPI. Accountability defines who will review the data regularly to ensure progress towards achieving desired results.
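As a sketch, the target/owner/accountability structure could be captured as a simple record per KPI. The field names and example values here are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class CyberKPI:
    name: str       # what is measured
    target: str     # business function or system that needs improvement
    owner: str      # who implements the programme or meets the KPI
    reviewer: str   # who regularly reviews the data (accountability)
    goal: float     # desired value
    current: float  # latest measured value

    def on_track(self) -> bool:
        # Assumes higher is better; invert for metrics like phishing open rate.
        return self.current >= self.goal

kpi = CyberKPI(name="Phishing report rate", target="All staff",
               owner="Security operations lead", reviewer="CISO",
               goal=0.75, current=0.62)
print(kpi.on_track())  # False
```

Keeping the owner and reviewer on the record itself makes the accountability question answerable at a glance when the KPI is off track.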

40% of our webinar’s audience said they don’t currently use cybersecurity KPIs.

Additionally, when developing KPIs, it’s crucial to think about what information you’ll need to collect for them to be effective in helping you achieve your goals. KPIs are great, but to be successful, they need data. And once data is being fed into the KPIs, as security leaders, we need to translate the “technical stuff” – that is, talk about it in a way the business understands.

Remember, it’s about people, processes, and technology. Technology provides the data; processes are the glue that brings it together and makes cybersecurity part of the business process. And the people element is about taking the organisation on a journey. We need to present our KPIs to stakeholders, both technical and non-technical, in a way the whole organisation will understand.

Share and build the journey

As a security leader, you need to drive your company’s cybersecurity strategy and deploy it across all levels of your organisation, from the boardroom to the front lines of customer experience. However, we know that the approach we’re taking today isn’t working, as highlighted by the significant amounts of money we’re trying to throw at the problem.

So we need to take a different approach, going from a current to a target operating model, underpinned by KPIs that are further underpinned by data to take you in the direction you need to go. Not only will it reduce your organisational risk, but it will reduce your operational costs, too. But more importantly, it will translate what’s a very technical industry into a way everyone in your organisation will understand. It’s about a journey.

To find out what tools, processes, methodologies, and KPIs are needed to articulate key cybersecurity goals and objectives while illustrating ROI and keeping stakeholders accountable, watch part three of “Cybersecurity Series: Hackers ‘re Gonna Hack.”


Deploying a SOAR Tool Doesn’t Have to Be Hard: I’ve Done It Twice

Post Syndicated from Ryan Fried original https://blog.rapid7.com/2022/07/21/deploying-a-soar-tool-doesnt-have-to-be-hard-ive-done-it-twice/

As the senior information security engineer at Brooks, an international running shoe and apparel company, I can appreciate the challenge of launching a security orchestration, automation, and response (SOAR) tool for the first time. I’ve done it at two different companies, so I’ll share some lessons learned and examples of how we got over some speed bumps and past friction points. I’ll also describe the key steps that helped us create a solid SOAR program.

At Brooks we selected Rapid7’s InsightConnect (ICON) as our security automation tool after a thorough product review. I was familiar with ICON because I had used it at a previous company. There are other SOAR tools out there, but InsightConnect is my preferred option based on my experience, its integrations, support, and Rapid7’s track record of innovation in SOAR. InsightConnect is embedded in everything we do now. We use it to slash analyst time spent on manual, repetitive tasks and to streamline our incident response and vulnerability management processes.

When you’re starting out with SOAR, there are two important things you need to put in place.

  • One is getting buy-in from your active directory (AD) team on the automation process and the role they need to play. At Brooks, we have yearly goals that are broken down into quarters, so getting it onto their quarterly goals as part of our overall SOAR goal was really important. This also applies to other areas of the IT and security organizations.
  • The second is getting all the integrations set up within the first 30 to 60 days. It’s critical because your automation tool is only as good as the integrations you have deployed. Maybe 50% to 60% of them fall under IT security, but the other 30% or 40% are still pretty important, given how dependent security teams are on other organizations and their systems. So, getting buy-in from the teams that own those systems and setting up all the integrations are key.

Start with collaboration and build trust

A successful SOAR program requires trust and collaboration with your internal partners – essentially, engineering and operations and the team that sets up your active directory domain – because they help set up the integrations that the security automations depend on. You need to develop that trust because IT teams often hesitate when it comes to automation.

In conversations with these teams, let them know you won’t be completely automating things like blocking websites or deleting users. In addition, stress that almost everything being done will require human interaction and oversight. We’re just enriching and accelerating many of the processes we already have in place. Therefore, it will free up their time in addition to ours because it’s accomplishing things that they do for us already. And remember, we have the ability to see if something happened that may have been caused by the SOAR tool, so it’s automation combined with human decision-making.

For example, say something stops working. The team asks you: “Hey, what’s changed?” With ICON up and running, you can search within seconds to see, for example, what firewall changes have happened within the last 24 hours. What logins have occurred? Are there any user account lockouts? I can search that in seconds. Before, it used to take me 15 to 30 minutes to get back to them with a response. Not anymore. That’s what I call fast troubleshooting.

Meet with your security analysts and explain the workflows

Right from the beginning, it’s important to meet with your security analysts and explain the initial workflows you’ve created. Then, get them thinking about the top five alerts that happen most often and consume a lot of their time, and what information they need from those alerts. For instance, with two-factor authentication logs, the questions might be, “What’s the device name? Who’s the user’s manager? What’s their location?” Then, you can work in the SOAR tool to get that information for them. This will help them see the benefit firsthand.

This approach helps with analyst retention because the automation becomes the platform glue for all of your other tools. It also reduces the time your analysts have to spend on repetitive drudge work. Now, they’re able to give more confident answers if something shows up in the environment, and they can focus on more creative work.

Dedicate a resource to SOAR

I believe it’s important to have one person dedicated to the SOAR project at least half-time for the first six months. This is where teams can come up short. When the staff and time commitment is there, the process quickly expands beyond simple tasks. Then you’re thinking, “What else can I automate? What additional workflows can I pick up from the Rapid7 workflow marketplace and customize for our own use?”

Take advantage of the Rapid7 Extensions Library

The good news is you don’t need to build workflows (playbooks) from scratch. The Rapid7 Extensions Library contains hundreds of workflows that you can use as a core foundation for your needs. Then you can tweak the last 15% to 20% to make the workflow fit even better. These pre-built workflows help you hit the ground running. Think of them not as ready-to-go tools, but more as workflow ideas and curated best practices. The first time I used InsightConnect, I used the phishing workflow and started seeing value in less than two weeks.

Implementing a security automation tool within a company’s network environment can be a challenge if you don’t come at it the right way. I know because I’ve been there. But Rapid7’s InsightConnect makes it easier by enabling almost anything you can imagine. With a SOAR solution, your analysts will spend less time on drudge work and more time optimizing your security environment. These are real benefits I’ve seen firsthand at Brooks. You can have them as well by following this simple approach. Best of luck.


How to Build and Enable a Cyber Target Operating Model

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/07/08/how-to-build-and-enable-a-cyber-target-operating-model/

Cybersecurity is complex and ever-changing. Organisations should be able to evaluate their capabilities and identify areas where improvement is needed.

In the webinar “Foundational Components to Enable a Cyber Target Operating Model” (part two of our Cybersecurity Series), Jason Hart, Chief Technology Officer, EMEA, explained the journey to a cyber target operating model. Building a cybersecurity program begins with understanding your business context. Hart explains how organisations can use this information to map out their cyber risk profile and identify areas for improvement.

Organisations require an integrated approach to manage all aspects of their cyber risk holistically and efficiently. They need to be able to manage their information security program as part of their overall risk management strategy to address both internal and external cyber threats effectively.

Identifying priority areas to begin the cyber target operating model journey

You should first determine what data is most important to protect, where it resides, and who has access to it. Once you’ve pinned down these areas, you can identify each responsible business function to create a list of priorities. We suggest mapping out:

  • All the types of data within your organisation
  • All locations where the data resides, including cloud, database, virtual machine, desktops, and servers
  • All the people that have access to the data and its locations
  • The business function associated with each area

Once you have identified the most recurring business functions, you can list your priority areas. Only 12% of our webinar audience said they were confident in understanding their organisation’s types of data.

Foundations to identify risk, protection, detection, response, and recovery

To start operationalising cybersecurity within a targeted area, we first set the maturity of each foundation. A strong foundation will help ensure all systems are protected from attacks and emerging threats. People play a critical role in providing protection and cyber resilience. They should be aware of potential risks so they can take appropriate actions to protect themselves and their business function.

1. Culture

A set of values shared by everyone in an organisation determines how people think and approach cybersecurity. Your culture should emphasise, reinforce, and drive behaviour to create a resilient workforce.

Every security awareness program should, at minimum, communicate security policy requirements to staff. Tracking employee policy acknowledgements ensures your workforce is aware of the policy and helps you meet compliance requirements.

A quick response can reduce damages from an attack. Security awareness training should teach your workforce how to self-report incidents, malicious files, or phishing emails. This metric will prove you have safeguards in place. Tailor security awareness training to employees’ roles and functions to measure the effectiveness of each department.

2. Measurement

Measuring the ability to identify, protect, detect, respond, and recover from cybersecurity risks and threats enables a robust operating model. The best approach requires an understanding of what your most significant risks are. Consider analysing the following:

  • Phishing rate: A reduction in the phishing rate over time reflects increased awareness of security threats and the effectiveness of awareness training. Leverage a phishing simulation to document the open rates per business function to track phishing risks.
  • The number of security breaches: Track and record the number of new incidents and breaches every month. Measure a monthly percentage increase or decrease.
  • Mean time to detect (MTTD): Calculate how long it takes your team to become aware of indicators of compromise and other security threats. To calculate MTTD, take the sum of the time elapsed between the start of each incident and its detection, and divide it by the number of incidents.
  • Patching cadence: Determine how long it takes to implement application security patches or mitigate high-risk CVE-listed vulnerabilities.
  • Mean time to recovery (MTTR): Take the sum of downtime for a given period and divide it by the number of incidents. For example, if you had 20 minutes of downtime caused by two different events over two days, your MTTR is 20 divided by two, equalling 10 minutes.
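As a rough sketch of the MTTD and MTTR calculations above (assuming MTTD is measured from incident start to detection, and MTTR from recorded downtime), the arithmetic looks like this. The timestamps and downtime figures are invented examples.

```python
from datetime import datetime

# (incident start, detected) pairs -- invented example data
incidents = [
    (datetime(2022, 7, 1, 9, 0), datetime(2022, 7, 1, 9, 30)),   # 30 min to detect
    (datetime(2022, 7, 2, 14, 0), datetime(2022, 7, 2, 14, 10)), # 10 min to detect
]

# MTTD: mean minutes from incident start to detection
mttd = sum((d - s).total_seconds() / 60 for s, d in incidents) / len(incidents)

# MTTR: total downtime divided by number of incidents,
# e.g. 20 minutes of downtime across two events -> 10 minutes
downtime_minutes = [12, 8]
mttr = sum(downtime_minutes) / len(downtime_minutes)

print(mttd, mttr)  # 20.0 10.0
```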

3. Accountability

Accountability is the security goal that requires the actions of an entity to be traced uniquely to that entity. It supports non-repudiation, deterrence, fault isolation, intrusion detection and prevention, after-action recovery, and legal action.

The quality of your incident response plan determines how quickly tasks are assigned to and picked up by different business functions. Calculate the mean time between a business function becoming aware of a cyber attack and its response. Additionally, calculate the mean time to resolve a cyber attack once a business function has become aware of it.

Also, consider recording how internal stakeholders perform with awareness or other security program efforts to track the effectiveness of training.

4. Process

Processes are critical to implementing an effective strategy and help maintain and support operationalising cybersecurity.

Track the month-over-month percentage change in the number of risks identified across the business. Record the risks accepted by stakeholders and vendors each month, and hold regular information security forums between business functions to review progress. It’s also wise to document meeting notes and actions for compliance and internal reference.

5. Resources

Ownership of cybersecurity across the business builds the knowledge needed to manage, maintain, and operate it.

When determining the effectiveness of resources, analyse what levels of training you give different levels of stakeholders. For example, training for administrators will differ from training targeted at executives.

Calculate the engagement levels of input and feedback from previous awareness training and record positive and negative feedback from all stakeholders. Ensure that different parts of the business have the required skill level and knowledge within the business function’s scope. Use a skills matrix aligned to security domains to uncover stakeholders’ hidden knowledge or skill gaps.

6. Automation

The automation of security tasks includes administrative duties, incident detection, response, and risk identification.

Consider implementing automation in vulnerability management processes both internally and externally to the business. Additionally, detect intrusion attempts and malicious actions that try to breach your networks. And finally, automate patch management actions on all assets within scope by assessing the number of patches deployed per month for each environment (e.g., cloud).

A journey that delivers outcomes

A cyber target operating model is a unique approach that provides defensibility, detectability, and accountability. The model is based on the idea that you can’t protect what you don’t know, and it aims to provide a holistic view of your organisation’s security posture. By identifying the most critical business functions and defining a process for each foundation, you can develop your cyber maturity over time.

To get the maximum benefit from Cybersecurity Series: Hackers ‘re Gonna Hack, watch Part One: Operationalising Cybersecurity to benchmark your existing maturity against the six foundational components. Watch Part Two: Foundational Components to Enable a Cyber Target Operating Model on demand, or pre-register for Part Three: Cybersecurity KPIs to Track and Share with Your Board to begin mapping against your priority areas. Attendees will receive a complete list of cybersecurity KPIs that align with the maturity level of your organisation.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

4 Strategies to Help Your Cybersecurity Budget Work Harder

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/06/17/4-strategies-to-help-your-cybersecurity-budget-work-harder/

4 Strategies to Help Your Cybersecurity Budget Work Harder

The digital economy is being disrupted by data. An estimated 79 zettabytes of data were created and consumed in 2021 – a staggering amount that is reshaping how we do business. But as the volume and value of data increase, so does the motivation for hackers to steal it. As such, cybersecurity is a growing concern for organisations across all industries, and budget requests are increasing as a result.

But if we’re spending more, why are organisations still getting hacked at an increasing rate?

In the first webinar of Cybersecurity Series: Hackers ‘re Gonna Hack, Jason Hart, Chief Technology Officer, EMEA, Rapid7, shared his experience on why executives need to reconsider their current operating model and ensure their cybersecurity budgets are working as hard as possible.

84% of our webinar audience agreed that doubling their cybersecurity budget would not halve the risk or impact for their business.

Cybersecurity departments are finding it extremely challenging to justify increases to their budget when they are not seen as directly contributing to revenue. There was also a time when cyber insurance was regarded as a safeguard and magic wand to protect us from risks. But now, these providers are placing more onus on organisations to ensure preventative measures are in place, including risk assessment, controls, and cybersecurity operations.

In an ever-evolving landscape, it is essential to take a step back and consider how you can improve your approach. The key question remains, “How do you do more with less?” You can’t protect everything – you need to understand what matters most and be able to manage, mitigate, and transfer risks by working with a range of stakeholders throughout your organisation. Here are four strategies that can help.

1. Embrace the evolution of profit and loss for cybersecurity

A profit-and-loss framework for cybersecurity enables organisations to identify their current level of risk, prioritise their efforts based on those risks, and then set benchmarks for improvements over time. The goal is to create an environment where you can proactively manage your cybersecurity risks rather than reactively mitigate them after they’ve occurred.

61% of our audience agreed they need to approach cybersecurity from a profit-and-loss perspective.

2. Become situation-aware

Awareness is the ability to look at all the information available, recognise what’s important, and act accordingly. It’s a skill that can be learned, practised, and improved over time.

You can’t fix what you don’t know, so it’s essential to have a clear understanding of the risks in your organisation and those that might arise in the future. We believe there are three levels of awareness:

  • Situation awareness: When an organisation understands the critical (people, data and process) and operational elements for executing information security strategy.
  • Situation ignorance: When organisations assume everything is OK without considering the impact of people, data, and processes. They may be implementing security controls and awareness training, but there is no straightforward process. The strategy does not align with risk reduction and mitigation, and budgets continue to increase.
  • Situation arrogance: Organisations that continue to spend huge amounts of budget, while still getting compromised and breached. They might consider people, data, and process, but they fail to act.

57% of our audience believed they were situation-aware, 31% said they were situation-ignorant, and 11% felt their organisations were situation-arrogant.

Try to identify your organisation’s cyber maturity to make improvements. To test impact and likelihood, ask your peers – in the event of a breach, what data would you be most concerned about if hackers applied ransomware to it? To test risk versus control effectiveness, consider where that data is located. When understanding impact and level of risk, find out what business functions would be affected.

3. Adapt or become irrelevant

Cybersecurity operations should be tailored to your organisation’s unique needs; there’s no one-size-fits-all approach. The move away from traditional operation models to a more targeted one requires a strong foundation for transformation and change. This includes:

  • Culture
  • Process
  • Measurement
  • Resources
  • Accountability
  • Automation

Only 27% of our audience believed they had the foundations of a targeted operating model to carry over to cybersecurity.

4. Implement protection-level agreements

Eradicating a critical vulnerability might require a reboot, patch management work, or bringing systems down. This effort can be hard to assign a value to, but it will inevitably increase your budget.

For example, suppose that reducing a critical vulnerability costs the business an average of £1 million per year. But what if we set up a protection-level agreement (PLA) so that any critical vulnerabilities are eradicated and managed within 30 days? That would reduce operational costs to approximately £250,000 per year.

But what if you are hacked on day 25? That isn't a control failure – it's the result of a business decision that has been agreed upon. PLAs enable you to track and monitor threat activity so the business and leadership team can understand why you were breached. The approach also highlights gaps in your foundation, enabling you to address them before they become serious problems. For example, it might highlight potential challenges in handoff, process, or accountability. Additionally, a PLA is a language your stakeholders understand.
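The PLA arithmetic above can be tracked with something as simple as the following sketch; the CVE IDs, dates, and the 30-day window are illustrative.

```python
from datetime import date

PLA_DAYS = 30  # agreed window to eradicate critical vulnerabilities (assumed)

# Hypothetical findings: when each critical vulnerability was detected and fixed.
findings = [
    {"cve": "CVE-2022-0001", "detected": date(2022, 5, 1), "remediated": date(2022, 5, 20)},
    {"cve": "CVE-2022-0002", "detected": date(2022, 5, 3), "remediated": date(2022, 6, 15)},
]

def pla_report(items, pla_days=PLA_DAYS):
    """Flag findings whose remediation exceeded the PLA window."""
    report = []
    for f in items:
        days = (f["remediated"] - f["detected"]).days
        report.append({"cve": f["cve"], "days": days, "within_pla": days <= pla_days})
    return report

for row in pla_report(findings):
    print(row)
```

A report like this is exactly the artifact that lets leadership see whether a breach reflects a control failure or an accepted business decision.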

Everyone is on the same journey

Each stakeholder in your organisation is at a different stage of their journey. They have different expectations about how cybersecurity will impact them or their department. They also have different levels of technical knowledge. When planning communications, consider these differences to get them on board with your vision, working with them to ensure everyone’s expectations can be met.

Register for Part Two of Cybersecurity Series: Hackers 're Gonna Hack to find out more about getting your executive team on board. Jason Hart, Chief Technology Officer, EMEA, Rapid7, will show you how to implement new ideas to build your target operating model to drive effectiveness and change.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

How to Strategically Scale Vendor Management and Supply Chain Security

Post Syndicated from AJ Debole original https://blog.rapid7.com/2022/04/26/how-to-strategically-scale-vendor-management-and-supply-chain-security/

How to Strategically Scale Vendor Management and Supply Chain Security

This post is co-authored by Collin Huber

Recent security events — particularly the threat actor activity from the Lapsus$ group, Spring4Shell, and various new supply-chain attacks — have the security community on high alert. Security professionals and network defenders around the world are wondering what we can do to make the organizations we serve less likely to be featured in an article as the most recently compromised company.

In this post, we’ll articulate some simple changes we can all make in the near future to provide more impactful security guidance and controls to decrease risk in our environments.

Maintain good cyber hygiene

Here are some basic steps that organizations can take to ensure their security posture is in good health and risks are at a manageable level.

1.  Review privileged user activity for anomalies

Take this opportunity to review logs of privileged user activity. Additionally, review instances of changed passwords, as well as any other unexpected activity, and interview the end user to help determine the authenticity of any change. Take into consideration the types of endpoints used across your network, as well as expected actions and any changes to privileges (e.g. privilege escalation).

2. Enforce use of multifactor authentication

Has multifactor authentication (MFA) deployment stalled at your firm? This is an excellent opportunity to revisit deployment of these initiatives. Use of MFA reduces the potential for compromise in a significant number of instances. There are several options for deployment of MFA. Hardware-based MFA methods, such as FIDO tokens, are typically the strongest, and numerous options offer user-friendly ways to use MFA — for example, from a smartphone. Ensure that employees and third parties are trained not to accept unexpected prompts to approve a connection.

3. Understand vendor risks

Does your acquisition process consider the security posture of the vendor in question? Based on the use case for the vendor and the business need, consider the security controls you require to maintain the integrity of your environment. Additionally, review available security reports to identify security controls to investigate further. If a security incident has occurred, consider the mitigating controls that were missing for that vendor. Depending on the vendor's response and their ability to implement those security controls, determine whether this should influence purchase decisions or contract renewal.

4. Review monitoring and alerts

Review system logs for other critical systems, including those with high volumes of data. Consider reviewing systems that may not store, process, or transmit sensitive data but could have considerable vulnerabilities. Depending on the characteristics of these systems and their mitigating controls, it may be appropriate to prioritize patching, implement additional mitigating controls, and even consider additional alerting.

Always make sure to act as soon as you can. It's better to enact incident response (IR) plans and later de-escalate than not to act at all.

Build a more secure supply chain

Risks are inherent in the software supply chain, but there are some strategies that can help you ensure your vendors are as secure as possible. Here are three key concepts to consider implementing.

1. Enumerate edge connection points between internal and vendor environments

Every organization has ingress and egress points with various external applications and service providers. When new services or vendors are procured, access control lists (ACLs) are updated to accommodate the new data streams — which presents an opportunity to record simple commands for shutting those streams down in the event of a vendor compromise.

Early stages of an incident are often daunting, frustrating, and confusing for all parties involved. Empowering information security (IS) and information technology (IT) teams to have these commands ahead of time decreases the guesswork that needs to be done to create them when an event occurs. This frees up resources to perform other critical elements of your IR plan as appropriate.

One of the most critical elements of incident response is containment. Many vendors will immediately disable external connections when an attack is discovered, but relying on an external party to act in the best interest of your organization is a challenging position for any security professional. If your organization has a list of external connections open to the impacted vendor, creating templates or files to easily cut and paste commands to cut off the connection is an easy step in the planning phase of incident response. These commands can be approved for dispatch by senior leadership and immediately put in place to ensure whatever nefarious behavior occurring on the vendor’s network cannot pass into your environment.
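A minimal sketch of the template idea, assuming a simple per-vendor inventory of destination IPs: the iptables-style command is illustrative and should be swapped for your own firewall's syntax, and the vendor names and addresses below are hypothetical (the IPs are documentation ranges).

```python
# Hypothetical inventory of per-vendor network connections. The command
# template is iptables-style for illustration -- substitute the syntax of
# whatever firewall or cloud security group your environment actually uses.
vendor_connections = {
    "acme-saas": ["203.0.113.10", "203.0.113.11"],
    "widget-co": ["198.51.100.7"],
}

BLOCK_TEMPLATE = "iptables -A OUTPUT -d {ip} -j DROP"

def containment_commands(vendor):
    """Return ready-to-review block commands for one vendor's connections."""
    return [BLOCK_TEMPLATE.format(ip=ip) for ip in vendor_connections.get(vendor, [])]

for cmd in containment_commands("acme-saas"):
    print(cmd)
```

Generating the commands ahead of time, and having leadership pre-approve their dispatch, is what turns containment from guesswork into a copy-and-paste step.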

An additional benefit of enumerating and memorializing these commands is that teams can practice or review them during tabletop exercises and annual updates of the IR plan. If your organization does not have this information prepared right now, you have a great opportunity to collaborate with your IS and IT teams to improve your preparedness for a vendor compromise.

Vendor compromises can result in service outages which may have an operational impact on your organization. When your organization is considering ways to mitigate potential risks associated with outages and other supply chain issues, review your business continuity plan to ensure it has the appropriate coverage and provides right-sized guidance for resiliency. It may not make business sense to have alternatives for every system or process, so memorialize accepted risks in a Plan of Action and Milestones (POAM) and/or your Risk Register to record your rationale and demonstrate due diligence.

2. Maintain a vendor inventory with key POCs and SLAs

Having a centralized repository of vendors with key points of contact (POCs) for the account and service-level agreements (SLAs) relevant to the business relationship is an invaluable asset in the event of a breach or attack. The repository enables rapid communication with the appropriate parties at the vendor to open and maintain a clear line of communication, so you can share updates and get critical questions answered in a timely fashion. Having SLAs related to system downtime and system support is also instrumental to ensure the vendor is furnishing the agreed-upon services as promised.
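A vendor repository of this kind doesn't need to be elaborate to be useful during an incident; here is a minimal sketch, with hypothetical vendors, POCs, and SLA fields.

```python
from dataclasses import dataclass, field

# A minimal vendor-inventory record; the fields and values are illustrative.
@dataclass
class Vendor:
    name: str
    poc_name: str
    poc_email: str
    sla_uptime_pct: float            # e.g. 99.9
    sla_support_response_hours: int  # how fast the vendor must respond
    services: list = field(default_factory=list)

inventory = [
    Vendor("acme-saas", "J. Doe", "jdoe@example.com", 99.9, 4, ["crm"]),
    Vendor("widget-co", "A. Smith", "asmith@example.com", 99.5, 24, ["billing"]),
]

def lookup(name):
    """Find a vendor record quickly during an incident."""
    return next((v for v in inventory if v.name == name), None)

print(lookup("acme-saas").poc_email)
```

The point is speed: when a vendor breach hits the news, the POC and the SLA terms should be one lookup away, not buried in a contract PDF.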

3. Prepare templates to communicate to customers and other appropriate parties

Finally, set up templates for communications about what your team is doing to protect the environment and answer any high-level questions in the event of a security incident. For these documents, it is best to work with legal departments and senior leadership to ensure the amount of information provided and the manner in which it is disclosed is appropriate.

  • Internal communication: Have a formatted memo to easily address some key elements of what is occurring to keep staff apprised of the situation. You may want to include remarks indicating an investigation is underway, your internal environment is being monitored, relevant impacts staff may see, who to contact if external parties have questions, and reiterate how to report unusual device behavior to your HelpDesk or security team.
  • External communication: Prepared statements for the press regarding the investigation or severity of the breach, as appropriate.
  • Regulatory notices: Work with legal teams to templatize regulatory notifications to ensure the right data is easily provided by technical teams to be shared in an easy-to-update format.

Complex software supply chains introduce a wide range of vulnerabilities into our environments – but with these strategic steps in place, you can limit the impacts of security incidents and keep risk to a minimum in your third-party vendor relationships.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

8 Tips for Securing Networks When Time Is Scarce

Post Syndicated from Erick Galinkin original https://blog.rapid7.com/2022/03/22/8-tips-for-securing-networks-when-time-is-scarce/

8 Tips for Securing Networks When Time Is Scarce

“At this particular mobile army hospital, we’re not concerned with the ultimate reconstruction of the patient. We only care about getting the kid out of here alive enough for someone else to put on the fine touches. We work fast and we’re not dainty, because a lot of these kids who can stand 2 hours on the table just can’t stand one second more. We try to play par surgery on this course. Par is a live patient.” – Hawkeye, M*A*S*H

Recently, CISA released their Shields Up guidance around reducing the likelihood and impact of a cyber intrusion in response to increased risk around the Russia-Ukraine conflict. This week, the White House echoed those sentiments and released a statement about potential impact to Western companies from Russian threat actors. The White House guidance also included a fact sheet identifying urgent steps to take.

Given the urgency of these warnings, many information security teams find themselves scrambling to prioritize mitigation actions and protect their networks. We may not have time to make our networks less flat, patch all the vulnerabilities, set up a backup plan, encrypt all the data at rest, and practice our incident response scenarios before disaster strikes. To that end, we’ve put together 8 tips for “emergency field security” that defenders can take right now to protect themselves.

1. Familiarize yourself with CISA’s KEV, and prioritize those patches

CISA’s Known Exploited Vulnerabilities (KEV) catalog enumerates vulnerabilities that are, as the name implies, known to be exploited in the wild. This should be your first stop for patch remediation.

These vulns are known to be weaponized and effective — thus, they're likely to be exploited if your organization is targeted and attackers find one of them exposed in your environment. CISA regularly updates this catalog, so it's important to subscribe to their update notices and prioritize patching vulnerabilities included in future releases.
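One way to operationalize this is to cross-reference each asset's known CVEs against the KEV catalog. The sketch below uses a hard-coded sample so it is self-contained; in practice you would pull the catalog from CISA's published JSON feed, and the asset names and CVE lists here are hypothetical.

```python
# Cross-reference asset CVEs against CISA's KEV catalog. A tiny hard-coded
# sample stands in for the real catalog, which CISA publishes as a JSON feed.
kev_catalog = {"CVE-2021-44228", "CVE-2022-22965"}  # sample KEV CVE IDs

assets = {
    "web-01": ["CVE-2021-44228", "CVE-2020-9999"],
    "db-01":  ["CVE-2019-1234"],
}

def kev_priorities(asset_cves, kev):
    """Return {asset: [CVEs present in the KEV catalog]} -- patch these first."""
    return {
        asset: sorted(set(cves) & kev)
        for asset, cves in asset_cves.items()
        if set(cves) & kev
    }

print(kev_priorities(assets, kev_catalog))
# {'web-01': ['CVE-2021-44228']}
```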

2. Keep an eye on egress

Systems, especially those that serve customers or live in a DMZ, are going to see tons of inbound requests – probably too many to keep track of. On the other hand, those systems are going to initiate very few outbound requests, and those are the ones that are far more likely to be command and control.

If you’re conducting hunting, look for signs that the calls may be coming from inside your network. Start keeping track of the outbound requests, and implement a default deny-all outbound rule with exceptions for the known-good domains. This is especially important for cloud environments, as they tend to be dynamic and suffer from “policy drift” far more than internal environments.

3. Review your Active Directory groups

Now is the perfect time to review Active Directory group memberships and permissions. Making sure that users are granted access to only the minimum set of assets required to do their jobs is critical to making life hard for attackers.

Ideally, even your most privileged users should have regular accounts that they use for the majority of their job, logging into administrator accounts only when it’s absolutely necessary to complete a task. This way, it’s much easier to track privileged users and spot anomalous behavior for global or domain administrators. Consider using tools such as Bloodhound to get a handle on existing group membership and permissions structure.

4. Don’t laugh off LOL

Living off the land (LOL) is a technique in which threat actors use legitimate system tools in attacks. These tools are frequently installed by default and used by systems administrators to do their jobs. That means they’re often ignored or even explicitly allowed by antivirus and endpoint protection software.

You can help protect systems against LOL attacks by configuring logging for PowerShell and adding recommended block rules for these binaries where they aren't necessary. Refer to the regularly updated (though not comprehensive, as this is a constantly evolving space) list of these at LOLBAS.
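As a rough sketch of screening for LOL activity, the snippet below checks process-execution log entries against a small, hypothetical subset of LOLBin names; the authoritative, regularly updated list is the LOLBAS project's, and the hosts and log fields here are invented.

```python
# Screen process-execution logs against a small, hypothetical subset of
# living-off-the-land binaries. Use the full LOLBAS list in practice.
LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe"}

process_log = [
    {"host": "ws-17", "image": "C:\\Windows\\System32\\certutil.exe", "user": "jdoe"},
    {"host": "ws-02", "image": "C:\\Windows\\System32\\notepad.exe",  "user": "asmith"},
]

def lolbin_hits(log, lolbins=LOLBINS):
    """Flag executions of known living-off-the-land binaries for review."""
    return [e for e in log if e["image"].rsplit("\\", 1)[-1].lower() in lolbins]

for hit in lolbin_hits(process_log):
    print(hit["host"], hit["image"])
```

Hits here aren't automatically malicious (admins use these tools too), which is exactly why they warrant human review rather than blanket alerting.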

5. Don’t push it

If your organization hasn’t mandated multi-factor authentication (MFA) yet, now would be a very good time to require it. Even if you already require MFA, you may need to let users know to immediately report any notifications they did not initiate.

Nobelium, a likely Russian-state sponsored threat actor, has been observed repeatedly sending MFA push notifications to users’ smartphones. Though push notifications are considered more secure than email or SMS notifications due to the need for physical access, it turns out that sending enough requests means many users eventually – either due to annoyance or accident – approve the request, effectively defeating the two-factor authentication.

When you do enable MFA, be sure to regularly review the authentication logs, keeping an eye out for accounts being placed in “recovery” mode, especially for extended periods of time or repeatedly. Also consider using tools or services that monitor the MFA configuration status of your environment to ensure configuration drift (or attackers) have not disabled MFA.
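Reviewing MFA logs for push fatigue can start as simply as counting denied prompts that precede an approval; the events, field names, and threshold below are hypothetical stand-ins for your identity provider's authentication logs.

```python
from collections import Counter
from datetime import datetime

# Hypothetical MFA push events; real ones would come from your IdP's logs.
mfa_events = [
    {"user": "jdoe",   "result": "denied",   "ts": datetime(2022, 3, 22, 2, 0)},
    {"user": "jdoe",   "result": "denied",   "ts": datetime(2022, 3, 22, 2, 1)},
    {"user": "jdoe",   "result": "approved", "ts": datetime(2022, 3, 22, 2, 2)},
    {"user": "asmith", "result": "approved", "ts": datetime(2022, 3, 22, 9, 0)},
]

FATIGUE_THRESHOLD = 2  # denied prompts before an approval looks suspicious

def push_fatigue_suspects(events, threshold=FATIGUE_THRESHOLD):
    """Flag users whose approval follows a burst of denied prompts."""
    denials = Counter(e["user"] for e in events if e["result"] == "denied")
    approved = {e["user"] for e in events if e["result"] == "approved"}
    return sorted(u for u, n in denials.items() if n >= threshold and u in approved)

print(push_fatigue_suspects(mfa_events))  # ['jdoe']
```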

6. Stick to the script

Often, your enterprise’s first line of defense is the help desk. Over the next few days, it’s important that these people feel empowered to enforce your security policies.

Sometimes, people lose their phone and can’t perform their MFA. Other times, their company laptop just up and dies, and they can’t get at their presentation materials on the shared drive. Or maybe they’re sure what their password should be, but today, it just isn’t. It happens. Any number of regular disasters can befall your users, and they’ll turn to your help desk to get them back up and running. Most of the time, these aren’t devious social engineering attacks. They just need help.

Of course, the point of a help desk is to help people. Sometimes, however, the “users” are attackers in disguise, looking for a quick path around your security controls. It can be hard to tell when someone calling in to the help desk is a legitimate user who is pressed for time or an attacker trying to scale the walls. Your help desk folks should be extra wary of these requests — and, more importantly, know they won’t be fired, reprimanded, or retaliated against for following the standard, agreed-upon procedures. It might be a key executive or customer who’s having trouble, and it might not be.

You already have a procedure for resets and re-enrollments, and exceptions to that procedure need to be accompanied by exceptional evidence.

(Hat tip to Bob Lord for bringing this mentality up on a recent Security Nation episode.)

7. Call for backup

Now is the time to make sure you have solid offline backups of:

  • Business-critical data
  • Active Directory (or your equivalent identity store)
  • All network configurations (down to the device level)
  • All cloud service configurations

Continue to refresh these backups moving forward. In addition, make sure your backups are integrity-tested and that you can (quickly) recover them, especially for the duration of this conflict.
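Integrity testing can be as simple as recording a cryptographic hash of each backup when it is taken and re-checking it on a schedule; a minimal sketch follows, where the manifest format is an assumption.

```python
import hashlib
from pathlib import Path

def sha256sum(path):
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(manifest):
    """Compare each backup file's current hash against the recorded one.

    manifest: {backup_name: (file_path, expected_sha256_hex)}
    Returns {backup_name: True/False} for backups whose files exist.
    """
    return {
        name: sha256sum(path) == expected
        for name, (path, expected) in manifest.items()
        if Path(path).exists()
    }
```

A hash match proves the bytes haven't changed; it does not prove the backup is restorable, so pair this with periodic recovery drills.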

8. Practice good posture

While humans will be targeted with phishing attacks, your internet-facing components will also be in the sights of attackers. There are numerous attack surface profiling tools and services out there that help provide an attacker’s-eye view of what you’re exposing and identify any problematic services and configurations — we have one that is free to all Rapid7 customers, and CISA provides a free service to any US organization that signs up. You should review your attack surface regularly to ensure there are no unseen gaps.

While security is a daunting task, especially when faced with guidance from the highest levels of the US government, we don’t necessarily need to check all the boxes today. These 8 steps are a good start on “field security” to help your organization stabilize and prepare ahead of any impending attack.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

The VM Lifecycle: How We Got Here, and Where We’re Going

Post Syndicated from Devin Krugly original https://blog.rapid7.com/2022/03/16/the-vm-lifecycle-how-we-got-here-and-where-were-going/

The VM Lifecycle: How We Got Here, and Where We’re Going

Written in collaboration with Joel Ashman

The immutable truth that vulnerability management (VM) programs have long adhered to is that successful programs should follow a consistent lifecycle. This concept is simply a series of phases or steps that have a logical sequence and are repeated according to an organization’s VM program cadence.

A lifecycle gives a VM program a central illustrative model, defining the high-level series of activities that must be performed to reduce attack surface risk — the ultimate goal of any VM program. This type of model provides a uniform set of expectations for all stakeholders, who are often cross-functional and geographically dispersed. It can also be used as a diagnostic tool to identify bottlenecks, inefficiencies, or gaps (more on that later).

There are many lifecycle model prototypes in circulation, and they are generally comparable and iterative in nature. They break large-bucket activities into four, five, or six phases of work, which describe the effort needed to prepare and scan for vulnerabilities or configuration weaknesses; assess or analyze the findings; distribute them; and ultimately address them through remediation or another risk treatment plan (e.g. exceptions, retiring a server).

While any one specific lifecycle will (and should) vary by organization and the specific tools in use, there are some fundamental steps or phases that remain consistent. This educational series will focus on introducing those fundamental building blocks, followed by practical demonstrations on how best to leverage Rapid7 solutions and services to accelerate your program.

In this first installment of a multipart blog and webinar series, we will explore the concept of a VM program lifecycle and provide practical guidance and definition for what many consider the first of the iterative VM lifecycle phases – often referred to as "discover," "understand," or even "planning."

A (very) brief history of the VM lifecycle

But let’s return to the lifecycle concept for just a moment.

Had just a couple of small variables in my life flipped the other way, I could have ended up a forensic historian or anthropologist. Those interests have paid dividends time and time again: to understand where you want to go, you have to understand how you got here.

The need for vulnerability management has existed since long before it had a title. It falls under what could be argued is the most important cybersecurity discipline: security hygiene. If you want nice teeth, you have to have good dental hygiene (identify cavities and perform regular maintenance). Similarly, organizations that require secure digital infrastructure must regularly assess and identify weaknesses (vulnerabilities, defects, improper configs) and then address those weaknesses through updates or other mitigation.

Two key points about how we got here:

  1. We all know the evolution of a few worms and viruses in ARPANET in the 1970s, to the much more intentionally crafted viruses targeting operating systems of the 1980s, to today’s this-ware or that-ware that have malintent baked right into the very fibers of their assembly language. In computing, the potential for misuse in the form of vulnerabilities has been with us from the start.
  2. A subtle but countervailing force has slowly but surely crept forward to stem, reflect, contain, and now often eradicate the intentions of bad actors. We the Defenders, the Protectors, the Stewards of Vulnerability Management will not be dissuaded from our obligation to manage cyber risk: to safeguard secrets, to shield corporate data, and to protect the networks that allow us to share pictures of the animals living in our homes and purchase large quantities of toilet paper online. Let’s not forget the prevention of abuse of individual identity — stopping those thieves from taking vacation savings, using those funds for their own vacation, and posting pictures on a Caribbean island from their Instagram account (true story).

We are keen to meet and overcome the challenges of modern attackers and modern infrastructure and applications (with all its containers and microservices), both now and into the bright and hopefully still shiny future.

We have met this call and at times faltered, but we have never been discouraged — and a key element that has supported us Protectors has been the lifecycle artifact. A conceptual model that conveys the continuous nature of the management of vulnerability risk and provides steadfast guidance for all stakeholders.

The lifecycle holds true

Vulnerability risk management is a team sport. It is only through careful, judicious, and sometimes aggravatingly laborious detail that a full lifecycle successfully completes. This may entail the same conversation happening no fewer than five to eight times with the same audience, even if the last time you said, "I never want to have this conversation again." Amidst all the chaos and confusion, the VM lifecycle is an immutable truth. Its methods may evolve and its technology may take a dramatically different approach, but it will remain true.

The companion to this blog is a webinar, which you can watch here. Both are the first in our series to freshen up perceptions and maybe introduce a few new concepts by exploring the various phases and activities that are fundamental pillars of a strong VM program and its execution. In addition, we have created a worksheet to facilitate efficient collection of information to build a VM stakeholder map, which you can download here.

Join me for the next installment in the series to dive deeper into the initial stages – or phases, or whatever term you prefer – of the VM lifecycle.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

2022 Planning: Metrics That Matter and Curtailing the Cobra Effect

Post Syndicated from Erick Galinkin original https://blog.rapid7.com/2022/01/18/2022-planning-metrics-that-matter-and-curtailing-the-cobra-effect/

2022 Planning: Metrics That Matter and Curtailing the Cobra Effect

During the British rule of India, the British government became concerned about the number of cobras in the city of Delhi. The ambitious bureaucrats came up with what they thought was the perfect solution, and they issued a bounty for cobra skins. The plan worked wonderfully at first, as cobra skins poured in and reports of cobras in Delhi declined.

However, it wasn’t long before some of the Indian people began breeding these snakes for their lucrative scales. Once the British discovered this scheme, they immediately cancelled the bounty program, and the Indian snake farmers promptly released their now-worthless cobras into the wild.

Now, the cobra conundrum was even worse than before the bounty was offered, giving rise to the term “the cobra effect.” Later, the economist Charles Goodhart coined the closely related Goodhart’s Law, widely paraphrased as, “When a measure becomes a target, it ceases to be a good measure.”

Creating metrics in cybersecurity is hard enough, but creating metrics that matter is a harder challenge still. Any business-minded person can tell you that effective metrics (in any field) need to meet these 5 criteria:

  1. Cheap to create
  2. Consistently measured
  3. Quantifiable
  4. Significant to someone
  5. Tied to a business need

If your proposed metrics fail to meet even one of the above criteria, you are setting yourself up for a fantastic failure. Yet even if they do meet all of them, you aren't totally out of the woods: you must still avoid the cobra effect.

A case study

I’d like to take a moment to recount a story from one of the more effective security operations centers (SOCs) I’ve had the pleasure of working with. They had a quite well-oiled 24/7 operation going. There was a dedicated team of data scientists who spent their time writing custom tooling and detections, as well as a wholly separate team of traditional SOC analysts, who were responsible for responding to the generated alerts. The data scientists were judged by the number of new threat detections they were able to come up with. The analysts were judged by the number of alerts they were able to triage, and they were bound by a (rather rapid) service-level agreement (SLA).

This largely worked well, with one fairly substantial caveat. The team of analysts had to sign off on any new detection that entered the production alerting system. These analysts, however, were largely motivated by being able to triage a new issue quickly.

I’m not here to say that I believe they were doing anything morally ambiguous, but the organizational incentive encouraged them to accept detections that could quickly and easily be marked as false positives and reject detections that took more time to investigate, even if those were more densely populated with true positives. The end effect was a system whose “success condition” was a massive number of false-positive alerts that could be quickly clicked away.

Avoiding common pitfalls

The most common metrics used by SOCs are number of issues closed and mean time to close.

While I personally am not in love with these particular quantifiers, there is a very obvious reason these are the go-to data points. They very easily fit all 5 criteria listed above. But on their own, they can lead you down a path of negative incentivization.

So how can we take metrics like these and make them effective? Ideally, we can use them in conjunction with analysis of false- and true-positive rates to arrive at an efficacy rate that maximizes your true-positive detections per dollar.

Arriving at efficacy

Before we get started, let’s make some assumptions. We are going to talk about SOC alerts that must be responded to by a human being. The ideal state is for high-fidelity alerting with automated response, but there is always a state where human intervention is necessary to make a disposition. We are also going to assume that there are a variety of types of detections that have different false-positive and true-positive rates, and for the sheer sake of brevity, we are going to pretend that false negatives incur no cost (an obvious absurdity, but my college physics professor taught me that this is fine for demonstration purposes). We are also going to assume, safely, I imagine, that reviewing these alerts takes time and that time incurs a dollars-and-cents cost.

For any alert type, you would want to establish the number of expected true positives, which is the alert rate multiplied by the true-positive rate (which you must be somehow tracking, by the way). This will give you the expected number of true positives over the alert rate period.

Great! So we know how many true positives to expect in a big bucket of alerts. Now what? Well, we need to know how much it costs to look through the alerts in that bucket! Take the alert rate, multiply by the alert review time, and if you are feeling up to it, multiply by the cost of the manpower, and you’ll arrive at the expected cost to review all the alerts in that bucket.

But the real question you want to know is: is the juice worth the squeeze? Detection efficacy tells you how many true positives each dollar buys, and it can be calculated by dividing the number of expected true positives by the expected cost. Or, to simplify the whole process, divide the true-positive rate by the average alert review time, then divide again by the manpower cost.

If you capture detection efficacy this way, you can effectively discover which detections are costing you the most and which are most effective.

Dragging down distributions

Another important option to consider is the use of distributions in your metric calculation. We all remember mean, median, and mode from grade school — these and other statistics are tools we can use to tell us how effective we are. In particular, we want to ask whether our measure should be sensitive to outliers — data points that don’t look typical. We should also consider whether our mean and median are being dragged down by our distribution.

As a quick numerical example, assume we have 100 alerts come in, and we bulk-close 75 of them based on some heuristic. The other 25 alerts are all reviewed, taking 15 minutes each, and handed off as true positives. Then our median time to close is 0 minutes, and our mean time to close is 3 minutes and 45 seconds.

Those numbers are great, right? Well, not exactly. They tell us what “normal” looks like but give us no insight into what is actually happening.

To that end, we have two options. First, we can remove zero values from our data! This is typical in data science as a way to clean data, since in most cases, zeros are values that are either useless or erroneous. This gives us a better idea of what “normal” looks like.

Second, we can use a value like the upper quartile to see that the 75th-percentile time to close is 15 minutes, which in this case is a much more representative example of how long an analyst would expect to spend on events. In particular, it’s easy to drag down the average — just close false positives quickly! But it’s much harder to drag down the upper quartile without making some real improvements.
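The worked example above can be checked in a few lines (note that the upper quartile here uses nearest-rank-above interpolation; other percentile definitions can give different answers for a distribution this lumpy):

```python
from statistics import mean, median

# 75 bulk-closed alerts (0 minutes each) + 25 reviewed alerts (15 minutes each)
times = [0] * 75 + [15] * 25

avg = mean(times)      # 3.75 minutes -> "3 minutes and 45 seconds"
mid = median(times)    # 0 minutes

# Upper quartile: smallest value strictly above the 75% cut
vals = sorted(times)
upper_quartile = vals[int(0.75 * len(vals))]   # 15 minutes

# Dropping zero values before averaging, per option one above
nonzero_avg = mean(t for t in times if t > 0)  # 15 minutes
```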

3 keys to keep in mind

When creating metrics for your security program, there are a lot of available options. When choosing your metrics, there are a few keys:

  1. Watch out for the cobra effect. Your metrics should describe something meaningful, but they should be hard to game. When in doubt, remember Goodhart’s Law — if it’s a target, it’s not a good metric.
  2. Remember efficacy. In general, we are aiming to get high-quality responses to alerts that require human expertise. To that point, we want our analysts to be as efficient and our detections to be as effective as possible. Efficacy gives us a powerful metric that is tailor-made for this purpose.
  3. When in doubt, spread it out. A single number is rarely able to give a truly representative measure of what is happening in your environment. However, having two or more metrics — such as mean time to response and upper-quartile time to response — can make those metrics more robust to outliers and against being gamed, ensuring you get better information.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Log4Shell Strategic Response: 5 Practices for Vulnerability Management at Scale

Post Syndicated from Joshua Harr original https://blog.rapid7.com/2022/01/07/log4shell-strategic-response-5-practices-for-vulnerability-management-at-scale/

Log4Shell Strategic Response: 5 Practices for Vulnerability Management at Scale

In today’s cybersecurity world, risks evolve faster than we can remediate them. To meet our goals and become resilient to these fast changes, we need the right balance of automation and human interaction. Enabling rapid response for protecting information systems is paramount, but how does a business reach this level of reaction?

How can organizations maintain a standard of excellence to their responses in high-risk situations?

Where do you even begin to respond to a critical vulnerability like the one in Apache’s Log4j Java library (a.k.a. Log4Shell)?

Most importantly, how do we transform the tactical actions that need to take place into an effective strategy to scale?

1. Empower personnel

The personnel with the knowledge about your various solutions must be empowered to make the decisions necessary to address your company’s information technology needs. If those team members don’t feel they can make those decisions, they will defer to management — but managers may not know the intricacies of the solutions, and they create a natural bottleneck, since there will be more decision points than managers to decide them. Providing personnel with policy documents that set uniform criteria for evaluating the risk new vulnerabilities present, the ways to respond, and the expected timelines is paramount for timely resolution.

In a typical risk resolution process, there are many gates to safeguard our systems. This helps ensure that whatever change happens increases the solution’s confidentiality, integrity, or availability rather than diminishing it. However, a situation like Log4Shell needs to be treated like an incident response activity to quickly address the risk. Create a task force to effectively answer the important questions like:

  • How do we find vulnerable systems?
  • Which systems are vulnerable?
  • What options are there for a fix? One size may not fit all.
  • Who is going to track changes?
  • Who is going to validate the fix is in place?

Utilizing a strong incident response procedure to answer all these questions will assist with prioritization and remediation to an acceptable level of risk.

2. Promote visibility

Any standard vulnerability management lifecycle process begins with identifying affected systems to assess and evaluate the scope of a vulnerability’s presence on the network. The approach should utilize both proactive and reactive efforts through a combination of tools and well-documented processes to streamline and scale the response effectively.

A proactive process would first involve having well-documented use of any such library versions internally in an inventory, so that discoverability and traceability are much more narrowly focused efforts. If you conduct authenticated vulnerability scans continuously on pre-scheduled frequencies, this will also help with identification of third-party software utilizing this library over time. Classifying system criticality within the vulnerability management tool will help you more effectively scale future remediation processes.

These proactive processes help jumpstart an initial response, but you’ll still need reactive efforts to help ensure effective and timely remediation. Vulnerability scanning tools will receive signature updates regarding this newly discovered vulnerability, which will require updating your vulnerability management tool and initiating one-off alternative scans that may deviate from pre-scheduled rotations. These alternative scans should include tiered phases, so the most critical systems receive scan priority, and then remaining systems are scanned in order of criticality. Leveraging the pre-existing system criticality classification will significantly expedite this process.
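The tiered, criticality-first scan ordering described above can be sketched as follows; the asset list and tier labels are hypothetical placeholders for whatever your vulnerability management tool exports:

```python
from collections import defaultdict

# Hypothetical inventory with pre-existing criticality classifications
assets = [
    {"host": "erp01", "criticality": "critical"},
    {"host": "dev07", "criticality": "low"},
    {"host": "db02",  "criticality": "critical"},
    {"host": "web04", "criticality": "medium"},
]

TIER_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Group hosts by tier, then emit scan phases from most to least critical
tiers = defaultdict(list)
for asset in assets:
    tiers[asset["criticality"]].append(asset["host"])

scan_phases = [tiers[t] for t in sorted(tiers, key=TIER_ORDER.get) if tiers[t]]
```

Running the emergency scans phase by phase, in `scan_phases` order, ensures the most critical systems receive scan priority while the rest follow in descending criticality.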

A security incident and event management (SIEM) tool can also assist with identifying, tracking, and alerting for any suspicious activity that may be tied to exploitation of this vulnerability. Host agents and network detection systems that report back to the SIEM should be closely monitored, and any activity or traffic that deviates from baselines should receive an active response. You may need to adjust logging and alerting rules and thresholds to ensure your efforts are strategically focused.

Tactical processes help you achieve this continuous identification, but you still need to orchestrate and execute them through strategic planning to remain timely, efficient, and effective. Well-documented asset inventories and appropriate system criticality classifications help you prioritize your efforts, while continuous vulnerability scans and leveraging vulnerability management and SIEM tools help to identify, track, and manage vulnerability exposure. Leadership should provide the direction to guide these activities from inception to implementation through effective communication and allocation of resources. Lay out a short-term roadmap for tracking objectives and quick wins as part of the remediation process, so you can quickly and concisely show how you’re tracking toward goals.

3. Implement prioritization and mitigation

Now that your team has successfully identified all affected systems, you’ll need to roll out patches to those systems on a continuous basis during the next phase to mitigate risk. Current enterprise-wide patching timelines may require adjustment due to the urgency associated with such critical vulnerabilities. Patch testing and rollout phases must be expedited to support a more timely and effective response.

Much like our vulnerability scans, our patch management response should follow a criticality-based prioritization, with one caveat: a pilot group or pilot system deemed non-critical should be patched first for testing, to ensure there are no adverse effects before rolling out patches in order of system criticality. If you’ve configured a full test environment, you can test patches on critical systems first within that environment and then roll them out in production according to criticality. The testing timeline itself should be reduced throughout all standard phases of a testing cycle — you may even need to eliminate certain testing phases altogether. The rollout timelines for patches across all systems will need to be expedited as well to ensure coverage as quickly as possible. If your environment has widespread use of the vulnerable library, you may need to reduce timelines by anywhere from 25% to 50%.

Emergency patching procedures should provide for timely testing and production rollouts within roughly half the time of a normal patching cycle, or 5 to 10 days at a maximum for critical systems to minimize breach potential as quickly as possible. Also keep in mind that some vulnerabilities may involve more than just application of a simple patch — configuration changes may also be necessary to further mitigate potential exploitation by an adversary.
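As a rough illustration of the timeline arithmetic above (the baseline cycle lengths here are made-up examples, not recommendations):

```python
# Hypothetical normal patch cycles, in days, by system criticality
NORMAL_CYCLE_DAYS = {"critical": 30, "high": 45, "medium": 60, "low": 90}

def emergency_window(criticality, critical_cap_days=10):
    """Halve the normal cycle; cap critical systems at ~10 days maximum."""
    days = NORMAL_CYCLE_DAYS[criticality] / 2
    if criticality == "critical":
        days = min(days, critical_cap_days)
    return days
```

With these example baselines, critical systems drop from a 30-day cycle to the 10-day cap, while high-criticality systems fall from 45 days to 22.5.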

4. Validate remediation

Now, you’ve deployed patches to all affected systems, so the mitigation efforts are complete, right? While you may want to shift your focus back to other tasks, it’s essential to maintain continuous identification processes to ensure that no stone remains unturned.

The vulnerability management validation phase leverages those reactive identification processes, in addition to patch management processes, to assist in efficient and effective vulnerability remediation for affected systems. This stage involves re-scanning initially identified vulnerable systems to assess successful patch application and performing additional open scans of the network to ensure that there are no lingering systems that may still be affected by the vulnerability but weren’t originally identified — or perhaps weren’t successfully patched as part of the patch management process. This cycle of continuous validation will remain in effect until “clean” scans are reported across the enterprise regarding this vulnerability.

Since the Log4j logging library is widely used throughout many enterprise applications and even unknowingly embedded in so many others, continuous validation will become crucial in ensuring your organization remains vigilant and can mitigate the vulnerability quickly and effectively as you continue to discover affected systems.
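The validate-and-re-scan loop described above might be sketched like this; the scan function and host names are simulated stand-ins for a real scanner integration:

```python
def validation_cycle(scan, affected, everything, max_rounds=10):
    """`scan(hosts)` is assumed to return the subset still vulnerable.
    Returns the number of scan rounds needed to reach a clean pass."""
    for round_num in range(1, max_rounds + 1):
        # Re-scan known-affected hosts plus an open sweep of everything else
        still_vulnerable = scan(affected) | scan(everything - affected)
        if not still_vulnerable:
            return round_num          # clean scans across the enterprise
        affected = still_vulnerable   # re-patch these, then re-scan
    raise RuntimeError("still seeing vulnerable hosts after max_rounds")

# Simulated environment: each host needs N patch attempts before coming clean
patch_attempts_left = {"app01": 1, "legacy02": 2}

def simulated_scan(hosts):
    found = {h for h in hosts if patch_attempts_left.get(h, 0) > 0}
    for h in found:                   # pretend a fix lands after each scan
        patch_attempts_left[h] -= 1
    return found

rounds = validation_cycle(simulated_scan, {"app01", "legacy02"},
                          {"app01", "legacy02", "web03"})
```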

5. Regularly review risks

A vulnerability management lifecycle rarely ever comes to a true end. As adversaries and security evangelists further evaluate a specific vulnerability over time, new methods of exploitation are identified, affected versions increase in scope and scale, and recent patches and fixes are found to be ineffective. This leaves organizations potentially open to exposure and at a loss for the best path forward. Continuous review of the trends surrounding an ongoing critical vulnerability will help organizations ensure they remain both aware of the impact and the current mitigating measures that have been most successful. Additionally, leveraging other solutions can help further identify and launch a coordinated defense-in-depth response to any potential malicious activity that may be associated with such vulnerabilities.

Working to continuously identify, mitigate, validate, and review vulnerabilities throughout their inevitable course will require commitment and fortitude. But once the tides of Log4Shell have subsided and you’ve successfully and securely endured one of the worst security vulnerability exposures in a decade by following these processes, you can rest assured that your incident response processes were well-tested during this endeavor — and your IT security budget should be more than solidified for the next few years to come.

Check out our additional resources for further insight into this vulnerability, mitigating measures, and the tools available to assist.


3 Strategies That Are More Productive Than Hack Back

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/12/07/3-strategies-that-are-more-productive-than-hack-back/

3 Strategies That Are More Productive Than Hack Back

2021 has been a banner year in terms of the frequency and diversity of cybersecurity breaking news events, with ransomware being the clear headline-winner. While the DarkSide group (now, in theory, retired) may have captured the spotlight early in the year due to the Colonial Pipeline attack, REvil — the ransomware-as-a-service group that helped enable the devastating Kaseya mass ransomware attack in July — made recent headlines as they were summarily shuttered by the FBI in conjunction with Cyber Command, the Secret Service, and like-minded countries.

This was a well-executed response by government agencies with the proper tools and authority to accomplish a commendable mission. While private-sector entities may have participated in this effort, they will have done so through direct government engagement and under the same oversight.

More recently, the LockBit and Marketo ransomware groups suffered distributed denial of service (DDoS) attacks, as our colleagues at IntSights reported, in retaliation for their campaigns: one targeting a large US firm, and another impacting a US government entity.

The former of these two DDoS attacks falls into a category known colloquially as “hack back.” Our own Jen Ellis did a deep dive on hacking back earlier this year and defined the practice as “non-government organizations taking intrusive action against a cyber attacker on technical assets or systems not owned or leased by the person taking action or their client.”

The thorny path of hacking back

Hack back, as used by non-government entities, is problematic for many reasons, including:

  • Group attribution is hard, and most organizational cybersecurity teams are ill-equipped to conduct sufficiently thorough research to gain a high enough level of accuracy to ensure they know who the source really is/was.
  • Infrastructure used to conduct attacks is often compromised assets of legitimate organizations, and taking direct action against them can cause real harm to other innocent victims.
  • It is very likely illegal in most jurisdictions.

As our IntSights colleagues noted, the LockBit and Marketo DDoS hack-back attacks did take the groups offline for weeks and temporarily halted ransomware campaigns associated with their infrastructure. But the groups are both back online, and they — along with other groups — appear to be going after less problematic targets, a (hopefully) unexpected, unintended, but very real consequence of these types of cyber vigilante actions.

Choosing a more productive path

While the temptation may be strong to inflict righteous wrath upon those who have infiltrated and victimized your organization, there are ways to channel your reactions into active defense strategies that can help you regain a sense of control, waste attackers’ time (a precious resource for them), contribute to the greater good, and help change the economics of attacks enough to effect real change. Here are 3 possible alternative routes to consider.

1. Improve infrastructure visibility

You can only effect change in environments that have been instrumented for measurement. While this is true for cybersecurity defense in general, it is paramount if you want to take the step into contributing to the community efforts to reduce the levels and impacts of cybercrime (more on that later).

You have to know what assets are in play, where they are, the state they are in, and the activity happening on and between them. If you aren’t outfitted for that now, thankfully it’s the holiday season, and you still have time to get your shopping list to Santa (a.k.a. your CISO/CFO). If you’re strapped for cash, open-source tools and communities such as MISP provide a great foundation to build upon.

2. Invest in information sharing and analysis

There are times when it feels like we may be helpless in the face of so many adversaries and the daily onslaught of attacks. However, we protectors have communities and resources available that can help us all become safer and more resilient. If your organization isn’t part of at least one information sharing and analysis organization (ISAO), joining one is your first step toward both regaining a sense of control and giving you and your cybersecurity teams active, positive steps you can take on a daily basis to secure our entire ecosystem. An added incentive to join one or more of these groups is that many of them gain real-time cross-vendor insights via the Cyber Threat Alliance, a nonprofit that is truly leveling up protectors across the globe.

These groups share tools, techniques, and intelligence that enable you to level up your organization’s defenses and can help guide you through the adoption of cutting-edge, science-driven frameworks such as MITRE’s ATT&CK and D3FEND.

3. Consider the benefits of deception technology

“Oh! What a tangled web we weave when first we practice to deceive!”

Sir Walter Scott may not have had us protectors in mind when he penned that line, but it is a vital component of modern organizational cyber defenses. Deception technology can provide rich intelligence on attacker behavior (which you can share with the aforementioned ISAOs!), keep attackers in a playground safe from your real assets, and — perhaps most importantly — waste their time.

While we have some thoughts and offerings in the cyber deception space — and have tapped into the knowledge of other experts in the field — there are plenty of open-source solutions you can dig into, or create your own! Some of the best innovations come from organizations’ security teams.

Remember: You are not alone

Being a victim of a cyberattack of any magnitude is something we all hope to help our organizations avoid. Even if you’re a single-person “team” overseeing cybersecurity for your entire organization, you don’t have to go it alone, and you definitely do not have to give in to the thought of hacking back to “get even” with your adversaries.

As we’ve noted, there are many organizations who are there to help you channel your energies into building solid defenses and intelligence gathering practices to help you and the rest of us be safer and more resilient. Let’s leave the hacking back to the professionals we’ve tasked with legal enforcement and focus on protecting what’s in our purview.


2022 Planning: Prioritizing Defense and Mitigation Through Left of Boom

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/11/17/2022-planning-prioritizing-defense-and-mitigation-through-left-of-boom/

2022 Planning: Prioritizing Defense and Mitigation Through Left of Boom

In the military, the term “left of boom” refers to the strategy and tactics required to prevent — and protect personnel from — explosions by making proactive decisions before the event happens. Unless you’ve been fortunate enough to avoid tech and media press for the past 24 months, it should be clear by now that cyberattacks most certainly qualify as “boom” events, with the potential to cause reputational, financial, and even real-life physical harm to businesses, communities, and individuals, many of whom are truly innocent bystanders.

While telemetry-fueled detection and well-honed response plans are foundational components of truly effective cybersecurity programs, they are definitely “right of boom,” and we should not be so quick to cede ground to attackers with an “assume breach” mindset. Cybersecurity teams have myriad defense and mitigation strategies at their disposal to help ensure a sizable percentage of attackers never even have the chance to waltz their way through the kill chain. In this post, we’ll use ransomware as an example for 3 left-of-boom areas to focus on (via the MITRE ATT&CK framework).

The ransomware “booms”

One might argue that the singular “boom” of ransomware is the encryption of business-critical information and assets, but attackers now also hunt for juicy data they can use for many purposes, including to pressure a target to pay or suffer a data disclosure event on top of a business-disrupting lock-up. There is another emerging scenario that adds compounding denial-of-service attacks (one or more) into the mix. Note that pure denial-of-service extortion, or “RansomDoS” in the modern vernacular, is out of scope for this post.

Knowing the potential negative outcomes, what can teams focus on ahead of time to help prevent these outcomes and protect their organizations? For ransomware (and, really, the vast majority of cyberattacks today), the main goal is to prevent initial access into your environment, so let’s explore what you need to do to stay left of that particular boom. Since there are many techniques used to gain initial access, we’ll focus the rest of the post on 3 areas: T1190 (Exploit Public-Facing Application), T1133 (External Remote Services), and T1078 (Valid Accounts). We’ll also give you some tips on how to apply the same left-of-boom thinking to other techniques.

←💥 Attack surface management: Preventing exploitation

Attack surface management (ASM) is just a pretty 2021 bow wrapped around the term “asset management,” in the hopes that organizations will finally recognize the need for it, realizing that they aren’t just deploying cool services and capabilities but also providing potential inroads for attackers. With ASM, your goal is to understand:

  • What devices, operating systems, and software are deployed on your perimeter, intranet, and remote endpoints
  • The safe and resilient configurations required for those elements
  • The current state of those elements

You cannot get left of boom for a ransomware attack, and many other cyberattacks, without a functional ASM practice in place. This requires having a close partnership with your procurement department and IT endpoint/server/cloud operations teams, as well as the tools (proprietary or open-source) to help with organization and verification.

It’s vital to understand what you’re exposing to the internet — since that’s what attackers can directly see and touch — but it’s also critical to know the status of each node that may be involved in initial access attempts, including desktops, laptops, and mobile devices.

If you can stay ahead of exposing unpatched or unsafe services to the internet and keep your workforce systems patched and configured safely in a timely fashion, you’ll make it difficult, if not impossible, for attackers to use known exploits (one of the most common methods in 2021) to achieve the access they need to carry out the rest of their campaign using that technique.
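As a tiny illustration of that cross-check (the hosts, services, and version numbers are all hypothetical, and the string-based version comparison is a deliberate simplification; real version comparison needs a proper parser):

```python
# Hypothetical ASM inventory and a map of minimum known-safe versions
inventory = [
    {"host": "vpn.example.com", "service": "ssl-vpn", "version": "9.1",
     "internet_facing": True},
    {"host": "app01.corp",      "service": "tomcat",  "version": "9.0.70",
     "internet_facing": False},
]
minimum_safe = {"ssl-vpn": "9.2", "tomcat": "9.0.68"}

def exposed_and_unpatched(inventory, minimum_safe):
    """Flag internet-facing assets running a version below the safe floor."""
    return [a["host"] for a in inventory
            if a["internet_facing"]
            and a["version"] < minimum_safe.get(a["service"], "")]
```

Running this against the sample inventory flags only the internet-facing VPN appliance, which is exactly the kind of exposure you want surfaced before an attacker finds it.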

←←💥 Attack surface management: Safeguarding gateways

Even before our brave, newly expanded world of remote work, organizations needed ways for their workforce to access critical systems and applications outside the confines of the intranet. These include solutions such as virtual private networks (VPNs), remote desktop protocol (RDP), Citrix, and similar technologies. By their nature, these systems need to be configured well from the start, patched almost immediately, and require trusted authorized access (more on that in the last “boom”).

Your team needs to monitor each gateway vendor for patch/mitigation announcements and partner with all critical stakeholders to ensure you can change configurations or patch in an expedited fashion — which may mean having enough capacity and redundancy to take one set of systems down for patching but still let work continue. You should also have continuous configuration monitoring to ensure settings stay the way you need them to be.

←←←💥 Credentials, credentials, credentials

We discussed remote access in the previous section, and gaining remote access generally requires some sort of authentication and authorization. No external gateway, and no critical external application, should be accessible without a solid multi-factor authentication solution in place. Credentials are regularly up for sale on criminal marketplaces, and sellers test them to ensure freshness. If you allow gateway or critical application access with just a single factor, you’ve pretty much handed the keys over to your adversaries.

Similarly, when a new breach is disclosed that includes stolen credential databases, it’s important to monitor services such as Have I Been Pwned and have a process in place to quickly reset any potentially compromised accounts (usually based on email address).
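As a related sketch: Have I Been Pwned’s companion Pwned Passwords service exposes a k-anonymity range API, where only the first five characters of a password’s SHA-1 hash ever leave your network (the per-account breach API, by contrast, requires an API key). The hash-splitting step looks like this; the actual HTTP request is omitted:

```python
import hashlib

def pwned_passwords_query(password):
    """Split a password's SHA-1 hash for the k-anonymity range API:
    send only the 5-character prefix to
    https://api.pwnedpasswords.com/range/<prefix>, then search the
    response locally for the remaining suffix. (Network call omitted.)"""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_passwords_query("correct horse battery staple")
```

Because the server only ever sees the five-character prefix, it cannot learn which password you checked, yet you can still tell whether the full hash appears in a known breach corpus.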

Staying left of boom: A general approach

The 3 examples covered here are important, but they’re far from the full picture. We encourage teams to look at all the forms of initial access and examine them through the lens of their threat assessment and remediation analysis library, so they can see all the areas that need to be covered and apply appropriate preventative measures. If your team doesn’t have said library, a good place to start is over at the MITRE bookshelf, where you can find free, vendor-agnostic, detailed resources on how to establish such a program in your organization.

However, a strong public-facing posture, solid service configurations, and multi-factor authentication will have your organization well-positioned to avoid many negative outcomes.

Want more 2022 planning tips from industry experts?

Sign up for our webinar series

2022 Planning: Straight Talk on Zero Trust

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/10/29/2022-planning-straight-talk-on-zero-trust/

2022 Planning: Straight Talk on Zero Trust

“Zero trust” is increasingly being heralded as the ultimate solution for organizational cyber safety and resilience — but what does it really mean, and how can you assess if it has a practical place in your organization’s cybersecurity strategy for 2022?

In this post, we’ll answer those questions by taking a look at what problems the concept of zero trust is trying to solve, what types of people, process, and technology are necessary for successful zero-trust implementations, and what mindset changes your organization may need to make to be fully ready for this new defender paradigm in the year to come.

What is zero trust?

At the core, the concept of zero trust is just what those two words suggest: every human, endpoint, mobile device, server, network component, network connection, application workload, business process, and flow of data is inherently untrusted. As such, they each must be authenticated and authorized continuously as each transaction is performed, and all actions must be auditable in real time and after the fact. Zero trust is a living system, with all access rules under continuous review and modification, and all allowed transactions under constant re-inspection.

What problems is zero trust trying to solve?

Zero trust aims to finally shatter the mythical concept of “castle and moat” (i.e., assuming individuals and components on the intranet are inherently safe) and fully realize the power of least privilege — the concept that individuals and components should only have the most minimal access necessary to perform a required action. We can see it better through the lens of a practical example, such as one of the most typical ransomware attack scenarios: an attacker gains initial access to a corporate network through simple VPN credentials.

In most current implementations, a VPN has one interface that sits on the internet and one that sits on the intranet. Unfortunately, most VPNs are still accessed via simple credentials. Once authenticated, an attacker impersonating a user represented by those credentials has general network access. They’re free to replay the credentials (or attempt to use various tools to obtain other credentials or tokens) on any other connected system until they gain access to one where they can elevate privileges and begin exfiltrating data and corrupting the integrity of filesystems and databases.

In a zero-trust environment, the user identified by a set of credentials would also need a second authentication factor. The entire authentication attempt would be risk-assessed in real time to see if the individual’s connection is, say, in an allowed geofence and that the access time is within the usual operating mode of that person (and that the individual does not already have an established session).

Even if an attacker managed to obtain multi-factor codes — for example, SMS 2-factor authentication (2FA) has weaknesses but may be the only 2FA an organization can afford to implement — they may achieve a successful connection but would not have general access to all intranet systems and services. In fact, the VPN connection would only grant them access to a defined set of applications or services. If the attacker makes any attempt to try a network scan or perform other noisy network actions, monitoring systems would be alerted, and that individual and connection would be quarantined for investigation.

Each transaction has a defined set of authentication, authorization, and behavior auditing rules that continually let the overarching zero-trust system ensure the safety of the interactions.
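The continuous, per-transaction decision logic described above can be sketched in a few lines of Python. This is a toy illustration only: the allowed countries, working hours, function names, and decision values are assumptions for the sake of the example, not a real policy engine.

```python
from dataclasses import dataclass

# Hypothetical policy values for illustration only
ALLOWED_COUNTRIES = {"US", "CA"}
WORK_HOURS = range(7, 19)  # 07:00-18:59 in the policy's time zone

@dataclass
class AuthAttempt:
    user: str
    mfa_passed: bool
    country: str           # derived from source IP geolocation
    hour: int              # hour of day of the attempt
    has_active_session: bool

def risk_decision(attempt: AuthAttempt) -> str:
    """Return 'allow', 'deny', or 'step_up' for one authentication attempt."""
    if not attempt.mfa_passed:
        return "deny"
    if attempt.has_active_session:
        # A second concurrent login is suspicious: quarantine for review
        return "deny"
    if attempt.country not in ALLOWED_COUNTRIES:
        # Outside the allowed geofence
        return "deny"
    if attempt.hour not in WORK_HOURS:
        # Unusual time for this user: require additional verification
        return "step_up"
    return "allow"
```

In a real deployment, each of these checks would be backed by live signals such as IP geolocation, device posture, and session telemetry, and a denied attempt would feed your monitoring pipeline for investigation.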

What do you need to move to zero trust?

While this section could fill an entire book, we’ll work under the assumption that you are just beginning your zero-trust journey. To make this initial move, you’ll need to pick at least one business process or service access scenario to move to this new model.

Every component and individual that is responsible for enabling that business process or service must be identified and the architecture fully documented. At this point in the process, you may find that you need to reimagine the architecture to ensure you have the necessary control and audit points in place. You’ll then need authentication, authorization, auditing, risk-assessing, and enforcement solutions to support the access decisions at each connection in the process or service. Finally, you’ll need staffing to support creation and maintenance of the rules that are enforced, along with traditional patching, mitigation, and configuration management enforcement activities.

Then, lather, rinse, and repeat for all other processes and services. In other words, you need quite a bit.

However, you should not — and, in reality, cannot — move every business process and service to zero trust all at once. Once you’ve assessed that initial service, begin the groundwork of acquiring the necessary tools and hiring the necessary staff to ensure a successful outcome. Then, transition that initial service over to zero trust when funding and time are on your side, and leave it in place for a while as you evaluate what it takes to maintain safety and resilience. Adjust your tooling and staffing plans accordingly, and get to work on the remaining processes or services.

Thankfully, you may have many of these components and personnel in place within existing security and compliance solutions and processes, and you can finally employ more of your existing investments’ capabilities than the 5 to 15% that most organizations generally utilize.

Adopting the zero-trust mindset

One of the biggest mindset challenges to overcome when introducing zero trust into your organization is the fear that the constraints the model imposes will reduce productivity and hamper creativity. These fears can be overcome with the right framing of zero trust.

Start by performing a scenario-based risk assessment of a given business process. Do this with the business process owner(s) or stakeholder(s), and ensure you enumerate what actions threat actors could take at each transaction point in the process, ideally with some measurement of the costs due to loss of safety and resilience.

Then, show how each threat is reduced or eliminated with a zero-trust implementation of the same business process, and note how new processes — developed with a zero-trust mindset at the start — will have reduced implementation costs, be far safer and more resilient, and be much easier to enhance over time, as they will have been established on a solid foundation.

Zero trust is not some sticker on some point solution’s brochure. It is a fundamental change to how your organization approaches access, authentication, authorization, auditing, and continuous monitoring. You won’t adopt zero trust overnight, but you can begin that journey today, knowing that you’re on the path to helping your organization protect itself from tomorrow’s threats, as well as today’s.

Want more 2022 planning tips from industry experts?

Sign up for our webinar series

Kill Chains: Part 3→What’s Next

Post Syndicated from Jeffrey Gardner original https://blog.rapid7.com/2021/06/25/kill-chains-part-3-whats-next/

Kill Chains: Part 3→What’s Next

Life, the Universe, and Kill Chains

As the final entry in this blog series, we want to quickly recap what we have previously discussed and also look into the possible future of kill chains. If you haven’t already done so, please make sure to read the previous two entries in this series: Kill chains: Part 1→Strategic and operational value, and Kill chains: Part 2→Strategic and tactical use cases.

Fun with Graphs

In an effort to save time (and your sanity) I’ve created the following graph to illustrate the differences between the different kill chains:

[Graph: comparison of the Lockheed Martin Cyber Kill Chain, MITRE ATT&CK, and the Unified Kill Chain]

What’s the bottom line? To paraphrase a line from the film The Gentlemen, “for (almost) every use case there is a kill chain, and for every kill chain a strategy.” Focused on malware defense or security awareness? The Cyber Kill Chain is worth a look. Need to assess your operational capabilities? MITRE ATT&CK. Looking to accurately model the behavior of attackers? Unified Kill Chain is “the way” (#mandalorian).

The Future

The kill chains of today (Lockheed Martin Cyber Kill Chain, MITRE ATT&CK, Unified Kill Chain) can trace their origins to a model first proposed by the military in the late 1990s known as F2T2EA (find, fix, track, target, engage, and assess). However, as we all know, attackers and their attacks evolve over time—and the rate at which they are evolving continues to accelerate. Since our kill chains evolved from military strategy, it only makes sense to look at what’s happened in military strategy since the 90s to get a glimpse of where the evolution of the cyber kill chains may be heading.

A newer model used by special operators is F3EAD (find, fix, finish, exploit, analyze, and disseminate). Let’s take a quick look at how this applies to cyber operations:

  • Find: Ask “who, what, where, when, why” when looking at an event
  • Fix: Verify what was discovered in the previous phase (true positive / false positive)
  • Finish: Use the information from the previous 2 phases to determine a course of action and degrade/eliminate the threat
  • Exploit: Identify IOCs using information from the previous phases
  • Analyze: Fuse your self-generated intelligence with third-party sources to identify any additional anomalous activity occurring in the environment
  • Disseminate: Distribute the results of the previous phases within the Security Operations Center (SOC) and to additional key stakeholders
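
As a rough sketch, the six phases above can be wired together as a pipeline. All function names and event/report fields below are hypothetical stand-ins for what would, in practice, be your SIEM, intel feeds, and response tooling:

```python
# Hypothetical, minimal model of the F3EAD loop applied to a single alert.
# Function names and dict fields are illustrative, not a real SOC API.

def find(event):
    # Who/what/where/when/why of the raw event
    return {"host": event["host"], "indicator": event["hash"]}

def fix(finding, known_bad):
    # Verify the discovery: true positive or false positive?
    return finding["indicator"] in known_bad

def finish(finding):
    # Choose a course of action to degrade/eliminate the threat
    return "isolate " + finding["host"]

def exploit(finding):
    # Pull IOCs out of what was found
    return [finding["indicator"]]

def analyze(iocs, third_party_intel):
    # Fuse self-generated intelligence with third-party sources
    return sorted(set(iocs) | set(third_party_intel))

def disseminate(iocs):
    # Distribute the results to the SOC and key stakeholders
    return {"audience": ["SOC", "stakeholders"], "iocs": iocs}

event = {"host": "ws-042", "hash": "abc123"}
finding = find(event)
if fix(finding, known_bad={"abc123"}):
    action = finish(finding)
    report = disseminate(analyze(exploit(finding), third_party_intel=["def456"]))
```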

One thing missing from the F3EAD model when applied to cyber operations is the inclusion of automation, aka Security Orchestration, Automation, and Response (SOAR). The gains in efficiency can greatly increase the speed at which the finish, exploit, and analyze phases can be completed. The first two phases, find and fix, are ones I believe still require the human touch due to the “fuzzy” (aka contextual) nature of events occurring within an organization.

The TLDR of the above? The future of kill chains must include the fusion of intelligence and automation without removing the human element from the equation. Until the equivalent of Skynet is invented (i.e., a truly sentient artificial intelligence capable of thinking in abstract ways), the “gut feeling” an analyst or incident responder gets when examining data will continue to be an advantage for us regular humans. Pairing this with the unmatched efficiency and speed gained by utilizing SOAR = winning!

The Verdict

Kill chains represent a comprehensive way to think about and visualize cyber attacks. Being able to communicate using a common lexicon (i.e., the terms and concepts in a kill chain) is critical to helping all levels of your organization understand the importance of security. However, I fear another fracturing of our lexicon will occur as newer versions of kill chains are introduced. Additionally, there appears to be an overreliance on detecting and preventing only the Tactics, Techniques, and Procedures (TTPs) found within these frameworks. Attackers have proven to be incredibly creative and endlessly resourceful, so their TTPs are going to change and evolve in ways we cannot yet imagine. This doesn’t mean we should discount the importance of using kill chains as part of our toolkit, but they should remain a part of our kit, not the gold standard by which we judge the effectiveness of the security programs we have created.

——————

Jeffrey Gardner, Practice Advisor for Detection and Response at Rapid7, recently presented a deep dive into all things kill chain. In it, he discusses how these methodologies can help your security organization cut down on threats and drastically reduce breach response times. You can also read the previous entries in this series for a general overview of kill chains and the specific frameworks we’ve discussed.

Watch the webcast now

Go back and read Part 1→Strategic and operational value, or Part 2→Strategic and tactical use cases

Addressing the OT-IT Risk and Asset Inventory Gap

Post Syndicated from Ben Garber original https://blog.rapid7.com/2021/02/01/addressing-the-ot-it-risk-and-asset-inventory-gap/

Addressing the OT-IT Risk and Asset Inventory Gap

Cyber-espionage and exploitation by nation-state-sanctioned actors have only become more prevalent in recent years, a notable example being the SolarWinds attack, which was attributed to nation-state actors with alleged Russian ties.

There are suspicions that sensitive information has been stolen from victims of the SolarWinds attack, such as Black Start, the Federal Energy Regulatory Commission’s plan to restore power after a grid blackout.

Attacks on critical infrastructure have grown in frequency since 2010, beginning with the first nation-state cyber-physical attack, on the Natanz nuclear enrichment facility (aka Stuxnet). The attack changed critical process parameters, such as the RPM of the centrifuges, and hid these changes from the system operators, causing random centrifuge failures and significantly delaying Iran's uranium enrichment efforts. This was followed by the blackouts caused by the attacks on the Ukrainian grid in 2015 and 2016.

Critical infrastructure is now a prime target in the context of global cyber warfare. Operational technology (OT), the backbone of industrial automation, has become less segmented due to equipment being addressable from the internet or by receiving services from the internet, such as software updates.

With the introduction of remote access and remote vendor support comes a much larger attack surface for the OT group, which traditionally didn’t handle IT security and advanced threats. While the Stuxnet attack destroyed centrifuges and may have delayed Iran’s nuclear program, other compromises can cause serious environmental impacts, injuries, and even loss of life. No ICS cyberattack to date has caused bodily injury, but the Trisis attack campaign has the potential to do so by compromising the safety instrumented systems (SIS) used to prevent fires and explosions.

Challenges facing security teams

Securing this space is no easy task. With the growth of IP-based communications into OT, the lines between OT and IT have become more and more blurred over who is in charge of securing these systems. Additionally, networks that were once disconnected (such as gas-fired power plants) are now connected for smart grid management.

As industrial control systems (ICS) are increasingly digitized, their attack surface grows, becoming more significant targets for malicious attacks. While the IT environment has foundationally evolved to have security as a cornerstone of management, OT has only recently started down that path. Much like the early days of internet protocols, developers of industrial protocols did not create protocol standards with security in mind, and many vendors developed proprietary protocols.

Fast forward to today, and we have a plethora of protocols with varying degrees of robustness and security in modern production environments. Many asset owners are hampered in their security efforts because they lack the ability to effectively monitor their environments and the appropriate security tools to respond to incidents. The OT equipment itself can also be sensitive to active queries, failing when sent unexpected data, more data than it can handle at once, or more active connections than allowed — making active monitoring somewhat risky.

Add in the ever-growing number of PC servers and workstations on ICS networks, and you have a complex attack surface that encompasses traditional enterprise services and cyber-physical systems. Solutions often require an approach that can address security across both environments and distinguish which systems are sensitive to active monitoring.

Bridging the gap with Rapid7 and SCADAFence

We can overcome these challenges by providing a unified system that monitors and assesses both environments. Security analysts need to understand what is happening within OT systems and how attackers breached those systems through the traditional IT infrastructure. Operators also need to be conscious of all the equipment within their production environments, including both OT assets and IT assets. With the integration of the SCADAfence product suite into Rapid7’s InsightVM, customers can get in-depth information around their OT assets and single out those devices that are sensitive to traditional layer-3 scanning techniques.

Through establishing a risk profile of all devices across the IT and OT infrastructure, operators and analysts can optimize risk prioritization and remediation efforts. Not only can IT and OT assets be enumerated and assessed, but Internet of Things (IoT) devices can as well.

With the integration of SCADAfence, automation customers can achieve full coverage across both the IT and OT environments by leveraging the Rapid7 Insight product portfolio, leading to risk reduction for the entire organization.

See how Rapid7 and SCADAfence deliver full OT & IoT visibility to SecOps teams

Learn More

Top Security Recommendations for 2021

Post Syndicated from Justin Turcotte original https://blog.rapid7.com/2020/12/24/top-security-recommendations-for-2021/

Top Security Recommendations for 2021

Happy HaXmas! We hope everyone is having a wonderful holiday season so far. This year has been wild and unpredictable, and has brought unique risks and threats to the forefront of business activities. So, to help everyone stay safer in 2021, the Strategic Advisory Services team here at Rapid7 is going to share some security recommendations going into the new year to help you better secure your business and minimize risk.

Reserve Your 2021 Cybersecurity History Calendar

Get Started

Governance around remote work and work from home

When the pandemic hit, many companies found they lacked governance around remote work and mobile devices because they hadn’t facilitated that type of work in the past. Many companies were—and still are—resistant to change and averse to work-from-home opportunities for their employees.

If you find yourself in that position, consider implementing policies for acceptable use around remote work, mobile devices, and bring-your-own-device (BYOD). Having these policies and measures in place will help ensure employees are aware of what is and is not acceptable use of company assets or networks, what their responsibilities are, and organizational expectations and processes.

Mobile device management

Mobile device management is key when it comes to implementing work-from-home security measures. Without the ability to manage and protect remote endpoints, the risk is higher that your company network could be compromised by an unsecured system utilizing a VPN to access company networks. Additionally, ensure you have controls in place to limit corporate VPN access to corporate-owned and -controlled devices—you don’t know (and probably don’t want to know!) what is lurking on systems that may not be protected from internet threats.

Consider vulnerability management, antivirus, and anti-malware tools as primary requirements for corporate endpoints in the wild. Many companies haven’t had the ability to update antivirus on systems that aren’t connected to the company network, or to patch those same systems when not connected. Utilizing cloud-based solutions that can be updated remotely without first needing a VPN connection to the company network is ideal in the post-pandemic world.

Rapid7’s InsightVM tool can give you the cloud-based vulnerability management capabilities that you need to assess remote corporate endpoints.

Securing VPN connections

How many companies were caught without an operational client VPN option when the lockdowns went into effect? Many customers that we have spoken to during the pandemic had to rush to implement VPN solutions, whether that was a client-based VPN or some type of SSL VPN solution, to allow employees the ability to work from home.

While implementing these VPN solutions, many customers opted for the get-it-working approach and failed to secure those VPN entry points as well as they should have. One way of ensuring VPN connections are protected is to require users to use multi-factor authentication (MFA) to remotely log in to the company network. This will help to protect VPN accounts from compromise by adding a layer to the authentication process.
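For example, time-based one-time passwords (TOTP) are one widely supported MFA factor. Below is a minimal sketch of RFC 6238 code generation and verification using only Python’s standard library; a production deployment should rely on a vetted MFA product or library rather than hand-rolled plumbing like this.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time, step=30, digits=6):
    """RFC 6238 TOTP code (SHA-1) for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

def verify(secret_b32, submitted, now=None, step=30):
    """Accept the current code or an immediate neighbor to tolerate clock skew."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret_b32, now + drift), submitted)
               for drift in (-step, 0, step))
```

With the RFC 6238 test secret (the ASCII string `12345678901234567890`, base32-encoded), this reproduces the published test vectors, e.g. code 287082 at time 59.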

Having a pre-authentication check for security compliance on your VPN connections will also help ensure systems that are not properly configured or contain a vulnerability are not able to connect to the company network without the issue being remediated. This will help lessen the exposure of the company network through poorly secured remote endpoints. These capabilities are provided by many VPN solution and network access control solution providers.

Securing data in the cloud

We have seen many of our customers making the move to the cloud, using solutions like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

Securing your data in the cloud is key, even when there is not an ongoing pandemic. Ensure that your cloud infrastructure is secured and well protected from possible attack or compromise. While the security of the cloud platform is the responsibility of the provider, the security of the systems and data that you place in the cloud is your responsibility, and no one else is going to do it for you.

A strong identity access management (IAM) program implemented for your cloud systems can help you control permissions to resources and help prevent data loss or theft.

It’s extremely important to monitor your cloud deployments so you can detect any suspicious or anomalous behavior or activity. Can you detect a brute force attack in your cloud environment? Can you detect suspicious behavior in a timely fashion? If not, look at Rapid7’s InsightIDR tool to give you that capability, and much more.

Validating protective measures

The validation of protective measures should be performed regardless of whether we are responding to a pandemic, but it is more important now than ever before. Many security and IT teams have deployed new solutions and measures to provide for their remote employees and have been busy responding to these new requirements during the pandemic.

Now that we are into eight months or more of working from home and social distancing, companies should begin the process of testing their protective measures and newly deployed security tools. This can be done through red, blue, or purple teaming or engaging third-party penetration testing teams to help ensure your newly deployed systems are protecting the network and remote endpoints as you believe them to be.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

More HaXmas blogs

UPnP With a Holiday Cheer

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2020/12/22/upnp-with-a-holiday-cheer/

UPnP With a Holiday Cheer

T’was the night before HaXmas,
when all through the house,
Not a creature was stirring, not even a mouse.
The stockings were hung by the chimney with care,
in hopes that St. Nicholas soon would be there.

This may be the way you start your holiday cheer,
but before you get started, let me make you aware.
I spend my holidays quite differently, I fear.
As a white-hat hacker with a UPnP cheer.

And since you may not be aware,
let me share what I learned with you,
so that you can also care,
how to port forward with UPnP holiday cheer.

Universal Plug and Play (UPnP) is a service that has been with us for many years and is used to automate the discovery and setup of network and communication services between devices on your network. This blog post will cover only the port forwarding services and will also share a Python script you can use to start examining this service.

UPnP port forwarding services are typically enabled by default on most consumer internet-facing Network Address Translation (NAT) routers supplied by internet service providers (ISPs) for supporting IPv4 networks. This is done so that devices on the internal network can automate the setup of needed TCP and UDP port forwarding functions on the internet-facing router, allowing devices on the internet to connect to services on your internal network.

So, the first thing I would like to say about this is that if you are not running applications or systems such as internet gaming systems that require this feature, I would recommend disabling this on your internet-facing router. Why? Because it has been used by malicious actors to further compromise a network by opening up port access into internal networks via malware. So, if you don’t need it, you can remove the risk by disabling it. This is the best option to help reduce any unnecessary exposure.

To make all this work, UPnP uses a discovery protocol known as Simple Service Discovery Protocol (SSDP). This SSDP discovery service for UPnP is a UDP service that responds on port 1900 and can be enumerated by broadcasting an M-SEARCH message via the multicast address 239.255.255.250. This M-SEARCH message will return device information, including the URL and port number for the device description file ‘rootDesc.xml’. Here is an example of a returned M-SEARCH response from a NETGEAR Wi-Fi router device on my network:

[Screenshot: M-SEARCH response from a NETGEAR Wi-Fi router]

To send a M-SEARCH multicast message, here is a simple Python script:

# Simple script to enumerate UPnP devices via SSDP

import socket

# M-SEARCH message body
MS = \
    'M-SEARCH * HTTP/1.1\r\n' \
    'HOST:239.255.255.250:1900\r\n' \
    'ST:upnp:rootdevice\r\n' \
    'MX:2\r\n' \
    'MAN:"ssdp:discover"\r\n' \
    '\r\n'

# Set up a UDP socket for multicast
SOC = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
SOC.settimeout(2)

# Send the M-SEARCH message to the UPnP multicast address
SOC.sendto(MS.encode('utf-8'), ('239.255.255.250', 1900))

# Listen for and print any returned responses until the socket times out
try:
    while True:
        data, addr = SOC.recvfrom(8192)
        print(addr, data)
except socket.timeout:
    pass

The next step is to access the rootDesc.xml file. In this case, it is accessible on my device via http://192.168.2.74:5555/rootDesc.xml. Looking at the M-SEARCH response above, we can see that the IP address given for rootDesc.xml is 169.254.39.187. Addresses in the 169.254.*.* range are known as Automatic Private IP addresses, and it is not uncommon to see one returned by an M-SEARCH request. Trying to access it will fail because it is incorrect. To actually access the rootDesc.xml file, you will need to use the device’s true IP address, which in my case was 192.168.2.74 and was shown in the header of the M-SEARCH message response.

Once the rootDesc.xml is returned, you will see some very interesting things listed, but in this case, we are only interested in port forwarding. If port forwarding service is available, it will be listed in the rootDesc.xml file as service type WANIPConnection, as shown below:

[Screenshot: WANIPConnection service entry in rootDesc.xml]
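
If you would rather script this step than eyeball the XML, the description file can be parsed with the standard library. The helper and the trimmed-down sample document below are illustrative; in practice you would fetch the real file with urllib from the URL your device returns.

```python
import xml.etree.ElementTree as ET

# UPnP device descriptions use this default XML namespace
NS = "{urn:schemas-upnp-org:device-1-0}"

def find_wanip_control_url(desc_xml):
    """Return the controlURL of the first WANIPConnection service, or None."""
    root = ET.fromstring(desc_xml)
    for svc in root.iter(NS + "service"):
        if "WANIPConnection" in (svc.findtext(NS + "serviceType") or ""):
            return svc.findtext(NS + "controlURL")
    return None

# A trimmed-down example description, shaped like what my router returned
SAMPLE = """<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0">
  <device><serviceList><service>
    <serviceType>urn:schemas-upnp-org:service:WANIPConnection:1</serviceType>
    <controlURL>/ctl/IPConn</controlURL>
  </service></serviceList></device>
</root>"""

print(find_wanip_control_url(SAMPLE))  # /ctl/IPConn
```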

You can open WANIPCn.xml on the same http service and TCP port location that you retrieved the rootDesc.xml file. The WANIPCn.xml file identifies various actions that are available, and this will often include the following example actions:

  • AddPortMapping
  • GetExternalIPAddress
  • DeletePortMapping
  • GetStatusInfo
  • GetGenericPortMappingEntry
  • GetSpecificPortMappingEntry

Under each of these actions will be an argument list. This argument list specifies the argument values that can be sent via Simple Object Access Protocol (SOAP) messages to the control URL at http://192.168.2.74:5555/ctl/IPConn, which is used to configure settings or retrieve status on the router device. SOAP is a messaging specification that uses an Extensible Markup Language (XML) format to exchange information.

Below are a couple of captured SOAP messages, the first showing AddPortMapping. This will set up port mapping on the router at the IP address 192.168.1.1. The port being added in this case is TCP 1234, and it is set up to map the internet side of the router to the internal IP address 192.168.1.241, so anyone connecting to TCP port 1234 on the external IP address of the router will be connected to port 1234 on the internal host at 192.168.1.241.

[Captured SOAP message: AddPortMapping request]
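
As a sketch of what such a message looks like on the wire, the envelope below mirrors the captured AddPortMapping example. The template, ports, and addresses are illustrative; actually POSTing something like this should only be done against a lab router you own.

```python
# Build the AddPortMapping SOAP envelope for the WANIPConnection:1 service.
SOAP_TEMPLATE = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{ext_port}</NewExternalPort>
      <NewProtocol>{proto}</NewProtocol>
      <NewInternalPort>{int_port}</NewInternalPort>
      <NewInternalClient>{int_client}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>test mapping</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

body = SOAP_TEMPLATE.format(ext_port=1234, proto="TCP", int_port=1234,
                            int_client="192.168.1.241")

# To send it (lab use only), POST `body` to the device's control URL with:
#   Content-Type: text/xml; charset="utf-8"
#   SOAPACTION: "urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"
```

DeletePortMapping works the same way, with only NewRemoteHost, NewExternalPort, and NewProtocol in the body.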

The following captured SOAP message shows the action DeletePortMapping being used to delete the port mapping that was created in the above SOAP message:

[Captured SOAP message: DeletePortMapping request]

To conclude this simple introduction to UPnP, SSDP, and port forwarding services, I highly recommend that you do not experiment on your personal internet-facing router or DSL modem, where you could impact your home network’s security posture. Instead, set up a test environment. This can easily be done with any typical home router or Wi-Fi access point with router services. These can often be purchased used, or you may even have one lying around that you have upgraded from. It is amazing how simple it is to modify a router using these UPnP services by sending SOAP messages, and I hope you will take this introduction and play with these services to further expand your knowledge in this area. If you are looking for further tools for experimenting with port forwarding services, you can use the UPnP IGD SOAP Port Mapping Utility in Metasploit to create and delete these port mappings.

But I heard him exclaim, ere he drove out of sight-
Happy HaXmas to all, and to all a good UPnP night


Help Others Be “Cyber Aware” This Festive Season—And All Year Round!

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2020/12/17/help-others-be-cyber-aware-this-festive-season-and-all-year-round/


Are you tired of being the cybersecurity help desk for everyone you know? Are you frustrated with spending all your time securing your corporate environment, only to have to deal with the threat that snuck in through naive end-users? Are you new to security and wondering how you ended up here? This blog is for you!

Introducing the Cyber Aware Campaign

Every year, November and December tend to be awash with media articles sharing tips for “safe” online shopping, particularly around Cyber Monday. This has been compounded in 2020, a year characterized in cybersecurity by increased remote working, reliance on online and delivery services, and COVID-19-themed scams and attacks. Many have viewed 2020 as a hacker’s playground.

It’s in this setting then that the U.K. government has relaunched its Cyber Aware campaign to help internet citizens navigate the rocky shores of defending their digital lives. The campaign—which features TV, radio, and print ads, as well as various (virtual) events—offers six practical and actionable tips for helping people protect themselves online.

The tips are designed to be applicable to the broadest audience possible. They are not necessarily the most sophisticated security best practices, but rather (and very intentionally), they are fairly basic and applicable to a wide range of people. The list has been devised as the result of considerable development and testing: The U.K. government not only sought input from security experts, but also from nonprofits and civil society groups representing various constituent groups. This helped them ensure the tips would be practical for everyone from your granny to your favorite athlete (maybe they are the same person).

As with enterprise security, there is regrettably no silver bullet for personal security, so these tips will not make people completely invulnerable. However, they do focus on steps that are manageable and will meaningfully reduce risk exposure for individuals. The U.K. government has focused on finding a balance between being thorough and not alienating people from making the effort, hence settling on just six tips. Naturally, we prefer things that come in sevens, but this is a decent start. 😉

The tips

Four of the six tips focus on passwords and identity access management. This seems like a good choice; it’s extremely hard to change behavior such that people stop sharing personal information or clicking on links, but if you can make it harder for attackers to access accounts, that’s a good step toward meaningfully reducing risk.

So, let’s take a look at the actual tips…

  1. Use a strong and separate password for your email
  2. Create strong passwords using three random words
  3. Save your passwords in your browser
  4. Turn on two-factor authentication (2FA)
  5. Update your devices
  6. Back up your data
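
As a toy illustration of tip 2, Python’s secrets module can pick the three random words. The short word list here is a placeholder; a real generator should draw from a large list (such as the EFF diceware list) so the resulting passphrase has enough entropy.

```python
import secrets

# Placeholder word list for illustration; use a large curated list in practice
WORDS = ["correct", "horse", "battery", "staple", "meadow", "violin",
         "copper", "lantern", "orbit", "thistle", "quartz", "ember"]

def three_random_words(words=WORDS, n=3, sep="-"):
    """Join n cryptographically random words into a passphrase."""
    return sep.join(secrets.choice(words) for _ in range(n))

print(three_random_words())  # prints three hyphen-joined words, e.g. "orbit-copper-ember"
```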

We recommend clicking on the links and taking a look at the full guidance. Or, for more information on the tips, how they were developed, and what the Cyber Aware campaign entails, check out this Security Nation podcast interview with the delightful Cub Llewelyn-Davies of the UK National Cyber Security Centre.

As a starting point or personal security baseline, this is a very decent list, and we hope it will have a meaningful impact in encouraging individuals to make a few small changes to protect themselves online.  

As overzealous security enthusiasts, though, we had to take it one step further. We’ve created a free personal security guide of our own that starts with the Cyber Aware steps, then offers additional advice for those who want to go further. We know that for the vast majority of internet users, even six steps feels like too many, but we also hold out hope that many people may be inspired to dig deeper or may just have more specific circumstances they need help with.

You can download the guide for free here. Maybe include it with your holiday cards this year—personal security is the gift that keeps on giving!

Why should you care about this?

If you are reading the Rapid7 blog, the chances are that you already think about security and are almost certainly taking these steps or some appropriate alternative to them (if only more websites accepted 50-character passwords, eh?). Nonetheless, even if you are a security professional, the need to educate others likely affects you. Maybe it’s because you’re sick of constantly being asked for security tips or assistance by family and friends. Maybe you just can’t handle reading more headlines about security incidents that could have been avoided with some basic personal security hygiene. Maybe you’re worried that no matter how diligently you work to protect your corporate environment, an attacker will gain a foothold through an unwitting end-user with access to your systems.

The point is that we are all engaging in the internet together. A better informed internet citizenry is one that makes the job of attackers slightly harder, reducing the potential opportunities for attackers and raising the bar of entry into the cybercrime economy. It’s not a revolution or that ever-elusive silver bullet that will save us all, but increasing even the basic security level of all internet citizens creates a more secure ecosystem for everyone. As security professionals, we should be highly invested in seeing that become a reality, so send the guide or Cyber Aware web page to your less security-savvy friends, family, and/or users today.

Help them become more Cyber Aware, and help create a safer internet for us all.
