All posts by Aaron Wells

IAM Never Gonna Give You Up, Never Gonna Breach Your Cloud

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/03/iam-never-gonna-give-you-up-never-gonna-breach-your-cloud/

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

Identity and access management (IAM) credentials have solved myriad security issues, but the recent move to cloud-based IAM has left many scratching their heads at just how complex it can be.

IAM on-premises vs. IAM off-premises

IAM on-premises, well, it's become a whole lot simpler. In many organizations it is LDAP-based, so most things—database credentials, system accounts—tie back into it, and there are well-established processes for dealing with them. In the cloud, however, organizations have to deal with inheritance and with constructs that don't map cleanly back to the on-premises world. These new concepts and different ways of interacting can be overwhelming.

Complexity can really become an issue with something like assume-role in AWS. Going to a least-privileged model can frustrate people, so they may just ask for access to everything on a given surface, promising to scale the permissions back later. The worry there is that you end up with over-permissioned identities that never get fixed. With assume-role in particular, credentials are no longer stored inside a physical operating system, but rather in a metadata layer associated with a piece of infrastructure. This applies to a number of different services within the cloud provider—everything from compute instances, to database instances, to storage assets, and more. These aspects can all be complex to secure, but there's no question the model makes operations safer. Speaking of safe…
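
To make the assume-role model concrete, here's a minimal sketch in Python using boto3. The role ARN, bucket, and inline session policy are hypothetical placeholders; the point is that the workload trades long-lived keys for short-lived, scoped-down credentials.

```python
# Minimal sketch: obtaining short-lived credentials via STS AssumeRole instead
# of baking long-lived keys into the operating system. The role ARN, bucket,
# and inline session policy below are hypothetical examples.
import json
import boto3

sts = boto3.client("sts")

# A session policy can only narrow (never expand) the role's permissions,
# which is one way to push toward least privilege per workload.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }
    ],
}

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-app-role",  # hypothetical
    RoleSessionName="example-app-session",
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # keep the credentials short-lived
)

creds = resp["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

On a compute instance with an attached role, the SDK picks up temporary credentials like these from the instance metadata service automatically—the "metadata layer" described above.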

Going fast vs. going safe

This may be stating the obvious, but the general sentiment among feature developers is that they aren't all that excited about security. Often, that's because their jobs depend on speed—they're told to go fast. A main value proposition of the cloud is exactly that: "I can be as agile as I need to be, I can be as quick as I need to be." There's relief in that, but those same teams are very likely also told, "Make sure we don't have incidents, data breaches, or data exposures." Tension can grow as cloud use increases and IAM responsibilities are operationally offloaded. Here, keeping things as simple as possible is a way to maintain efficient processes and keep them moving. What are some tools organizations can use to lessen friction between developer teams and security?

Service control policies and session policies

If there's an umbrella structure sitting over your cloud accounts—or over constructs like Google Cloud projects—policies can be pushed down from the top level with service control policies. A session policy is generated from the user side, from an application, or from an assume-role. Going further still, there's also an identity policy associated with each individual in an active session. With all of this potential complexity, how would organizations go about simplifying, especially as they shift authentication to the cloud?

The above process does provide an abundance of granularity as well as freedom to explicitly allow or deny at many, many levels. The flipside of that, of course, is that there are many, many levels.  

  • Leverage all aspects: Some companies may restrict access to specific services or regions. Plus, they may not want to use just any old cloud service, given the IAM implications, depending on the organization's level of security. So it's about tailoring a set of policies to specific goals (see the sketch after this list).
  • Pre-canned policies: Leveraging automation, it might be faster to deploy these types of policies while also leaving room for a certain amount of autonomy. In this way, teams can tailor some resource-level access standards.  
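
As a rough illustration of restricting services and regions from the top down, here's what an SCP-style policy might look like, expressed as a Python dictionary mirroring the policy JSON. The regions and the blocked service are placeholders—real policies should be tailored to the organization's goals.

```python
# Rough illustration: an SCP-style policy pushed down from an organizational
# umbrella, denying actions outside approved regions and blocking a service
# the organization has not yet sanctioned. Regions/services are placeholders.
example_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*"],  # global services
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            },
        },
        {
            "Sid": "DenyUnsanctionedService",
            "Effect": "Deny",
            "Action": ["sagemaker:*"],  # hypothetical: a service not yet approved
            "Resource": "*",
        },
    ],
}
```

Because SCPs, session policies, and identity policies are all evaluated together, an explicit deny at any level wins—which is exactly the granularity (and the "many, many levels") described above.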

Again, based on a company’s specific goals, these types of processes can help ease friction between teams looking to ship a project fast and those trying their best to keep them secure.

Seeing it and protecting it

At the end of the day, the feeling from much of the security world is that visibility has to be there for SecOps to be able to tell you that something is secure. They may not need to go inside and read the data, but if there is no visibility, they can’t comment on any aspect of configuration. An old idea around securing infrastructure is that everything in a private network is implicitly authorized to access everything else. So, once a security checkpoint is crossed, anything could be accessed.

Of course, this whole process aims to improve incident-response efficiency by introducing security controls at every step of the kill chain. That means not assuming any identity is legitimate and not trusting anybody by default—also known as zero trust. So, if the information on your website is something that's super-secret, it might make sense to put it behind a VPN for that extra layer of security—even with HTTPS and authentication and authorization in place—against someone who might not be part of the team.

A zero-trust initiative opens up the discussion of workload identity in the cloud. Teams could use cloud-native services (in Google Cloud, AWS, and others) to ensure apps talking to each other are authenticating properly and that connections originate from the right machines.

Identity is everything

At the end of the day, it’s never about invasion of any sort of privacy. It’s all about securing as much as you can and authenticating connections to protect against threats. A combination of technical processes and open communication is key in mitigating the challenges of protecting against those threats in a cloud-based IAM solution.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

How to Achieve and Maintain Continuous Cloud Compliance

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/01/how-to-achieve-and-maintain-continuous-cloud-compliance/

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

There are two things that make data a hot topic. First, keeping track of your organization's sheer volume of data is extremely difficult. Second, the evolving nature of the threat and threat-vector landscape can make data management and protection astonishingly challenging. We should focus on staying compliant in the present while also paying attention to, and evolving for, what's coming in the near future.

Why is it so hard to achieve continuous compliance in the cloud?

Getting it all right and continuously achieving compliance can be taxing on any security organization, and the cloud adds another layer of mobility to this. Pushing more operations into the cloud has so many "shiny" benefits that losing direct visibility into your physical environment is a drawback whose true impact may not be felt until the long term.

People can also be a big x-factor when it comes to cloud compliance. For years, your workforce has been trained in one area of compliance, and now you might have to take such measures as hiring new talent or retraining existing employees in proper cloud-compliance methodologies. So, it’s certainly worth it to regularly take a hard look at your existing policies, because the last thing anyone wants is for their compliance to be out-of-date and no longer addressing the right issues.

Taking these considerations into account, and given the ephemeral nature of the cloud in general, is continuous cloud compliance even achievable?

The answer is yes, but of course it depends on what you're trying to be compliant with. You might meet all of the laws, rules, and regulations in your industry, yet still not be mitigating all the risks. Closing that gap means ensuring you have preventative controls in place, as well as continuous monitoring and scanning, so that you can identify threats faster and mitigate more overall risk.
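
As a small example of what continuous monitoring can look like in practice, the sketch below (Python with boto3) flags S3 buckets that lack default encryption—the kind of preventative check that could run on a schedule. The alerting step is a placeholder for whatever ticketing or chat integration an organization actually uses.

```python
# Minimal sketch of a scheduled compliance check: flag S3 buckets that lack
# default encryption. The "alert" step is a placeholder for a real
# ticketing or chat integration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def unencrypted_buckets():
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                findings.append(name)  # no default encryption configured
            else:
                raise
    return findings

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"ALERT: bucket {name} has no default encryption")  # placeholder alert
```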

Therefore, the answer will be different for each organization as goals are defined and you go about building a unique and continuous cloud-compliant solution. A more specific goal would come back to people and ensuring they work within the guardrails you set for them in your monitoring and preventative measures.

Changing the culture of blame

What happens when unencrypted data ends up in the cloud? Is the engineer to blame? The security team? Is your organization properly calibrated to share accountability with other teams, or does responsibility end with you? There is no question that continuous compliance takes constant teamwork that revolves around being solution-oriented and learning to not “point the finger.” It’s important that all players are comfortable asking for help and, in a worst-case scenario, don’t feel scared for their jobs.

Innovation comes from experimentation, and personnel should feel a certain amount of freedom to experiment, as well as to identify when things go wrong and own it. Sourcing the right talent plays a big role in changing the culture of blame. On-premises skills are different from cloud skills; an engineer who is great at automation on-premises could struggle in the cloud. Often it simply comes down to a new vocabulary and toolset to learn. It's key to source talent that's eager to learn, so that the whole team can ultimately succeed in this new environment.

Constant tuning

It would be amazing if there were a big red easy button for automating compliance and just calling it done. Even as automation becomes easier, it will still take vigilance to optimize processes as budgets are freed up and stakeholders decide where the money should be spent. Have the conversation about spending money on compliance at the beginning of the project; otherwise you might delay it for a long period of time and then spend $100 million more to become compliant. Getting everyone on the same page early can save lots of red-tape cutting and, of course, money.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

Building a Holistic VRM Strategy That Includes the Web Application Layer

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/02/25/building-a-holistic-vrm-strategy-that-includes-the-web-application-layer/

Building security into your overall vulnerability risk management (VRM) strategy is a must-do in the age of the all-important web app. Between security and IT-Ops teams, there are a number of steps in the VRM process, including asset identification, enumeration, prioritization, and remediation. How does application security fit in?

Co-sponsored by Forrester, a recent Rapid7 webcast expounds upon the topics discussed in this blog post. The distinguished subject-matter experts and presenters also dive deep into the nitty gritty of what it takes to get a better night’s sleep by creating a VRM strategy that extends to the application layer. Watch the webcast here, and read on for our recap below!

Web applications and APIs are assets, too

Applications are one of the most common ways attackers are getting in. In a recent survey, Forrester found that 31% of firms suffered a breach as a result of an external attack, with applications serving as one of the most common attack vectors. Along with all other assets in a VRM program, web apps must be prioritized as assets that need to be covered.

Knowing this, security leaders have started to think harder about application security. But just because it's a top priority for security, does that mean it's a top priority for the whole company? Bringing stakeholders into the process early is key, because getting that application layer covered affects the entire organization. The more buy-in and support from everyone who has a stake in getting secure products to customers, the more value everyone gets from a comprehensive VRM investment.

Building security in

Buy-in comes from building in. Static Application Security Testing (SAST) is a process that can find flaws early in the life cycle of applications, providing guidance to dev teams so they can find and fix issues early in the process. Adopting SAST in the development phase means making it easier for developers to remediate as they’re coding.

Further, Software Composition Analysis (SCA) tools can help analyze the open-source libraries and third-party components that go into creating a large portion of today’s applications. A modern VRM program also needs to consider these components as assets to cover. Building these processes and tools into the Software Development Lifecycle (SDLC) will help dev teams experience fewer security flaws, get real-time education, and eventually find the ability to scale quickly.
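
One simple way to "build security in" is a gate in the pipeline that fails the build when the SAST or SCA step reports findings above an agreed severity. The sketch below assumes a hypothetical scanner that writes findings to a JSON file; the file name, schema, and threshold are illustrative, not any specific tool's format.

```python
# Illustrative CI gate: fail the build if a (hypothetical) SAST/SCA scanner's
# JSON report contains findings at or above an agreed severity threshold.
# The report path and schema are assumptions, not a real tool's format.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # agreed with the dev team up front

def main(report_path="scan-report.json"):
    with open(report_path) as fh:
        findings = json.load(fh)  # assumed: a list of {"id", "severity", "title"}
    blocking = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 0) >= SEVERITY_RANK[THRESHOLD]
    ]
    for f in blocking:
        print(f"BLOCKING: [{f['severity']}] {f.get('title', f.get('id'))}")
    # A non-zero exit makes the CI stage fail, stopping the merge or deploy.
    sys.exit(1 if blocking else 0)

if __name__ == "__main__":
    main(*sys.argv[1:])
```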

However, as development approaches change, more and more organizations are struggling to identify and secure the sheer number of APIs built into their applications. Security teams might understandably be rushing to keep up with:

  • Identifying and cataloging APIs and endpoints
  • Assuring and managing API user identities
  • Meeting regulatory and compliance requirements        

How can security pros start thinking about baking those processes in earlier?

Understanding API security

There is no single tool for API security. A holistic approach includes identifying what sorts of APIs are out there, assessing them for organizational fit, and scanning and testing them for vulnerabilities. It also includes managing them throughout deployment and production. Does the traffic match how you expect the API to behave?
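
To make the "does the traffic match expected behavior" question concrete, here's a small sketch that compares observed requests against a declared inventory of known endpoints—one simple way to surface shadow or undocumented APIs. The inventory and log records are invented for illustration.

```python
# Illustrative check: compare observed API traffic against a declared inventory
# of known endpoints to surface undocumented ("shadow") routes. The inventory
# and the observed-request records are invented placeholders.
KNOWN_ENDPOINTS = {
    ("GET", "/api/v1/orders"),
    ("POST", "/api/v1/orders"),
    ("GET", "/api/v1/users"),
}

observed_requests = [  # in practice, parsed from gateway or access logs
    {"method": "GET", "path": "/api/v1/orders"},
    {"method": "DELETE", "path": "/api/v1/users"},      # unexpected method
    {"method": "GET", "path": "/internal/debug/dump"},  # unexpected route
]

def shadow_endpoints(requests):
    seen = {(r["method"], r["path"]) for r in requests}
    return sorted(seen - KNOWN_ENDPOINTS)

for method, path in shadow_endpoints(observed_requests):
    print(f"Review: {method} {path} is not in the API inventory")
```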

Looking at API security from the client to the backend is also key. Not only does your existing application tooling need to cover API behavior, but additional tooling can provide valuable insight into API-specific issues like managing authentication and authorization. Remember, new development methodologies will require new security patterns.

Zoom out: What are you looking to accomplish?

When it comes to rethinking or building a sound VRM strategy, performing foundational work up front will help get organizational buy-in faster. It’ll take time to inventory everything that’s sitting at the edge, from web applications to APIs to third-party vendors. Recognizing that a significant shift will take time and being transparent about this with stakeholders can only help streamline the process. So, why invest the time?

As more people than ever before shift to a work-from-home environment, organizations may not feel as safe as they once did having corporate information residing on endpoints scattered around the city and, indeed, the world. This naturally leads to increased questioning and anxiety from cyber-insurers and auditors, particularly as it concerns things like an organization's supply chain and partners. As the recent SolarWinds incident showed, an attack on one organization can quickly escalate into a threat against its partners.

If you're part of an organization beginning to engage more with your existing supply chain or validations, it's important to remember that you are also part of their chain. So checks and scrutiny become reciprocal as more partners come online. In this entire ecosystem, a good rule of thumb is to remember that exploitation has a real cost—whether the attacker's intent is simply to disseminate sensitive data or there's a ransom scenario afoot. Defining security frameworks and testing them against overall goals can help translate processes down into each project as well as speed up validation with a potential partner.

Extend, extend, extend

When it comes to rethinking or building a sound VRM strategy, extending that foundational security work to your web applications at the edge is a modern best practice that can yield many benefits—whether it’s protecting against someone probing for their own nefarious purposes or looking to sell that information down the line. It can also start to create an ingrained culture of taking proactive and protective steps to secure applications and the tools on which they’re built.  

For more information about broadening your VRM strategy to include the application layer, please watch our webcast with Forrester here.

Take the Full-Stack Approach to Securing Your Modern Attack Surface

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/02/19/take-the-full-stack-approach-to-securing-your-modern-attack-surface/

A growing remote-work culture demands a more mature approach to security. It's time to test, monitor, secure, and extend to the application layer.

A modern methodology for vulnerability management (VM) is vital for organizations looking to minimize attack surfaces by prioritizing potential threats. This includes identifying, evaluating, treating, and reporting on security risks across key systems and the software that runs on them. An example of this full-stack approach includes broader coverage of on-premises and virtual environments, inclusive of web-application testing, and leveraging best-in-class practices and tools.

A good place to start is establishing an asset management solution. Gaining a full understanding of the vulnerabilities associated with each asset across the network is key to informing stakeholders, prioritizing vulnerabilities, and remediating issues. Due to the persisting COVID-19 pandemic, these assets are increasingly part of a growing remote workforce that is continuously expanding every organization's attack surface. Because assets no longer regularly connect to corporate networks, traditional vulnerability scans often aren't possible.

This has paved the way for agents to fill that particular gap. For instance, Rapid7's Insight Agent is lightweight data-collection software you can install on any cloud-based asset. Let's take a more in-depth look at modern vulnerability risk management (VRM) and what to look for in a holistic solution.

The need for speed

The COVID-19 pandemic has accelerated the evolution of security and protections for an unplanned, exponential growth in the global remote workforce. This means a faster digital transformation for every industry and organization. It means a faster pace of spinning up and scaling new apps. And it means quickening cloud adoption as IT teams scramble for accessible and reliable places to host mission-critical services. So how do we go about securing every layer in this new era of VRM?

  • Prioritizing vulnerabilities is more important than ever. Limited time and an ever-changing threat landscape make it unrealistic for teams to try to fix everything. Scrambling to do so could mean critical threats slipping through the cracks.
  • Developing strong partnerships has new meaning because, most likely, those partnerships will be virtual for the foreseeable future. Thus extra attention must be paid to maintaining them so there are more reliable eyes monitoring for vulnerabilities and ready to jump into action if a threat arises.  
  • Incorporating a full-stack approach means testing traditional and cloud infrastructure, and extends to the applications those environments host. Teams must move carefully, but also expeditiously, when leveraging scan engines and agents to remotely monitor servers.

With the acceleration of seemingly all security processes, it's also important to remember to take stock of what's working and what's not. No matter how many fancy features it has, a solution is only worth the investment if it meets your organization's unique needs and drives eventual ROI.

About that application layer

Gaining a real-time understanding of an attack on your web apps provides actionable intelligence for quick remediation, while also giving the team a teaching moment for the next time it happens. InsightAppSec and tCell from Rapid7 form a test-monitor-prevent solution that focuses on neutralizing vulnerabilities at the application layer.

With guided remediation of web app flaws, you can begin building a road map toward more secure applications. You'll start by scanning your applications in as few as five minutes to get visibility into the weaknesses that exist in them. From there, you'll be able to view severity and remediation guidance and share it with key stakeholders, allowing you to collaborate faster and scale more easily. Scan on- and off-premises apps with InsightAppSec's powerful cloud engines, accessing all of your internal and external scan configurations from a central console.

The ability to monitor more apps in more environments will be key for the future of your business, and is an extra layer of protection for vulnerabilities you can’t remediate in time. Finding solutions that include functionality to help your remediation stakeholders understand the context of the associated vulnerabilities (Attack Replay, granular remediation guidance, etc.) will allow you to partner more effectively.

An increased reliance on direct-to-cloud app deployment is a natural evolution. Benefits like higher baseline security, automated hardening, and increased flexibility are attractive. But all of that demands more time and more vigilance.

But what about the infrastructure? (People and machines)

Consider this: It’s not just about remediation, it’s also how you navigate the red tape. Grasping a more complete picture of how vulnerabilities translate to business risk is key not only for communicating those risks to higher-ups, but also maintaining and growing things like team headcount. After all, you have to have people to solve the problems. InsightVM, Rapid7’s vulnerability management solution, can help you understand and prioritize risk, with clarity.

Assume everything along your attack surface is being targeted by threat actors. These days, the reports of malicious events are coming more frequently. But covering local, remote, cloud, containerized, and virtual infrastructure is possible with InsightVM. It’s not a guaranteed catch-all solution, but it does provide the shared view and common language that can bring together traditionally siloed teams. It also paves the way for collaboration and accountability between those teams, making it easier for remediators to drive impact, celebrate progress, and improve ROI.  

With more fully supported integrations than any other VM vendor, as well as the ability to automate virtually any aspect of vulnerability scanning via a RESTful API, it's now possible to get a near-complete story of the security of your infrastructure and how it affects the business.
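
As a hedged sketch of what that kind of automation could look like, the snippet below pulls assets from a vulnerability management console's REST API and filters them by risk score. The console host, credentials, endpoint path, and field names are assumptions to verify against your console's API documentation.

```python
# Hedged sketch: pulling asset data from a vulnerability management console's
# REST API for reporting. Host, credentials, endpoint path, and field names
# are assumptions to check against your console's API documentation.
import requests

CONSOLE = "https://insightvm.example.internal:3780"  # hypothetical console host
AUTH = ("api-user", "api-password")                  # placeholder credentials

def high_risk_assets(min_risk_score=20000):
    resp = requests.get(
        f"{CONSOLE}/api/3/assets",   # assumed path; verify in the API docs
        auth=AUTH,
        params={"size": 500},
        verify=True,  # keep TLS verification on for a real console
    )
    resp.raise_for_status()
    assets = resp.json().get("resources", [])
    return [a for a in assets if a.get("riskScore", 0) >= min_risk_score]

for asset in high_risk_assets():
    print(asset.get("hostName") or asset.get("ip"), asset.get("riskScore"))
```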

A fortified foundation

Together, InsightVM and InsightAppSec can be complementary solutions to security organizations looking to tailor or refine any on-premises, off-premises, or hybrid VRM program.  

  • Comprehensive visibility at the infrastructure layer empowers you to leverage people more efficiently.
  • Click-and-scan security testing at the application layer enables rapid return of actionable results … and peace of mind.
  • Robust reporting capabilities featured in both solutions make it easy to measure progress and report it to key stakeholders.
  • A single pane of glass is the best way to see real-time processes at work as well as the overall security status of your world.

A full-stack approach can help you secure every layer of your attack surface. Then someday, perhaps we won’t call it an “attack” surface anymore.

SOAR Tools: What to Look for When Investing in Security Automation Tech

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/02/10/soar-tools-what-to-look-for-when-investing-in-security-automation-tech/

Security orchestration, automation, and response (SOAR) refers to a collection of software solutions and tools that organizations can leverage to streamline security operations in three key areas: threat and vulnerability management, incident response, and security-operations automation.

From a single platform, teams can use automation to create efficiencies and stay firmly in control of IT security functions. SOAR solutions like Rapid7 InsightConnect also enable process implementation and efficiency-gap analysis, and incorporate machine learning to help analysts accelerate operations intelligently.

3 core competencies of SOAR

According to Gartner, these are the most important technological features of SOAR:

  • Threat and vulnerability management supports vulnerability remediation as well as formalized workflows, reporting, and collaboration.
  • Security-incident response supports how an organization plans, tracks, and coordinates incident responses.
  • Security-operations automation supports orchestration of workflows, processes, policy execution, and reporting.

Your SOAR: Essential elements

A solution tailored to your team will yield the greatest benefits to the organization. With regard to the features mentioned above, security teams typically are looking at some key benefits as must-haves when planning a SOAR solution.

Redistribute brainpower with orchestration and automation tools. Teams build real-time triggers into workflows, which kick-start automation. Triggers listen for certain behaviors, and then initiate workflows when the required input passes through the trigger. Without orchestration from a SOAR tool, the security team would coordinate these workflows manually. SOAR integrates across security tools via APIs, with workflows across these tools detecting and responding to incidents and threats.

Execute security tasks in seconds versus hours by automating a series of steps that make up a playbook. Teams can monitor these automated processes in a user-friendly dashboard or in their preferred chat tools. While orchestration enables integrations and coordination across security tools, playbooks automatically execute the interdependent actions in a particular sequence—without the need for human interaction.
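
In the abstract, the trigger-and-playbook pattern looks something like the Python sketch below: a trigger listens for a matching event, then the playbook's steps run in a fixed order. This is a generic illustration of the pattern, not any vendor's actual workflow format; every function here is a placeholder.

```python
# Generic illustration of the SOAR pattern: a trigger listens for a matching
# event, then a playbook executes its steps in order. These functions are
# placeholders, not any vendor's actual workflow format.
def trigger(event):
    # Fire only on the behavior we care about, e.g. a suspicious login alert.
    return event.get("type") == "suspicious_login"

def enrich(event):
    # Placeholder: look up the user, source-IP reputation, recent activity.
    return {**event, "ip_reputation": "unknown"}

def contain(event):
    # Placeholder: disable the account or isolate the host via an integration.
    print(f"Containing account {event.get('user')}")
    return event

def notify(event):
    # Placeholder: post a summary to the team's chat tool and open a ticket.
    print(f"Notifying analysts about {event.get('user')} from {event.get('source_ip')}")
    return event

PLAYBOOK = [enrich, contain, notify]  # interdependent steps, fixed order

def handle(event):
    if not trigger(event):
        return
    for step in PLAYBOOK:
        event = step(event)

handle({"type": "suspicious_login", "user": "jdoe", "source_ip": "203.0.113.7"})
```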

Once implemented, a comprehensive SOAR solution should help streamline and simplify. With InsightConnect, teams can customize workflows as much or as little as they like. Connect teams and tools for clear communication, deploy no-code-connect-and-go workflows, and put automation to work for your business without sacrificing control.

Rapid solutions

SOAR platforms are designed to accelerate response times. A quality solution should be easy to deploy and use; it should also be reliable, nonintrusive, and safe. Teams should tailor it to be as efficient as possible so that it doesn’t end up costing time. This also means enabling mobile device access and control so teams can run playbooks, review security artifacts, and triage events—all on the go. How else can SOAR solve your need for speed?

  • Scalability: Your automation engine will scale with your organization and the number of incidents it eventually incurs. Think about optimizing performance by designing your solution to allow for vertical (CPU and RAM increases) and horizontal (server-instance increases) scaling.
  • Dual action: Security teams receive an average of 12,000 alerts a day. Your SOAR solution should be able to quickly compile relevant context about security events so your team can focus on analysis and response. False positives and threats are resolved faster, and experts can hone in on tasks requiring intervention. With a quality platform, teams can exercise as much human judgment as they deem necessary and automate menial tasks.
  • Extensibility: Designing your SOAR for openness and extensibility will help optimize results. It should incorporate new security scenarios with ease, and ideally, it will integrate with third-party tools like SIEM, IPS, and IDS solutions.
  • Broad ecosystem: Orchestrate any piece of your technology stack with InsightConnect. You’ll spend less time assembling: Pre-built workflows easily integrate across a wide stack so you can more quickly innovate on the things that matter. Plus, create threat-specific workflows so everyone is notified faster, sees the same critical data and is able to take action across multiple technologies with rapid efficiency.

The real return on investment

Pricing models will always vary by tailored solution. For example, costs might be based on the number of users or the number of processes you want to automate or by the size of your environment. Begin your quest for value by searching for:

  • SOAR products that aren’t hiding costs. Your vendor should give a clear picture of charges related to configuration, deployment, and maintenance of the product.
  • SOAR tools with flexible options that work best with your budget. Make sure to accurately evaluate which features you need and those you can do without.

Also, consider the possibility of bringing greater collaboration to your team with features like chat tool integrations and workflow-notes documentation. Playbook and information sharing become easier and resolutions arrive faster. A SOAR workflow should ultimately become a community-based solution, with the potential to bolster your organization’s bottom line and prove out greater investments in security practices.

Want to learn more about how Rapid7 InsightConnect can help you with your automation goals? Request a demo today.

Shifting Security Right: How Cloud-Based SecOps Can Speed Processes While Maintaining Integrity

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/01/04/shifting-security-right-how-cloud-based-secops-can-speed-processes-while-maintaining-integrity/

When it comes to offloading security controls to the cloud, it may seem counterintuitive to the notion of “securing” things. But, when we consider the efficiency to be gained by shifting right with some security controls, it makes sense to send more granular, ground-up responsibilities to a trusted managed services cloud partner. This could help to increase development-and-deployment velocity, without compromising the integrity of your bespoke process.  

Building a true DevSecOps ecosystem is probably a common goal for most teams. What's far less common is a smooth path to it: technical and organizational roadblocks most often enter the picture. Let's take a look at some key insights from a 2020 SANS Institute survey on current industry efforts to more closely integrate DevOps and SecOps—and how you can plot your best path forward.

The security landscape

In more traditional environments, security teams often feel they've been left behind by the pace of DevOps: vulnerabilities are introduced faster than SecOps can find them. The shift is happening with teams that are building continuous delivery frameworks, with compliance checks at every stage of the game. It becomes a matter of defending the environment as it's being built.

Currently, about 74% of organizations are deploying changes more than once per month, according to SANS. Often, these are weekly or daily instances. So, velocity is increasing, primarily out of a need to get customers what they need, faster. Traditional change approvals and security controls are becoming more guardrail-style checks. The challenge, however, lies in optimizing the process and keeping it as secure as possible.
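
As one small example of a guardrail-style check, the sketch below validates a proposed change against a few organizational rules before it's allowed to proceed—roughly the kind of automated gate that replaces a manual change-approval step. The change format and the rules are invented for illustration.

```python
# Illustrative guardrail: validate a proposed infrastructure change against a
# few organizational rules before deployment proceeds. The change format and
# the rules themselves are invented placeholders.
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}
REQUIRED_TAGS = {"owner", "cost-center", "data-classification"}

def violations(change):
    problems = []
    tags = change.get("tags", {})
    if change.get("region") not in ALLOWED_REGIONS:
        problems.append(f"region {change.get('region')} is not approved")
    missing = REQUIRED_TAGS - set(tags)
    if missing:
        problems.append(f"missing required tags: {', '.join(sorted(missing))}")
    if change.get("public_ingress") and tags.get("data-classification") == "restricted":
        problems.append("restricted data must not be exposed to public ingress")
    return problems

change_request = {
    "region": "ap-south-1",
    "tags": {"owner": "web-team"},
    "public_ingress": True,
}

for problem in violations(change_request):
    print(f"BLOCKED: {problem}")
```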

Increasing cloud adoption

From a security perspective, transitioning to a cloud provider’s responsibility model can better match the pace of DevOps and increase delivery speed. When both of these velocities are increasing, albeit responsibly, that’s better for business.

  • Cloud-hosted VM platforms allow teams to spin up processes more quickly compared to a traditional setup.
  • Adoption is accelerating for cloud-hosted container services and serverless platforms because providers are doing more provisioning, patching, and upgrading for many existing execution environments.
  • More organizations are running on cloud-hosted VMs versus container services and serverless platforms, but that could change because the latter two options allow you to further reduce your responsibility model.

Multi-cloud motivations

About 92% of organizations run on at least one public cloud provider. But for about 60% of those companies, the main motivations behind spreading services out between multiple providers are not quite as technical as one might imagine.

Mergers and acquisitions can cause obvious complexity, as companies link up and potentially run similar processes in different cloud environments like AWS, Azure, or GCP. There are also decision-makers and teams that prioritize a task-based approach and pick the best environment to get a particular job done. The benefits of a multi-cloud environment could then become drawbacks, as security becomes more difficult to plan for and understand. And no one wants complexity in an approach that is essentially supposed to offload responsibilities and make things easier.

Risk doesn’t translate for SecOps

As more DevOps teams increase their use of JavaScript, traditional security controls don't support the popular language as well as they support legacy languages, and that creates greater risk. Meanwhile, an older web app that hasn't been updated in a while could be just the tip of the iceberg in terms of the technical debt sitting out there.

Apps built on older languages like Java, .NET, and C++ could leave exposures open as teams roll over to newer languages. So, this situation also presents risk. Security teams may not even be aware they’re in the dark about vulnerabilities those legacy apps present, as they try to keep pace with DevOps.

The future of shifting left

When it comes to security testing phases, there’s still a heavy tendency toward QA. More is being done to integrate those protocols in the process, but the sea change of baking testing into earlier phases largely has yet to occur.  

  • Over the next decade, teams will likely adopt more cloud-based integration tools like AWS CodePipeline, Microsoft Azure DevOps, GitHub Actions, and GitLab CI. In these instances, the cloud provider is managing more for you, minimizing attack surfaces and providing more built-in security. GitHub and GitLab, in particular, are trending toward greater baked-in security.
  • Jenkins has been the continuous integration tool of choice for about the last decade. However, the 24/7 nature of running on-premises or in the cloud to manage builds, releases, and patches can increase the attack surface.
  • When it comes to container orchestration tools, cloud-managed services like AWS Fargate and Azure Container are beginning to pull even with self-managed tooling like Docker and Kubernetes. It's becoming more attractive to outsource control-point and hardening responsibilities so that security can shift further left into containers; it simplifies testing and helps ease deployment.

The future of shifting right

Security-testing responsibility lies with actual security teams about 65% of the time. Yet, managing corrective actions lies with development teams about 63% of the time, according to SANS. These numbers indicate largely siloed actions blocking the path to a true DevSecOps approach.

The biggest success measurement of DevSecOps is the time it takes to fix an issue. Aligning teams to tackle an issue quickly can make or break the effort. Additionally, identifying post-deployment issues can help improve shift-left controls to prevent those issues from ever escaping into production.

A 100% cross-functional effort most likely will not be achieved by every organization. However, moving closer to this goal could help strengthen teams, boost morale, and feed back key learnings to ultimately increase the speed of success.

In conclusion

Ironically, the biggest challenge of all isn’t technical in nature. Red tape within organizations can present challenges like lack of buy-in from management, insufficient budget (open-source tools can help here!), and siloed efforts. Additionally, a shortage of skilled workers could reinforce the same old  decision-making patterns at those management levels.  

When it comes to closely aligning teams and getting more time back to innovate, it’s often a cyclical dance of shifting right to improve your efforts in shifting left. For example, can you move further right into the cloud rather than building do-it-yourself, comprehensive solutions to security? Offloading could help to create more controls for enforcing security in tandem with DevOps.

No one wants to compromise the integrity of deploying on time, particularly as it relates to customers and your company’s bottom line. Co-sponsored by Rapid7, this recent SANS webinar presents an in-depth look at key statistics from a recent survey of companies and their advancements—or lack thereof—in DevSecOps.

For more insights, access the full 2020 SANS Institute survey on Extending DevSecOps Security Controls into the Cloud.