
The evolving role of operations in DevOps

Post Syndicated from Jared Murrell original https://github.blog/2020-12-03-the-evolving-role-of-operations-in-devops/

This is the third blog post in our series of DevOps fundamentals. For a quick intro on what DevOps is, check out part one; for a primer on automation in DevOps, visit part two.

As businesses reorganize for DevOps, the responsibilities of teams throughout the software lifecycle inevitably shift. Operations teams that traditionally measure themselves on uptime and stability—often working in silos separate from business and development teams—become collaborators with new stakeholders throughout the software lifecycle. Development and operations teams begin to work closely together to build and continually improve their delivery and management processes. In this blog post, we’ll share more on what these evolving roles and responsibilities look like for IT teams today, and how operations help drive consistency and success across the entire organization.

The Ops role in DevOps compared to traditional IT operations

To better understand how DevOps changes the responsibilities of operations teams, it helps to recap the traditional, pre-DevOps role of operations. Let’s take a look at a typical organization’s software lifecycle: before DevOps, developers package an application with documentation and ship it to a QA team. The QA team installs and tests the application, then hands it off to production operations. The operations teams are then responsible for deploying and managing the software with little-to-no direct interaction with the development teams.

These dev-to-ops handoffs are typically one-way, often limited to a few scheduled times in an application’s release cycle. Once in production, the operations team is then responsible for managing the service’s stability and uptime, as well as the infrastructure that hosts the code. If there are bugs in the code, the virtual assembly line of dev-to-qa-to-prod is revisited with a patch, with each team waiting on the other for next steps. This model typically requires pre-existing infrastructure that needs to be maintained, and comes with significant overhead. While many businesses continue to remain competitive with this model, the faster, more collaborative way of bridging the gap between development and operations is finding wide adoption in the form of DevOps.

Accelerating through public cloud adoption

Over the past decade, the maturation of the public cloud has added complexity to the responsibilities of operations teams. The ability to rent stable, secure infrastructure by the minute and provide everything as a service to customers has enabled organizations to deploy rapidly and frequently, often several times per day. Smaller, faster delivery cycles give organizations the critical capability of improving their customer experience through rapid feedback and automated deployments. Cloud technologies have made development velocity a fundamental part of delivering a competitive customer experience.

What the cloud, DevOps, and developer velocity mean for operations teams

Cloud technologies have transformed how we deliver and operate software, impacting how we do DevOps today. Developers now take a stake in stability and uptime alongside velocity, and operations teams now take a stake in developer velocity alongside their traditional responsibility for uptime. When it comes to the specific role of operations in DevOps, this often means:

  • Enabling self-service for developers. In order to support developer velocity—and minimize risks that stem from “shadow operations”, where developers seek their own solutions—operations teams work more closely with developers to provide on-demand access to secure, compliant tooling and environments.
  • Standardized tooling and processes across the business. The best way to enable a sustainable self-service model and empower teams to work more efficiently together is by standardizing on the tooling that is in use. Tools and processes that are shared across the business unit enable organizational unity and greater collaboration. In turn, this reduces the friction developers and operations teams experience when sharing responsibilities.
  • Bringing extensible automation to traditional operations tasks. As operations teams focus more on empowering other teams through self-service and collaboration, there is less time to handle other work. Traditional operations tasks like resolving incidents, updating systems, or scaling infrastructure still need to be addressed—only smarter. When development and operations unite under DevOps, operations teams turn to automation for more of the repeatable tasks and drive consistency across the organization. This also enables teams and business units to track and measure the results of their efforts.
  • Working and shipping like developers. As operations teams shift more towards greater automation, ‘X’ as code becomes the new normal. Like application source code, the code controlling operations systems needs to be stored, versioned, secured, and maintained. As a result, the development-operations relationship starts to feel more balanced: operations specialists become more like developers and more familiar with their working models, and in some organizations, developers become more like operations, sharing in the responsibility of debugging problems with their own code in production.
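As a concrete sketch of what developer self-service can look like, here is a hypothetical GitHub Actions workflow that lets developers provision an approved environment on demand. The `provision.sh` script and the environment names are illustrative assumptions, not a prescribed implementation:

```yaml
# .github/workflows/provision-environment.yml
# Developers trigger this manually from the Actions tab, so operations
# doesn't have to hand-provision every environment request.
name: Provision environment
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Environment to provision"
        required: true
        default: "staging"

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical provisioning script kept in the repository;
      # in practice this might wrap Terraform, Pulumi, or a cloud CLI.
      - name: Provision infrastructure
        run: ./scripts/provision.sh "${{ github.event.inputs.environment }}"
```

Because the workflow lives in the repository, operations can review and constrain what developers are able to provision, while developers get on-demand access without opening a ticket.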

Closing the development-operations gap

While it’s well understood that DevOps requires close collaboration between teams, we’re often asked “How are development and operations functions really coordinated in a DevOps model?” At GitHub, we’re fortunate to partner with thousands of businesses every year on improving their DevOps practices. Sometimes these organizations focus on the clearest target, asking developers and delivery teams to go to market faster while paying less attention to the post-deployment operations teams.

However, we find the best results come from improving the practices of all the teams involved in the software lifecycle, together. Operations teams aren’t simply infrastructure and process owners for the organization; they are also a critical part of the feedback loop for development. Try it out for yourself—a small pilot project that includes developers, release engineering, operations, and even InfoSec can give teams the momentum they need: confidence to continue their work, establish best practices, and even train others within your organization along the way.

For a closer look at IT operations in DevOps, tune in to next week’s GitHub Universe session: Continuous delivery with GitHub Actions.

Getting started with DevOps automation

Post Syndicated from Jared Murrell original https://github.blog/2020-10-29-getting-started-with-devops-automation/

This is the second post in our series on DevOps fundamentals. For a guide to what DevOps is and answers to common DevOps myths, check out part one.

What role does automation play in DevOps?

First things first—automation is one of the key principles for accelerating with DevOps. As noted in my last blog post, it enables consistency, reliability, and efficiency within the organization, making it easier for teams to discover and troubleshoot problems. 

However, as we’ve worked with organizations, we’ve found not everyone knows where to get started, or which processes can and should be automated. In this post, we’ll discuss a few best practices and insights to get teams moving in the right direction.

A few helpful guidelines

The path to DevOps automation is continually evolving. Before we dive into best practices, there are a few common guidelines to keep in mind as you’re deciding what and how you automate. 

  • Choose open standards. Your contributors and team may change, but that doesn’t mean your tooling has to. By maintaining tooling that follows common, open standards, you can simplify onboarding and save time on specialized training. Community-driven standards for packaging, runtime, configuration, and even networking and storage—like those found in Kubernetes—also become even more important as DevOps and deployments move toward the cloud.
  • Use dynamic variables. Prioritizing reusable code will reduce the amount of rework and duplication you have, both now and in the future. Whether in scripts or specialized tools, securely using externally-defined variables is an easy way to apply your automation to different environments without needing to change the code itself.
  • Use flexible tooling you can take with you. It’s not always possible to find a tool that fits every situation, but using a DevOps tool that allows you to change technologies also helps reduce rework when companies change direction. By choosing a solution with a wide ecosystem of partner integrations that works with any cloud, you’ll be able to define your unique set of best practices and reach your goals—without being restricted by your toolchain.
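To illustrate the dynamic-variables guideline, here is a sketch of a GitHub Actions deployment job that reads its target environment and credentials from externally defined variables and secrets. The `deploy.sh` script and the variable names are hypothetical:

```yaml
# One deploy job reused across environments by swapping externally
# defined variables, instead of editing the automation code itself.
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      TARGET_ENV: production   # changed per environment, not per script
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        # The credential comes from repository secrets, never from the code.
        run: ./scripts/deploy.sh "$TARGET_ENV"
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```

Pointing the same job at staging instead of production is then a one-line variable change, which keeps the automation reusable and the secrets out of version control.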

DevOps automation best practices

Now that our guidelines are in place, we can evaluate which sets of processes we need to automate. We’ve broken some best practices for DevOps automation into four categories to help you get started. 

1. Continuous integration, continuous delivery, and continuous deployment

We often think of the term “DevOps” as being synonymous with “CI/CD”. At GitHub we recognize that DevOps includes so much more, from enabling contributors to build and run code (or deploy configurations) to improving developer productivity. In turn, this shortens the time it takes to build and deliver applications, helping teams add value and learn faster. While CI/CD and DevOps aren’t precisely the same, CI/CD is still a core component of DevOps automation.

  • Continuous integration (CI) is the practice of automatically building and testing every change, enabling contributors to see whether their changes break anything.
  • Continuous delivery (CD) is the practice of building software in a way that allows you to deploy any successful release candidate to production at any time.
  • Continuous deployment (CD) takes continuous delivery a step further. With continuous deployment, every successful change is automatically deployed to production. Since some industries and technologies can’t immediately release new changes to customers (think hardware and manufacturing), adopting continuous deployment depends on your organization and product.

Together, continuous integration and continuous delivery (commonly referred to as CI/CD) create a collaborative process for people to work on projects through shared ownership. At the same time, teams can maintain quality control through automation and bring new features to users with continuous deployment. 
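A minimal CI workflow along these lines might look like the following sketch, where `make test` stands in for whatever build and test steps your project actually uses:

```yaml
# .github/workflows/ci.yml -- build and test every proposed change
name: CI
on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder test command; substitute your project's real
      # dependency setup, build, and test steps here.
      - name: Run tests
        run: make test
```

Running on both `push` and `pull_request` means every change is exercised before review and again before merge, which is the feedback loop CI is meant to provide.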

2. Change management

Change management is often a critical part of business processes. Like the automation guidelines, there are some common principles and tooling that development and operations teams can use to create consistency.  

  • Version control: The practice of using version control has a long history rooted in helping people revert changes and learn from past decisions. From RCS to SVN, CVS to Perforce, ClearCase to Git, version control is a staple for enabling teams to collaborate by providing a common workflow and code base for individuals to work with. 
  • Change control: Along with maintaining your code’s version history, having a system in place to coordinate and facilitate changes helps to maintain product direction, reduces the probability of harmful changes to your code, and encourages a collaborative process.
  • Configuration management: Configuration management makes it easier for everyone to manage complex deployments through templates and manage changes at scale with proper controls and approvals.
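One common way to automate change control is to run validation on every proposed change and mark the resulting status check as required before merging. A minimal sketch, assuming a `make lint test` target exists in your project:

```yaml
# .github/workflows/change-control.yml
# Runs on every pull request; marking this check "required" in branch
# protection settings means no change merges without passing it.
name: Validate change
on:
  pull_request:

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint and test
        run: make lint test
```

Combined with version control and required reviews, this gives teams a consistent, auditable path for every change rather than an ad hoc approval process.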

3. ‘X’ as code

By now, you may also have heard of “infrastructure as code,” “configuration as code,” “policy as code,” or some of the other “as code” models. These models provide a declarative framework for managing different aspects of your operating environments through high-level abstractions. Stated another way, you provide variables to a tool and the output is consistently the same, allowing you to recreate your resources reliably. DevOps implements the “as code” principle with several goals, including an auditable change trail for compliance, a collaborative change process via version control, a consistent, testable, and reliable way of deploying resources, and a lower learning curve for new team members.

  • Infrastructure as code (IaC) provides a declarative model for creating immutable infrastructure using the same versioning and workflow that developers use for source code. As changes are introduced to your infrastructure requirements, new infrastructure is defined, tested, and deployed with new configurations through automated declarative pipelines.
  • Platform as code (PaC) provides a declarative model for services similar to how infrastructure as code provides a framework for recreating the same infrastructure—allowing you to rapidly deploy services to existing infrastructure with high-level abstractions.
  • Configuration as code (CaC) brings the next level of declarative pipelining by defining the configuration of your applications as versioned resources.
  • Policy as code brings versioning and the DevOps workflow to security and policy management. 
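For a small example of configuration as code, here is a Kubernetes Deployment manifest of the kind these models describe: a versioned, declarative description of a running service that can be reviewed, tested, and redeployed consistently. The service name, image, and port are illustrative:

```yaml
# deployment.yml -- a declarative, versioned description of a service.
# Applying this manifest always converges to the same state: three
# replicas of the pinned image, regardless of who applies it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: ghcr.io/example/web:1.2.3   # pinned, versioned artifact
          ports:
            - containerPort: 8080
```

Because the manifest lives in version control, a change to the environment is a reviewable pull request rather than an undocumented manual edit, which is the core of the “as code” principle.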

4. Continuous monitoring

Operational insights are an invaluable component of any production environment. In order to understand the behaviors of your software in production, you need information about how it operates. Continuous monitoring—the processes and technology that monitor the performance and stability of applications and infrastructure throughout the software lifecycle—provides operations teams with data to help troubleshoot issues, and development teams with the information needed to debug and patch. This also extends to security: DevSecOps applies these same principles with a security focus. Choosing the right monitoring tools can be the difference between a slight service interruption and a major outage. When it comes to gaining operational insights, there are some important considerations:

  • Logging gives you a continuous stream of data about your business’ critical components. Application logs, infrastructure logs, and audit logs all provide important data that helps teams learn and improve products.
  • Monitoring provides a level of intelligence and interpretation to the raw data provided in logs and metrics. With advanced tooling, monitoring can provide teams with correlated insights beyond what the raw data provides.
  • Alerting provides proactive notifications to respective teams to help them stay ahead of major issues. When effectively implemented, these alerts not only let you know when something has gone wrong, but can also provide teams with critical debugging information to help solve the problem quickly.
  • Tracing takes logging a step further, providing a deeper level of application performance and behavioral insights that can greatly impact the stability and scalability of applications in production environments.
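Alerting itself can be managed “as code” alongside everything else. As a sketch, here is a Prometheus-style alerting rule; the metric name and thresholds are assumptions for illustration:

```yaml
# alerts.yml -- versioned alerting rules, assuming a Prometheus setup
# where HTTP requests are counted in http_requests_total.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests return a 5xx status,
        # sustained for 10 minutes, to avoid paging on brief blips.
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "HTTP 5xx error rate above 5% for 10 minutes"
```

Keeping rules like this in version control means alert thresholds get the same review and audit trail as any other change to the system.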

Putting DevOps automation into action

At this point, we’ve talked a lot about automation in the DevOps space, so is DevOps all about automation? Put simply, no. Automation is an important means of accomplishing this work efficiently between teams. Whether you’re new to DevOps or migrating from another set of automation solutions, testing new tooling with a small project or process is a great place to start. It will lay the foundation for scaling and standardizing automation across your entire organization, including how you measure effectiveness and progress toward your goals.

Regardless of which toolset you choose to automate your DevOps workflow, evaluating your teams’ current workflows and the information you need to do your work will help guide your tool and platform selection, and set the stage for success. Here are a few more resources to help you along the way:

Want to see what DevOps automation looks like in practice? See how engineers at Wiley build faster and more securely with GitHub Actions.