Tag Archives: developer experience

How GitHub uses merge queue to ship hundreds of changes every day

Post Syndicated from Will Smythe original https://github.blog/2024-03-06-how-github-uses-merge-queue-to-ship-hundreds-of-changes-every-day/


At GitHub, we use merge queue to merge hundreds of pull requests every day. Developing this feature and rolling it out internally did not happen overnight, but the journey was worth it—both because of how it has transformed the way we deploy changes to production at scale and because of how it has improved velocity for our customers. Let’s take a look at how this feature was developed and how you can use it, too.

Merge queue is generally available and is also now available on GitHub Enterprise Server! Find out more.

Why we needed merge queue

In 2020, engineers from across GitHub came together with a goal: improve the process for deploying and merging pull requests across the GitHub service, and specifically within our largest monorepo. This process was becoming overly complex to manage, required special GitHub-only logic in the codebase, and required developers to learn external tools, which meant the engineers developing for GitHub weren’t actually using GitHub in the same way as our customers.

To understand how we got to this point in 2020, it’s important to look even further back.

By 2016, nearly 1,000 pull requests were merging into our large monorepo every month. GitHub was growing both in the number of services deployed and in the number of changes shipping to those services. And because we deploy changes prior to merging them, we needed a more efficient way to group and deploy multiple pull requests at the same time. Our solution at this time was trains. A train was a special pull request that grouped together multiple pull requests (passengers) that would be tested, deployed, and eventually merged at the same time. A user (called a conductor) was responsible for handling most aspects of the process, such as starting a deployment of the train and handling conflicts that arose. Pipelines were added to help manage the rollout path. Both these systems (trains and pipelines) were only used on our largest monorepo and were implemented in our internal deployment system.

Trains helped improve velocity at first, but over time started to negatively impact developer satisfaction and increase the time to land a pull request. Our internal Developer Experience (DX) team regularly polls our developers to learn about pain points to help inform where to invest in improvements. These surveys consistently rated deployment as the most painful part of the developer’s daily experience, highlighting the complexity and friction involved with building and shepherding trains in particular. This qualitative data was backed by our quantitative metrics. These showed a steady increase in the time it took from pull request to shipped code.

Trains could also grow large, containing the changes of up to 15 pull requests. Large trains frequently “derailed” due to a deployment issue, conflicts, or the need for an engineer to remove their change. On painful occasions, developers could wait 8+ hours after joining a train for it to ship, only for it to be removed due to a conflict between two pull requests in the train.

Trains were also not used on every repository, meaning the developer experience varied significantly between different services. This led to confusion when engineers moved between services or contributed to services they didn’t own, which is fairly frequent due to our inner source model.

In short, our process was significantly impacting the productivity of our engineering teams—both in our large monorepo and service repositories.

Building a better solution for us and eventually for customers

By 2020, it was clear that our internal tools and processes for deploying and merging across our repositories were limiting our ability to land pull requests as often as we needed. Beyond just improving velocity, it became clear that our new solution needed to:

  1. Improve the developer experience of shipping. Engineers wanted to express two simple intents: “I want to ship this change” and “I want to shift to other work”; the system should handle the rest.
  2. Avoid having problematic pull requests impact everyone. Those causing conflicts or build failures should not impact all other pull requests waiting to merge. The throughput of the overall system should be favored over fairness to an individual pull request.
  3. Be consistent and as automated as possible across our services and repositories. Manual toil by engineers should be removed wherever possible.

The merge queue project began as part of an overall effort within GitHub to improve availability and remove friction that was preventing developers from shipping at the frequency and level of quality that was needed. Initially, it was only focused on providing a solution for us, but was built with the expectation that it would eventually be made available to customers.

By mid-2021, a few small, internal repositories started testing merge queue, but moving our large monorepo would not happen until the next year for a few reasons.

For one, we could not stop deploying for days or weeks in order to swap systems. At every stage of the project we had to have a working system to ship changes. At a maximum, we could block deployments for an hour or so to run a test or transition. GitHub is remote-first and we have engineers throughout the world, so there are quieter times but never a free pass to take the system offline.

Changing the way thousands of developers deploy and merge changes also requires lots of communication to ensure teams are able to maintain velocity throughout the transition. Training 1,000 engineers on a new system overnight is difficult, to say the least.

By rolling out changes to the process in phases (and sometimes testing and rolling back changes early in the morning before most developers started working) we were able to slowly transition our large monorepo and all of our repositories responsible for production services onto merge queue by 2023.

How we use merge queue today

Merge queue has become the single entry point for shipping code changes at GitHub. It was designed and tested at scale, shipping more than 30,000 pull requests and their associated 4.5 million CI runs for GitHub.com before it was made generally available.

For GitHub and our “deploy then merge” process, merge queue dynamically forms groups of pull requests that are candidates for deployment, kicks off builds and tests via GitHub Actions, and ensures our main branch is never updated to a failing commit by enforcing branch protection rules. Pull requests in the queue that conflict with one another are automatically detected and removed, with the queue automatically re-forming groups as needed.
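
Because merge queue drives CI through GitHub Actions, a repository’s existing workflows only need to respond to the queue’s temporary branches. As a rough sketch (the workflow name and test script below are illustrative, not our actual configuration), a CI workflow opts into queue builds by listening for the merge_group event alongside pull_request:

name: ci
on:
  pull_request:
  merge_group:   # runs the same checks against the temporary branch created for each queue group

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: ./script/cibuild   # hypothetical test entry point

The same required status checks that branch protection enforces then decide whether the group can merge.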

Because merge queue is integrated into the pull request workflow (and does not require knowledge of special ChatOps commands, or use of labels or special syntax in comments to manage state), our developer experience is also greatly improved. Developers can add their pull request to the queue and, if they spot an issue with their change, leave the queue with a single click.

We can now ship larger groups without the pitfalls and friction of trains. Trains (our old system) previously limited us to deploying at most 15 changes at once, but now we can safely deploy 30 or more if needed.

Every month, over 500 engineers merge 2,500 pull requests into our large monorepo with merge queue, more than double the volume from a few years ago. The average wait time to ship a change has also been reduced by 33%. And it’s not just numbers that have improved. On one of our periodic developer satisfaction surveys, an engineer called merge queue “one of the best quality-of-life improvements to shipping changes that I’ve seen at GitHub!” It’s not a stretch to say that merge queue has transformed the way GitHub deploys changes to production at scale.

How to get started

Merge queue is available to public repositories on GitHub.com owned by organizations and to all repositories on GitHub Enterprise (Cloud or Server).

To learn more about merge queue and how it can help velocity and developer satisfaction on your busiest repositories, see our blog post, GitHub merge queue is generally available.

Interested in joining GitHub? Check out our open positions or learn more about our platform.

The post How GitHub uses merge queue to ship hundreds of changes every day appeared first on The GitHub Blog.

How to get in the flow while coding (and why it’s important)

Post Syndicated from Gwen Davis original https://github.blog/2024-01-22-how-to-get-in-the-flow-while-coding-and-why-its-important/


It’s the dream: your ideas are flowing, time and space fade away, the path ahead of you is clear, you’re moving at the speed of thought, and every click you make is gold.

This is called being in the flow or flow state. When you’re in the flow, you block out the world, are fully immersed in what you’re doing, and enjoy increased creativity, innovation, and happiness.

“Being in the flow is magical,” says Jonathan Carter, technical advisor of the CEO at GitHub. “You tell your teammates they can go to lunch and you’ll catch them later—not because you’re a workaholic, but because there’s truly nothing else you’d rather be doing right now.”

In this blog, we’ll explore what flow state entails, its benefits, and three tips for reaching it the next time you sit down to code. Let’s go.

What exactly is the flow state?

The concept of flow state came from positive psychologist Mihaly Csikszentmihalyi and his 1990 book, Flow: The Psychology of Optimal Experience. In it, Csikszentmihalyi describes nine dimensions of flow:

  1. Challenge-skills balance
  2. Total concentration
  3. Clear goals
  4. Immediate feedback
  5. Transformation of time
  6. Feeling intrinsically rewarded
  7. Effortlessness
  8. Loss of self-consciousness
  9. Feeling of total control

Csikszentmihalyi discovered these dimensions by conducting research to understand how people achieve productivity and happiness. He found that in people’s favorite, most absorbed moments, their thoughts and actions “flowed,” and brought unrivaled motivation, meaning, and creativity.

“Software has historically been viewed as mathematical or scientific in nature, but I would argue that writing code has more in common with other creative acts,” says Idan Gazit, senior director of research at GitHub. “Whether you’re writing an essay or writing a program, the challenge is getting into the headspace where you can untangle the thing you want to express.”

What are the benefits of flow state for developers?

When developers reach that coveted frame of mind, their productivity soars. According to our recent developer productivity research, developers produce higher quality work when they can easily collaborate—a hallmark of flow state—through comments, pull requests, issues, etc. According to the study, developers reported that effective collaboration provides a host of benefits:

  • Improved test coverage
  • Faster, cleaner, more secure code writing
  • Novel, creative solutions
  • Speedier deployments

On the flip side, when developers can’t freely collaborate, their work suffers (it takes 23 minutes, on average, to get back into the task at hand after an interruption, according to a study from the University of California, Irvine).

And flow state isn’t important just for individual developers—it helps businesses, too.

“When it comes to business success, flow state is everything,” says Chris Reddington, senior manager of developer advocacy at GitHub. This is because today’s environments use dozens of languages and often leverage multiple cloud providers, creating pressure, complexity, and distractions. He adds, “The more we can help engineering teams stay in the flow, where they are just focused on solving those bigger problems, the better.”

Quick tips for developers who want to get and stay in the flow state

So, how can you achieve flow state during your day-to-day tasks? The following tips should help you reach the flow state and stay there—regardless of industry or where you are in your developer career.

Tip #1: Optimize your environment

Creating a distraction-free environment that’s conducive to work can pay huge dividends. Here are some ideas:

  • Block time. Create personal focus events on your calendar where no one can schedule meetings with you.
  • Schedule breaks. Use a timer to give yourself 15-30 minute breaks throughout the workday.
  • Snooze Slack and phone notifications. Be antisocial and make yourself unavailable to the world.
  • Eliminate or reduce multitasking. Being able to do more than one task at a time is a myth, anyway.
  • Invest in headphones. Noise-canceling headphones, in particular, can keep your stress down and your focus high.
  • Get comfortable. Invest in ergonomic office equipment, wear comfortable clothes, and ensure you’ve had enough to eat.
  • Hold off on scheduling meetings. If you’re a team leader, be mindful of meeting frequency.
  • Create a pre-flow ritual. Routines like grabbing coffee, checking your messages, and then putting your phone on silent can cue your brain that it’s time to get to work.

Of course, even with our best attempts, distractions happen. If you need to step away from the task at hand, that’s okay. Gazit also suggests pair programming or solution design to help overcome mental hurdles.

“It’s a great magic trick,” he says. “Stepping back and talking through the problem with a teammate is often the fastest route to getting unstuck.”

He also adds that GitHub Copilot can be helpful for this.

“GitHub Copilot is never busy,” he says. “I’m not distracting it when I put it to work. Debugging with a rubber duck is fantastic, but GitHub Copilot is the rubber duck that talks back. It helps me reason about the solution space and suggests approaches I wouldn’t have considered.”


Tip #2: Map out your work

You can also achieve flow state by ensuring you have a clear path for accomplishing your goal.

Gazit describes how he can get into the flow state when he gets the balance of architectural work right. This is especially important when it comes to complex tasks like designing an API, where you first have to build an architecture while considering how it will be used and what kind of load it’ll put on your database.

“If I do the architectural work well, I can then do the bricklaying with a feeling that I’m super clear on what I’m doing,” he says. “I know exactly where I’m going.”

Reddington notes that mapping your work and the practice of blocking time, as mentioned above, often go hand in hand.

“When I block out chunks of time, I can figure out how I’m going to solve the problems I’m tackling appropriately,” he says.

However, he warns that you won’t always finish what you set out to do in the allotted block of time. But at least you can start organizing the problem mentally.

Finding the optimal mix of challenge and skill is also important to achieve flow. If something is too easy, you’ll be bored and unsatisfied. If it’s too challenging, on the other hand, you’ll be stressed about not getting it done, which will also keep flow elusive.

“A good mix can make all the difference,” Reddington says.

Tip #3: Find joy in the work you’re doing

You won’t be able to hit flow state if you’re not enjoying yourself.

“It’s only when you’re not worrying about meetings, or your email, or what you’re going to have for dinner, that you can hit the flow state,” Carter says.

It’s a similar experience to being entertained.

“It’s like when you’re reading a book and you just have to finish the next chapter or you’re binging Netflix and you need to see the next episode,” he says. “It’s that same energy.”

Enjoyable work pertains to teams, too.

Carter notes that office work enjoyment can be increased by clearly articulating the outcomes you’re trying to accomplish. When a product manager writes a well-articulated issue that clarifies the end result, there’s a higher likelihood that the team will be more motivated to take that work on and do it quickly.

“They’re not focused on the complexity anymore but on the desire to get there,” he says.

Similarly, if you’re involved in a project you don’t enjoy, it can be useful to rethink why you’re doing that work in the first place.

“I find that if I can recreate the mindset of why we should solve the problem, I can bootstrap curiosity and get back into the flow state,” he says.

The bottom line

Achieving flow state can significantly boost levels of norepinephrine, dopamine, anandamide, serotonin, and endorphins—increasing feelings of motivation and intrinsic reward, as well as pattern recognition and lateral thinking ability. It’s a win for productivity, well-being, and keeping the intrinsic developer fire strong.

“With flow state, you’re never at a point where you’re performing the same mechanics twice,” Carter says. “You’re learning in response to what you’re doing. You’re naturally interested, building up an unconscious muscle of curiosity. The learning potential is endless.”

To learn more about how businesses are incorporating flow state into their processes, read Developer experience: What is it and why should you care? and explore how GitHub can help.

The post How to get in the flow while coding (and why it’s important) appeared first on The GitHub Blog.

How GitHub uses GitHub Actions and Actions larger runners to build and test GitHub.com

Post Syndicated from Max Wagner original https://github.blog/2023-09-26-how-github-uses-github-actions-and-actions-larger-runners-to-build-and-test-github-com/

The Developer Experience (DX) team at GitHub collaborated with a number of other teams to work on moving our continuous integration (CI) system to GitHub Actions to support the development and scaling demands of our engineering team. Our goal as a team is to enable our engineers to confidently and quickly ship software. To that end, we’ve worked on providing paved paths, a suite of automated tools and applications to streamline our development, runtime platforms, and deployments. Recently, we’ve been working to make our CI experience better by leveraging the newly released GitHub feature, Actions larger runners, to run our CI.

Read on to see how we run 15,000 CI jobs within an hour across 150,000 cores of compute!

Brief history of CI at GitHub

GitHub has invested in a variety of different CI systems throughout its history. With each system, our aim has been to enhance the development experience for both GitHub engineers writing and deploying code and for engineers maintaining the systems.

However, with past CI systems we faced challenges in scaling to meet the needs of our engineering team and in providing both stable and ephemeral build environments. Together, these challenges kept us from providing an optimal developer experience.

Then, GitHub released GitHub Actions larger runners. This gave us an opportunity not only to transition to a fully featured CI system, but also to develop, experience, and utilize the systems we are creating for our customers and to drive feedback to help build the product. For the GitHub DX team, this transition was a great opportunity to move away from maintaining our past CI systems while delivering a superior developer experience.

What are larger runners?

Larger runners are GitHub Actions runners that are hosted by GitHub. They are managed virtual machines (VMs) with more RAM, CPU, and disk space than standard GitHub-hosted runners. There are a variety of different machine sizes offered for the runners as well as some additional features compared to the standard GitHub-hosted runners.

Larger runners are available to GitHub Team and GitHub Enterprise Cloud customers. Check out these docs to learn more about larger runners.
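
Workflows target a larger runner the same way they target any GitHub-hosted runner: by label in runs-on. A minimal sketch, assuming ubuntu-32-core is the (hypothetical) name an administrator chose when creating the larger runner:

name: build
on: [push]

jobs:
  build:
    runs-on: ubuntu-32-core   # hypothetical larger runner configured for the organization
    steps:
      - uses: actions/checkout@v3
      - name: Build the project
        run: ./script/build   # hypothetical build script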

Why did we pick larger runners?

Autoscaling and managed

Coming from previous iterations of GitHub’s CI systems, we needed the ability to create CI machines on demand to meet the fast feedback cycles needed by GitHub engineers and to scale with the rate of change of the site.

With larger runners, we maintain the ability to autoscale our CI system because GitHub will automatically create multiple instances of a runner that scale up and down to match the job demands of our engineers. An added benefit is that the GitHub DX team no longer has to worry about the scaling of the runners since all of those complexities are handled by GitHub itself!

We wanted to share some raw numbers on our current peak utilization of larger runners:

  • Uses 4,500 concurrent 32-core runners
  • Runs 125,000 build minutes per hour
  • Queues and runs approximately 15,000 jobs within an hour
  • Allocates around 150,000 cores of compute

(Beta) Custom VM image support

GitHub Actions provides runners with a lot of tools already baked in, which is sufficient and convenient for a variety of projects across the company. However, for some complex production GitHub services, the prebuilt runners did not satisfy all our requirements.

To maintain an efficient and fast CI system, the DX team needed the ability to provide machines with all the tools needed to build those production services. We didn’t want to spend extra time installing tools or compiling projects during CI jobs.

We are currently building a feature into larger runners that allows them to be launched from a custom VM image, called a custom image. While this feature is still in beta, using custom images is a huge benefit to GitHub’s CI lifecycle for a couple of reasons.

First, custom images allow GitHub to bundle all the required software and tools needed to build and test complex production-bearing services. Anything that is unique to GitHub or one of our projects can be pre-installed on the image before a GitHub Actions workflow even starts.

Second, custom images enable GitHub to dramatically speed up our GitHub Actions workflows by acting as a bootstrapping cache for some projects. During custom image creation, we bundle a pre-built version of a project’s source code into the image. Subsequently, when the project starts a GitHub Actions workflow, it can utilize a cached version of its source code, and any other build artifacts, to speed up its build process.

The cached project source code on the custom VM image can quickly become out of date due to the rapid rate of development within GitHub. This, in turn, causes workflow durations to increase. The DX team worked with the GitHub Actions engineering team to create an API on GitHub to regularly update the custom image multiple times a day to keep the project source up to date.

In practice, this has reduced the bootstrapping time of our projects significantly. Without custom images, our workflows would take around 50 minutes from start to finish, versus the 12 minutes they take today. This is a game changer for our engineers.

We’re working on a way to offer this functionality at scale. If you are interested in custom images for your CI/CD workflows, please reach out to your account manager to learn more!

Important GitHub Actions features

There are thousands of projects at GitHub — from services that run production workloads to small tools that need to run CI to perform their daily operations. To make this a reality, GitHub leverages several important features in GitHub Actions that enable us to use the platform efficiently and securely across the company at scale.

Reusable workflows

One of the DX team’s driving goals is to pave paths for all repositories to run CI without introducing unnecessary repetition across repositories. Prior to GitHub Actions, we created single job configurations that could be used across multiple projects. In GitHub Actions, this was not as easy because any repository can define its own workflows. Reusable workflows to the rescue!

The reusable workflows feature in GitHub Actions provides a way to centrally manage a workflow in a repository that can be utilized by many other repositories in an organization. This was critical in our transition from our previous CI system to GitHub Actions. We were able to create several prebuilt workflows in a single repository, and many repositories could then use those workflows. This makes the process of adding CI to an existing or new project very much plug and play.

In our central repository hosting our reusable workflows, we can have workflows defined like:

on:
  workflow_call:
    inputs:
      cibuild-script:
        description: 'Which cibuild script to run.'
        type: string
        required: false
        default: "script/cibuild"
    secrets:
      service-api-key:
        required: true

jobs:
  reusable_workflow_job:
    runs-on: gh-larger-runner-medium
    name: Simple Workflow Job
    timeout-minutes: 20
    steps:
      - name: Checkout Project
        uses: actions/checkout@v3
      - name: Run cibuild script
        run: |
          bash ${{ inputs.cibuild-script }}
        shell: bash

And in consuming repositories, they can simply utilize the reusable workflow, with just a few lines of code!

name: my-new-project
on:
  workflow_dispatch:
  push:

jobs:
  call-reusable-workflow:
    uses: github/internal-actions/.github/workflows/default.yml@main
    with:
      cibuild-script: "script/cibuild-my-tests"
    secrets:
      service-api-key: ${{ secrets.SERVICE_API_KEY }}

Another great benefit of the reusable workflows feature is that the runner can be defined in the reusable workflow, meaning we can guarantee that every user of the workflow runs on our designated larger runner pool. Now, projects don’t need to worry about which runner to use!

(Beta) Reusing previous workflow outcomes

To optimize our developer experience, the DX team worked with our engineering team to create a feature for GitHub Actions that allows workflows to reuse the outcome of a previous workflow run where the outcomes would be the same.

In some cases, the file contents of a repository are exactly the same between workflow runs on different commits. That is, the Git tree ID of the current commit is the same as that of the previous commit (there are no file differences). In these cases, we can bypass CI checks by reusing the previous workflow outcome, so engineers don’t have to wait for CI to run again.
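
The underlying observation can be illustrated with plain Git. Here is a hedged sketch of the tree-comparison idea as a workflow step; it shows the concept only and is not how the internal feature is implemented:

jobs:
  check-tree:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2   # fetch the previous commit so its tree ID is available
      - name: Compare Git tree IDs
        run: |
          current=$(git rev-parse 'HEAD^{tree}')
          previous=$(git rev-parse 'HEAD~1^{tree}')
          if [ "$current" = "$previous" ]; then
            echo "No file differences from the previous commit; prior outcomes could be reused."
          fi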

This feature saves GitHub engineers from running anywhere from 300 to 500 workflow runs a day!

Other challenges faced

Private service access

During some internal GitHub Actions workflow runs, the workflows need the ability to access some GitHub private services, within a GitHub virtual private cloud (VPC), over the network. These could be resources such as artifact storage, application metadata services, and other services that enable invocation of our test harness.

When we moved to larger runners, this requirement to access private services became a top-of-mind concern. In previous iterations of our CI infrastructure, these private services were accessible through other cloud and network configurations. However, larger runners are isolated from other production environments, meaning they cannot access our private services.

Like all companies, we need to focus on both the security of our platform as well as the developer experience. To satisfy these two requirements, GitHub developed a remote access solution that allows clients residing outside of our VPCs (larger runners) to securely access select private services.

This remote access solution works on the principle of minting an OIDC token in GitHub Actions, passing the OIDC token to a remote access gateway that authorizes the request by validating the OIDC token, and then proxying the request to the private service residing in a private network.

Flow diagram showing an OIDC token being minted in GitHub Actions, passed to a remote access gateway that authorizes the request by validating the OIDC token, and then proxying the request to the private service residing in a private network.
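
In workflow terms, a job can mint that OIDC token itself once it has the id-token permission. The sketch below shows the general pattern; the audience value and gateway URL are placeholders, not our real endpoints:

name: private-service-access
on: workflow_dispatch

permissions:
  id-token: write   # allows the job to request an OIDC token from GitHub
  contents: read

jobs:
  call-private-service:
    runs-on: ubuntu-latest
    steps:
      - name: Mint an OIDC token and call the gateway
        run: |
          # Request a token scoped to the gateway's expected audience
          token=$(curl -sSf -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=private-gateway" | jq -r '.value')
          # The gateway validates the token's claims, then proxies the request to the private service
          curl -sSf -H "Authorization: Bearer $token" "https://gateway.example.internal/artifact-storage"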

With this solution, we are able to securely provide remote access from larger runners running GitHub Actions to our private resources within our VPC.

GitHub has open sourced the basic scaffolding of this remote access gateway in the github/actions-oidc-gateway-example repository, so be sure to check it out!

Conclusion

GitHub Actions provides a robust and smooth developer experience for GitHub engineers working on GitHub.com. We have been able to accomplish this by using the power of GitHub Actions features, such as reusable workflows and reusable workflow outcomes, and by leveraging the scalability and manageability of the GitHub Actions larger runners. We have also used this effort to enhance the GitHub Actions product. To put it simply, GitHub runs on GitHub.

The post How GitHub uses GitHub Actions and Actions larger runners to build and test GitHub.com appeared first on The GitHub Blog.

How we build containerized services at GitHub using GitHub

Post Syndicated from MV Karan original https://github.blog/2023-08-02-how-we-build-containerized-services-at-github-using-github/

The developer experience engineering team at GitHub works on creating safe, delightful, and inclusive solutions for GitHub engineers to efficiently code, ship, and operate software—setting an example for the world on how to build software with GitHub. To achieve this, we provide our developers with a paved path—a comprehensive suite of automated tools and applications that streamlines our runtime platforms, deployment, and hosting, and helps power some of the microservices on the GitHub.com platform as well as many internal tools. Let’s take a deeper look at how one of our main paved paths works.

Our development ecosystem

GitHub’s main paved path covers everything that’s needed for running software–creating, deploying, scaling, debugging, and running applications. It is an ecosystem of tools like Kubernetes, Docker, load balancers, and many custom apps that work together to create a cohesive experience for our engineers. It isn’t just infrastructure and isn’t just Kubernetes. Kubernetes is our base layer, and the paved path is a mix of conventions, tools, and settings built on top of it.

The kind of services that we typically run using the paved path include web apps, computation pipelines, batch processors, and monitoring systems.

Kubernetes, which is the base layer of the paved path, runs in a multi-cluster, multi-region topology.

Benefits of the paved path

There are hundreds of services at GitHub–from a small internal tool to an external API supporting production workloads. For a variety of reasons, it would be inefficient to spin up virtual machines for each service.

  • Planning and capacity usage across all services wouldn’t be efficient. We would encounter significant overhead in managing both physical and Kubernetes infrastructure on an ongoing basis.
  • Teams would need to build deep expertise in managing their own Kubernetes clusters and would have less time to focus on their application’s unique needs.
  • We would have less central visibility of applications.
  • Security and compliance would be difficult to standardize and enforce.

With the paved path based on Kubernetes and other runtime apps, we’re instead able to:

  • Plan capacity centrally and only for the Kubernetes nodes, so we can optimally use capacity across nodes, as small workloads and large workloads coexist on the same machines.
  • Scale rapidly thanks to central capacity planning.
  • Easily manage configuration and deployments across services in one central control plane.
  • Consistently provide insights into app and deployment performance for individual services.

Onboarding a service

Onboarding a service with the code living in its own repository has been made easy with our ChatOps command service, called Hubot, and GitHub Apps. Service owners can easily generate some basic scaffolding needed to deploy the service by running a command like:

hubot gh-platform app scaffold monalisa-app

A custom GitHub App installed on the service’s GitHub repository will then automatically generate a pull request to add the necessary configurations, which includes:

  • A deployment.yaml file that defines the service’s deployment environments.
  • Kubernetes manifests that define Deployment and Service objects for deploying the service.
  • A Debian Dockerfile that runs a trivial web server to start off with, which will be used by the Kubernetes manifests.
  • CI builds, set up as GitHub Checks, that build the Docker image on every push and store it in a container registry, ready for deployment.

Each service that is onboarded to the paved path has its unique Kubernetes namespace that is defined by <app-name>-<environment> and generally has a staging and production environment. This helps separate the workloads of multiple services, and also multiple environments for the same service since each environment gets its own Kubernetes namespace.
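
To make that concrete, here is a hedged sketch of the kind of Deployment and Service manifests the scaffolding might generate for monalisa-app’s staging environment and namespace; the image name, port, and replica count are illustrative placeholders rather than what our GitHub App actually produces:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monalisa-app
  namespace: monalisa-app-staging   # <app-name>-<environment>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monalisa-app
  template:
    metadata:
      labels:
        app: monalisa-app
    spec:
      containers:
        - name: monalisa-app
          image: registry.example.internal/monalisa-app:latest   # placeholder registry and tag
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: monalisa-app
  namespace: monalisa-app-staging
spec:
  selector:
    app: monalisa-app
  ports:
    - port: 80
      targetPort: 8080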

Deploying a service

At GitHub, we deploy branches and perform deployments through Hubot ChatOps commands. To deploy a branch named bug-fixes in the monalisa-app repository to the staging environment, a developer would run a ChatOps command like:

hubot deploy monalisa-app/bug-fixes to staging

This triggers a deployment that fetches the Docker image associated with the latest commit in the bug-fixes branch, updates the Kubernetes manifests, and applies those manifests to the clusters in the runtime platform relevant to that environment.

Typically, the Docker image would be deployed to multiple Kubernetes clusters across multiple geographical sites in a region that forms a part of the runtime platform.

To automate pull request merges into the busiest branches and orchestrate the rollout across environments we’re also using merge queue and deployment pipelines, which our engineers can observe and interact with during their deployment.

Securing our services

For any company, the security of the platform itself, along with services running within it, is critical. In addition to our engineering-wide practices, such as requiring two-person reviews on every pull request, we also have Security and Platform teams automating security measures, such as:

  • Pre-built Docker images to be used as base images for the Dockerfiles. These base images contain only the necessary packages/dependencies with security updates, a set of installed software that is auditable and curated according to shared needs.
  • Build-time and periodic scanning of all packages and running container images, for any vulnerabilities or dependencies needing a patch, powered by our own products for software supply chain security like Dependabot.
  • Build-time and periodic scanning of GitHub repositories of services for exposed secrets and vulnerabilities, using GitHub’s native advanced security features like code scanning and secret scanning.
  • Multiple authentication and authorization mechanisms that allow only the relevant individuals to directly access underlying Kubernetes resources.
  • Comprehensive telemetry for threat detection.
  • Services running in the platform are by default accessible only within GitHub’s internal networks and not exposed on the public internet.
  • Branch protection policies are enforced on all production repositories. These policies prevent merging a pull request until designated automated tests pass and the change has been reviewed by a different developer from the one who proposed the change.

Another key aspect of security for an application is how secrets like keys and tokens are managed. At GitHub, we use a centralized secret store to manage secrets. Each service and each environment within the service has its own vault to store secrets. These secrets are then injected into the relevant pods in Kubernetes, which are then exposed to the containers.
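
The last hop of that flow is standard Kubernetes. A minimal sketch of how a vault-backed secret might be exposed to a container, assuming a hypothetical Secret named monalisa-app-secrets that our internal tooling keeps in sync with the central secret store:

apiVersion: v1
kind: Pod
metadata:
  name: monalisa-app
  namespace: monalisa-app-production
spec:
  containers:
    - name: monalisa-app
      image: registry.example.internal/monalisa-app:latest   # placeholder image
      env:
        - name: SERVICE_API_KEY
          valueFrom:
            secretKeyRef:
              name: monalisa-app-secrets   # hypothetical Secret synced from the central secret store
              key: service-api-key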

The deployment flow, from merge to rollout

The whole deployment process would look something like this:

  1. A GitHub engineer merges a pull request to a branch in a repository. In the above example, it is the bug-fixes branch in the monalisa-app repository. This repository would also contain the Kubernetes manifest template files for deploying the application.
  2. The pull request merge triggers relevant CI workflows. One of them is a build of the Docker image, which builds the container image based on the Dockerfile specified in the repository, and pushes the image to an internal artifact registry.
  3. Once all the CI workflows have completed successfully, the engineer initiates a deployment by running a ChatOps command like hubot deploy monalisa-app/bug-fixes to staging. This triggers a deployment to our environments such as Staging.
  4. The build systems fetch the Kubernetes manifest files from the repository branch, replace the image reference with the latest image from the artifact registry, inject app secrets from the secret store, and run some custom operations. At the end of this stage, a ready-to-deploy Kubernetes manifest is available.
  5. Our deployment systems then apply the Kubernetes manifest to relevant clusters and monitor the rollout status of new changes.

Conclusion

GitHub’s internal paved path helps developers at GitHub focus on building services and delivering value to our users, with minimal focus on the infrastructure. We accomplish this by providing a streamlined path to our GitHub engineers that uses the power of containers and Kubernetes; scalable security, authentication, and authorization mechanisms; and the GitHub.com platform itself.

Want to try some of these for yourself? Learn more about all of GitHub’s features on github.com/features. If you have adopted any of our practices for your own development, give us a shout on Twitter!

Survey reveals AI’s impact on the developer experience

Post Syndicated from Inbal Shani original https://github.blog/2023-06-13-survey-reveals-ais-impact-on-the-developer-experience/


Developers today do more than just write and ship code—they’re expected to navigate a number of tools, environments, and technologies, including the new frontier of generative artificial intelligence (AI) coding tools. But the most important thing for developers isn’t story points or the speed of deployments. It’s the developer experience, which determines how efficiently and productively developers can exceed standards, enter a flow state, and drive impact.

I say this not only as GitHub’s chief product officer, but as a long-time developer who has worked across every part of the stack. Decades ago, when I earned my master’s in mechanical engineering, I became one of the first technologists to apply AI in the lab. Back then, it would take our models five days to process our larger datasets—which is striking considering the speed of today’s AI models. I yearned for tools that would make me more efficient and shorten my time to production. This is why I’m passionate about developer experience (DevEx) and have made it my focus as GitHub’s chief product officer.

Amid the rapid advancements in generative AI, we wanted to get a better understanding from developers about how new tools—and current workflows—are impacting the overall developer experience. As a starting point, we focused on some of the biggest components of the developer experience: developer productivity, team collaboration, AI, and how developers think they can best drive impact in enterprise environments.

To do so, we partnered with Wakefield Research to survey 500 U.S.-based developers at enterprise companies. In the following report, we’ll show how organizations can remove barriers to help enterprise engineering teams drive innovation and impact in this new age of software development. Ultimately, the way to innovate at scale is to empower developers by improving their productivity, increasing their satisfaction, and enabling them to do their best work—every day. After all, there can be no progress without developers who are empowered to drive impact.

Inbal Shani
Chief Product Officer // GitHub

Learn how generative AI is changing the developer experience

Discover how generative AI is changing software development in a pre-recorded session from GitHub.

Watch the video >

Why developer experience matters

At GitHub, we’re aware there’s often a significant gap between the day-to-day reality for most developers and “conversations about ‘what developers want.’”

With this survey, we wanted to better understand the typical experience for developers—and identify key ways companies can empower their developers and achieve greater success.

One big takeaway: It starts with investing in a great developer experience. And collaboration, as we learned from our research, is at the core of how developers want to work and what makes them most productive, satisfied, and impactful.

A diagram of a formula behind the developer experience that accounts for productivity, impact, satisfaction, and collaboration.
C = Collaboration, the multiplier across the entire developer experience.

DevEx is a formula that takes into account:

  • How simple and fast it is for a developer to implement a change on a codebase—or be productive.
  • How frictionless it is to move from idea through production to impact.
  • How positively or negatively the work environment, workflows, and tools affect developer satisfaction.

For leaders, developer experience is about creating a collaborative environment where developers can be their most productive, impactful, and satisfied at work. For developers, collaboration is one of the most important parts of the equation.

Current performance metrics fall short of developer expectations

Developers say performance metrics don’t meet expectations

The way developers are currently evaluated doesn’t align with how they think their performance should be measured.

  • For instance, the developers we surveyed say they’re currently measured by the number of incidents they resolve. But developers believe that how they handle those bugs and issues is more important to performance. This aligns with the belief that code quality, rather than code quantity, should be a top performance metric.
  • Developers also believe collaboration and communication should be just as important as code quality in terms of performance measures. Their ability to collaborate and communicate with others is essential to their job, but only 33% of developers report that their companies use it as a performance metric.
Metrics currently used to measure performance, compared with the metrics developers think should be used to measure their performance.
More than output quantity and efficiency, code quality and collaboration are the most important performance metrics, according to the developers we surveyed.
The top-ranked tasks developers say their teams spend the most time on, including writing code and finding and fixing security vulnerabilities.

Developers want more opportunities to upskill and drive impact

When developers are asked about what makes a positive impact on their workday, they rank learning new skills (43%), getting feedback from end users (39%), automated tests (38%), and designing solutions to novel problems (36%) as top contenders.

The top tasks developers say positively impact their workdays.

But developers say they’re spending most of their time writing code and tests, then waiting for that code to be reviewed or builds and tests to be executed.

On a typical day, the enterprise developers we surveyed report their teams are busy with a variety of tasks, including writing code, fixing security vulnerabilities, and getting feedback from end users, among other things. Developers also report that they spend a similar amount of time across these tasks, indicating that they’re stretched thin throughout the day.

The tasks developers say they spend the most time working on each day.

Notably, developers say they spend the same amount of time waiting for builds and tests as they do writing new code.

  • This suggests that wait times for builds and tests are still a persistent problem despite investments in DevOps tools over the past decade.
  • Developers also continue to face obstacles, such as waiting on code review, builds, and test runs, which can hinder their ability to learn new skills and design solutions to novel problems, and our research suggests that these factors can have the biggest impact on their overall satisfaction.

Developers want feedback from end users, but face challenges

Developers say getting feedback from end users (39%) is the second-most important thing that positively impacts their workdays—but it’s often challenging for development teams to get that feedback directly.

  • Product managers and marketing teams often act as intermediaries, making it difficult for developers to directly receive end-user feedback.
  • Developers would ideally receive feedback from automated and validation tests to improve their work, but sometimes these tests are sent to other teams before being handed off to engineering teams.

The top two daily tasks for development teams include writing code (32%) and finding and fixing security vulnerabilities (31%).

  • This shows the increased importance developers have placed on security and underscores how companies are prioritizing security.
  • It also demonstrates the critical role that enterprise development teams play in meeting policy and board edicts around security.

The bottom line
Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills. However, wait times on builds and tests, as well as the current performance metrics they’re evaluated on, are getting in the way.

Collaboration is the cornerstone of the developer experience

Developers thrive in collaborative environments

In our survey of enterprise engineers, developers say they work with an average of 21 other developers on a typical project—and 52% report working with other teams daily or weekly. Notably, they rank regular touchpoints as the most important factor for effective collaboration.

Developers in enterprise settings work with an average of 21 other developers on a typical project, often collaborating with other teams on a daily or weekly basis.

But developers also have a holistic view of collaboration—it’s defined not only by talking and meeting with others, but also by uninterrupted work time, access to fully configured developer environments, and formal mentor-mentee relationships.

  • Specified blocks of time with no team communication give developers the time and space to write code and work toward team goals.
  • Access to fully configured developer environments promotes consistency throughout the development process. It also helps developers collaborate faster and avoid hearing the infamous line, “But it worked on my machine.”
  • Mentorships can help developers upskill and build interpersonal skills that are essential in a collaborative work environment.

It’s important to note these factors can also negatively impact a developer’s work day—which suggests that ineffective meetings can serve to distract rather than help developers (something we’ve found in previous research).

The key factors developers in a survey say contribute most highly to effective team collaboration including meetings, dedicated time for individual work, and access to fully configured dev environments.

Our survey indicates the factors most important to effective collaboration are so critical that when they’re not done effectively, they have a noticeable, negative impact on a developer’s work.

The tasks developers say most often have a negative impact on their workday experience.
Developers work with an average of 21 people on any given project. They need the time and tools for success—including regular touchpoints, heads-down time, access to fully-configured dev environments, and formal mentor-mentee relationships.

We wanted to learn more about how developers collaborate

So, we sourced some answers from our followers on Twitter. We asked developers what tips they have for effective collaboration. Here’s what one developer had to say:

Twitter user Colby Ray shared multiple points in response to our prompt.

We also asked what makes for a productive and valuable meeting:

Twitter user kettenaito shared several points in response to our prompt.

Twitter user Mateus Feira shared several points in response to our prompt.

Effective collaboration improves code quality

As developer experience continues to be defined, so, too, will successful developer collaboration. Too many pings and messages can affect flow, but there’s still a need to stay in touch. In our survey, developers say effective collaboration results in improved test coverage and faster, cleaner, more secure code writing—which are best practices for any development team. This shows that when developers work effectively with others, they believe they build better and more secure software.

Developers widely view effective collaboration as improving what they ship and how often they ship it.

Developers we surveyed believe collaboration and communication—along with code quality—should be the top priority for evaluation.

  • From DevOps to agile methodologies, developers and the greater business world have been talking about the importance of collaboration for a long time.
  • But developers are still not being measured on it.
The metrics developers think their managers should use to evaluate their performance and productivity.

We asked developers to share their ideas for measuring how well they collaborate. Here’s what one developer had to say:

Twitter user Andrew DiMola shared several points in response to our prompt.

  • The takeaway: Companies and engineering managers should encourage regular team communication, and set time to check in–especially in remote environments–but respect developers’ need to work and focus.
Developers believe that regular touchpoints with their teams, including meetings, asynchronous communication, and innersource practices, are critical for effective collaboration at scale.

4 tips for engineering managers to improve collaboration

At GitHub, our researchers, developers, product teams, and analysts are dedicated to studying and improving developer productivity and satisfaction. Here are their tips for engineering leaders who want to improve collaboration among developers:

  1. Make collaboration a goal in performance objectives. This builds the space and expectation that people will collaborate. This could be in the form of lunch and learns, joint projects, etc.
  2. Define and scope what collaboration looks like in your organization. Let people know when they’re being informed about something vs. being consulted about something. A matrix outlining roles and responsibilities helps define each person’s role and is something GitHub teams have implemented.
  3. Give developers time to converse and get to know one another. In particular, remote or hybrid organizations need to dedicate a portion of a developer’s time and virtual space to building relationships. Check out the GitHub guides to remote work.
  4. Identify principal and distinguished engineers. Academic research supports the positive impact of change agents in organizations—and how they should be the people who are exceptionally great at collaboration. It’s a matter of identifying your distinguished engineers and elevating them to a place where they can model desired behaviors.

The bottom line
Effective developer collaboration improves code quality and should be a performance measure. Regular touchpoints, heads-down time, access to fully configured dev environments, and formal mentor-mentee relationships result in improved test coverage and faster, cleaner, more secure code writing.

AI improves individual performance and team collaboration

Developers are already using AI coding tools at work

A staggering 92% of U.S.-based developers working in large companies report using an AI coding tool either at work or in their personal time—and 70% say they see significant benefits to using these tools.

  • AI is here to stay—and it’s already transforming how developers approach their day-to-day work. That makes it critical for businesses and engineering leaders to adopt enterprise-grade AI tools to avoid their developers using non-approved applications. Companies should also establish governance standards for using AI tools to ensure that they are used ethically and effectively.
Almost all developers surveyed are already using AI coding tools at work, outside of work, or both.

70% of developers see a benefit to using AI coding tools at work.

Almost all (92%) developers use AI coding tools at work—and a majority (67%) have used these tools in both a work setting and during their personal time. Curiously, only 6% of developers in our survey say they solely use these tools outside of work.

Developers believe AI coding tools will enhance their performance

With most developers experimenting with AI tools in the workplace, our survey results suggest it’s not just idle interest leading developers to use AI. Rather, it’s a recognition that AI coding tools will help them meet performance standards.

  • In our survey, developers say AI coding tools can help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents. They also believe that these metrics should be used to measure their performance beyond code quantity.
Developers widely think that AI coding tools will layer into their existing workflows and bring greater efficiencies—but they do not think AI will change how software is made.

Around one-third of developers report that their managers currently assess their performance based on the volume of code they produce—and an equal number anticipate that this will persist when they start using AI-based coding tools.

  • Notably, the quantity of code a developer produces may not necessarily correspond to its business value.
  • Stay smart. With the increase of AI tooling being used in software development—which often contributes to code volume—engineering leaders will need to ask whether measuring code volume is still the best way to measure productivity and output.

Developers think AI coding tools will lead to greater team collaboration

Beyond improving individual performance, more than 4 in 5 developers surveyed (81%) say AI coding tools will help increase collaboration within their teams and organizations.

  • In fact, security reviews, planning, and pair programming are the most significant points of collaboration and the tasks that development teams are expected to, and should, work on with the help of AI coding tools. This also indicates that code and security reviews will remain important as developers increase their use of AI coding tools in the workplace.
Developers think their teams will need to become more collaborative as they start using AI coding tools and the quality of the code they produce becomes even more important.

Notably, developers believe AI coding tools will give them more time to focus on solution design. This has direct organizational benefits and means developers believe they’ll spend more time designing new features and products with AI instead of writing boilerplate code.

  • Developers are already using generative AI coding tools to automate parts of their workflow, which frees up time for more collaborative projects like security reviews, planning, and pair programming.
Developers believe AI coding tools will help them upskill, become more productive, and focus on higher-value problem solving.

Developers think AI increases productivity and prevents burnout

Not only can AI coding tools help improve overall productivity, but they can also provide upskilling opportunities to help create a smarter workforce, according to the developers we surveyed.

  • 57% of developers believe AI coding tools help them improve their coding language skills, which is the top benefit they see. Beyond the prospect of acting as an upskilling aid, developers also say AI coding tools can help reduce cognitive effort, and since mental capacity and time are both finite resources, 41% of developers believe that AI coding tools can help prevent burnout.
  • In previous research we conducted, 87% of developers reported that the AI coding tool GitHub Copilot helped them preserve mental effort while completing more repetitive tasks. This shows that AI coding tools allow developers to preserve cognitive effort and focus on more challenging and innovative aspects of software development or research and development.
  • AI coding tools help developers upskill while they work. Across our survey, developers consistently rank learning new skills as the number one contributor to a positive workday. But 30% also say learning and development can have a negative impact on their workday, which suggests some developers see it as extra work added on top of their existing tasks. Notably, developers say the top benefit of AI coding tools is learning new skills, and these tools can help developers learn while they work instead of treating learning and development as an additional task.

AI is improving the developer experience across the board

Developers in our survey suggest they can better meet standards around code quality, completion time, and the number of incidents when using AI coding tools—all of which are measures developers believe are key areas for evaluating their performance.

AI coding tools can also help reduce the likelihood of coding errors and improve the accuracy of code—which ultimately leads to more reliable software, increased application performance, and better performance numbers for developers. As AI technology continues to advance, it is likely that these coding tools will have an even greater impact on developer performance and upskilling.

AI coding tools are layering into existing developer workflows and creating greater efficiencies

Developers believe that AI coding tools will increase their productivity—but our survey suggests that developers don’t think these tools are fundamentally altering the software development lifecycle. Instead, developers suggest they’re bringing greater efficiencies to it.

  • The use of automation and AI has been a part of the developer workflow for a considerable amount of time, with developers already utilizing a range of automated and AI-powered tools, such as machine learning-based security checks and CI/CD pipelines.
  • Rather than completely overhauling operations, these tools create greater efficiencies within existing workflows, and that frees up more time for developers to concentrate on developing solutions.

The bottom line
Almost all developers (92%) are using AI coding tools at work, and they say these tools not only improve day-to-day tasks but enable upskilling opportunities, too. Developers see material benefits to using AI tools, including improved performance and coding skills, as well as increased team collaboration.

The path forward

Developer satisfaction, productivity, and organizational impact are all positioned to get a boost from AI coding tools—and that will have a material impact on the overall developer experience.

With 92% of developers already saying they use AI coding tools at work and in their personal time, it’s clear AI is here to stay. 70% of the developers we surveyed say they already see significant benefits when using AI coding tools, and 81% expect AI coding tools to make their teams more collaborative, which is a net benefit for companies looking to improve both developer velocity and the developer experience.

Notably, 57% of developers believe that AI could help them upskill and that it holds the potential to build learning and development into their daily workflow. With all of this in mind, technical leaders should start exploring AI as a solution to improve satisfaction, productivity, and the overall developer experience.

In addition to exploring AI tools, here are three takeaways engineering and business leaders should consider to improve the developer experience:

  1. Help your developers enter a flow state with tools, processes, and practices that help them be productive, drive impact, and do creative and meaningful work.
  2. Empower collaboration by breaking down organizational silos and providing developers with the opportunity to communicate efficiently.
  3. Make room for upskilling within developer workflows through key investments in AI to help your organization experiment and innovate for the future.

Methodology

This report draws on a survey conducted online by Wakefield Research on behalf of GitHub from March 14, 2023 through March 29, 2023 among 500 non-student, U.S.-based developers who are not managers and work at companies with 1,000-plus employees. For a complete survey methodology, please contact [email protected].

Developer experience: What is it and why should you care?

Post Syndicated from Gwen Davis original https://github.blog/2023-06-08-developer-experience-what-is-it-and-why-should-you-care/


Developer experience examines how people, processes, and tools affect developers’ ability to work efficiently. Learn more about what developers want in our developer experience survey >

What do building software and vacuuming your house have in common?

Jonathan Carter, technical advisor to the CEO at GitHub, used to hate vacuuming. That’s because his vacuum was located on the first floor of his home and bringing it upstairs to the main floor was tedious. But when he realized he could simply keep the vacuum where he needed it, the task wasn’t that hard. Now he vacuums every other day.

“The same is true with building software,” he says. “When we construct the experience to empower the desired behavior naturally and effortlessly, we get a great outcome.”

This is what developer experience (DevEx) is about. DevEx, sometimes called DevX or DX, examines how the interplay of developers, processes, and tools positively or negatively affects software development. In this article, we’ll explore the key components of DevEx and how its optimization is integral for business success.

Let’s jump in.

Are you a visual learner? 😎 We’ve got you covered.
Learn about DevEx in our What is DevEx? video.

What is developer experience?

DevEx refers to the systems, technology, process, and culture that influence the effectiveness of software development. It looks at all components of a developer’s ecosystem—from environment to workflows to tools—and asks how they are contributing to developer productivity, satisfaction, and operational impact.

“Building software is like having a giant house of cards in our brains,” says Idan Gazit, senior director of research at GitHub. “Tiny distractions can knock it over in an instant. DevEx is ultimately about how we contend with that house of cards.”

With DevEx, every aspect of a developer’s journey is questioned.

“Is the tool making my job harder or easier?” Gazit asks. “Is the environment helping me focus? Is the process eliminating ways in which I can make mistakes? Is the system keeping me in my flow—and confidently enabling me to stack my cards ever higher?”

Additionally, how developers subjectively feel makes all the difference—which can be gauged by user testing, surveys, and feedback.

“DevEx puts developers at the center and works to understand how they feel and think about the work that they do,” says Eirini Kalliamvakou, staff researcher at GitHub. Developer sentiment can uncover points of friction and provide the opportunity to find appropriate fixes.

“You can’t improve the developer experience with developers out of the loop,” she says.

Importantly, collaboration is the multiplier across the entire DevEx. Developers need to be able to easily communicate and share with each other to do their best work.

What is the history of developer experience?

While DevEx might seem like a logical strategy to improve software development, the industry has been slow to apply it.

Over the past few decades, developers have witnessed an explosion of technologies, open source libraries, package managers, languages, and services—with more tools, APIs, and integrations arriving by the day. The result is an ecosystem where nearly everything developers could want or need is at their fingertips.

But as the analyst firm RedMonk notes, while developers have access to an exponential amount of technology and DevOps tooling—which has produced a large degree of innovation and competition—they’re on their own figuring out how it all works together. This has led to a fragmented DevEx. It also puts pressure on developers to constantly learn about the latest products (or even just how to connect to the newest API).

“We need a holistic view of what makes up developers’ workflow,” GitHub’s Kalliamvakou says. “And once we have that, we need to make sure that the experience is collaborative and smooth every step of the way.”

Why is developer experience important?

In short, a good DevEx is important because it enables developers to build with more confidence, drive greater impact, and feel satisfied.

Greg Mondello, director of product at GitHub, says it’s no surprise that DevEx has seen a significant increase in investment over the past five years.

“In most contexts, software development capacity is the limiting factor for innovation,” he says. “Therefore, improvements to the effectiveness of software development are inherently valuable.”

Moreover, development is only becoming more complex. Building software today involves many tools, technologies, and services across different providers, which requires developers to manage far more intricate environments.

At its best, a well-conceived DevEx provides greater consistency across environments, processes, and workflows, while automating the more tedious and manual processes.

“This enables companies with better DevEx to outperform their competitors, regardless of vertical,” Mondello says.

The research backs this up.

According to a report from McKinsey, a better DevEx can lead to extensive benefits for organizations, such as improved employee attraction and retention, enhanced security, and increased developer productivity. As such, DevEx is important for all companies—and not just tech.

“It doesn’t matter what industry you’re part of or what geography you’re in,” Mondello says. “With better DevEx, you’ll have better business results.”

And the importance of DevEx will only continue to grow.

According to a Forrester opportunity snapshot, teams can reduce time to market and grow revenue by creating an easier way for developers to write code, build software, and ship updates to customers. As a result of improving DevEx:

  • 74% of survey respondents said they can drive developer productivity
  • 77% can shorten time to market
  • 85% can impact revenue growth
  • 75% can better attract and retain customers
  • 82% can increase customer satisfaction

“I find it fascinating how anxious people get sitting at a stoplight,” GitHub’s Carter says. “They’re not there for very long. Yes, it’s a psychological thing that humans don’t like to wait.”

The same goes for building software.

“Great DevEx shortens the distance between intention and reality,” he says.

What makes a good developer experience?

A good DevEx is where developers “have the info they need and can pivot between focus and collaboration,” Kalliamvakou says. “They can complete tasks with minimal delay.”

Low friction is important.

“Or ideally, no friction at all,” she notes.

Developers experience many types of friction during their end-to-end workflow, especially if they’re using multiple tools. From meetings to requests to many other types of disruptions, developers often have to piece together context from fragmented, out-of-date sources, which hinders their ability to be productive and write high-quality code.

In the end, collaboration is king.

“Without collaboration, a good DevEx isn’t possible,” Kalliamvakou says.

What are key developer experience metrics?

Unfortunately, there are currently no standardized industry metrics to measure DevEx. However, Mondello says the DevOps Research and Assessment (DORA) framework, which measures an organization’s DevOps performance, can be helpful. Key metrics include (a small illustrative calculation follows the list):

  • Deployment frequency (DF): how frequently an organization releases new software
  • Lead time for changes (LT): the time taken from when a change is requested or initiated to when it is deployed
  • Mean time to recovery (MTTR): the average time it takes to recover from a failure
  • Change failure rate (CFR): the percentage of changes that result in a failure
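
As a rough illustration only (the data, field names, and seven-day window below are hypothetical, and none of this comes from the DORA framework itself), here is a minimal Python sketch of how those four metrics can be derived from a simple deployment log:

    from datetime import datetime

    # Hypothetical deployment log: when each change started, when it shipped,
    # whether it caused a failure, and how long recovery took if it did.
    deployments = [
        {"started": datetime(2023, 5, 1, 9),  "deployed": datetime(2023, 5, 2, 14), "failed": False, "recovery_minutes": 0},
        {"started": datetime(2023, 5, 3, 10), "deployed": datetime(2023, 5, 3, 16), "failed": True,  "recovery_minutes": 45},
        {"started": datetime(2023, 5, 4, 11), "deployed": datetime(2023, 5, 5, 9),  "failed": False, "recovery_minutes": 0},
    ]

    days_in_period = 7
    failures = [d for d in deployments if d["failed"]]

    deployment_frequency = len(deployments) / days_in_period                  # DF: deploys per day
    lead_time_hours = sum((d["deployed"] - d["started"]).total_seconds() / 3600
                          for d in deployments) / len(deployments)            # LT: average hours to deploy
    mttr_minutes = (sum(d["recovery_minutes"] for d in failures) / len(failures)
                    if failures else 0.0)                                      # MTTR: average minutes to recover
    change_failure_rate = 100 * len(failures) / len(deployments)              # CFR: % of deploys that failed

    print(f"DF: {deployment_frequency:.2f}/day  LT: {lead_time_hours:.1f}h  "
          f"MTTR: {mttr_minutes:.0f}min  CFR: {change_failure_rate:.0f}%")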

However, Carter thinks good metrics go beyond DORA. For instance, he believes a great DevEx metric is the time to first contribution for a new hire. A short time to first contribution signals that the new developer got all the context they needed and feels empowered by creating value, which is what DevEx is all about.

“No amount of morale boosting or being friendly makes up for the fact that people want to feel valuable,” Carter says. “Happier developers is the goal. There’s nobody in the world who feels great about opening a pull request that sits in an approval queue for two days.”

Likewise, Carter says customer response time is a good metric. A strong response time indicates that the team has what they need to move quickly, while feeling empowered and helpful.

“The more we can treat developer happiness as a goal, and measure thoughtful signals to make sure we’re doing that, the better,” he says. “This requires addressing culture, tooling, and policies to make sure teams have clarity and autonomy.”

Kalliamvakou notes that measuring DevEx underscores the need to continually check in with developers and see how they’re feeling. While organizations already know how to capture system performance data to gauge how efficient processes are, most don’t collect developers’ opinions on the systems they’re using.

“How can we improve developers’ experiences without checking in with developers about what their experience is?” she asks.

Kalliamvakou says that running periodic surveys is critical. These surveys need to capture developers’ satisfaction—or dissatisfaction—with systems and what it’s like to work with them daily. “Without these surveys, even the most sophisticated telemetry is incomplete and potentially misleading,” she says.

Kalliamvakou also warns that this work is not optional. “Organizations that are not surveying their developers on productivity, ease of work, etc. will fall behind,” she says.

What are ways to improve developer experience?

Companies and development teams should improve their DevEx the same way they improve other product spaces—by using a strategy that includes research, discovery, user testing, and other key design components.

“Here at GitHub, we are constantly striving to reduce the time it takes for our developers to execute their workflows,” Mondello says, mentioning how GitHub’s invention of the pull request was a pivotal moment in DevEx history. “This means finding ways to make the build process more efficient, optimizing deployment, and tuning tests to execute more effectively.”

He also adds that GitHub plays a leading role in the DevEx space: GitHub is a collaboration company, first and foremost, and collaboration is essential to DevEx, especially in the age of AI. As time goes on, collaboration will become increasingly important, since it’s the only way to ensure that AI-generated code is solid.

“If you improve your collaboration, you’ll inevitably improve your DevEx,” Mondello says.

Kalliamvakou also notes that organizations need to understand their current DevEx and the most critical friction points.

“Is the documentation scattered and do developers have to spend precious energy to understand context?” she asks. “Do the build systems take a long time? Or are they flaky, leaving your developers frustrated by the delays and inconsistent behavior? Worst of all, are your developers unable to focus?”

Once an organization has done the work to identify friction, it needs to simplify, accelerate, or optimize existing systems and processes.

“Careful though!” Kalliamvakou says. “Any change will involve tradeoffs, so companies need to monitor if DevEx is actually improving or if friction is actually introduced by an intervention.”

Cutting down on the number of meetings, for instance, can seem like a great idea for leveling interruptions. But if developers start reporting that their collaboration is poor, you may end up in a worse place than when you started.

“It’s a lot of work to approach DevEx holistically and effectively,” Kalliamvakou says. This is why many organizations create DevEx teams that are dedicated to understanding, improving, and monitoring it.

What role does generative AI play in developer experience?

There is no doubt that generative AI is the future of DevEx, as it enables developers to write high-quality code faster.

“As models get better and more functionality is built around how developers work, we can expect AI to suggest whole workflows,” Kalliamvakou says, in addition to the code and pull request suggestions that they already provide. “AI could remove major disruptions, delays, and cognitive load that developers previously had to endure.”

Mondello agrees.

“Generative AI will unlock the potential for developers to leapfrog large amounts of the software development process,” he says. “Instead of merely focusing on eliminating toil or friction, DevEx will focus on finding ways to enable developers to make large strides in their development workflows.”

However, with the enablement of faster code, companies will also need to determine ways to speed up their build and test processes and improve their overall pipelines to production.

Mondello points to the impact that’s being made by GitHub’s generative AI product, GitHub Copilot.

“We will build upon our success with GitHub Copilot as we shape GitHub Copilot X and bring generative AI to the entire software development lifecycle,” he says.

The bottom line

In today’s engineering environments, DevEx is one of the most important aspects to innovating quickly and achieving business goals. Developer happiness and empowerment are critical for software success, regardless of industry or niche—and will only continue to become more important over time.

Learn more about what developers want in our developer experience survey >

How companies are boosting productivity with generative AI

Post Syndicated from Chris Reddington original https://github.blog/2023-05-09-how-companies-are-boosting-productivity-with-generative-ai/

Is your company using generative AI yet?

While the technology is still in its infancy, generative AI coding tools are already changing the way developers and companies build software. Generative AI can boost developer and business productivity by automating tasks, improving communication and collaboration, and providing insights that can inform better decision-making.

In this post, we’ll explore the full story of how companies are adopting generative AI to ship software faster, including:

Want to explore the world of generative AI for developers? 🌎

Check out our generative AI guide to learn what it is, how it works, and what it means for developers everywhere.

Get the guide >

What is generative AI?

Generative AI refers to a class of artificial intelligence (AI) systems designed to create new content similar to what humans produce. These systems are trained on large datasets of content that include text, images, audio, music, or code.

Generative AI is an extension of traditional machine learning, which trains models to predict or classify data based on existing patterns. But instead of simply predicting an outcome, generative AI models are designed to identify the underlying patterns and structures of the data, and then use that knowledge to quickly generate new content. In practice, the main difference between the two is one of magnitude: machine learning typically predicts the next word, while generative AI can generate the next paragraph.
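
To make that difference concrete, here is a minimal sketch of generating a continuation from a prompt. It assumes the open source Hugging Face transformers library and the small, publicly available GPT-2 model, neither of which is mentioned above; it is only meant to show "generate the next paragraph" in practice:

    # Assumes the Hugging Face libraries are installed: pip install transformers torch
    from transformers import pipeline

    # Load a small, publicly available generative model.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Generative AI tools help developers by"
    # Ask the model to continue the prompt with up to 40 new tokens.
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(result[0]["generated_text"])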

AI-generated image from Shutterstock of a developer using a generative AI tool to code faster.

Generative AI tools have attracted particular interest in the business world. From marketing to software development, organizational leaders are increasingly curious about the benefits of the new generative AI applications and products.

“I do think that all companies will adopt generative AI tools in the near future, at least indirectly,” said Albert Ziegler, principal machine learning engineer at GitHub. “The bakery around the corner might have a logo that the designer made using a generative transformer. The neighbor selling knitted socks might have asked Bing where to buy a certain kind of wool. My taxi driver might do their taxes with a certain Excel plugin. This adoption will only increase over time.”

What are some business uses of generative AI tools? 💡

  • Software development: generative AI tools can assist engineers with building, editing, and testing code.
  • Content creation: writers can use generative AI tools to help personalize product descriptions and write ad copy.
  • Design creation: from generating layouts to assisting with graphics, generative AI design tools can help designers create entirely new designs.
  • Video creation: generative AI tools can help videographers with building, editing, or enhancing videos and images.
  • Language translation: translators can use generative AI tools to create communications in different languages.
  • Personalization: generative AI tools can assist businesses with personalizing products and services to meet the needs of individual customers.
  • Operations: from supply chain management to pricing, generative AI tools can help operations professionals drive efficiency.

How generative AI coding tools are changing the developer experience

Generative AI has big implications for developers, as the tools can enable them to code and ship software faster.

How is generative AI affecting software development?⚡

Check out our guide to learn what generative AI coding tools are, what developers are using them for, and how they’re impacting the future of development.

Get the guide >

Similar to how spell check and other automation tools can help writers build content more efficiently, generative AI coding tools can help developers produce cleaner work—and the models powering these tools are getting better by the month. Tools such as GitHub Copilot, for instance, can be used in many parts of the software development lifecycle, including in IDEs, code reviews, and testing.
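
For example, a developer might describe a small task in a natural-language comment and let the tool propose an implementation. The prompt and completion below are hypothetical and shown only to illustrate the interaction; real suggestions vary by tool, model, and surrounding code:

    # Prompt written by the developer as a natural-language comment:
    # return the n most common words in a text file, ignoring case

    # The kind of completion an AI coding tool might suggest (hypothetical):
    from collections import Counter

    def most_common_words(path, n=10):
        with open(path, encoding="utf-8") as f:
            words = f.read().lower().split()
        return Counter(words).most_common(n)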

The science backs this up. In 2022, we conducted research into how our generative AI tool, GitHub Copilot, helps developers. Here’s what we found:

Source: Research: quantifying GitHub Copilot’s impact on developer productivity and happiness

GitHub Copilot is only continuing to improve. When the tool was first launched for individuals in June 2022, more than 27% of developers’ code was generated by GitHub Copilot, on average. Today, that number is 46% across all programming languages—and in Java, that jumps to 61%.

How can generative AI tools help you build software? 🚀

These tools can help:

  • Write boilerplate code for various programming languages and frameworks.
  • Find information in documentation to understand what the code does.
  • Identify security vulnerabilities and implement fixes.
  • Streamline code reviews before merging new or edited code.

Explore GitHub’s vision for embedding generative AI into every aspect of the developer workflow.

Using generative AI responsibly 🙏

Like all technologies, responsibility and ethics are important with generative AI.

In February 2023, a group of 10 companies including OpenAI, Adobe, the BBC, and others agreed upon a new set of recommendations on how to use generative AI content in a responsible way.

The recommendations were put together by the Partnership on AI (PAI), an AI research nonprofit, in consultation with more than 50 organizations. The guidelines call for creators and distributors of generative AI to be transparent about what the technology can and can’t do and disclose when users might be interacting with this type of content (by using watermarks, disclaimers, or traceable elements in an AI model’s training data).

Is generative AI accurate? 🔑

Businesses should be aware that while generative AI tools can speed up the creation of content, they should not be solely relied upon as a source of truth. A recent study suggests that people can identify whether AI-generated content is real or fake only 50% of the time. Here at GitHub, we named our generative AI tool “GitHub Copilot” to signify just this—the tool can help, but at the end of the day, it’s just a copilot. The developer needs to take responsibility for ensuring that the finished code is accurate and complete.

How companies are using generative AI

Even as generative AI models and tools continue to rapidly advance, businesses are already exploring how to incorporate these into their day-to-day operations.

This is particularly true for software development teams.

“Going forward, tech companies that don’t adopt generative AI tools will have a significant productivity disadvantage,” Ziegler said. “Given how much faster this technology can help developers build, organizations that don’t adopt these tools or create their own will have a harder time in the marketplace.”

3 primary generative AI business models for organizations 📈

Enterprises all over the world are using generative AI tools to transform how work gets done. Three of the business models organizations use include:

  • Model as a Service (MaaS): Companies access generative AI models through the cloud and use them to create new content. OpenAI employs this model by licensing its GPT-3 AI model, the platform behind ChatGPT. This option offers low-risk, low-cost access to generative AI, with limited upfront investment and high flexibility (see the sketch after this list).
  • Built-in apps: Companies build new—or existing—apps on top of generative AI models to create new experiences. GitHub Copilot uses this model, which relies on Codex to analyze the context of the code to provide intelligent suggestions on how to complete it. This option offers high customization and specialized solutions with scalability.
  • Vertical integration: Vertical integration leverages existing systems to enhance the offerings. For instance, companies may use generative AI models to analyze large amounts of data and make predictions about prices or improve the accuracy of their services.
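
As a sketch of the Model as a Service pattern only: the endpoint, token, and payload shape below are placeholders rather than any specific vendor’s API, but the overall flow (send a prompt over HTTPS, get generated content back) is what the MaaS option boils down to:

    import requests

    # Placeholder endpoint and token -- not a real provider's API.
    API_URL = "https://api.example-maas.test/v1/generate"
    API_TOKEN = "YOUR_TOKEN_HERE"

    def generate(prompt: str) -> str:
        """Send a prompt to a cloud-hosted generative model and return its text output."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"prompt": prompt, "max_tokens": 200},  # payload shape is illustrative
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]  # response field is illustrative, too

    print(generate("Write a product description for hand-knitted wool socks."))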

Duolingo, one of the largest language-learning apps in the world, is one company that recently adopted generative AI capabilities. They chose GitHub’s generative AI tool, GitHub Copilot, to help their developers write and ship code faster, while improving test coverage. Duolingo’s CTO Severin Hacker said GitHub Copilot delivered immediate benefits to the team, enabling them to code quickly and deliver their best work.

”[The tool] stops you from getting distracted when you’re doing deep work that requires a lot of your brain power,” Hacker noted. “You spend less time on routine work and more time on the hard stuff. With GitHub Copilot, our developers stay in the flow state and keep momentum instead of clawing through code libraries or documentation.”

After adopting GitHub Copilot and the GitHub platform, Duolingo saw a:

  • 25% increase in developer speed for those who are new to working with a specific repository
  • 10% increase in developer speed for those who are familiar with the respective codebase
  • 67% decrease in median code review turnaround time

“I don’t know of anything available today that’s remotely close to what we can get with GitHub Copilot,” Hacker said.

Looking forward

Generative AI is changing the world of software development. And it’s just getting started. The technology is quickly improving and more use cases are being identified across the software development lifecycle. With the announcement of GitHub Copilot X, our vision for the future of AI-powered software development, we’re committed to installing AI capabilities into every step of the developer workflow. There’s no better time to get started with generative AI at your company.

How generative AI is changing the way developers work

Post Syndicated from Damian Brady original https://github.blog/2023-04-14-how-generative-ai-is-changing-the-way-developers-work/

During a time when computers were used solely for computation, the engineer Douglas Engelbart gave the “mother of all demos,” where he reframed the computer as a collaboration tool capable of solving humanity’s most complex problems. At the start of his demo, he asked audience members how much value they would derive from a computer that could instantly respond to their actions.

You can ask the same question of generative AI models. If you had a highly responsive generative AI coding tool to brainstorm new ideas, break big ideas into smaller tasks, and suggest new solutions to problems, how much more creative and productive could you be?

This isn’t a hypothetical question. AI-assisted engineering workflows are quickly emerging with new generative AI coding tools that offer code suggestions and entire functions in response to natural language prompts and existing code. These tools, and what they can help developers accomplish, are changing fast. That makes it important for every developer to understand what’s happening now—and the implications for how software is and will be built.

In this article, we’ll give a rundown of what generative AI in software development looks like today by exploring:

The unique value generative AI brings to the developer workflow

AI and automation have been a part of the developer workflow for some time now. From machine learning-powered security checks to CI/CD pipelines, developers already use a variety of automation and AI tools, like CodeQL on GitHub, for example.

While there’s overlap between all of these categories, here’s what makes generative AI distinct from automation and other AI coding tools:

  • Automation 🛤: You know what needs to be done, and you know of a reliable way to get there every time. For example, you want to make sure that any new code pushed to your repository follows formatting specifications before it’s merged to the main branch. Instead of manually validating the code, you use a CI/CD tool like GitHub Actions to trigger an automated workflow on the event of your choosing (like a commit or pull request).
  • Rules-based logic 🔎: You know the end goal, but there’s more than one way to achieve it. For example, you know some patterns of SQL injections, but it’s time consuming to manually scan for them in your code. A tool like CodeQL uses a system of rules to sort through your code and find those patterns, so you don’t have to do it by hand.
  • Machine learning 🧠: You know the end goal, but the number of ways to achieve it scales exponentially. For example, you want to stay on top of security vulnerabilities, but the list of SQL injections continues to grow. A coding tool that uses a machine learning (ML) model, like CodeQL, is trained to detect not only known injections, but also patterns similar to those injections in data it hasn’t seen before. This can help you increase recognition of confirmed vulnerabilities and predict new ones.
  • Generative AI 🌐: You have big coding dreams, and want the freedom to bring them to life. Generative AI coding tools leverage ML to generate novel answers and predict coding sequences. A tool like GitHub Copilot can reduce the number of times you switch out of your IDE to look up boilerplate code, and it can help you brainstorm coding solutions. By shifting your role from rote writing to strategic decision making, generative AI can help you reflect on your code at a higher, more abstract level, so you can focus more on what you want to build and spend less time worrying about how.

How are generative AI coding tools designed and built?

Building a generative AI coding tool requires training AI models on large amounts of code across programming languages via deep learning. (Deep learning is a way to train computers to process data like we do—by recognizing patterns, making connections, and drawing inferences with limited guidance.)

To emulate the way humans learn patterns, these AI models use vast networks of nodes, which process and weigh input data, and are designed to function like neurons. Once trained on large amounts of data and able to produce useful code, they’re built into tools and applications. The models can then be plugged into coding editors and IDEs where they respond to natural language prompts or code to suggest new code, functions, and phrases.

Before we talk about how generative AI coding tools are made, let’s define what they are first. It starts with LLMs, or large language models, which are sets of algorithms trained on large amounts of code and human language. Like we mentioned above, they can predict coding sequences and generate novel content using existing code or natural language prompts.

Today’s state-of-the-art LLMs are transformers. That means they use something called an attention mechanism to make flexible connections between different tokens in a user’s input and the output that the model has already generated. This allows them to provide responses that are more contextually relevant than previous AI models because they’re good at connecting the dots and big-picture thinking.
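
For readers who want to see the core operation itself, below is a compact, single-head sketch of scaled dot-product attention written with NumPy (a library not mentioned in the post). It omits the learned projection matrices, multiple heads, and everything else a production transformer adds, but it shows how each token’s output becomes a weighted blend of the tokens it attends to:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Weigh each value vector by how relevant its key is to each query."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                           # similarity between queries and keys
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over each row
        return weights @ V                                        # context-aware blend of the values

    # Three tokens, each represented as a 4-dimensional vector (self-attention: Q = K = V).
    tokens = np.random.rand(3, 4)
    print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (3, 4)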

Here’s an example of how a transformer works. Let’s say you encounter the word log in your code. The transformer node at that place would use the attention mechanism to contextually predict what kind of log would come next in the sequence.

Let’s say, in the example below, you input the statement from math import log. A generative AI model would then infer you mean a logarithmic function.

And if you add the prompt from logging import log, it would infer that you’re using a logging function.
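
Rendered as plain code (the original post illustrated these prompts with screenshots), the two contexts might look like the sketch below; the completions described in the comments are hypothetical, not guaranteed model output:

    # Context 1: the import suggests a math function.
    from math import log
    print(log(100, 10))                     # the model would likely complete with a logarithm call

    # Context 2: the same name, but the import suggests logging.
    import logging
    from logging import log
    log(logging.WARNING, "disk space low")  # here it would likely complete with a logging call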

Though sometimes a log is just a log.

LLMs can be built using frameworks besides transformers. But LLMs using frameworks, like a recurrent neural network or long short-term memory, struggle with processing long sentences and paragraphs. They also typically require training on labeled data (making training a labor-intensive process). This limits the complexity and relevance of their outputs, and the data they can learn from.

Transformer LLMs, on the other hand, can train themselves on unlabeled data. Once they’re given basic learning objectives, LLMs take a part of the new input data and use it to practice their learning goals. Once they’ve achieved these goals on that portion of the input, they apply what they’ve learned to understand the rest of the input. This self-supervised learning process is what allows transformer LLMs to analyze massive amounts of unlabeled data—and the larger the dataset an LLM is trained on, the more they scale by processing that data.

Why should developers care about transformers and LLMs?

LLMs like OpenAI’s GPT-3, GPT-4, and Codex models are trained on an enormous amount of natural language data and publicly available source code. This is part of the reason why tools like ChatGPT and GitHub Copilot, which are built on these models, can produce contextually accurate outputs.

Here’s how GitHub Copilot produces coding suggestions:

  • All of the code you’ve written so far, or the code that comes before the cursor in an IDE, is fed to a series of algorithms that decide what parts of the code will be processed by GitHub Copilot.
  • Since it’s powered by a transformer-based LLM, GitHub Copilot applies the patterns it has abstracted from its training data to your input code.
  • The result: contextually relevant, original coding suggestions. GitHub Copilot will even filter out known security vulnerabilities, vulnerable code patterns, and code that matches other projects.

Keep in mind: creating new content such as text, code, and images is at the heart of generative AI. LLMs are adept at abstracting patterns from their training data, applying those patterns to existing language, and then producing language or a line of code that follows those patterns. Given the sheer scale of LLMs, they might generate a language or code sequence that doesn’t even exist yet. Just as you would review a colleague’s code, you should assess and validate AI-generated code, too.

Why context matters for AI coding tools

Developing good prompt crafting techniques is important because input code passes through something called a context window, which is present in all transformer-based LLMs. The context window represents the capacity of data an LLM can process. Though it can’t process an infinite amount of data, it can grow larger. Right now, the Codex model has a context window that allows it to process a couple of hundred lines of code, which has already advanced and accelerated coding tasks like code completion and code change summarization.

Developers use details from pull requests, a folder in a project, open issues—and the list goes on—to contextualize their code. So, when it comes to a coding tool with a limited context window, the challenge is to figure out what data, in addition to code, will lead to the best suggestions.

The order of the data also impacts a model’s contextual understanding. Recently, GitHub made updates to its pair programmer so that it considers not only the code immediately before the cursor, but also some of the code after the cursor. The paradigm—which is called Fill-In-the-Middle (FIM)—leaves a gap in the middle of the code for GitHub Copilot to fill, providing the tool with more context about the developer’s intended code and how it should align with the rest of the program. This helps produce higher quality code suggestions without any added latency.
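
Conceptually, a Fill-In-the-Middle request hands the model the code on both sides of the cursor and asks it to produce the span in between. The toy sketch below is not Copilot’s actual prompt format; the sentinel tokens and request shape real models use are model-specific and omitted here:

    # Code before the cursor (prefix) and after the cursor (suffix).
    prefix = "def celsius_to_fahrenheit(celsius):\n"
    suffix = "\n    return fahrenheit\n"

    # A FIM-style request gives the model both pieces and asks it to fill the gap.
    fim_request = {"prefix": prefix, "suffix": suffix}

    # A plausible (hypothetical) completion for the gap:
    middle = "    fahrenheit = celsius * 9 / 5 + 32"

    print(prefix + middle + suffix)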

Visuals can also contextualize code. Multimodal LLMs (MMLLMs) scale transformer LLMs so they process images and videos, as well as text. OpenAI recently released its new GPT-4 model—and Microsoft revealed its own MMLLM called Kosmos-1. These models are designed to respond to natural language and images, like alternating text and images, image-caption pairs, and text data.

GitHub’s senior developer advocate Christina Warren shares the latest on GPT-4 and the creative potential it holds for developers:

Our R&D team at GitHub Next has been working to move AI past the editor with GitHub Copilot X. With this new vision for the future of AI-powered software development, we’re not only adopting OpenAI’s new GPT-4 model, but also introducing chat and voice, and bringing GitHub Copilot to pull requests, the command line, and docs. See how we’re investigating the future of AI-powered software development >

How developers are using generative AI coding tools

The field of generative AI is filled with experiments and explorations to uncover the technology’s full capabilities—and how they can enable effective developer workflows. Generative AI tools are already changing how developers write code and build software, from improving productivity to helping developers focus on bigger problems.

While generative AI applications in software development are still being actively defined, today, developers are using generative AI coding tools to:

  • Get a head start on complex code translation tasks. A study presented at the 2021 International Conference on Intelligent User Interfaces found that generative AI provided developers with a skeletal framework to translate legacy source code into Python. Even if the suggestions weren’t always correct, developers found it easier to assess and fix those mistakes than manually translate the source code from scratch. They also noted that this process of reviewing and correcting was similar to what they already do when working with code produced by their colleagues.

With GitHub Copilot Labs, developers can use the companion VS Code extension (that’s separate from but dependent on the GitHub Copilot extension) to translate code into different programming languages. Watch how GitHub Developer Advocate, Michelle Mannering, uses GitHub Copilot Labs to translate her Python code into Ruby in just a few steps.

Our own research supports these findings, too. As we mentioned earlier, we found that developers who used GitHub Copilot coded up to 55% faster than those who didn’t. But productivity gains went beyond speed, with 74% of developers reporting that they felt less frustrated when coding and were able to focus on more satisfying work.

  • Tackle new problems and get creative. The PACMPL study also found that developers used GitHub Copilot to find creative solutions when they were unsure of how to move forward. These developers searched for next possible steps and relied on the generative AI coding tool to assist with unfamiliar syntax, look up the right API, or discover the correct algorithm.

I was one of the developers who wrote GitHub Copilot, but prior to that work, I had never written a single line of TypeScript. That wasn’t a problem because I used the first prototype of GitHub Copilot to learn the language and, eventually, help ship the world’s first at-scale generative AI coding tool.

– Albert Ziegler, Principal Machine Learning Engineer // GitHub
  • Find answers without leaving their IDEs. Some participants in the PACMPL study also treated GitHub Copilot’s multi-suggestion pane like StackOverflow. Since they were able to describe their goals in natural language, participants could directly prompt GitHub Copilot to generate ideas for implementing their goals, and press Ctrl/Cmd + Enter to see a list of 10 suggestions. Even though this kind of exploration didn’t lead to deep knowledge, it helped one developer to effectively use an unfamiliar API.

A 2023 study published by GitHub in the Association for Computing Machinery’s Queue magazine also found that generative AI coding tools save developers the effort of searching for answers online. This provides them with more straightforward answers, reduces context switching, and conserves mental energy.

Part of GitHub’s new vision for the future of AI-powered software development is a ChatGPT-like experience directly in your editor. Watch how Martin Woodward, GitHub’s Vice President of Developer Relations, uses GitHub Copilot Chat to find and fix bugs in his code.

  • Build better test coverage. Some generative AI coding tools excel in pattern recognition and completion. Developers are using these tools to build unit and functional tests—and even security tests—via natural language prompts. Some tools also offer security vulnerability filtering, so a developer will be alerted if they unknowingly introduce a vulnerability in their code.

Want to see some examples in action? Check out how Rizel Scarlett, a developer advocate at GitHub, uses GitHub Copilot to develop tests for her codebase:

  • Discover tricks and solutions they didn’t know they needed. Scarlett also wrote about eight unexpected ways developers can use GitHub Copilot—from prompting it to create a dictionary of two-letter ISO country codes and their contributing country name, to helping developers exit Vim, an editor with a sometimes finicky closing process. Want to learn more? Check out the full guide >

The bottom line

Generative AI provides humans with a new mode of interaction—and it doesn’t just alleviate the tedious parts of software development. It also inspires developers to be more creative, feel empowered to tackle big problems, and model large, complex solutions in ways they couldn’t before. From increasing productivity and offering alternative solutions, to helping you build new skills—like learning a new language or framework, or even writing clear comments and documentation—there are so many reasons to be excited about the next wave of software development. This is only the beginning.

Additional resources

Experiment: The hidden costs of waiting on slow build times

Post Syndicated from Natalie Somersall original https://github.blog/2022-12-08-experiment-the-hidden-costs-of-waiting-on-slow-build-times/

The cost of hardware is one of the most common objections to providing more powerful computing resources to development teams—and that’s regardless of whether you’re talking about physical hardware in racks, managed cloud providers, or a software-as-a-service based (SaaS) compute resource. Paying for compute resources is an easy cost to “feel” as a business, especially if it’s a recurring operating expense for a managed cloud provider or SaaS solution.

When you ask a developer whether they’d prefer more or less powerful hardware, the answer is almost always the same: they want more powerful hardware. That’s because more powerful hardware means less time waiting on builds—and that means more time to build the next feature or fix a bug.

But even if the upfront cost is higher for higher-powered hardware, what’s the actual cost when you consider the impact on developer productivity?

To find out, I set up an experiment using GitHub’s new, larger hosted runners, which offer powerful cloud-based compute resources, to execute a large build at each compute tier from 2 cores to 64 cores. I wanted to see what the cost of each build time would be, and then compare that with the average hourly cost of a United States-based developer to figure out the actual operational expense for a business.

The results might surprise you.

Testing build times vs. cost by core size on compute resources

For my experiment, I used my own personal project where I compile the Linux kernel (seriously!) for Fedora 35 and Fedora 36. For background, I need a non-standard patch to play video games on my personal desktop without having to deal with dual booting.

Beyond being a fun project, it’s also a perfect case study for this experiment. As a software build, it takes a long time to run—and it’s a great proxy for more intensive software builds developers often navigate at work.

Now comes the fun part: our experiment. Like I said above, I’m going to initiate builds of this project at each compute tier from 2 cores to 64 cores, and then determine how long each build takes and its cost on GitHub’s larger runners. Last but not least: I’ll compare how much time we save during the build cycle and square that with how much more time developers would have to be productive to find the true business cost.

The logic here is that developers could either be waiting the entire time a build runs or end up context-switching to work on something else while a build runs. Both of these impact overall productivity (more on this below).

To simplify my calculations, I took the average runtimes of two builds per compute tier.

Pro tip: You can find my full spreadsheet for these calculations here if you want to copy it and play with the numbers yourself using other costs, times for builds, developer salaries, etc.

How much slow build times cost companies

In scenario number one of our experiment, we’ll assume that developers may just wait for a build to run and do nothing else during that time frame. That’s not a great outcome, but it happens.

So, what does this cost a business? According to Stack Overflow’s 2022 Developer Survey, the average cost of a developer in the United States is approximately $150,000 per year, including fringe benefits, taxes, and so on. Assuming roughly 2,000 working hours per year, that breaks down to around $75 (USD) an hour. In short, if a developer is waiting on a build to run for one hour and doing nothing in that timeframe, the business is still spending $75 on average for that developer’s time, and potentially losing out on time that developer could be focusing on building more code.

Now for the fun part: calculating the runtimes and cost to execute a build using each tier of compute power, plus the cost of a developer’s time spent waiting on the build. (And remember, I ran each of these twice at each tier and then averaged the results together.)

You end up with something like this:

Compute power | Fedora 35 build | Fedora 36 build | Average time (minutes) | Cost/minute for compute | Total cost of 1 build | Developer cost (1 dev) | Developer cost (5 devs)
2 core | 5:24:27 | 4:54:02 | 310 | $0.008 | $2.48 | $389.98 | $1,939.98
4 core | 2:46:33 | 2:57:47 | 173 | $0.016 | $2.77 | $219.02 | $1,084.02
8 core | 1:32:13 | 1:30:41 | 92 | $0.032 | $2.94 | $117.94 | $577.94
16 core | 0:54:31 | 0:54:14 | 55 | $0.064 | $3.52 | $72.27 | $347.27
32 core | 0:36:21 | 0:32:21 | 35 | $0.128 | $4.48 | $48.23 | $223.23
64 core | 0:29:25 | 0:24:24 | 27 | $0.256 | $6.91 | $40.66 | $175.66
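
The arithmetic behind the table is simple: the compute cost is the per-minute runner price times the build length, and the developer cost is the $75 hourly rate applied to the full build time for each idle developer. A small sketch that reproduces the 2-core row:

    DEV_HOURLY_RATE = 75.0  # average U.S. developer cost per hour (see above)

    def waiting_cost(minutes, cost_per_minute, developers=1):
        """Cost of one build when developers sit idle for the whole run."""
        compute = minutes * cost_per_minute
        idle_developers = developers * DEV_HOURLY_RATE * minutes / 60
        return compute + idle_developers

    # 2-core runner: 310-minute average build at $0.008/minute.
    print(round(waiting_cost(310, 0.008, developers=1), 2))  # 389.98
    print(round(waiting_cost(310, 0.008, developers=5), 2))  # 1939.98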

You can immediately see how much faster each build completes on more powerful hardware—and that’s hardly surprising. But it’s striking how much money, on average, a business would be paying their developers in the time it takes for a build to run.

When you plot this out, you end up with a pretty compelling case for spending more money on stronger hardware.

A chart showing the cost of a build on servers of varying CPU power.

The bottom line: The cost of hardware is much, much less than the total cost for developers, and giving your engineering teams more CPU power means they have more time to develop software instead of waiting on builds to complete. And the bigger the team you have in a given organization, the more upside you have to invest in more capable compute resources.

How much context switching costs companies

Now let’s change the scenario in our experiment: Instead of assuming that developers are sitting idly while waiting for a build to finish, let’s consider they instead start working on another task while a build runs.

This is a classic example of context switching, and it comes with a cost, too. Research has found that context switching is both distracting and an impediment to focused and productive work. In fact, Gloria Mark, a professor of informatics at the University of California, Irvine, has found it takes about 23 minutes for someone to get back to their original task after context switching—and that isn’t even specific to development work, which often entails deeply involved work.

Based on my own experience, switching from one focused task to another takes at least an hour, so that’s what I used to run the numbers against. Now, let’s break down the data again:

Compute power | Minutes | Cost of 1 build | Partial developer cost (1 dev) | Partial developer cost (5 devs)
2 core | 310 | $2.48 | $77.48 | $377.48
4 core | 173 | $2.77 | $77.77 | $377.77
8 core | 92 | $2.94 | $77.94 | $377.94
16 core | 55 | $3.52 | $78.52 | $378.52
32 core | 35 | $4.48 | $79.48 | $379.48
64 core | 27 | $6.91 | $81.91 | $381.91
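
The same sketch extends to context switching: the compute cost is unchanged, but the developer cost becomes a fixed recovery window instead of the full build time. Using the one-hour recovery assumption above for the 64-core row:

    DEV_HOURLY_RATE = 75.0

    def context_switch_cost(minutes, cost_per_minute, recovery_minutes=60, developers=1):
        """Cost of one build when developers switch tasks and later pay a fixed recovery penalty."""
        compute = minutes * cost_per_minute
        recovery = developers * DEV_HOURLY_RATE * recovery_minutes / 60
        return compute + recovery

    # 64-core runner: 27-minute build at $0.256/minute, one developer, one-hour recovery.
    print(round(context_switch_cost(27, 0.256), 2))  # 81.91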

Here, the numbers tell a different story: if you’re going to switch tasks anyway, the speed of build runs doesn’t significantly matter. Labor is much, much more expensive than compute resources. And that means spending a few more dollars to speed up the build is inconsequential in the long run.

Of course, this assumes it will take an hour for developers to get back on track after context switching. But according to the research we cited above, some people can get back on track in 23 minutes (and, additional research from Cornell found that it sometimes takes as little as 10 minutes).

To account for this, let’s try shortening the time frames to 30 minutes and 15 minutes:

Compute power | Minutes | Cost of 1 build | Partial dev cost (1 dev, 30 mins) | Partial dev cost (5 devs, 30 mins) | Partial dev cost (1 dev, 15 mins) | Partial dev cost (5 devs, 15 mins)
2 core | 310 | $2.48 | $39.98 | $189.98 | $21.23 | $96.23
4 core | 173 | $2.77 | $40.27 | $190.27 | $21.52 | $96.52
8 core | 92 | $2.94 | $40.44 | $190.44 | $21.69 | $96.69
16 core | 55 | $3.52 | $41.02 | $191.02 | $22.27 | $97.27
32 core | 35 | $4.48 | $41.98 | $191.98 | $23.23 | $98.23
64 core | 27 | $6.91 | $44.41 | $194.41 | $25.66 | $100.66

And when you visualize this data on a graph, the cost for a single developer waiting on a build or switching tasks looks like this:

A chart showing how much it costs for developers to wait for a build to execute.

When you assume the average hourly rate of a developer is $75 (USD), the graph above shows that it almost always makes sense to pay more for more compute power so your developers aren’t left waiting or context switching. Even the most expensive compute option—$15 an hour for 64 cores and 256GB of RAM—only accounts for a fifth of the hourly cost of a single developer’s time. As developer salaries increase, the cost of hardware decreases, or the time the job takes to run decreases—and this inverse ratio bolsters the case for buying better equipment.

That’s something to consider.

The bottom line

It’s cheaper—and less frustrating for your developers—to pay more for better hardware to keep your team on track.

In this case, spending an extra $4-5 on build compute saves about $40 per build for an individual developer, or a little over $200 per build for a team of five, while sparing developers the frustration of switching tasks, which carries a productivity cost of about an hour. That’s not nothing. Of course, spending that extra $4-5 at scale can quickly compound, but so can the cost of sunk productivity.

Even though we used GitHub’s larger runners as an example here, these findings are applicable to any type of hardware—whether self-hosted or in the cloud. So remember: The upfront cost for more CPU power pays off over time. And your developers will thank you (trust us).

Want to try our new high-performance GitHub-hosted runners? Sign up for the beta today.