How GitHub uses merge queue to ship hundreds of changes every day

Post Syndicated from Will Smythe original https://github.blog/2024-03-06-how-github-uses-merge-queue-to-ship-hundreds-of-changes-every-day/


At GitHub, we use merge queue to merge hundreds of pull requests every day. Developing this feature and rolling it out internally did not happen overnight, but the journey was worth it, both because of how it has transformed the way we deploy changes to production at scale and because of how it has helped improve velocity for our customers. Let’s take a look at how this feature was developed and how you can use it, too.

Merge queue is generally available and is also now available on GitHub Enterprise Server! Find out more.

Why we needed merge queue

In 2020, engineers from across GitHub came together with a goal: improve the process for deploying and merging pull requests across the GitHub service, and specifically within our largest monorepo. This process was becoming overly complex to manage, required special GitHub-only logic in the codebase, and required developers to learn external tools, which meant the engineers developing for GitHub weren’t actually using GitHub in the same way as our customers.

To understand how we got to this point in 2020, it’s important to look even further back.

By 2016, nearly 1,000 pull requests were merging into our large monorepo every month. GitHub was growing both in the number of services deployed and in the number of changes shipping to those services. And because we deploy changes prior to merging them, we needed a more efficient way to group and deploy multiple pull requests at the same time. Our solution at this time was trains. A train was a special pull request that grouped together multiple pull requests (passengers) that would be tested, deployed, and eventually merged at the same time. A user (called a conductor) was responsible for handling most aspects of the process, such as starting a deployment of the train and handling conflicts that arose. Pipelines were added to help manage the rollout path. Both these systems (trains and pipelines) were only used on our largest monorepo and were implemented in our internal deployment system.

Trains helped improve velocity at first, but over time started to negatively impact developer satisfaction and increase the time to land a pull request. Our internal Developer Experience (DX) team regularly polls our developers to learn about pain points to help inform where to invest in improvements. These surveys consistently rated deployment as the most painful part of the developer’s daily experience, highlighting the complexity and friction involved with building and shepherding trains in particular. This qualitative data was backed by our quantitative metrics. These showed a steady increase in the time it took from pull request to shipped code.

Trains could also grow large, containing the changes of 15 pull requests. Large trains frequently “derailed” due to a deployment issue, conflicts, or the need for an engineer to remove their change. On painful occasions, developers could wait 8+ hours after joining a train for it to ship, only for it to be removed due to a conflict between two pull requests in the train.

Trains were also not used on every repository, meaning the developer experience varied significantly between different services. This led to confusion when engineers moved between services or contributed to services they didn’t own, which is fairly frequent due to our inner source model.

In short, our process was significantly impacting the productivity of our engineering teams—both in our large monorepo and service repositories.

Building a better solution for us and eventually for customers

By 2020, it was clear that our internal tools and processes for deploying and merging across our repositories were limiting our ability to land pull requests as often as we needed. Beyond just improving velocity, it became clear that our new solution needed to:

  1. Improve the developer experience of shipping. Engineers wanted to express two simple intents: “I want to ship this change” and “I want to shift to other work;” the system should handle the rest.
  2. Avoid having problematic pull requests impact everyone. Those causing conflicts or build failures should not impact all other pull requests waiting to merge. The throughput of the overall system should be favored over fairness to an individual pull request.
  3. Be consistent and as automated as possible across our services and repositories. Manual toil by engineers should be removed wherever possible.

The merge queue project began as part of an overall effort within GitHub to improve availability and remove friction that was preventing developers from shipping at the frequency and level of quality that was needed. Initially, it was only focused on providing a solution for us, but was built with the expectation that it would eventually be made available to customers.

By mid-2021, a few small, internal repositories started testing merge queue, but moving our large monorepo would not happen until the next year for a few reasons.

For one, we could not stop deploying for days or weeks in order to swap systems. At every stage of the project we had to have a working system to ship changes. At a maximum, we could block deployments for an hour or so to run a test or transition. GitHub is remote-first and we have engineers throughout the world, so there are quieter times but never a free pass to take the system offline.

Changing the way thousands of developers deploy and merge changes also requires lots of communication to ensure teams are able to maintain velocity throughout the transition. Training 1,000 engineers on a new system overnight is difficult, to say the least.

By rolling out changes to the process in phases (and sometimes testing and rolling back changes early in the morning before most developers started working) we were able to slowly transition our large monorepo and all of our repositories responsible for production services onto merge queue by 2023.

How we use merge queue today

Merge queue has become the single entry point for shipping code changes at GitHub. It was designed and tested at scale: before merge queue was made generally available, it had already shipped 30,000+ pull requests, with their associated 4.5 million CI runs, for GitHub.com.

For GitHub and our “deploy-then-merge” process, merge queue dynamically forms groups of pull requests that are candidates for deployment, kicks off builds and tests via GitHub Actions, and ensures our main branch is never updated to a failing commit by enforcing branch protection rules. Pull requests in the queue that conflict with one another are automatically detected and removed, and the queue automatically re-forms groups as needed.

Because merge queue is integrated into the pull request workflow (and does not require knowledge of special ChatOps commands, or use of labels or special syntax in comments to manage state), our developer experience is also greatly improved. Developers can add their pull request to the queue and, if they spot an issue with their change, leave the queue with a single click.

We can now ship larger groups without the pitfalls and friction of trains. Trains (our old system) prevented us from deploying more than 15 changes at once, but we can now safely deploy 30 or more if needed.

Every month, over 500 engineers merge 2,500 pull requests into our large monorepo with merge queue, more than double the volume from a few years ago. The average wait time to ship a change has also been reduced by 33%. And it’s not just numbers that have improved. On one of our periodic developer satisfaction surveys, an engineer called merge queue “one of the best quality-of-life improvements to shipping changes that I’ve seen at GitHub!” It’s not a stretch to say that merge queue has transformed the way GitHub deploys changes to production at scale.

How to get started

Merge queue is available to public repositories on GitHub.com owned by organizations and to all repositories on GitHub Enterprise (Cloud or Server).

To learn more about merge queue and how it can help velocity and developer satisfaction on your busiest repositories, see our blog post, GitHub merge queue is generally available.
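If a merge queue is already required on your repository’s base branch, queueing a change takes a single click in the pull request view, and recent versions of the GitHub CLI document an equivalent command. A hedged sketch:

    # From the pull request's branch: when the base branch requires a merge queue,
    # this queues the pull request instead of merging it directly
    gh pr merge --auto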

Interested in joining GitHub? Check out our open positions or learn more about our platform.

AWS launches AWS Wickr ATAK Plugin

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/aws-launches-aws-wickr-atak-plugin/

AWS is excited to announce the launch of the AWS Wickr ATAK Plugin, which makes it easier for ATAK users to maintain secure communications.

The Android Team Awareness Kit (ATAK)—also known as Android Tactical Assault Kit (ATAK) for military use—is a smartphone geospatial infrastructure and situational awareness application. It provides mapping, messaging, and geofencing capabilities to enable safe collaboration over geography.

ATAK users, referred to as operators, can view the location of other operators and potential hazards—a major advantage over relying on hand-held radio transmissions. While ATAK was initially designed for use in combat zones, the technology has been adapted to fit the missions of local, state, and federal agencies.

ATAK is currently in use by over 40,000 US Department of Defense (DoD) users—including the Air Force, Army, Special Operations, and National Guard—along with the Department of Justice (DOJ), the Department of Homeland Security (DHS), and 32,000 nonfederal users.

Using AWS Wickr with ATAK

AWS Wickr is a secure collaboration service that provides enterprises and government agencies with advanced security and administrative controls to help them meet security and compliance requirements. The AWS Wickr service is now in preview.

With AWS Wickr, communication mechanisms such as one-to-one and group messaging, audio and video calling, screen sharing, and file sharing are protected with 256-bit end-to-end encryption (E2EE). Encryption takes place locally, on the endpoint. Every message, call, and file is encrypted with a new random key, and no one but the intended recipients can decrypt them. Flexible administrative features enable organizations to deploy at scale, and facilitate information governance.

AWS Wickr supports many agencies that use ATAK. However, until now, ATAK operators have had to leave the ATAK application in order to use AWS Wickr, which creates operational risk.

AWS Wickr ATAK Plugin

AWS Wickr has developed a plugin that enhances ATAK with secure communications features. ATAK operators are provided with a Wickr Enterprise or Wickr Pro account, so they can use AWS Wickr within ATAK for secure messaging, calling, and file transfer. This helps reduce interruptions and the configuration complexity associated with ATAK chat features.

Use cases

The AWS Wickr ATAK Plugin has multiple use cases.

Military

The military uses ATAK for blue force tracking to locate team members, for red force tracking to locate enemies, for terrain and weather analysis, and for visually communicating movements to friendly forces.

The AWS Wickr ATAK Plugin enhances the ability of military personnel to maintain the situational awareness ATAK provides, while quickly receiving and reacting to Wickr communications. Ephemeral messaging options allow unit leaders to send mission plans and GPS points of interest, and to set burn-on-read and expiration timers. Information can be deleted from the device while being retained on the AWS Wickr service to help meet compliance requirements and facilitate the creation of after-action reports.

Law enforcement

ATAK is a powerful tool for team tracking and mission planning that promotes a safer and better response to critical law enforcement and public-safety events.

The AWS Wickr ATAK Plugin adds to the capabilities of ATAK by supporting secure communications between tactical, negotiation, and investigative teams.

First responders

ATAK aids in search-and-rescue and multi-jurisdictional natural disaster responses, such as hurricane relief efforts.

The AWS Wickr ATAK Plugin provides secure, uninterrupted communication between all levels of first responders to help them get oriented quickly, and support complex coordination needs.

Getting started

AWS customers can sign up to use AWS Wickr at no cost during the preview period. For more information about the AWS Wickr ATAK Plugin, email [email protected], and visit the AWS Wickr web page.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and has a strong focus on privacy risk management. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Randy Brumfield

Randy leads technology business for new initiatives and the Cloud Support Engineering team at Wickr, an AWS Company. Prior to Wickr (and AWS), Randy spent close to two and a half decades in Silicon Valley across several start-ups, networking companies, and system integrators in various corporate development, product management, and operations roles. Randy currently resides in San Jose, California.

Hackathons with AWS Cloud9: Collaboration simplified for your next big idea

Post Syndicated from Mahesh Biradar original https://aws.amazon.com/blogs/devops/hackathons-with-aws-cloud9-collaboration-simplified-for-your-next-big-idea/

Many organizations host ideation events to innovate and prototype new ideas faster. These events usually run for a short duration and involve collaboration between members of participating teams. By the end of the event, teams are expected to demonstrate a working prototype, after which the winner or the next steps are determined. It’s therefore important to build a working proof of concept quickly, and to do that, teams need to be able to share code and review each other’s work in real time.

In this post, you see how AWS Cloud9 can help teams collaborate, pair program, and track each other’s inputs in real time for a successful hackathon experience.

AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code from any machine with just a browser. A shared environment is an AWS Cloud9 development environment that multiple users have been invited to participate in; invited members can edit or view its shared resources.
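For example, you can create an environment from the AWS CLI before sharing it. The following is a minimal sketch (the name, instance type, and image are illustrative; check which values your AWS CLI version supports):

    # Create a Cloud9 EC2 environment; the command returns the new environmentId
    aws cloud9 create-environment-ec2 \
      --name hackathon-env \
      --instance-type t3.small \
      --image-id amazonlinux-2023-x86_64 \
      --description "Shared environment for the hackathon team"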

Pair programming and mob programming are development approaches in which two or more developers collaborate simultaneously to design, code, or test solutions. At the core is the premise that two or more people collaborate on the same code at the same time, which allows for real-time code review and can result in higher quality software.

Hackathons are one of the best ways to collaboratively solve problems, often with code. Cross-functional two-pizza teams compete with limited resources under time constraints to solve a challenging business problem. Several companies have adopted the concept of hackathons to foster a culture of innovation, providing a platform for developers to showcase their creativity and acquire new skills. Teams are either provided a roster of ideas to choose from or come up with their own new idea.

Solution overview

In this post, you create an AWS Cloud9 environment shared with three AWS Identity and Access Management (IAM) users (the hackathon team). You also see how this team can code together to develop a sample serverless application using an AWS Serverless Application Model (AWS SAM) template.

 

The following diagram illustrates the deployment architecture.

Figure 1: Solution Overview

Prerequisites

To complete the steps in this post, you need an AWS account with administrator privileges.

Set up the environment

To start setting up your environment, complete the following steps:

    1. Create an AWS Cloud9 environment in your AWS account.
    2. Create and attach an instance profile to AWS Cloud9 to call AWS services from the environment. For more information, see Create and store permanent access credentials in an environment.
    3. On the AWS Cloud9 console, select the environment you just created and choose View details.

      Figure 2: Cloud9 View details

    4. Note the environment ID from the Environment ARN value; we use this ID in a later step.

      Figure 3: Environment ARN

    5. In your AWS Cloud9 terminal, create the file usersetup.sh with the following contents:
      #USAGE: 
      #STEP 1: Execute following command within Cloud9 terminal to retrieve environment id
      # aws cloud9 list-environments
      #STEP 2: Execute following command by providing appropriate parameters: -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3 
      # sh usersetup.sh -e 877f86c3bb80418aabc9956580436e9a -u User1,User2
      function usage() {
        echo "USAGE: sh usersetup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3"
      }
      while getopts ":e:u:" opt; do
        case $opt in
          e)  if ! aws cloud9 describe-environment-status --environment-id "$OPTARG" >/dev/null 2>&1; then
                echo "Please provide valid cloud9 environmentid."
                usage
                exit 1
              fi
              environmentId="$OPTARG" ;;
          u)  if [ "$OPTARG" == "" ]; then
                echo "Please provide comma separated list of usernames."
                usage
                exit 1
              fi
              users="$OPTARG" ;;
          \?) echo "Incorrect arguments."
              usage
              exit 1;;
        esac
      done
      if [ "$OPTIND" -lt 5 ]; then
        echo "Missing required arguments."
        usage
        exit 1
      fi
      IFS=',' read -ra userNames <<< "$users"
      groupName='HackathonUsers'
      groupPolicy='arn:aws:iam::aws:policy/AdministratorAccess'
      userArns=()
      function createUsers() {
          userList=""    
          if aws iam get-group --group-name $groupName  > /dev/null 2>&1; then
            echo "$groupName group already exists."  
          else
            if aws iam create-group --group-name $groupName >/dev/null 2>&1; then
              echo "Created user group - $groupName."  
            else
              echo "Error creating user group - $groupName."  
              exit 1
            fi
          fi
          if aws iam attach-group-policy --policy-arn $groupPolicy --group-name $groupName; then
            echo "Attached group policy."  
          else
            echo "Error attaching group policy to - $groupName."  
            exit 1
          fi
          
          for userName in "${userNames[@]}" ; do 
              
              randomPwd=`aws secretsmanager get-random-password \
              --require-each-included-type \
              --password-length 20 \
              --no-include-space \
              --output text`
          
              userList="$userList"$'\n'"Username: $userName, Password: $randomPwd"
              
              userArn=`aws iam create-user \
              --user-name $userName \
              --query 'User.Arn' | sed -e 's/\/.*\///g' | tr -d '"'`
              
              userArns+=( $userArn )
            
              aws iam wait user-exists \
              --user-name $userName
              
              echo "Successfully created user $userName."
              
              aws iam create-login-profile \
              --user-name $userName \
              --password $randomPwd \
              --password-reset-required >/dev/null 2>&1
              
              aws iam add-user-to-group \
              --user-name $userName \
              --group-name $groupName
          done
          echo "Waiting for users profile setup..."
          sleep 8
          
          for arn in "${userArns[@]}" ; do 
            aws cloud9 create-environment-membership \
              --environment-id $environmentId \
              --user-arn $arn \
              --permissions read-write >/dev/null 2>&1
          done
          echo "Following users have been created and added to $groupName group."
          echo "$userList"
      }
      createUsers
      
    6. Run the following command by replacing the following parameters:
        1. ENVIRONMENTID – The environment ID you saved earlier
        2. USERNAME1, USERNAME2… – A comma-separated list of users. In this example, we use three users.

      sh usersetup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3
      The script creates the following resources:

        • The IAM users that you specified
        • The HackathonUsers IAM user group, with the AdministratorAccess policy attached and the new users added as members

      Each user is assigned a random password that must be changed before their first sign-in. You can share the passwords with your team from the AWS Cloud9 terminal output.
    7. Instruct your team to sign in to the AWS Cloud9 console and open the shared environment by choosing Shared with you.

      Figure 4: Shared environments

    8. Run the create-repository command, specifying a unique name, optional description, and optional tags:
      aws codecommit create-repository --repository-name hackathon-repo --repository-description "Hackathon repository" --tags Team=hackathon
    9. Note the cloneUrlHttp value from the output; we use this in a later step.
      Figure 5: CodeCommit repo URL

      The environment is now ready for the hackathon team to start coding.

    10. Instruct your team members to open the shared environment from the AWS Cloud9 dashboard.
    11. For demo purposes, you can quickly create a sample Python-based Hello World application using the AWS SAM CLI, as shown in the sketch below.
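      The exact flags vary by AWS SAM CLI version; this is a hedged sketch (the runtime shown is an assumption; pick one your SAM CLI supports):

      # Scaffold a Hello World app into ./hackathon-repo, matching the repository created earlier
      sam init --name hackathon-repo --runtime python3.12 --app-template hello-world --package-type Zip --no-interactive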
    12. Run the following commands to commit the files to the local repo:

      cd hackathon-repo
      git config --global init.defaultBranch main
      git init
      git add .
      git commit -m "Initial commit"
    13. Run the following command to push the local repo to AWS CodeCommit by replacing CLONE_URL_HTTP with the cloneUrlHttp value you noted earlier:
      git push <CLONE_URL_HTTP> --all
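
      If the push fails with an authentication error, you may need to point Git at the CodeCommit credential helper so it signs requests with your environment’s AWS credentials (AWS Cloud9 EC2 environments often have this preconfigured). A hedged sketch:

      # Let Git obtain temporary CodeCommit credentials from the AWS CLI
      git config --global credential.helper '!aws codecommit credential-helper $@'
      git config --global credential.UseHttpPath true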

For a sample collaboration scenario, watch the video Collaboration with Cloud9.

 

Clean up

The cleanup script deletes all the resources created by the setup script. Make a local copy of any files you want to save before running it.

  1. Create a file named cleanup.sh with the following content:
    #USAGE: 
    #STEP 1: Execute following command within Cloud9 terminal to retrieve environment id
    # aws cloud9 list-environments
    #STEP 2: Execute following command by providing appropriate parameters: -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3 
    # sh cleanup.sh -e 877f86c3bb80418aabc9956580436e9a -u User1,User2
    function usage() {
      echo "USAGE: sh cleanup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3"
    }
    while getopts ":e:u:" opt; do
      case $opt in
        e)  if ! aws cloud9 describe-environment-status --environment-id "$OPTARG" >/dev/null 2>&1; then
              echo "Please provide valid cloud9 environmentid."
              usage
              exit 1
            fi
            environmentId="$OPTARG" ;;
        u)  if [ "$OPTARG" == "" ]; then
              echo "Please provide comma separated list of usernames."
              usage
              exit 1
            fi
            users="$OPTARG" ;;
        \?) echo "Incorrect arguments."
            usage
            exit 1;;
      esac
    done
    if [ "$OPTIND" -lt 5 ]; then
      echo "Missing required arguments."
      usage
      exit 1
    fi
    IFS=',' read -ra userNames <<< "$users"
    groupName='HackathonUsers'
    groupPolicy='arn:aws:iam::aws:policy/AdministratorAccess'
    function cleanUp() {
        echo "Starting cleanup..."
        groupExists=false
        if aws iam get-group --group-name $groupName  > /dev/null 2>&1; then
          groupExists=true
        else
          echo "$groupName does not exist."  
        fi
        
        for userName in "${userNames[@]}" ; do 
            if ! aws iam get-user --user-name $userName >/dev/null 2>&1; then
              echo "$userName does not exist."  
            else
              userArn=$(aws iam get-user \
              --user-name $userName \
              --query 'User.Arn' | tr -d '"') 
              
              if $groupExists ; then 
                aws iam remove-user-from-group \
                --user-name $userName \
                --group-name $groupName
              fi
      
              # Remove the user's Cloud9 environment membership while the user still exists
              aws cloud9 delete-environment-membership \
              --environment-id $environmentId --user-arn $userArn

              aws iam delete-login-profile \
              --user-name $userName

              if aws iam delete-user --user-name $userName ; then
                echo "Successfully deleted $userName"
              fi
            fi
        done
        if $groupExists ; then 
          aws iam detach-group-policy \
          --group-name $groupName \
          --policy-arn $groupPolicy
      
          if aws iam delete-group --group-name $groupName ; then
            echo "Succesfully deleted $groupName user group"
          fi
        fi
        
        echo "Cleanup complete."
    }
    cleanUp
  2. Run the script by passing the same parameters you passed when setting up the script:
    sh cleanup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3
  3. Delete the CodeCommit repository by running the following commands in the root directory with the appropriate repository name:
    aws codecommit delete-repository --repository-name hackathon-repo
    rm -rf hackathon-repo
  4. You can delete the Cloud9 environment when the event is over.

 

Conclusion

In this post, you saw how to use an AWS Cloud9 IDE to collaborate as a team and code together to develop a working prototype. For organizations looking to host hackathon events, these tools can be a powerful way to deliver a rich user experience. For more information about AWS Cloud9 capabilities, see the AWS Cloud9 User Guide. If you plan on using AWS Cloud9 for ongoing collaboration, refer to the best practices for sharing environments in Working with shared environments in AWS Cloud9.

About the authors

Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale.
Guy Savoie is a Senior Solutions Architect at AWS working with SMB customers, primarily in Florida. In his role as a technical advisor, he focuses on unlocking business value through outcome based innovation.
Ramesh Chidirala is a Solutions Architect focused on SMB customers in the Central region. He is passionate about helping customers solve challenging technical problems with AWS and helping them achieve their desired business outcomes.