Using Rover in CI/CD
Integrate Rover into continuous integration and deployment workflows
You can use Rover in any CI/CD environment that uses a Rover-supported operating system (Linux, macOS, or Windows). Most commonly, this is to run schema checks with rover graph check or rover subgraph check.
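For example, a CI job typically runs one of the following commands. The graph ref my-graph@prod, the subgraph name products, and the schema path here are placeholders for your own values:
# check a proposed schema for a monolithic graph against the prod variant
rover graph check my-graph@prod --schema ./schema.graphql

# or, for a federated graph, check a single subgraph's schema
rover subgraph check my-graph@prod --name products --schema ./schema.graphql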
Rover's installation is similar to many other CLI tools, but the recommended method varies depending on which provider you're using. We've included instructions below for some of the most common CI/CD providers.
If you're using Rover with a CI/CD provider not listed here, we'd love for you to share the steps by opening an issue or pull request.
CircleCI
Linux jobs using the curl installer
Normally when installing, Rover adds the path of its executable to your $PATH. However, CircleCI doesn't preserve changes to $PATH between run steps. This means that if you install Rover and try to run it in the next step, you get a command not found: rover error.
To fix this, you can modify the $PATH and append it to $BASH_ENV. $BASH_ENV is executed at the beginning of each step, so any changes to it are maintained across steps. You can add Rover to your $PATH using $BASH_ENV like this:
echo 'export PATH=$HOME/.rover/bin:$PATH' >> $BASH_ENV
After you install Rover and modify the $BASH_ENV as shown, Rover should work as normal.
Because the rover config auth command is interactive, you need to authenticate using an environment variable in your project settings.
Full example
# Use the latest 2.1 version of CircleCI pipeline process engine. See: https://circleci.com/docs/2.0/configuration-reference
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/node:15.11.0
    steps:
      - run:
          name: Install
          command: |
            # download and install Rover
            curl -sSL https://rover.apollo.dev/nix/v0.26.2 | sh

            # This allows the PATH changes to persist to the next `run` step
            echo 'export PATH=$HOME/.rover/bin:$PATH' >> $BASH_ENV
      - checkout
      # after rover is installed, you can run it just like you would locally!
      # only run this command with the `--background` flag if you have the Apollo Studio GitHub integration enabled on your repository
      - run: rover graph check my-graph@prod --schema ./schema.graphql --background
GitHub Actions
Displaying schema check results on GitHub pull requests
If you use GitHub Actions to automatically run schema checks on every pull request (as shown below), you can install the Apollo Studio GitHub app to provide links to the results of those checks alongside your other pull request checks.
To display schema check results on pull requests correctly, you need to make sure Rover associates the schema check execution with the pull request's HEAD commit, as opposed to the merge commit that GitHub adds. To guarantee this, set the APOLLO_VCS_COMMIT environment variable in your action's configuration, like so:
env:
  APOLLO_VCS_COMMIT: ${{ github.event.pull_request.head.sha }}
Linux/macOS jobs using the curl installer
Normally when installing, Rover adds the path of its executable to your $PATH. However, GitHub Actions doesn't preserve changes to $PATH between run steps. This means that if you install Rover and try to run it in the next step, you get a command not found: rover error.
To fix this, you can append Rover's location to the $GITHUB_PATH variable. $GITHUB_PATH is similar to your system's $PATH variable, and additions to $GITHUB_PATH are available in all subsequent steps. You can modify it like this:
1echo "$HOME/.rover/bin" >> $GITHUB_PATH
Because the rover config auth command is interactive, you need to authenticate using an environment variable in your project settings. GitHub Actions uses project environments to set up secret environment variables: in your action, you choose a build.environment by name and set build.env variables using the saved secrets.
The following is a full example script, showing how to choose an apollo environment and set an APOLLO_KEY variable:
Full example
# .github/workflows/check.yml

name: Check Schema

# Controls when the action will run. Triggers the workflow on push or pull request events
on: [push, pull_request]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # https://docs.github.com/en/actions/reference/environments
    environment: apollo

    # https://docs.github.com/en/actions/reference/encrypted-secrets
    # https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsenv
    env:
      APOLLO_KEY: ${{ secrets.APOLLO_KEY }}
      APOLLO_VCS_COMMIT: ${{ github.event.pull_request.head.sha }}

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      - name: Install Rover
        run: |
          curl -sSL https://rover.apollo.dev/nix/v0.26.2 | sh

          # Add Rover to the $GITHUB_PATH so it can be used in another step
          # https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#adding-a-system-path
          echo "$HOME/.rover/bin" >> $GITHUB_PATH

      # only run this command with the `--background` flag if you have the Apollo Studio GitHub integration enabled on your repository
      - name: Run check against prod
        run: |
          rover graph check my-graph@prod --schema ./test.graphql --background
You can also use this Apollo Solutions repository to install Rover on GitHub Actions runners. Once installed, rover is added to the PATH, so it can be used in subsequent steps.
Bitbucket Pipelines
The following is a full example configuration for Bitbucket Pipelines. It shows how to:
Run rover subgraph check for each commit on all branches
Run rover subgraph publish to keep the schema definition of your main branch in sync with a base variant (@local in this case)
The example uses the following Pipeline Repository Variables to make the pipeline configuration portable across different repositories:
APOLLO_KEY
APOLLO_SUBGRAPH_NAME, which represents the name of the subgraph you're running schema checks for
APOLLO_LOCAL_PORT, which represents the port number of the base variant
Full example
# ./bitbucket-pipelines.yml

image: node

definitions:
  steps:
    - step: &rover-subgraph-check
        name: "[Rover] Subgraph Check"
        caches:
          - node
        script:
          - 'echo "Subgraph name: $APOLLO_SUBGRAPH_NAME"'
          - npx -p @apollo/rover@latest
            rover subgraph check my-graph@prod
            --name $APOLLO_SUBGRAPH_NAME
            --schema ./schema.graphql

    - step: &local-publish
        name: "[Rover] @local publish (sync with main/master)"
        caches:
          - node
        script:
          - 'echo "Subgraph name: $APOLLO_SUBGRAPH_NAME"'
          - 'echo "Local variant port: $APOLLO_LOCAL_PORT"'

          - npx -p @apollo/rover@latest
            rover subgraph publish my-graph@local
            --name $APOLLO_SUBGRAPH_NAME
            --schema ./schema.graphql
            --routing-url http://localhost:$APOLLO_LOCAL_PORT/graphql

pipelines:
  default:
    - step: *rover-subgraph-check

  branches:
    '{main,master}':
      - step: *rover-subgraph-check
      - step: *local-publish
Jenkins
To set up Rover for use with Jenkins, first consider which type of Jenkins agent you'll use in your pipelines. The samples below demonstrate a Go pipeline that uses Docker, but you can modify them to meet your specific needs.
Distributed builds via the node agent
If you're running a distributed build system using the node agent type, make sure that Rover is installed on all machines, either as part of a baseline image or via a setup script. Also make sure it's available globally via the PATH environment variable.
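For example, a baseline setup script could install Rover with the curl installer and then copy the binary to a directory that's already on every agent's PATH. This is only a sketch; the pinned version and the /usr/local/bin install location are assumptions to adapt to your environment.
#!/usr/bin/env bash
# Agent setup sketch: install Rover and make it globally available
set -euo pipefail

# Install Rover for the current user (installs to $HOME/.rover/bin)
curl -sSL https://rover.apollo.dev/nix/v0.26.2 | sh

# Copy the binary somewhere that's already on PATH for all pipeline jobs
sudo cp "$HOME/.rover/bin/rover" /usr/local/bin/rover

# Verify the install
rover --version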
Pipelines using Docker
If you're using Rover with a Docker-enabled pipeline, note the following additional considerations:
$PATH issues
Normally when installing, Rover adds the path of its executable to your $PATH. However, Jenkins doesn't persist the $PATH variable between sh steps, because each sh block runs as its own process. This means that if you install Rover and try to run it in the next step, you get a command not found: rover error. This is functionally similar to the CircleCI note, but the resolution is different.
To avoid this issue, do one of the following:
Use the install script, but reference rover by its full path ($HOME/.rover/bin/rover)
Download the latest release via cURL and extract the binary like so (this downloads Rover 0.26.2 for Linux x86 architectures):
curl -L https://github.com/apollographql/rover/releases/download/v0.26.2/rover-v0.26.2-x86_64-unknown-linux-gnu.tar.gz | tar --strip-components=1 -zxv
Permission issues
If you run into permissions issues within Docker, you can resolve many of them by creating a user to run the install and build processes. The example Dockerfile below shows how to accomplish this with a specific Docker image for your Jenkins build pipeline:
FROM golang:1.18
RUN useradd -m rover && echo "rover:rover" | chpasswd
USER rover
RUN curl -sSL https://rover.apollo.dev/nix/latest | sh
Jenkinsfile Configuration
After you've installed Rover appropriately, you can execute the rover command within a sh step, as shown in the example configuration below. Because rover outputs logs via stderr and emits proper status codes, it generates build errors if the rover subgraph check command fails.
We recommend passing arguments to rover commands via environment variables. This enables you to reuse large portions of your pipeline, making it faster to onboard new subgraphs without rewriting code.
Additionally, we strongly recommend passing in the APOLLO_KEY by using a Jenkins credential and referencing it using credentials(key_name) within your Jenkinsfile. An example of this is below.
pipeline {
  agent {
    dockerfile {
      filename './build_artifacts/Dockerfile'
    }
  }

  stages {
    stage('Rover Check') {
      steps {
        sh '''echo "Subgraph: $APOLLO_SUBGRAPH_NAME"
        $HOME/.rover/bin/rover subgraph check $APOLLO_GRAPH_REF --name $APOLLO_SUBGRAPH_NAME --schema $SCHEMA_PATH'''
      }
    }

    stage('Build') {
      steps {
        sh 'go build .'
      }
    }

    stage('Go Test') {
      steps {
        sh 'go test ./... -v'
      }
    }

    stage('Schema Publish to Dev') {
      when {
        expression { env.BRANCH_NAME == 'main' }
      }
      steps {
        sh '$HOME/.rover/bin/rover subgraph publish $APOLLO_GRAPH_REF --name $APOLLO_SUBGRAPH_NAME --schema $SCHEMA_PATH'
      }
    }
  }

  environment {
    APOLLO_KEY = credentials('apollo_key')
    APOLLO_SUBGRAPH_NAME = 'products'
    APOLLO_CONFIG_HOME = '~/.config/rover'
    SCHEMA_PATH = './graph/schema.graphqls'
    APOLLO_GRAPH_REF = 'ApolloJenkins@dev'
  }
}
GitLab
Since there isn't an official Docker image for Rover, you can use debian:stable-slim as a base image. All you need to do is fetch the installer via cURL, add the executable to the PATH variable, then publish your subgraphs.
# .gitlab-ci.yml

stages:
  - publish_subgraphs

publish_subgraphs:
  stage: publish_subgraphs
  image: debian:stable-slim
  retry: 1 # retry once if a transient issue (such as a network error) occurs
  before_script:
    - apt-get update && apt-get install curl -y
  script:
    - curl -sSL https://rover.apollo.dev/nix/latest | sh # Install the latest version of Rover
    - export PATH="$HOME/.rover/bin:$PATH" # Manually add it to the runner's PATH
    - export APOLLO_KEY=$APOLLO_FEDERATION_KEY
    - rover subgraph publish $APOLLO_GRAPH_REF --name $APOLLO_SUBGRAPH_NAME --schema $SCHEMA_PATH
Using with npm/npx
If you're running in a Node.js workflow, it might be easier to use the NPM distribution of Rover. This way, you don't need to adjust the PATH at all to run Rover, and it might fit better into your existing workflow.
You can use Rover by adding it to your package.json dependencies using these instructions and then executing it via npm scripts, similar to other workflows you might already have. If you don't want to install Rover as a dependency, you can run it with npx by using the -p flag:
Only run this command with the --background flag if you have the Apollo Studio GitHub integration enabled on your repository.
npx -p @apollo/rover rover graph check my-graph@prod --schema=./schema.graphql --background
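If you'd rather pin Rover as a project dependency instead of fetching it with npx, a minimal sketch looks like this (the schema:check script name is hypothetical; adjust the command and paths to your project):
# add Rover to your project's devDependencies
npm install --save-dev @apollo/rover

# then expose it through an npm script in package.json, for example:
#   "scripts": { "schema:check": "rover graph check my-graph@prod --schema ./schema.graphql" }
npm run schema:check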
Since most commands require you to be authenticated, see the above sections for instructions on how to add environment variables for your CI/CD provider.
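As a quick illustration, Rover reads its API key from the APOLLO_KEY environment variable; the secret name below is a placeholder, and the real value should always live in your CI provider's secret store rather than in source control.
# APOLLO_KEY is injected from a CI secret (placeholder name shown here)
export APOLLO_KEY=$MY_CI_SECRET_APOLLO_KEY
npx -p @apollo/rover rover graph check my-graph@prod --schema ./schema.graphql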