builds.rs Documentation
builds.rs is a service that builds artifacts for all crates published on crates.io, the Rust community's crate registry. It takes every crate published there and generates artifacts from it, such as executables for different platforms. This makes it easy to use Rust tooling without having to compile it from source.
builds.rs is written, run and maintained by a team of volunteers who are passionate about the Rust language and the quality of the tooling it has produced. We want to do our small part in making this tooling available to as many people as possible.
Sections
This guide is split into different sections for the different target audiences that might read it. We recommend reading the sections in the order they are presented.
The first section is for people who want to use builds.rs, for example to download artifacts for Rust crates. It summarizes what builds.rs does, how it works, and how you can use it.
The second section is for crate authors who would like to customize how builds.rs builds their crates. As a crate author, you can add metadata to your crate's manifest that controls how your crate is built.
The third section is for anyone who would like to help maintain builds.rs. It explains how the project is structured, how you can run and test it locally, and what you need to keep in mind when creating merge requests.
Introduction
This section explains features which are not implemented yet.
The builds.rs project aims to generate build artifacts for all crates published on crates.io.
Browse and download artifacts
The easiest way to find and download artifacts is through the web interface at builds.rs.
Download artifacts
You can also fetch artifacts using `curl`, for example in CI pipelines.
curl -sSL https://builds.rs/crates/mycrate/0.1.2/binaries/x86_64-unknown-linux-gnu > mycrate-0.1.2.tar.gz
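Assuming the artifact is a gzipped tarball, as the filename suggests, you can unpack it afterwards:
# unpack the downloaded artifact (adjust the name if you saved it differently)
tar -xzf mycrate-0.1.2.tar.gz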
Cargo Builds
This section explains features which are not implemented yet.
This project also has a CLI that you can install, which integrates with the `cargo` build tool. You can use it to fetch binaries.
Installation
The easiest way to install this tool is with `cargo` itself.
cargo install cargo-builds
Once you have it installed, you should be able to call it like this:
cargo builds
Usage
Fetch crate
By default, the fetch command will fetch binary artifacts for the latest version and current architecture. However, you can use command-line arguments to override those defaults.
cargo builds fetch serde
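For example, pinning a version and a target might look like this. Note that the flag names below are hypothetical, since this feature is not implemented yet:
# --version and --target are illustrative flag names, not a final interface
cargo builds fetch serde --version 1.0.0 --target x86_64-unknown-linux-musl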
Test crate build
You can also use this tool to test building of your local package.
cargo builds build
Introduction
This section of the builds.rs documentation is aimed at crate authors. builds.rs is a service that automatically creates build artifacts for the crates you publish on crates.io, free of charge. The aim is to make it as easy as possible to deploy the code you write, without you having to create and maintain CI setups that build for different architectures.
You do not need to use builds.rs; in fact, if your crates have a complex build system, you may not want to use it at all. But if you do, this section tells you how to make sure your crate builds easily and cleanly so that you get the most out of the service.
Usage
You may use builds.rs in any way you like. You are free to link directly to the builds. You do not need to attribute builds.rs in any way. builds.rs will never charge money for the services it provides, nor will it ever interfere with the way crates are built, such as by injecting code that is not a normal part of the crate's build process.
Metadata
This section explains features which are not implemented yet.
builds.rs aims to do the right thing by default, and will try its best to figure out how to build your crate. However, it is not perfect. In some situations, it needs extra information about how your crate needs to be built.
For those situations, it is possible to add metadata to your Cargo manifest which builds.rs can parse and use in the build process. This chapter describes what that metadata looks like and how you can use it.
In general, any and all metadata you can set lives under the `package.metadata.buildsrs` table.
Features
Using the `features` array, you can set the features that are enabled when building your crate. If you do not specify this, the crate's default feature set is used. The example below also shows the `binaries` and `targets` arrays, which select the binary targets and target triples to build.
[package.metadata.buildsrs]
features = ["feature-x", "feature-y"]
binaries = ["main", "other"]
targets = ["x86_64-unknown-linux-musl"]
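For comparison, the metadata above roughly corresponds to what you would pass to Cargo manually; a sketch, assuming a plain release build:
# approximately what builds.rs would run for the manifest above (illustrative)
cargo build --release --features "feature-x,feature-y" --target x86_64-unknown-linux-musl --bin main --bin other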
Overrides
This section explains features which are not implemented yet.
To get crates to build that you have already published, we have the ability to override incorrect metadata for existing crates. You write the configuration in much the same way as you would in your Cargo manifest, and it is overlaid onto the metadata that exists in your crate.
These overlay configurations are managed in a Git repository for collaboration and transparency.
Testing
This section explains features which are not implemented yet.
Testing the metadata for builds.rs is quite important; you likely do not want to publish crates with broken metadata. For this reason, the `cargo-builds` tool ships with the ability to locally build your crate's artifacts exactly the same way that builds.rs would.
To use this, run this command:
cargo builds build
This command parses your Cargo manifest and builds all crate artifacts just like builds.rs would build them, placing them inside `target/buildsrs/`. Note that it calls `cargo package` to create a package containing everything that would exist if you were to publish your crate, and it needs access to Docker to run the build steps.
Introduction
This section is aimed at developers of builds.rs. If you are looking to understand the code or be able to add features to it, you should read this section carefully because it attempts to give you all of the necessary context.
Note that builds.rs is still under heavy development, and as such the things documented in here may still change. If you notice anything that is incorrect, feel free to fix it.
Components
This chapter explores the architecture of this project both in terms of deployed services as well as in terms of crates.
Services
graph BT
    Storage[fa:fa-database Storage]
    Database[fa:fa-database Database]
    Frontend[fa:fa-globe Frontend]
    subgraph builds.rs
        Backend[fa:fa-server Backend]
        Sync[fa:fa-download Registry Sync]
        Builder[fa:fa-wrench Builder]
        Builder --> Backend
    end
    Sync --> Database
    Backend --> Database
    Backend --> Storage
    Frontend --> Storage
    Frontend --> Backend
This project uses somewhat of a microservice architecture, although one could argue that since most of the action happens in the single backend component, it is more of a monolith.
Every component that needs deployment is built into a Docker container in the CI, and then deployed on a cluster.
There are only two components that are external and persistent: storage and the database. These are abstracted away in the code. The storage component is usually any S3-compatible storage provider, and the database is typically a Postgres database.
Crates
graph BT
    frontend[buildsrs_frontend<br/><i>Web user interface</i>]
    backend[buildsrs_backend<br/><i>API for frontend and builder</i>]
    common[buildsrs_common<br/><i>Common type definitions</i>]
    database[buildsrs_database<br/><i>Database interactions</i>]
    protocol[buildsrs_protocol<br/><i>Builder protocol types</i>]
    builder[buildsrs_builder<br/><i>Builds crate artifacts</i>]
    registry_sync[buildsrs_registry_sync<br/><i>Registry sync service</i>]
    storage[buildsrs_storage<br/><i>Storage</i>]
    database-->common
    backend-->database
    backend-->common
    backend-->storage
    backend-->protocol
    builder-->protocol
    frontend-->common
    registry_sync-->database
    click database "/rustdoc/buildsrs_database"
    click backend "/rustdoc/buildsrs_backend"
    click builder "/rustdoc/buildsrs_builder"
    click registry_sync "/rustdoc/buildsrs_registry_sync"
    click protocol "/rustdoc/buildsrs_protocol"
    click frontend "/rustdoc/buildsrs_frontend"
    click common "/rustdoc/buildsrs_common"
    click storage "/rustdoc/buildsrs_storage"
Code-wise, this project is a Cargo workspace with multiple crates. Every target that needs to be built is its own crate. In addition, any code that needs to be used from multiple target crates is split out into its own crate.
The next chapters will deal with each of these components, explaining what they do and how they are related to the other components.
Frontend
The frontend is a Rust WebAssembly application written using the Yew framework. It is deployed as the main website for builds.rs. It talks to the backend using a REST API, and offers capabilities to search and explore crates, versions and artifacts for each. Styling is done using Tailwind CSS.
Interactions
graph BT
    frontend[Frontend]
    backend[Backend]
    storage[Storage]
    frontend --> backend
    frontend --> storage
    click backend "./backend.html"
    click storage "./storage.html"
The frontend mainly interacts with the backend's REST API. Artifact downloads may be performed directly from the storage service via a redirect from the backend. The frontend may also gain the ability to do server-side rendering at some point.
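For example, using the download URL pattern shown earlier, you can observe the redirect with `curl`. This is a sketch; the exact redirect behavior depends on the deployment configuration:
# -I fetches only headers, -L follows the redirect to the storage service
curl -sSIL https://builds.rs/crates/mycrate/0.1.2/binaries/x86_64-unknown-linux-gnu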
Dependencies
graph BT
    frontend[buildsrs_frontend]
    common[buildsrs_common]
    frontend --> common
    click frontend "/rustdoc/buildsrs_frontend"
    click common "/rustdoc/buildsrs_common"
The frontend is implemented in the buildsrs_frontend crate. It uses the buildsrs_common crate for shared data types between it and the backend.
Backend
The backend offers two APIs. The first is the public REST API that the frontend uses to fetch metadata, such as which crates and versions exist and which artifacts have been built. The second is for builder instances to connect and fetch build jobs, consisting of a WebSocket and a REST API for uploading artifacts. The backend also tracks the number of downloads for each crate and periodically writes this data to the database.
Interactions
graph BT
    database[Database]
    backend[Backend]
    storage[Storage]
    frontend[Frontend]
    builder[Builder]
    backend --> storage
    backend --> database
    builder --> backend
    frontend --> backend
    click frontend "./frontend.html"
    click database "./database.html"
    click storage "./storage.html"
    click builder "./builder.html"
The backend uses the storage service to store crate artifacts, and the database to store metadata (crates, versions, artifacts, builders, build logs, jobs).
It offers a REST API that exposes all of the metadata and artifacts; this API is consumed by the frontend and by external tools. It also offers a WebSocket, which the builders use to connect to the backend, receive jobs, and stream logs.
Dependencies
graph BT
    common[buildsrs_common]
    backend[buildsrs_backend]
    storage[buildsrs_storage]
    database[buildsrs_database]
    protocol[buildsrs_protocol]
    backend --> common
    backend --> storage
    backend --> database
    backend --> protocol
    click storage "/rustdoc/buildsrs_storage"
    click protocol "/rustdoc/buildsrs_protocol"
    click database "/rustdoc/buildsrs_database"
    click common "/rustdoc/buildsrs_common"
    click backend "/rustdoc/buildsrs_backend"
The backend is implemented in the buildsrs_backend crate. It uses the buildsrs_common crate for common type definitions. It uses the buildsrs_database and buildsrs_storage crates to connect to those respective services. It uses the buildsrs_protocol crate to implement the builder websocket protocol.
Features
| Name | Description |
|---|---|
| `frontend` | Serve frontend static files. |
| `frontend-vendor` | When building, builds the frontend using Trunk and bundles the resulting files into the binary. Implies `frontend`. |
Builder
The builder is a component that fetches jobs from the backend, builds them using Docker, and pushes the resulting binaries back into the backend. This can be replicated as needed for parallel building.
Interactions
graph BT
    backend[Backend]
    builder[Builder]
    builder --> backend
    click backend "./backend.html"
The builder connects to the backend using a WebSocket. This is the only service dependency it has.
Dependencies
graph BT
    builder[buildsrs_builder]
    protocol[buildsrs_protocol]
    builder --> protocol
    click protocol "/rustdoc/buildsrs_protocol"
    click builder "/rustdoc/buildsrs_builder"
The builder is implemented in the buildsrs_builder crate. It depends on the buildsrs_protocol crate, which defines the protocol it uses to interact with the backend.
Features
| Name | Description |
|---|---|
| `docker` | Enables the Docker build strategy. |
| `options` | Command-line options parsing. |
Registry Sync
The registry sync component keeps the system in sync with the list of crates published on crates.io. To do this, it polls the crates.io index and writes any changes directly into the database.
Interactions
graph BT
    database[Database]
    registry-sync[Registry Sync]
    registry-sync --> database
    click database "./database.html"
The Registry Sync service connects directly to the database to keep it in sync. It has no other dependencies.
Dependencies
graph BT
    database[buildsrs_database]
    registry-sync[buildsrs_registry_sync]
    registry-sync --> database
    click database "/rustdoc/buildsrs_database"
    click registry-sync "/rustdoc/buildsrs_registry_sync"
It is implemented in the buildsrs_registry_sync crate. It depends on the buildsrs_database crate for database interactions.
Storage
The storage service stores artifacts that have been built. Storage is typically handled by an S3-compatible storage provider. Currently, we are using Wasabi for this, because they do not charge a fee for egress. Depending on configuration, artifacts may be served directly from the storage service.
The storage interactions are implemented in the buildsrs_storage crate.
Interactions
graph BT
    storage[Storage]
    backend[Backend]
    backend --> storage
    click backend "./backend.html"
The storage crate itself is not a component that can be deployed; it is merely a library that allows connecting to a storage provider.
The only component that directly interacts with the storage service is the backend. However, when clients retrieve crate artifacts, they may be served directly from storage.
Dependencies
graph BT
    storage[buildsrs_storage]
    common[buildsrs_common]
    backend[buildsrs_backend]
    backend-->storage
    storage-->common
    click storage "/rustdoc/buildsrs_storage"
    click common "/rustdoc/buildsrs_common"
    click backend "/rustdoc/buildsrs_backend"
Features
| Name | Description |
|---|---|
| `s3` | Allows using an S3-compatible storage. |
| `filesystem` | Allows using a filesystem-backed storage. |
| `cache` | Enables an in-memory cache layer for storage assets. |
| `options` | Command-line options parser for storage. |
| `temp` | Temporary storage creation. |
By default, the `filesystem`, `s3`, `options` and `cache` features are enabled.
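For example, to use the crate with only the S3 backend, you can disable the default features; a sketch using standard Cargo flags:
# build buildsrs_storage with only the s3 backend enabled
cargo build -p buildsrs_storage --no-default-features --features s3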
Database
The database component uses a Postgres database to store metadata. This includes the list of crates and crate versions (kept in sync with crates.io by the Registry Sync service), the registered builders, current and previous jobs, and the artifacts for every crate version.
These are the tables that the database currently stores:
| Name | Description |
|---|---|
| `pubkeys` | Public keys. |
| `pubkey_fingerprints` | Public key fingerprints. |
| `builders` | Builders that are registered with the backend. |
| `targets` | Targets that can be built. |
| `builder_targets` | Targets that are enabled per builder. |
| `crates` | Crates (synced from crates.io). |
| `crate_versions` | Crate versions (synced from crates.io). |
| `job_stages` | Job stages. |
| `jobs` | Jobs. |
| `job_logs` | Job log entries. |
| `job_artifacts` | Job artifacts. |
| `job_artifact_downloads` | Daily download counts for artifacts. |
Interactions
graph BT
    database[Database]
    backend[Backend]
    registry-sync[Registry Sync]
    backend --> database
    registry-sync --> database
    click backend "./backend.html"
    click registry-sync "./registry-sync.html"
There are two services that connect to the database: the backend and the registry sync service.
Dependencies
graph BT
    database[buildsrs_database]
    backend[buildsrs_backend]
    registry-sync[buildsrs_registry_sync]
    backend-->database
    registry-sync-->database
    click database "/rustdoc/buildsrs_database"
    click backend "/rustdoc/buildsrs_backend"
    click registry-sync "/rustdoc/buildsrs_registry_sync"
All database interactions are implemented in the buildsrs_database crate.
Features
| Name | Description |
|---|---|
| `migrations` | Enables migrations. |
| `cli` | Enables the database CLI. |
| `temp` | Creation of temporary databases, used for testing. |
| `options` | Command-line options parsing for the database connection. |
Protocols and Interfaces
This section explains the protocols and interfaces that buildsrs uses and exposes.
Crates Workflow
This section describes the workflow of buildsrs, from the point where new crate versions are known to the generation of artifacts.
flowchart LR
    crate[Crate]
    metadata[Metadata]
    binary[Binary]
    library[Library]
    coverage[Coverage]
    binary_release[Binary Releases]
    debian_package[Debian Packages]
    library_release[Library Release]
    crate-->|Generate Metadata| metadata
    metadata-->|Binary crate| binary
    metadata-->|Library crate| library
    library-->coverage
    library-->library_release
    binary-->binary_release
    binary-->debian_package
Information about crates is put into the system by the Registry Sync service. It synchronizes the crates.io index with the database, creating entries for crates that are newly published. It then creates one job for every new crate, which is to build metadata.
These jobs are picked up by the builders, which fetch the crate and generate metadata using Cargo. This metadata, which is a JSON manifest, gets uploaded as an artifact to the backend.
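A sketch of how such a manifest can be produced with stock Cargo; whether the builder uses exactly this command is an assumption:
# emit a JSON description of the crate, including its targets (bin or lib)
cargo metadata --format-version 1 --no-deps > metadata.json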
Upon receiving the manifest, the backend parses it to record what kind of crate this is (binary or library) and creates the appropriate jobs.
Targets
For library crates, some possibilities are:
- Generating coverage information using `cargo llvm-cov`
- Generating library releases as cdylibs
- Generating library releases as WebAssembly
For binary crates, some possibilities are:
- Generating binary releases for different target triples
- Generating Debian packages using `cargo deb`
- Generating web applications using `trunk build`
Builder Protocol
The connection between the builder and the backend happens over a WebSocket. The protocol that they use to communicate is defined as message enums in the Protocol crate. This section illustrates how this protocol works on a high level.
sequenceDiagram
    autonumber
    Note over Builder,Backend: Authenticate
    Builder->>+Backend: ClientMessage::Hello(aa:bb:cc:dd:ee:ff)
    alt no builder with fingerprint found
        Backend-->Builder: Close connection
    end
    Backend->>Builder: ServerMessage::ChallengeRequest("xyz..")
    deactivate Backend
    activate Builder
    Builder->>Backend: ClientMessage::ChallengeResponse(Signature)
    deactivate Builder
    activate Backend
    alt signature invalid for pubkey
        Backend-->Builder: Close connection
    end
    deactivate Backend
    par Work loop
        Note over Builder,Backend: Request next job
        Builder->>+Backend: ClientMessage::JobRequest
        Backend->>-Builder: ClientMessage::Jobs
        activate Builder
        Note over Builder,Backend: Process job and upload artifacts
        par Stream job logs
            Builder->>Backend: ClientMessage::JobEvent
        end
        Builder->>Backend: Upload Artifact using Job token
        Builder->>Backend: ClientMessage::JobResponse
        deactivate Builder
    end
Here are explanations for every step of this protocol:
- The builder uses an SSH key to authenticate with the server. Upon connecting, it sends the fingerprint of its key to the backend.
- The backend looks in the database to see if a builder with that fingerprint is known. If not, it terminates the connection.
- The backend generates a random byte sequence and sends it to the builder as a challenge.
- The builder responds with a message containing the same bytes and a signature.
- The backend verifies the signature, checking both that it is valid and that it was generated by the correct public key. If the signature is invalid, the connection is closed.
- The builder requests a job from the backend.
- The backend responds with a job description, which contains a URL for fetching the crate source, a checksum of the contents, an indication of which artifact to generate, and a job token.
- While running the job, the builder streams logs back to the backend.
- When the job is completed, the builder uploads the generated artifacts to the backend using the job token.
- Finally, the builder sends a message to the backend informing it of the job completion, including a signature of the completed build.
Backend API
Getting Started
This section tells you everything you need to know in order to get started developing for builds.rs.
It is generally easiest to use Linux to develop builds.rs, because that makes it much easier to install the tooling needed to build and run it. However, this is not a requirement.
If you are not using Linux natively, we recommend that you do one of the following:
macOS
If you are on macOS, building and running builds.rs locally should work. However, if you do want to use Linux, we recommend UTM, which lets you run Linux inside a virtual machine. It is built on top of QEMU and works well, even on the newer M2 Macs.
Windows
If you are on Windows, we recommend that you enable the Windows Subsystem for Linux and use that. If this does not work for you, an alternative approach is to use a virtualisation software such as VirtualBox.
Issues
If you encounter any issue, feel free to open an issue report and we will look into it. If it is a bug or something incorrect in the documentation, we will fix it.
Prerequisites
Developing in this project requires some tooling to be installed locally. We try to require as few locally installed tools as possible; however, these four have proven to be worth the effort of installing them.
- Rustup: Manages your local Rust installation.
- Just: Runner for custom commands.
- Trunk: Helps you build Rust WebAssembly frontends.
- Docker: Container runtime used to launch local services.
Optionally, you can also install these tools. They are not required for development, but they enable you to build certain things that you otherwise cannot.
- cargo-llvm-cov: Optional, used to determine test coverage.
- mdbook: Optional, used to build documentation.
- mdbook-mermaid: Optional, used to build documentation.
Here are explanations for what each tool does and a quick guide to getting it installed.
Rustup
Rustup manages Rust installations. It is able to keep the Rust toolchain updated. While it is not strictly required, it is the recommended way to install Rust on your system as it lets us easily lock this project to a specific version of Rust.
On a Unix-like system, you can install it like this. Please follow the instructions it shows; for example, you may have to open a new shell session before you can use it. For other systems, check the website for more information.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Once you have installed Rustup, you should be able to build the code in the repository by running this command. If this succeeds, then you have successfully installed Rustup.
cargo build
If you intend to build the frontend as well, you likely want to add the WebAssembly target for Rust. You can do it by running this command:
rustup target add wasm32-unknown-unknown
The easiest way to test if this works is by heading to the Trunk section, and installing and testing it by building the frontend.
Trunk
Installing Trunk is not required; you only need it if you want to build and run the frontend locally.
Trunk is a tool that helps with building Rust WebAssembly frontends. It wraps around `cargo` for building the WebAssembly and bundles the resulting raw binaries into a working website, ready to be consumed by the browser.
We use it to build the frontend for builds.rs, which is written in Rust using the Yew framework. If you do not want to run the frontend, you do not need to install Trunk on your system.
If you already have a Rust toolchain installed, one easy way to get Trunk is by installing it using cargo.
cargo install trunk
In order to use Trunk, you also need to add the WebAssembly target for Rustup. The Rustup section will tell you how to do this.
You can verify that your installation works by running this command:
cd frontend && trunk build
Make sure you update it occasionally by re-running the installation command, as Trunk is still being developed and gaining new features.
Docker
Docker is a containerization platform that allows you to package, distribute, and run applications and their dependencies in isolated, portable environments. It is used to run services (such as the database) in a way that does not require you to install it locally, but rather in a prepackaged container.
The installation process depends on the platform that you are using, but the simplest installation method if you are on Debian is by using APT:
apt install docker.io apparmor-utils
adduser $USER docker
Make sure that you also install Docker Compose, as that is needed to launch local services.
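One way to verify the installation (after logging in again so the group membership takes effect) is to run a test container; the Compose check assumes the Compose plugin is installed:
# runs a minimal test container, then prints the Compose version
docker run --rm hello-world
docker compose version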
Just
Just is a command runner, similar to how Makefiles are often used. It offers less complexity compared to Makefiles and has some neat features, including command arguments and built-in documentation.
If you already have a Rust toolchain installed, one easy way to get Just is by installing it using cargo.
cargo install just
There is one `Justfile` in this repository, and if you run `just` on its own you will see a list of the targets that are defined.
just --list
Available recipes:
backend # launch backend
builder # launch builder
coverage # generate test coverage report
database # start postgres database
database-cli *COMMAND # run database cli
database-dump NAME='latest' DATABASE='postgres' # save database dump
database-repl DATABASE # start repl with specified postgres database
database-test # test database
format # Format source with rustfmt nightly
frontend # launch frontend
list # list targets and help
registry-sync # launch registry sync
test filter='' # run all unit tests
Most of these are shortcuts to launch specific components (`database`, `backend`, `builder`, `registry-sync`) or to perform specific actions (`test`, `coverage`, `format`). These commands are explained further in the rest of this guide.
Cargo llvm-cov
Cargo llvm-cov is a tool that lets us build test coverage reports to measure how good of a job we are doing in testing the code base. It is not required for development, but can be a handy tool.
You can install it with cargo like this:
cargo install cargo-llvm-cov
To test it, you can use the `coverage` target by running:
just coverage
If this runs without producing errors, then you know that the tool is properly installed.
mdBook
Installing mdBook is not required; it is only needed if you want to build the documentation locally.
mdBook is the tool used to build the documentation for builds.rs. It takes as input the Markdown files found in the `docs/` folder of the repository and produces this documentation page.
We also use the mdBook Mermaid plugin to render those pretty diagrams.
If you want to work on improving the documentation, it is recommended that you install this locally so you can render the documentation.
You can install it using `cargo` by running this command:
cargo install mdbook mdbook-mermaid
You can verify that it does work by building the documentation locally, like this:
mdbook build
If this command runs, then you know that it is working.
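While working on the documentation, you may prefer the live-reloading server that mdBook ships with:
# rebuild the book and reload it in the browser as you edit
mdbook serve --open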
Troubleshooting
This section is dedicated to any common issues one might encounter with these tools. If you run into any issues, feel free to open an issue in our issue tracker and let us know about it. We generally cannot help you too much with troubleshooting your local environment, but we are happy to fix incorrect documentation or document common issues here.
Commands
This section explains what tools and commands there are in this repository that can help you launch and manage buildsrs.
Justfile
This project uses the Just tool to help in running common commands. It allows us to define shortcuts for common tasks in a `Justfile`. If you don't have `just` installed, see the Installing Just section.
Commands are defined in the `Justfile` and can have optional parameters. You can see which commands are available, along with some help text, by running `just` without any arguments in the repository root.
This is a full list of all of the commands, and what they are used for.
| Name | Description |
|---|---|
| `backend` | Launches the backend. |
| `builder` | Launches the builder. |
| `check` | Runs formatting and style checks. |
| `ci` | Runs tasks similar to what the CI runs. |
| `coverage` | Generates a test coverage report. |
| `database-cli` | Runs the database command-line interface. This tool is defined in `database/` and allows you to manage the database. Commonly, you can use it to run migrations with `just database-cli migrate`. Run it without options to see what is available. |
| `database-dump` | Creates a dump of the current database contents. This is useful for testing. |
| `database-repl` | Launches an interactive console to inspect the database. |
| `docs` | Builds documentation. |
| `format` | Formats code. |
| `frontend` | Launches the frontend. |
| `registry-sync` | Launches the registry-sync service. |
| `services` | Launches services. |
| `test` | Runs unit tests. |
Patterns
This section explains some of the commands in more depth, to give some context on what they do and how they are meant to be used.
Database Dump
While the migrations are tested in the unit tests, it can be difficult to ensure that data already in the database can be properly migrated. For this reason, there is a command that creates a dump of a locally running database; the dump is saved into the repository and can be used to create a unit test.
# create database/dumps/latest.sql.xz
just database-dump
After taking such a dump, the database crate has the functionality to create a unit test which restores the dump into a temporary database, runs all migrations over it, and then checks that the data is still accessible.
Database REPL
When making changes to the database migrations or handlers, you may break unit tests. Every unit test works by creating a temporary database, running the migrations on it, executing the code, and finally deleting the temporary database. In case of an error, the temporary database is not deleted but kept so that it can be inspected.
In that case, look for an output similar to this in the test results:
=> Creating database "test_jvqbcyqagfmuncq"
=> Run `just database-repl "test_jvqbcyqagfmuncq"` to inspect database
This output hints at the ability to use a command to inspect the database after the test failure. Keep in mind that temporary databases are only kept in case of an error in the test.
Use the appropriate Just command to start a REPL that you can use to inspect the database at the time the error occurred.
just database-repl test_jvqbcyqagfmuncq
Database CLI
Running tests
Testing is one of the most important parts of developing this software. Tests serve as documentation to some extent, and they allow teams to implement features without needing to communicate all hidden assumptions; those can instead be encoded in the form of unit tests.
The approach this project takes is to write as many unit tests as necessary and to use coverage reporting to measure how test coverage changes over time. All new features should come with matching tests, if possible.
Services
In order to run the tests, you must first launch the required services. You can launch them using the `services` command:
# launch services
just services
If you want to tear them down and delete any state, you can use the `down` subcommand, like this:
# delete services
just services down
Testing
There are two targets that are useful for running tests. Both of these targets require a running database, but they do not require the database to be migrated as they create temporary virtual databases.
You can run all tests like this:
just test
If you only want to run tests for a specific crate, you can run them like this:
just test-crate database
Coverage
For estimating test coverage, `cargo-llvm-cov` is used, which needs to be installed separately. It uses instrumentation to figure out which parts of the codebase are executed by tests and which are not.
There is a useful target for running the coverage tests.
just coverage
Here you can see the latest coverage from the `main` branch to compare against.
Database
The database has state, and that state needs to be carefully managed. For this reason, special care is taken to ensure correctness, and there are specific commands that help with testing and inspecting the database.
Database Dump
While the migrations are tested in the unit tests, it can be difficult to ensure that data already in the database can be properly migrated. For this reason, there is a command that creates a dump of a locally running database; the dump is saved into the repository and can be used to create a unit test.
# create database/dumps/latest.sql.xz
just database-dump
After taking such a dump, the database crate has the functionality to create a unit test which restores the dump into a temporary database, runs all migrations over it, and then checks that the data is still accessible.
Database REPL
When making changes to the database migrations or handlers, you may break unit tests. Every unit test works by creating a temporary database, running the migrations on it, executing the code, and finally deleting the temporary database. In case of an error, the temporary database is not deleted but kept so that it can be inspected.
In that case, look for an output similar to this in the test results:
=> Creating database "test_jvqbcyqagfmuncq"
=> Run `just database-repl "test_jvqbcyqagfmuncq"` to inspect database
This output hints at the ability to use a command to inspect the database after the test failure. Keep in mind that temporary databases are only kept in case of an error in the test.
Use the appropriate Just command to start a REPL that you can use to inspect the database at the time the error occurred.
just database-repl test_jvqbcyqagfmuncq
Running locally
It should be relatively straightforward to run buildsrs locally. To do so, you need to run a few components:
- services
  - database (Postgres, stores metadata)
  - minio (S3-compatible API for storing builds)
- backend (serves the API)
- registry-sync (synchronizes crates from crates.io with the database)
- builder (fetches jobs and builds crates)
The only tooling you need to get these running is Docker. Strictly speaking, Docker is not necessary, but it greatly simplifies running the services that the stack needs to talk to.
Services
To launch the services that buildsrs needs to run locally, the easiest approach is to run them using Docker. There is a `docker-compose.yml` file in the repository, along with a Just target. You should be able to launch them like this:
just services
In order to use the database, you will need to run migrations. There is a CLI tool in the `buildsrs_database` crate that you can use for this. You can run the migrations like this:
just database-cli migrate
Once you have launched the services and run the migration, your setup is ready.
If you make changes to the database migrations, you may have to reset the database in order to be able to apply them. To do this, simply cancel the launched database and re-launch it, as it is not persistent.
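For example, using the commands described in this guide:
# tear down the services and their state, then bring them back up and migrate
just services down
just services
just database-cli migrate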
Backend
The backend hosts the API for the frontend and for the runners to connect to. By default, it listens locally on `localhost:8000` for API requests. It requires the database to be running and migrated.
just backend
Registry Sync
In order to synchronize the database with the crates on crates.io, you need to launch the registry sync service. This requires a running and migrated database.
just registry-sync
Builder
The builder is the component that actually builds crates. You need to launch the backend before you can launch the builder. You will also need to register it with the database. Here is how to do that:
just database-cli builder add ~/.ssh/id_ed25519.pub
just builder
The builder uses SSH keys to authenticate with the backend. You can use any SSH key, but by default it uses your local `ed25519` key. If you do not have a local `ed25519` key, you can create one by running the following and pressing enter for every question the tool asks:
ssh-keygen -t ed25519
Testing
Testing is an integral part of the development process of this project. The aim is to encode as much context as possible in the form of tests, so that the project can grow without depending on constraints and assumptions being communicated between developers.
To ensure that testing is being done thoroughly, some thought has been put into measuring it and designing this project in a way that facilitates testing.
Coverage
In order to measure the progress of the testing effort of this project, test coverage is measured for every commit in the CI pipeline. The coverage report is available here.
The goal is for this coverage to be as close to complete as possible. In the CI, we enforce a minimum coverage percentage that gets adjusted as coverage grows, to prevent regressions.
State
Another tricky issue when writing tests is dealing with state. For this project, the stateful parts (the database and the storage layer) are built generically, so that implementations can be swapped out, and in a way that makes it possible to create a new, ephemeral instance for every unit test.
Both stateful aspects can be run as Docker containers, and the requisite files are present in this repository, so that running tests is as simple as:
just services up
just test
Building the project in this way should reduce the friction in running tests locally and in making sure new features come with tests.
Processes
Deployment
Checklists
Pre-Commit
Maintenance
This checklist contains tasks that should be regularly performed, and a suggested interval that they should be performed at. None of these tasks are critical, but it makes sense to keep ahead of things.
Weekly
Update dependencies
Dependency versions are specified as bounds in the Cargo manifests, but resolved in the Cargo.lock file. Occasionally, the resolved dependency versions should be updated to use the latest versions.
To do so, use Cargo to update the lock file, and make sure nothing breaks by running the tests afterwards:
cargo update
just ci
If everything works (no errors), create a merge request with the changes.
Upgrade dependencies
Occasionally, dependencies will publish new versions which are not backwards-compatible. These upgrades tend to involve a bit more work, because the code often needs to be adjusted.
You can use the `cargo-outdated` tool to check which dependencies are outdated:
cargo outdated
For each outdated dependency, you can try to upgrade it manually by updating its version in `Cargo.toml` and adjusting the code. Check that everything works locally with:
just ci
If everything works (no errors), create a merge request with the changes.
Monthly
Update Rust toolchain
The team behind Rust regularly releases a new version of the Rust toolchain. For stability reasons, we currently hardcode which version we build and test against in the CI.
When a new version is released, update it in the repository:
- Adjust `RUST_VERSION` in `.gitlab-ci.yml` to the new version.
- Adjust the Rust version in each of the Dockerfiles (in `backend`, `builder`, `registry-sync` and `database`) to the new version.
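To double-check that you caught every occurrence, a quick search helps; exactly how the version is pinned in the Dockerfiles is an assumption here:
# find the pinned toolchain versions that need updating
grep -n "RUST_VERSION" .gitlab-ci.yml
grep -in "rust" backend/Dockerfile builder/Dockerfile registry-sync/Dockerfile database/Dockerfile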
Run tests to make sure nothing broke:
just ci
If everything works (no errors), create a merge request with the changes.
Update CI tooling
In the CI, we use a bunch of tooling:
For each of these tools, we have a variable such as `SCCACHE_VERSION` in the `.gitlab-ci.yml` which tells the CI which version of the tool to download and use. Occasionally, these tools get new releases, in which case we should update to the most recent version of the tool.
For every tool:
- Check if there is a new version. If not, skip this tool.
- Update the version variable in `.gitlab-ci.yml` to point to the new version of the tool.
- Check if it passes CI.
Create a merge request with all the upgrades that were successful. Feel free to indicate which dependencies you were not able to upgrade, and why.
Yearly
Review new Clippy lints
Every so often, the Clippy team releases new lints. It makes sense to check them out occasionally and test whether some of the newly added ones make sense to add to the lint configuration in the `Cargo.toml`.
When adding new lints, run the checks to make sure existing code passes them; if not, you may have to fix the code.
just check
Once you have added lints that make sense and adjusted the code, feel free to create a merge request with the changes.
Review test coverage minimum
In the CI, it is possible to set a minimum test coverage percentage. This value should track the current coverage so that regressions are caught:
- Find out the current test coverage from the coverage report.
- Adjust the `fail-under-lines` setting in `.gitlab-ci.yml` to be closer to the current test coverage, to prevent it from regressing.
Create a merge request and make sure that the pipeline succeeds.