Crafting Your Software

This article was originally posted on the Ubuntu Discourse, and is reposted here. I welcome comments and further discussion in that thread.
Packaging software is notoriously tricky. Every language, framework, and build system has its quirks, and the variety of artifact types — from Debian packages to OCI images and cloud images — only adds to the complexity.
Over the past decade, Canonical has been refining a family of tools called “crafts” to tame this complexity and make building, testing, and releasing software across ecosystems much simpler.
The journey began on 23rd June 2015 when the first commit was made to Snapcraft, the tool used to build Snap packages. For years, Snapcraft was the only craft in our portfolio, but in the last five years, we’ve generalized much of what we learned about building, testing, and releasing software into a number of “crafts” for building different artifact types.
Last month, I outlined Canonical’s plan to build debcraft as a next-generation way to build Debian packages. In this post I’ll talk about what exactly makes a craft, and why you should bother learning to use them.
Software build lifecycle #
At the heart of all our crafts is `craft-parts`, which according to the documentation “provides a mechanism to obtain data from different sources, process it in various ways, and prepare a filesystem sub-tree suitable for packaging”.
Put simply, `craft-parts` gives developers consistent tools to fetch, build, and prepare software from any ecosystem for packaging into various formats.
Lifecycle stages #
Every part has a minimum of four lifecycle stages:
- `PULL`: source code or binary artifacts, along with their dependencies, are pulled from various sources
- `BUILD`: software is built automatically by a plugin, or by a set of custom steps defined by the developer
- `STAGE`: selected outputs from the `BUILD` phase are copied to a unified staging area shared by all parts
- `PRIME`: files from the staging area are copied to the priming area for use in the final artifact
The `STAGE` and `PRIME` steps are similar, except that `PRIME` only happens after all parts of the build are staged. Additionally, `STAGE` gives parts the opportunity to build or supply dependencies for other parts - dependencies that might not be required in the final artifact.
Lifecycle in the CLI #
The lifecycle stages aren’t just in the build recipe, they’re also first-class citizens in each craft’s CLI, thanks to the craft-cli library. This ensures a consistent command-line experience across all craft tools.
Take the following examples:
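As a sketch (using `snapcraft` here; the same sub-commands exist in `rockcraft`, `charmcraft` and the other crafts), each lifecycle step can be run directly from the CLI:

```bash
snapcraft pull    # fetch sources and dependencies for every part
snapcraft build   # run each part's build step
snapcraft stage   # copy build outputs into the shared staging area
snapcraft prime   # copy staged files into the priming area
snapcraft pack    # assemble the final artifact
```

Running a later step automatically runs any earlier steps that haven’t completed yet, so `snapcraft pack` alone will drive the full lifecycle.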
This design feature supports a smoother iterative development and debugging workflow for building and testing software artifacts.
Part definition #
The `parts` of a build vary in complexity - some require two or three trivial lines, others require detailed specification of dependencies, build flags, environment variables and steps. The best way to understand the flexibility of this system is by looking at some examples.
First, consider this (annotated) example from my icloudpd snap:
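The recipe in question is a short `parts` definition built on the `python` plugin. The sketch below captures its shape - the source URL and staged paths are illustrative stand-ins, not copied from the real file:

```yaml
parts:
  icloudpd:
    # build the Python wheel and its dependencies with the python plugin
    plugin: python
    source: https://github.com/icloud-photos-downloader/icloud_photos_downloader.git  # illustrative
    # stage only the runtime essentials
    stage:
      - bin/
      - lib/
```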
This spec is everything required to fetch, build and stage the important bits required to run the software - in this case a Python wheel and its dependencies.
Some projects might require more setup - perhaps an additional package is required, or a specific version of a dependency is needed. Let’s take a look at a slightly more complex example, taken from my zinc-k8s-operator project:
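A hedged reconstruction of that part follows - the repository URL, tag and environment variable are illustrative stand-ins for the real values:

```yaml
parts:
  go-runner:
    plugin: go
    source: https://github.com/kubernetes/release.git   # illustrative URL
    source-type: git
    source-tag: v0.16.5                                 # illustrative tag
    source-subdir: images/build/go-runner
    build-snaps:
      - go/1.20/stable
    build-environment:
      - CGO_ENABLED: "0"                                # illustrative variable
```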
This instructs `rockcraft` to fetch a Git repository at a particular tag, change into the sub-directory `images/build/go-runner`, then build the software using the `go` plugin. It also specifies that the build requires the `go` snap from the `1.20/stable` track, and sets some environment variables. That’s a lot of result for not much YAML: the end result is a single binary that’s “staged” and ready to be placed (in this case) into a Rock (Canonical’s name for OCI images).
And the best part: this exact definition can be used in a `rockcraft.yaml` when building a Rock, a `snapcraft.yaml` when building a Snap, a `charmcraft.yaml` when building a Charm, etc.
The plugin system is extensive: at the time of writing there are 22 supported plugins, including `go`, `maven`, `uv`, `meson` and more. If your build system of choice isn’t supported, you can specify manual steps, giving you as much flexibility as you need:
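A part with manual steps might look like the following sketch (the part name, patch file and make target are illustrative):

```yaml
parts:
  my-app:
    plugin: nil
    source: .
    override-pull: |
      craftctl default
      # extra step after the default pull, e.g. patching sources
      patch -p1 < "$CRAFT_PROJECT_DIR/fix-build.patch"
    override-build: |
      # replace the default build entirely
      make all PREFIX="$CRAFT_PART_INSTALL"
    override-stage: |
      craftctl default
```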
Here, multiple stages of the lifecycle are overridden using `override-build`, `override-pull` and `override-stage`, and we see `craftctl default` for the first time, which instructs snapcraft to do whatever it would have done prior to being overridden, while allowing the developer to provide additional steps either before or after the default actions.
Isolated build environments #
Even once a recipe for building software is defined, preparing machines to build software can be painful. Different major versions of the same OS might have varying package availability, your team might run completely different operating systems, and you might have limited image availability in your CI environment.
The crafts solve this with build “backends”. Currently the crafts can use LXD or Multipass to create isolated build environments, which works nicely on Linux, macOS and Windows. This functionality is handled automatically by the crafts through the `craft-providers` library, which provides uniform interfaces for creating build environments, configuring base images and executing builds.
This means that if you can run `snapcraft pack` on your machine, your teammates can also run the same command without worrying about installing the right dependencies or polluting their machines with software and temporary files that might result from the build.
One of my favourite features of this setup is the ability to drop into a shell inside the build environment automatically under a few different conditions:
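With `snapcraft`, for example, the relevant flags look like this:

```bash
snapcraft pack --shell          # open a shell in the build environment instead of packing
snapcraft pack --shell-after    # pack, then open a shell for inspection
snapcraft pack --debug          # open a shell only if the build fails
```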
This makes troubleshooting a failing build much simpler, while allowing the developer to maintain a clean separation between the build environment and their local machine. Should the build environment ever become polluted, or otherwise difficult to work with, you can always start from a clean slate with `snapcraft|rockcraft|charmcraft clean`. Each build machine is constructed from a cached `build-base`, which contains all the baseline packages required by the craft - so recreating the build environment for a specific package only requires that base to be cloned and augmented with project-specific concerns, making the process faster.
Saving space #
When packaging any kind of software, a common concern is the size of the artifact. This might be because you’re building an OCI image that is pulled thousands of times a day as part of a major SaaS deployment, or maybe it’s a Snap for an embedded device running Ubuntu Core with limited flash storage. In the container world, “distroless” became a popular way to solve this problem - essentially popularising the practice of shipping the barest minimum in a container image, eschewing much of the traditional Unix FHS.
The parts mechanism has provided a way of “filtering” what is staged or primed into a final artifact from the start, which already gave developers autonomy to choose exactly what went into their builds.
In addition to this, Canonical built “chisel”, which extends the distroless concept beyond containers to any kind of artifact. With `chisel`, developers can slice out just the binaries, libraries, and configuration files they need from the Ubuntu Archive, enabling ultra-small packages without losing the robustness of Ubuntu’s ecosystem.
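To sketch what that looks like in practice (the release and slice names here are illustrative), `chisel cut` extracts named slices into a target root filesystem:

```bash
# copy just the libc runtime libraries and CA certificates into ./rootfs
chisel cut --release ubuntu-24.04 --root ./rootfs \
    libc6_libs \
    ca-certificates_data
```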
We later launched Chiseled JRE containers, and there are numerous other Rocks that use `chisel` to strike a balance between shipping tiny container images and benefiting from the huge selection and quality of software in the Ubuntu Archive.
Because the crafts are all built on a common platform, they now all have the ability to use “slices” from chisel-releases, which enables a greater range of use cases where artifact size is a primary concern. Slices are community-maintained, and specified in simple-to-understand YAML files. You can see the list of available slices for the most recent Ubuntu release (25.04 Plucky Puffin) on GitHub, and further documentation on slices and how they’re used in the Chisel docs.
Multi-architecture builds #
Ubuntu supports six major architectures at the time of writing (`amd64`, `arm64`, `armhf`, `ppc64el`, `s390x`, `riscv64`), and all of our crafts have first-class support for each of them. This functionality is provided primarily by the craft-platforms library, and supported by the craft-grammar library, which enables more complex definitions where builds may have different steps or requirements for different architectures.
At a high-level, each artifact defines which architectures or platforms it is built for, and which it is built on. These are often, but not always, the same. For example:
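In its simplest form, the declaration is a bare platform name:

```yaml
platforms:
  amd64:
```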
This is shorthand for “build the project on `amd64`, for `amd64`”, but in a different example taken from a `charmcraft.yaml`:
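In current `charmcraft` syntax, such a declaration looks something like this sketch (the platform label is illustrative):

```yaml
platforms:
  all-arches:
    build-on: [amd64]
    build-for: [all]
```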
In this case the software is built on `amd64`, but can run on any of the supported architectures - this can happen with pure-Python wheels, `bash` scripts and other interpreted languages which don’t link platform-specific libraries.
In some build processes, the steps or dependencies might differ per architecture, which is where `craft-grammar` comes in, enabling expressions such as the following (taken from GitHub):
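For example, a part might pull in different build packages depending on the architecture being built on - a hedged sketch, with illustrative package names:

```yaml
parts:
  my-part:
    build-packages:
      - on amd64:
          - gcc-multilib
      - else:
          - gcc
```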
Being able to define how to build on different architectures is only half of the battle, though. It’s one thing to define how to build software on an `s390x` machine, but few developers have mainframes handy to actually run the build! This is where the crafts’ `remote-build` capability comes in. The `remote-build` command sends builds to Canonical’s build farm, which has native support for all of Ubuntu’s supported architectures. It’s built into all of our crafts, and is triggered with `snapcraft remote-build`, `rockcraft remote-build`, etc.
Remote builds are a lifeline for publishers and communities who need to reach a larger audience, but can’t necessarily get their own build farm together. One example of this is Snapcrafters, a community-driven organisation that packages popular software as Snaps, who use `remote-build` to drive multi-architecture builds from GitHub Actions as part of their publishing workflow (as seen here and here for example).
Unified testing framework #
Testing is often the missing piece in build tools: developers are forced to rely on separate CI systems or ad-hoc scripts to verify their artifacts. To close this gap, we’re introducing a unified `test` sub-command in the crafts.
We recently added the `test` sub-command to our crafts as an experimental (for now!) feature. Under the hood, `craft test` will introduce a new lifecycle stage (`TEST`). This enables packagers of any artifact type to specify how that artifact should be tested using a common framework across artifact types.
Craft’s testing capability is powered by spread, a convenient full-system task distribution system. Spread was built to simplify the massive number of integration tests run for the snapd project. It enables developers to specify tests in a simple language, and distribute them concurrently to any infrastructure they have available.
This enables a developer to define tests and test infrastructure, and makes it trivial to run the same tests locally, or remotely on cloud infrastructure. This can really speed up the development process: instead of waiting on CI runners to spin up and test their code while iterating, developers can run the very same integration tests locally using `craft test`.
There are lots of fine details to `spread`, and the team is working on artifact-specific abstractions for the crafts that will make testing delightful. Imagine maintaining the Snap for a GUI application, and being able to enact the following workflow:
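Such a workflow might look as simple as this sketch (the `test` sub-command is still experimental, so the exact invocation may change):

```bash
snapcraft pack    # build the snap
snapcraft test    # run its spread suite, e.g. in a headless graphical VM
```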
By integrating a common testing tool into the build tooling, the Starcraft team will be able to curate unique testing experiences for each kind of artifact. A snap might need a headless graphical VM, whereas an OCI image simply requires a container runtime, but the `spread` underpinnings allow a common test-definition language for each.
There are a couple of examples of this in the wild already:
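For a flavour of what a spread test definition looks like, here is a minimal, illustrative `task.yaml` - not one of the real definitions linked below:

```yaml
summary: Install the snap and check the binary runs
execute: |
  snap install --dangerous ./my-app_*.snap
  my-app --version
```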
The test above is powered by this spread.yaml, and this test definition. With a little bit of work, it’s also possible to integrate `spread` with GitHub matrix actions, giving you one GitHub job per `spread` test - as seen here.
You can see a similar example in our PostgreSQL Snap test suite, and we’ll be adding more and more of this kind of test across our Rock, Snap, Charm, Image and Deb portfolio.
There is work to do, but I’m really excited about bringing a common testing framework to the crafts which should make the testing of all kinds of artifacts more consistent and easier to integrate across teams and systems.
Crafting the crafts #
As the portfolio expanded from `snapcraft`, to `charmcraft`, to `rockcraft`, and is now expanding further to `debcraft` and `imagecraft`, it was clear that we’d need a way to make it easy to build crafts for different artifacts, while being rigorous about consistency across the tools. A couple of years ago, the team built the craft-application base library, which now forms the foundation of all our crafts.
The `craft-application` library combines many of the existing libraries that were in use across the crafts (listed below), providing a consistent base upon which artifact-specific logic can be built. This allows craft developers to spend less time implementing CLI details, `parts` lifecycles and store interactions, and more time curating a great experience for the maintainers of their artifact type.
For the curious, `craft-application` builds upon the following libraries:
- craft-archives: manages interactions with `apt` package repositories
- craft-cli: a CLI client builder that follows Canonical’s CLI guidelines
- craft-parts: obtains, processes, and organizes data sources into deployment-ready filesystems
- craft-grammar: an advanced description grammar for parts
- craft-providers: uniform interfaces for instantiating and executing builds in a variety of target environments
- craft-platforms: manages target platforms and architectures for craft applications
- craft-store: manages interactions with Canonical’s software stores
- craft-artifacts: packs artifacts for craft applications
Examples and docs #
Before I leave you, I wanted to reference a few `*craft.yaml` examples, and link to the documentation for each of the crafts, where you’ll find the canonical (little c!) truth on each tool.
You can find documentation for the crafts below:
And some example recipes:
- Snap: icloudpd - snapcraft.yaml
- Snap: parca-agent - snapcraft.yaml
- Snap: signal-desktop - snapcraft.yaml
- Charm: ubuntu-manpages-operator - charmcraft.yaml
- Rock: grafana - rockcraft.yaml
- Rock: temporal-server - rockcraft.yaml
Summary #
The craft ecosystem provides developers with a rigorous, consistent and pleasant experience for building many kinds of artifacts. At the moment, we support Snaps, Rocks and Charms, but we’re actively developing crafts for Debian packages, cloud images and more. The basic build process, `parts` ecosystem and foundations of the crafts are “battle tested” at this point, and I’m excited to see how the experimental `craft test` commands shape up across the crafts.
One of the killer features of the crafts is the ability to reuse part definitions across different artifacts - which makes the payoff for learning the `parts` language very high - it’s a skill you’ll be able to use to build Snaps, Rocks, Charms, VM images and soon Debs!
If I look at ecosystems like Debian, where tooling like `autopkgtest` is the standard, I think `debcraft test` will offer an intuitive entrypoint and encourage more testing, and the same is true of Snaps, both graphical and command-line.
That’s all for now!