<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Canonical on Jon Seager</title><link>https://jnsgr.uk/tags/canonical/</link><description>Recent content in Canonical on Jon Seager</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Thu, 26 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://jnsgr.uk/tags/canonical/index.xml" rel="self" type="application/rss+xml"/><item><title>ntpd-rs: it's about time!</title><link>https://jnsgr.uk/2026/03/ntpd-rs-its-about-time/</link><pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2026/03/ntpd-rs-its-about-time/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/ntpd-rs-its-about-time/79154" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I am thrilled to announce the next target in our campaign to replace core system utilities with memory-safe Rust rewrites in Ubuntu. In upcoming releases, Ubuntu will be adopting &lt;a href="https://trifectatech.org/projects/ntpd-rs/" target="_blank" rel="noreferrer"&gt;ntpd-rs&lt;/a&gt; as the default time synchronization client and server, eventually replacing &lt;a href="https://chrony-project.org/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;chrony&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://www.linuxptp.org/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;linuxptp&lt;/code&gt;&lt;/a&gt; and with any luck, &lt;a href="https://gpsd.io/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;gpsd&lt;/code&gt;&lt;/a&gt; for time syncing use-cases.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://trifectatech.org/projects/ntpd-rs/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;ntpd-rs&lt;/code&gt;&lt;/a&gt; is a full-featured implementation of the Network Time Protocol (NTP), written entirely in Rust. Maintained by the Trifecta Tech Foundation as part of &lt;a href="https://github.com/pendulum-project" target="_blank" rel="noreferrer"&gt;Project Pendulum&lt;/a&gt;, &lt;code&gt;ntpd-rs&lt;/code&gt; places a strong focus on security, stability, and memory safety.&lt;/p&gt;
&lt;p&gt;To deliver on this goal, we&amp;rsquo;re building on our partnership with the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation&lt;/a&gt;, the team behind &lt;a href="https://trifectatech.org/projects/sudo-rs/" target="_blank" rel="noreferrer"&gt;sudo-rs&lt;/a&gt;, &lt;a href="https://trifectatech.org/projects/zlib-rs/" target="_blank" rel="noreferrer"&gt;zlib-rs&lt;/a&gt; and more. We will be funding the Trifecta Tech Foundation to build new features, enhance security isolation, and ultimately deliver a unified, memory-safe time synchronization utility for the Linux ecosystem. This work meshes well with the Trifecta Tech Foundation&amp;rsquo;s goal of improving the security of time synchronization everywhere.&lt;/p&gt;
&lt;h2 id="ntp-nts-and-ptp" class="relative group"&gt;NTP, NTS, and PTP &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#ntp-nts-and-ptp" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Before diving into the mechanics and reasoning behind the transition, I wanted to give some background on the protocols at play, and the problems we&amp;rsquo;re hoping to solve. Keeping accurate time is a critical system function, not least because it involves constant interaction with the internet and forms the basis for cryptographic verification in protocols such as Transport Layer Security (TLS).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NTP (Network Time Protocol)&lt;/strong&gt; is the foundational protocol that most operating systems implement to accurately determine the current time from a network source.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NTS (Network Time Security)&lt;/strong&gt; is to NTP what HTTPS is to HTTP. Historically, the Network Time Protocol was used unencrypted, like many of the early web protocols. NTS introduces cryptographic security to time synchronization, ensuring that bad actors cannot intercept or spoof time data. We made NTS the default out of the box in Ubuntu 25.10, which we accomplished by migrating away from &lt;code&gt;ntpd&lt;/code&gt; to &lt;code&gt;chrony&lt;/code&gt; as the default time-syncing implementation.&lt;/p&gt;
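&lt;p&gt;For a flavour of what this looks like in practice, here is a minimal sketch of an &lt;code&gt;ntpd-rs&lt;/code&gt; source configuration with NTS enabled. The file path and server addresses are illustrative assumptions, not Ubuntu defaults:&lt;/p&gt;

```toml
# /etc/ntpd-rs/ntp.toml -- minimal sketch; addresses are placeholders

# A plain (unauthenticated) NTP pool as a fallback
[[source]]
mode = "pool"
address = "ntpd-rs.pool.ntp.org"
count = 4

# An NTS-secured source: keys are exchanged over TLS, after which
# NTP packets are cryptographically authenticated
[[source]]
mode = "nts"
address = "time.cloudflare.com"
```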
&lt;p&gt;&lt;strong&gt;PTP (Precision Time Protocol)&lt;/strong&gt; is used for systems that require sub-microsecond synchronization. While the precision offered by a standard NTP deployment is sufficient for general-purpose computing, PTP is often used for complex, specialized deployments like telecommunications networks, power grids, and automotive applications.&lt;/p&gt;
&lt;h2 id="proven-at-scale" class="relative group"&gt;Proven at Scale &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#proven-at-scale" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Transitioning core utilities in Ubuntu comes with a responsibility to ensure that replacements are of high quality, resilient and offer something to the platform. We may be the first major Linux distribution to adopt ntpd-rs by default, but we aren&amp;rsquo;t the first to recognize the readiness of &lt;code&gt;ntpd-rs&lt;/code&gt; - it has already been &lt;a href="https://letsencrypt.org/2024/06/24/ntpd-rs-deployment" target="_blank" rel="noreferrer"&gt;proven at scale by Let&amp;rsquo;s Encrypt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While Let&amp;rsquo;s Encrypt&amp;rsquo;s core Certificate Authority software has always been written in memory-safe Go, their server operating systems and network infrastructure historically relied on memory-unsafe languages like C and C++, which routinely led to vulnerabilities requiring patching.&lt;/p&gt;
&lt;p&gt;Following extensive development, &lt;code&gt;ntpd-rs&lt;/code&gt; was deployed to Let&amp;rsquo;s Encrypt&amp;rsquo;s staging environment in April 2024, and rolled out to full production by June 2024, marking a major milestone for the project.&lt;/p&gt;
&lt;p&gt;The fact that one of the world&amp;rsquo;s most prolific and security-conscious certificate authorities trusts &lt;code&gt;ntpd-rs&lt;/code&gt; to keep time across its fleet should provide us, and our enterprise customers, with tremendous confidence in its resilience and suitability.&lt;/p&gt;
&lt;h2 id="a-single-memory-safe-utility-for-ntp-and-ptp" class="relative group"&gt;A Single, Memory-Safe Utility for NTP and PTP &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#a-single-memory-safe-utility-for-ntp-and-ptp" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;We want to provide a single utility for configuring both NTP/NTS and Precision Time Protocol (PTP) on Linux. The Trifecta Tech Foundation is concurrently developing &lt;a href="https://trifectatech.org/projects/statime/" target="_blank" rel="noreferrer"&gt;Statime&lt;/a&gt;, a memory-safe PTP implementation that delivers synchronization performance on par with &lt;code&gt;linuxptp&lt;/code&gt;, but with the goal of being easier to configure and use.&lt;/p&gt;
&lt;p&gt;The goal is to integrate Statime&amp;rsquo;s PTP capabilities directly into &lt;code&gt;ntpd-rs&lt;/code&gt;, improving the user experience by bringing all time synchronization concerns into one utility with common configuration and usage patterns, obviating the need for complex manual configuration (and troubleshooting) that users of &lt;code&gt;linuxptp&lt;/code&gt; may be familiar with.&lt;/p&gt;
&lt;h2 id="timelines-and-goals" class="relative group"&gt;Timelines and Goals &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#timelines-and-goals" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;As with our transition to &lt;code&gt;sudo-rs&lt;/code&gt; and &lt;code&gt;uutils coreutils&lt;/code&gt;, leading the mainstream adoption of foundational system utilities comes with responsibility. We want to ensure that &lt;code&gt;ntpd-rs&lt;/code&gt; matches the security isolation and performance standards our users expect from &lt;code&gt;chrony&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Canonical is funding the Trifecta Tech Foundation&amp;rsquo;s development efforts toward these goals over the coming cycles. This work will take place between July 2026 and January 2027 in several major milestones. Our current timeline and targeted goals are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ubuntu 26.10:&lt;/strong&gt; If all goes well, we aim to land the latest version of &lt;code&gt;ntpd-rs&lt;/code&gt; in the archive, making it available to test.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ubuntu 27.04:&lt;/strong&gt; By 27.04, &lt;code&gt;ntpd-rs&lt;/code&gt; should have integrated &lt;code&gt;statime&lt;/code&gt;, and we will ship the unified client/server binary for NTP, NTS and PTP in Ubuntu by default, with the aim of providing a smooth migration path for those who already manage complex &lt;code&gt;chrony&lt;/code&gt; configs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To get us there, the Trifecta Tech Foundation will be working on the following items:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Feature Parity &amp;amp; Hardware Support:&lt;/strong&gt; Adding &lt;code&gt;gpsd&lt;/code&gt; IP socket support, multi-threading support for NTP servers, and support for multi-homed servers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security &amp;amp; Isolation:&lt;/strong&gt; &lt;code&gt;chrony&lt;/code&gt; is isolated via AppArmor and seccomp. We&amp;rsquo;ll be working on robust AppArmor and seccomp profiles for &lt;code&gt;ntpd-rs&lt;/code&gt; to ensure we don&amp;rsquo;t buy memory safety at the cost of system-level privilege boundaries. We are also ensuring &lt;code&gt;rustls&lt;/code&gt; can use &lt;code&gt;openssl&lt;/code&gt; as a crypto provider to satisfy strict corporate cryptography policies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PTP &amp;amp; Automotive Profiles:&lt;/strong&gt; Adding support for gPTP, which will allow us to support complex deployments like the Automotive profile directly from &lt;code&gt;ntpd-rs&lt;/code&gt; (via Statime). Additionally, experimental support for the proposed Client-Server PTP protocol (CSPTP, IEEE 1588.1) will be added.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Benchmarking &amp;amp; Testing:&lt;/strong&gt; Comprehensive benchmarking of long-term memory, CPU usage, and synchronization performance against &lt;code&gt;chrony&lt;/code&gt; to give our cloud partners and enterprise users complete confidence in the transition.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;User experience:&lt;/strong&gt; Logging improvements and enhancements to configuration that help users tune the time synchronization target to optimize network usage, as well as improvements to the &lt;code&gt;ntp-cli&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
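&lt;p&gt;To make the isolation goal concrete, a confinement profile for the &lt;code&gt;ntpd-rs&lt;/code&gt; daemon might look something like the sketch below. The binary name, paths, and rules here are illustrative assumptions, not the profile we will ship:&lt;/p&gt;

```text
# Hypothetical AppArmor sketch for the ntpd-rs daemon.
# All paths and abstractions are illustrative assumptions only.
profile ntp-daemon /usr/bin/ntp-daemon {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  # Stepping or slewing the system clock requires CAP_SYS_TIME
  capability sys_time,

  # NTP traffic over UDP, plus the NTS key exchange over TLS/TCP
  network inet dgram,
  network inet6 dgram,
  network inet stream,
  network inet6 stream,

  /etc/ntpd-rs/ntp.toml r,
  /var/lib/ntpd-rs/** rw,
}
```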
&lt;h2 id="about-the-trifecta-tech-foundation" class="relative group"&gt;About the Trifecta Tech Foundation &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#about-the-trifecta-tech-foundation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Trifecta Tech Foundation is a non-profit and a Public Benefit Organisation (501(c)(3) equivalent) that creates open-source building blocks for critical infrastructure software. Their initiatives on data compression, time synchronization, and privilege boundary, impact the digital security of millions of people. If you&amp;rsquo;d like to support their work, please contact them via &lt;a href="https://trifectatech.org/support" target="_blank" rel="noreferrer"&gt;https://trifectatech.org/support&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I am really excited to deepen our already productive relationship with the Trifecta Tech Foundation to make these transitions viable for the wider ecosystem. We&amp;rsquo;ll be working hard on testing and integration to ensure seamless migration paths, and heavily document the changes ahead of the 26.10 and 27.04 releases.&lt;/p&gt;
&lt;p&gt;Stay tuned!&lt;/p&gt;</description></item><item><title>An update on upki</title><link>https://jnsgr.uk/2026/02/upki-update/</link><pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2026/02/upki-update/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/an-update-on-upki/77063" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Last year, I &lt;a href="https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/" target="_blank" rel="noreferrer"&gt;announced&lt;/a&gt; that Canonical had begun supporting the development of &lt;a href="https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/" target="_blank" rel="noreferrer"&gt;upki&lt;/a&gt;, a project that will bring browser-grade Public Key Infrastructure (PKI) to Linux. Since then, development has been moving at pace thanks to the tireless work of &lt;a href="https://dirkjan.ochtman.nl/" target="_blank" rel="noreferrer"&gt;Dirkjan&lt;/a&gt; and &lt;a href="https://jbp.io/" target="_blank" rel="noreferrer"&gt;Joe&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this post, I’ll explore the progress we’ve made, how you can try an early version, and where we’re going next.&lt;/p&gt;
&lt;h3 id="architecture--progress" class="relative group"&gt;Architecture &amp;amp; Progress &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#architecture--progress" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;As a reminder, upki’s primary goal is to provide a reliable, privacy-preserving, and efficient certificate revocation mechanism for Linux system utilities, package managers, and language runtimes. The solution is built around &lt;a href="https://blog.mozilla.org/security/2020/01/09/crlite-part-1-all-web-pki-revocations-compressed/" target="_blank" rel="noreferrer"&gt;CRLite&lt;/a&gt;, an efficient data format that compresses and distributes certificate revocation information at scale.&lt;/p&gt;
&lt;p&gt;The upki &lt;a href="https://github.com/rustls/upki" target="_blank" rel="noreferrer"&gt;repository&lt;/a&gt; is structured as a Cargo workspace containing five crates, each serving a distinct role:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;upki&lt;/code&gt;&lt;/strong&gt;: the core library and CLI tool. This crate contains the revocation query engine, the client-side sync logic for fetching filter updates, and the command-line interface. The revocation interface was originally embedded in the CLI, but has since been promoted into the library so that other Rust projects can use it directly as a dependency.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;upki-mirror&lt;/code&gt;&lt;/strong&gt;: the server-side mirroring tool. This binary fetches and validates CRLite filters from Mozilla&amp;rsquo;s infrastructure such that they can be served using a standard web server like &lt;code&gt;nginx&lt;/code&gt; or &lt;code&gt;apache&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;upki-ffi&lt;/code&gt;&lt;/strong&gt;: the C Foreign Function Interface. Built as a &lt;code&gt;cdylib&lt;/code&gt;, this crate uses &lt;a href="https://github.com/mozilla/cbindgen" target="_blank" rel="noreferrer"&gt;&lt;code&gt;cbindgen&lt;/code&gt;&lt;/a&gt; to auto-generate a &lt;code&gt;upki.h&lt;/code&gt; header file, exposing the revocation query API to C, C++, Go and any other language with C FFI support.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;rustls-upki&lt;/code&gt;&lt;/strong&gt;: an integration crate that wires upki&amp;rsquo;s revocation engine into &lt;a href="https://github.com/rustls/rustls" target="_blank" rel="noreferrer"&gt;rustls&lt;/a&gt;, enabling any Rust application using rustls to perform CRLite-backed revocation checks transparently.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;revoke-test&lt;/code&gt;&lt;/strong&gt;: testing infrastructure for validating revocation queries against known-revoked certificates.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The team recently released &lt;a href="https://github.com/rustls/upki/releases/tag/upki-0.1.0" target="_blank" rel="noreferrer"&gt;v0.1.0&lt;/a&gt;, which should help us to gather more feedback on the work we&amp;rsquo;ve done so far.&lt;/p&gt;
&lt;h3 id="how-to-try-it" class="relative group"&gt;How to try it &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#how-to-try-it" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;If you&amp;rsquo;d like to try the code in its current form, you&amp;rsquo;ll need to have a version of the Rust toolchain installed. The easiest way to do this on Ubuntu is &lt;a href="https://documentation.ubuntu.com/ubuntu-for-developers/howto/rust-setup/#installing-the-latest-rust-toolchain-using-rustup" target="_blank" rel="noreferrer"&gt;using the &lt;code&gt;rustup&lt;/code&gt; snap&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Ensure you have a C compiler in your PATH&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt install -y build-essential curl
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Install the rustup snap and get the stable toolchain&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap install --classic rustup
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rustup install stable
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Install upki&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cargo install upki
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.cargo/bin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Fetch revocation data. This will be done in the background&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# when installed through the distro in the future&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;upki fetch
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;That should be all you need to install the development version of &lt;code&gt;upki&lt;/code&gt;, and you can now use it to run a revocation check by piping certificate output from &lt;code&gt;curl&lt;/code&gt; into &lt;code&gt;upki&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -sw &lt;span class="s1"&gt;&amp;#39;%{certs}&amp;#39;&lt;/span&gt; https://google.com &lt;span class="p"&gt;|&lt;/span&gt; upki revocation check
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;NotRevoked
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Early documentation is available for the &lt;a href="https://docs.rs/upki-ffi/latest/upki/" target="_blank" rel="noreferrer"&gt;C FFI crate&lt;/a&gt; and the &lt;a href="https://docs.rs/upki/latest/upki/" target="_blank" rel="noreferrer"&gt;Rust crate&lt;/a&gt;, but if you&amp;rsquo;d like to explore, build the project from source, or contribute, the &lt;a href="https://github.com/rustls/upki" target="_blank" rel="noreferrer"&gt;repository&lt;/a&gt; is the best place to start. For an example of the C FFI interface in action, take a look at the &lt;a href="https://github.com/rustls/upki-go-demo" target="_blank" rel="noreferrer"&gt;upki-go-demo&lt;/a&gt; that Dirkjan published.&lt;/p&gt;
&lt;h3 id="next-steps" class="relative group"&gt;Next Steps &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#next-steps" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Now the foundational pieces are in place, our focus is shifting to external consumption, performance, and integration with the wider Linux ecosystem. In the coming days there should be an early &lt;code&gt;0.1.0&lt;/code&gt; binary release.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ll also be doing some performance benchmarking on the initial fetch and of the revocation checks themselves. Currently, each revocation check reads several CRLite filter files into memory. There may be quick wins to improve this, but we’ll benchmark first and see if it warrants optimisation at this time.&lt;/p&gt;
&lt;p&gt;We also need to deploy some production infrastructure for serving the CRLite filters. If you follow the steps above, you&amp;rsquo;ll be fetching from a pre-production web server hosted at &lt;a href="https://upki.rustls.dev" target="_blank" rel="noreferrer"&gt;https://upki.rustls.dev&lt;/a&gt;. We&amp;rsquo;ve built a &lt;a href="https://github.com/jnsgruk/upki-mirror-k8s-operator" target="_blank" rel="noreferrer"&gt;Juju charm&lt;/a&gt; for operating the CRLite mirror on Kubernetes. This charm packages the &lt;code&gt;upki-mirror&lt;/code&gt; binary in a &lt;a href="https://ubuntu.com/blog/combining-distroless-and-ubuntu-chiselled-containers" target="_blank" rel="noreferrer"&gt;chiselled Rock&lt;/a&gt;, and will be deployed into Canonical&amp;rsquo;s datacentres to serve CRLite data at &lt;a href="https://crlite.ubuntu.com/" target="_blank" rel="noreferrer"&gt;crlite.ubuntu.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Our Ubuntu Foundations team is also working on packaging the various upki components for inclusion in the Ubuntu archive, which will enable you to &lt;code&gt;apt install upki&lt;/code&gt; in the future, and also enable us to package and enable it by default in Ubuntu 26.10 and beyond.&lt;/p&gt;
&lt;h3 id="further-down-the-road" class="relative group"&gt;Further Down the Road &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#further-down-the-road" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;While the work above covers what&amp;rsquo;s immediately in front of us, there is scope to expand upki&amp;rsquo;s capabilities further. Two areas of interest are Certificate Transparency enforcement, and support for Merkle Tree Certificates.&lt;/p&gt;
&lt;h4 id="certificate-transparency-enforcement" class="relative group"&gt;Certificate Transparency Enforcement &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#certificate-transparency-enforcement" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;While upki&amp;rsquo;s initial focus is on revocation checking, the project also aims to eventually support &lt;a href="https://certificate.transparency.dev/" target="_blank" rel="noreferrer"&gt;Certificate Transparency&lt;/a&gt; (CT) enforcement. CT is a more modern security measure that relies upon a set of publicly auditable, append-only logs that record every TLS certificate issued by a Certificate Authority (CA). This prevents CAs from issuing fraudulent or erroneous certificates without a means for that fraudulent activity to be discovered - a problem that has &lt;a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/" target="_blank" rel="noreferrer"&gt;bitten organisations&lt;/a&gt; in the past.&lt;/p&gt;
&lt;p&gt;CT Enforcement would enable clients to refuse to establish a connection unless the server provides cryptographic proof that its certificate has been correctly logged. Browsers like Chrome and Firefox already enforce this, but the rest of the Linux ecosystem would need a tool such as upki to enable such functionality.&lt;/p&gt;
&lt;h4 id="intermediate-preloading" class="relative group"&gt;Intermediate Preloading &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#intermediate-preloading" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;A correctly configured TLS server should not only send its own certificate, but also the intermediate certificates needed to chain back to a trusted root. In practice, many servers omit the intermediate certificates, and because browsers have quietly worked around this for years, the misconfiguration often goes unnoticed.&lt;/p&gt;
&lt;p&gt;Firefox has been &lt;a href="https://blog.mozilla.org/security/2020/11/13/preloading-intermediate-ca-certificates-into-firefox/" target="_blank" rel="noreferrer"&gt;preloading all intermediates&lt;/a&gt; disclosed to the &lt;a href="https://www.ccadb.org/" target="_blank" rel="noreferrer"&gt;Common CA Database&lt;/a&gt; (CCADB) since Firefox 75, while Chrome and Edge will silently fetch missing intermediates using the Authority Information Access (AIA) extension in the server&amp;rsquo;s certificate. The result is that a broken certificate chain that works perfectly in every browser will produce an opaque &lt;code&gt;UNKNOWN_ISSUER&lt;/code&gt; error when accessed by Linux utilities like &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Because upki already maintains a regularly synced local data store, it&amp;rsquo;s well positioned to ship the known set of intermediates alongside the CRLite filters. This wouldn&amp;rsquo;t provide a security improvement so much as a usability improvement. It would also bring non-browser clients up to parity with browsers with respect to connection reliability. There is an additional privacy benefit too: rather than fetching a missing intermediate from the issuing CA (which discloses browsing activity to the CA), the intermediate is already present locally.&lt;/p&gt;
&lt;h4 id="merkle-tree-certificates" class="relative group"&gt;Merkle Tree Certificates &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#merkle-tree-certificates" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;Looking even further ahead, upki could support the next generation of web PKI by including support for &lt;a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/" target="_blank" rel="noreferrer"&gt;Merkle Tree Certificates (MTCs)&lt;/a&gt;. This is an area of active development in the IETF, with Cloudflare and Chrome recently &lt;a href="https://blog.cloudflare.com/bootstrap-mtc/" target="_blank" rel="noreferrer"&gt;announcing an experimental deployment&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The motivation for MTCs comes largely from the transition to &lt;a href="https://openquantumsafe.org/post-quantum-crypto.html" target="_blank" rel="noreferrer"&gt;Post-Quantum (PQ) cryptography&lt;/a&gt;. PQ signatures are significantly larger than their non-PQ counterparts. The signatures for &lt;a href="https://openquantumsafe.org/liboqs/algorithms/sig/ml-dsa.html" target="_blank" rel="noreferrer"&gt;ML-DSA-44&lt;/a&gt; are 2,420 bytes compared to 64 bytes for ECDSA-P256. A typical TLS handshake today involves multiple signatures and public keys across the certificate chain and CT proofs, which means a simple swap to PQ algorithms would add tens of kilobytes of overhead per connection and likely a noticeable increase in connection latency.&lt;/p&gt;
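&lt;p&gt;A back-of-the-envelope calculation makes this concrete. The sketch below assumes roughly five signatures in a typical handshake (leaf and intermediate certificate signatures, two embedded SCTs, and the CertificateVerify message); the exact count varies by deployment:&lt;/p&gt;

```shell
# Rough per-handshake signature overhead: ECDSA-P256 vs ML-DSA-44.
# The count of five signatures is an illustrative assumption.
sigs=5
ecdsa=64     # ECDSA-P256 signature size in bytes
mldsa=2420   # ML-DSA-44 signature size in bytes

echo "ECDSA-P256 total: $((sigs * ecdsa)) bytes"            # 320 bytes
echo "ML-DSA-44 total:  $((sigs * mldsa)) bytes"            # 12100 bytes
echo "Added overhead:   $((sigs * (mldsa - ecdsa))) bytes"  # 11780 bytes
```

&lt;p&gt;That is over ten kilobytes of additional signature data per connection, before accounting for the larger post-quantum public keys.&lt;/p&gt;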
&lt;p&gt;MTCs address this by rethinking how certificates are validated. Rather than transmitting a full certificate chain with multiple signatures, a Certificate Authority can batch certificates into a Merkle Tree and sign only the tree&amp;rsquo;s root hash. The client then receives just a single signature, a public key, and a compact Merkle tree inclusion proof that demonstrates the certificate&amp;rsquo;s presence in the batch. The signed tree heads can be distributed to clients out-of-band, meaning the per-handshake overhead is drastically reduced.&lt;/p&gt;
&lt;p&gt;Because upki already maintains a local data store that is regularly synced, it could cache tree head data alongside CRLite filters, thereby enabling the inclusion proofs sent during TLS handshakes to be even smaller. Rather than proving inclusion all the way from the leaf to the root, the server could send a &amp;ldquo;truncated&amp;rdquo; proof that starts partway up the tree, with the client computing the remainder from data it already has locally. There is a &lt;a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/" target="_blank" rel="noreferrer"&gt;TLS extension&lt;/a&gt; being developed to negotiate this.&lt;/p&gt;
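&lt;p&gt;To make the mechanism concrete, here is a minimal sketch of both a full and a truncated Merkle inclusion proof. It uses a toy four-leaf tree over stand-in byte strings; the hash domain separation and proof layout are illustrative conventions, not the draft&amp;rsquo;s actual wire format:&lt;/p&gt;

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def leaf_hash(cert):
    # Domain-separate leaves from internal nodes (CT-style convention).
    return h(b"\x00" + cert)

def node_hash(left, right):
    return h(b"\x01" + left + right)

# Build a tiny 4-leaf Merkle tree over stand-in "certificates".
certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
leaves = [leaf_hash(c) for c in certs]
n01 = node_hash(leaves[0], leaves[1])
n23 = node_hash(leaves[2], leaves[3])
root = node_hash(n01, n23)  # the CA signs only this root hash

def verify(leaf, proof, expected):
    """Walk from leaf towards the root, combining with each (sibling, side)."""
    acc = leaf
    for sibling, side in proof:
        acc = node_hash(sibling, acc) if side == "left" else node_hash(acc, sibling)
    return acc == expected

# Full inclusion proof for cert-c: sibling leaf-d, then subtree hash n01.
full_proof = [(leaves[3], "right"), (n01, "left")]
assert verify(leaves[2], full_proof, root)

# "Truncated" proof: if the client already caches the internal node n23
# locally, the server need only prove inclusion up to that cached node.
truncated_proof = [(leaves[3], "right")]
assert verify(leaves[2], truncated_proof, n23)
```

&lt;p&gt;Here the truncated proof is half the length of the full one; for the large batches a real CA would produce, a client that caches internal nodes near the root can shave several hash-sized siblings off every handshake.&lt;/p&gt;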
&lt;p&gt;The implementation of MTCs for TLS is still highly experimental. MTCs are not yet deployed in any browser, but upki will lay the groundwork for Linux system utilities to benefit from this evolution as the technology is adopted.&lt;/p&gt;
&lt;h3 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;In the few weeks since we announced upki, the core revocation engine has been established and is now functional, the CRLite mirroring tool is working and a production deployment in Canonical&amp;rsquo;s datacentres is ongoing. We&amp;rsquo;re now preparing for an alpha release and remain on track for an opt-in preview for Ubuntu 26.04 LTS.&lt;/p&gt;
&lt;p&gt;Beyond revocation, we&amp;rsquo;re keeping a close eye on the evolving PKI landscape and particularly CT enforcement and Merkle Tree Certificates.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to extend my thanks again to &lt;a href="https://dirkjan.ochtman.nl/" target="_blank" rel="noreferrer"&gt;Dirkjan&lt;/a&gt; and &lt;a href="https://jbp.io/" target="_blank" rel="noreferrer"&gt;Joe&lt;/a&gt; for their continued collaboration on this work, and the utmost professionalism they&amp;rsquo;ve demonstrated throughout.&lt;/p&gt;</description></item><item><title>Developing with AI on Ubuntu</title><link>https://jnsgr.uk/2026/01/developing-with-ai-on-ubuntu/</link><pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2026/01/developing-with-ai-on-ubuntu/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/developing-with-ai-on-ubuntu/75299" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;AI-assisted tooling is becoming more and more common in the workflows of engineers at all experience levels. As I see it, our challenge is one of consideration, enablement and constraint. We must enable those who opt-in to safely and responsibly harness the power of these tools, while respecting those who do not wish to have their platform defined or overwhelmed by this class of software.&lt;/p&gt;
&lt;p&gt;The use of AI is a divisive topic among the tech community. I find myself a little in both camps, somewhere between sceptic and advocate. While I&amp;rsquo;m quick to acknowledge the negative impacts that the use of LLMs &lt;em&gt;can have&lt;/em&gt; on open source projects, I&amp;rsquo;m also surrounded by examples where it has been used responsibly to great effect.&lt;/p&gt;
&lt;p&gt;Examples of this include &lt;a href="https://filippo.io" target="_blank" rel="noreferrer"&gt;Filippo&lt;/a&gt;&amp;rsquo;s article &lt;a href="https://words.filippo.io/claude-debugging/" target="_blank" rel="noreferrer"&gt;debugging low-level cryptography with Claude Code&lt;/a&gt;, &lt;a href="https://mitchellh.com" target="_blank" rel="noreferrer"&gt;Mitchell&lt;/a&gt;&amp;rsquo;s article on &lt;a href="https://mitchellh.com/writing/non-trivial-vibing" target="_blank" rel="noreferrer"&gt;Vibing a Non-Trivial Ghostty Feature&lt;/a&gt;, and &lt;a href="https://github.com/crawshaw" target="_blank" rel="noreferrer"&gt;David&lt;/a&gt;&amp;rsquo;s article &lt;a href="https://crawshaw.io/blog/programming-with-agents" target="_blank" rel="noreferrer"&gt;How I Program with Agents&lt;/a&gt;. These articles come from engineers with proven expertise in careful, precise software engineering, yet they share an important sentiment: AI-assisted tools can be a remarkable force-multiplier when used &lt;em&gt;in conjunction&lt;/em&gt; with their lived experience, but care must still be taken to avoid poor outcomes.&lt;/p&gt;
&lt;p&gt;The aim of this post is not to convince you to use AI in your work, but rather to introduce the elements of Ubuntu that make it a first-class platform for safe, efficient experimentation and development. My goals for AI and Ubuntu are currently focused on enabling those who want to develop responsibly with AI tools, without negatively impacting the experience of those who&amp;rsquo;d prefer not to opt-in.&lt;/p&gt;
&lt;h3 id="hardware--drivers" class="relative group"&gt;Hardware &amp;amp; Drivers &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#hardware--drivers" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;AI-specific silicon is moving just as fast as AI software tooling, and without constant work to integrate drivers and userspace tools into Ubuntu, it would be impossible to efficiently utilise this specialised hardware.&lt;/p&gt;
&lt;p&gt;Last year we announced that we will ship both &lt;a href="https://canonical.com/blog/canonical-announces-it-will-support-and-distribute-nvidia-cuda-in-ubuntu" target="_blank" rel="noreferrer"&gt;NVIDIA&amp;rsquo;s CUDA&lt;/a&gt; and &lt;a href="https://canonical.com/blog/canonical-amd-rocm-ai-ml-hpc-libraries" target="_blank" rel="noreferrer"&gt;AMD&amp;rsquo;s ROCm&lt;/a&gt; in the Ubuntu archive for Ubuntu 26.04 LTS, in addition to our previous work on &lt;a href="https://snapcraft.io/publisher/openvino" target="_blank" rel="noreferrer"&gt;OpenVINO&lt;/a&gt;. This will make installing the latest drivers and toolkits easier and more secure, with no third-party software repositories. Distributing this software as part of Ubuntu enables us to be proactive in the delivery of security updates and the demonstration of provenance.&lt;/p&gt;
&lt;p&gt;Our work is not limited to AMD and NVIDIA; we recently &lt;a href="https://canonical.com/blog/ubuntu-ga-for-qualcomm-dragonwing" target="_blank" rel="noreferrer"&gt;announced&lt;/a&gt; support for Qualcomm&amp;rsquo;s &lt;a href="https://www.qualcomm.com/dragonwing" target="_blank" rel="noreferrer"&gt;Dragonwing&lt;/a&gt; platforms and others. You can read more about our silicon partner projects &lt;a href="https://canonical.com/partners/silicon" target="_blank" rel="noreferrer"&gt;on our website&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="inference-snaps" class="relative group"&gt;Inference Snaps &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#inference-snaps" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;At the &lt;a href="https://ubuntu.com/summit" target="_blank" rel="noreferrer"&gt;Ubuntu Summit 25.10&lt;/a&gt;, we &lt;a href="https://canonical.com/blog/canonical-releases-inference-snaps" target="_blank" rel="noreferrer"&gt;released&lt;/a&gt; &amp;ldquo;Inference Snaps&amp;rdquo; into the wild, which provide a hassle-free mechanism for obtaining the “famous model” you want to work with, but automatically receive a version of that model which is optimised for the silicon in your machine. This removes the need to spend hours on &lt;a href="https://huggingface.co/" target="_blank" rel="noreferrer"&gt;HuggingFace&lt;/a&gt; identifying the correct model to download that matches with your hardware, and obviates the need for in-depth understanding of model quantisation and tuning when getting started.&lt;/p&gt;
&lt;p&gt;Each of our inference snaps provides a consistent experience: you need only learn the basics once, but can apply those skills to different models as they emerge, whether you&amp;rsquo;re on a laptop or a server.&lt;/p&gt;
&lt;p&gt;At the time of writing, we&amp;rsquo;ve published &lt;code&gt;beta&lt;/code&gt; quality snaps for &lt;a href="https://snapcraft.io/qwen-vl" target="_blank" rel="noreferrer"&gt;qwen-vl&lt;/a&gt;, &lt;a href="https://snapcraft.io/deepseek-r1" target="_blank" rel="noreferrer"&gt;deepseek-r1&lt;/a&gt; and &lt;a href="https://snapcraft.io/gemma3" target="_blank" rel="noreferrer"&gt;gemma3&lt;/a&gt;. You can find a current list of snaps &lt;a href="https://documentation.ubuntu.com/inference-snaps/reference/snaps/" target="_blank" rel="noreferrer"&gt;in the documentation&lt;/a&gt;, along with the silicon-optimised variants.&lt;/p&gt;
&lt;h3 id="sandboxing-agents" class="relative group"&gt;Sandboxing Agents &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sandboxing-agents" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;While many start their journey in a web browser chatting to &lt;a href="https://chat.com" target="_blank" rel="noreferrer"&gt;ChatGPT&lt;/a&gt;, &lt;a href="https://claude.ai" target="_blank" rel="noreferrer"&gt;Claude&lt;/a&gt;, &lt;a href="https://gemini.google.com/app" target="_blank" rel="noreferrer"&gt;Gemini&lt;/a&gt;, &lt;a href="https://perplexity.ai" target="_blank" rel="noreferrer"&gt;Perplexity&lt;/a&gt; or one of the myriad of alternatives, many developers will find &amp;ldquo;agentic&amp;rdquo; tools such as &lt;a href="https://github.com/features/copilot" target="_blank" rel="noreferrer"&gt;Copilot&lt;/a&gt;, &lt;a href="https://openai.com/codex/" target="_blank" rel="noreferrer"&gt;Codex&lt;/a&gt;, &lt;a href="https://claude.com/product/claude-code" target="_blank" rel="noreferrer"&gt;Claude Code&lt;/a&gt; or &lt;a href="https://ampcode.com/" target="_blank" rel="noreferrer"&gt;Amp&lt;/a&gt; quite attractive. In my experience, agents are a clear level-up in an LLM&amp;rsquo;s capability for developers, but they can still make poor decisions and are generally safer to run in sandboxed environment at the time of writing.&lt;/p&gt;
&lt;p&gt;Where a traditional chat-based AI tool responds reactively to user prompts within a single conversation, an agent operates (semi-)autonomously to pursue goals. It perceives its environment, plans, makes decisions and can call out to external tools and services to achieve those goals. If you grant permission, an agent can read and understand your code, implement features, troubleshoot bugs, optimise performance and carry out many other tasks. The catch is that they often need &lt;em&gt;access to your system&lt;/em&gt; - whether that be to modify files or run commands.&lt;/p&gt;
&lt;p&gt;Issues such as accidental file deletion or the inclusion of a spurious (and potentially compromised) dependency are an inevitable failure mode of the current generation of agents due to how they&amp;rsquo;re trained (see the &lt;a href="https://www.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wiped/" target="_blank" rel="noreferrer"&gt;Reddit post&lt;/a&gt; about Claude Code deleting a user&amp;rsquo;s home directory).&lt;/p&gt;
&lt;h4 id="my-agent-sandboxes-itself" class="relative group"&gt;My agent sandboxes itself! &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#my-agent-sandboxes-itself" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;Some of you will be reading this wondering why additional sandboxing is required, since many of the popular agents &lt;a href="https://code.claude.com/docs/en/sandboxing" target="_blank" rel="noreferrer"&gt;advertise their own sandboxing&lt;/a&gt;. The fact that some agents include some measures to protect the user&amp;rsquo;s machine is of course a good thing. The touted benefits include filesystem isolation by restricting the agent to a specific directory, or prompting for approval before modifying files. Some agents also include network sandboxing to restrict network access to a list of approved domains, or by using a custom proxy to impose rules on outbound traffic.&lt;/p&gt;
&lt;p&gt;On Linux, these agent-imposed sandboxes are often implemented with &lt;a href="https://github.com/containers/bubblewrap" target="_blank" rel="noreferrer"&gt;bubblewrap&lt;/a&gt;, which is &amp;ldquo;a tool for constructing sandbox environments&amp;rdquo;, but note that the upstream project&amp;rsquo;s README includes &lt;a href="https://github.com/containers/bubblewrap#sandbox-security" target="_blank" rel="noreferrer"&gt;a section&lt;/a&gt; which states that it is &lt;em&gt;not&lt;/em&gt; a &amp;ldquo;ready-made sandbox with a specific security policy&amp;rdquo;. &lt;code&gt;bubblewrap&lt;/code&gt; is a relatively low-level tool that must be given its configuration, which in this case is provided &lt;em&gt;by the agent&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The limitation of these tools is the shared kernel - a severe kernel exploit could enable an agent to escape its sandbox. Such vulnerabilities are rare, but even when the sandboxing technologies do their job, agents often run in the context of the user&amp;rsquo;s session, meaning they inherit environment variables which could contain sensitive information. These sandboxes are also agent-specific: Claude Code&amp;rsquo;s sandboxing won&amp;rsquo;t help you if you&amp;rsquo;re using &lt;a href="https://cursor.com/" target="_blank" rel="noreferrer"&gt;Cursor&lt;/a&gt; or &lt;a href="https://antigravity.google/" target="_blank" rel="noreferrer"&gt;Antigravity&lt;/a&gt;.&lt;/p&gt;
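&lt;p&gt;One mitigation that works regardless of which agent you use is to launch it with an allow-listed environment. Here is a minimal sketch in Python (the variable names and allow-list are hypothetical, and &lt;code&gt;env -i&lt;/code&gt; achieves something similar from a shell):&lt;/p&gt;

```python
import os
import subprocess

# Pass through only the variables the tool genuinely needs, rather than
# letting it inherit the full session environment (API keys, tokens, ...).
ALLOWED = ("HOME", "PATH", "LANG", "TERM")

def scrubbed_env():
    return {k: v for k, v in os.environ.items() if k in ALLOWED}

# Demonstrate that a fake secret in our session never reaches the child.
os.environ["FAKE_API_TOKEN"] = "hunter2"
result = subprocess.run(["env"], env=scrubbed_env(),
                        capture_output=True, text=True)
assert "FAKE_API_TOKEN" not in result.stdout
print("child saw:", sorted(line.split("=")[0] for line in result.stdout.splitlines()))
```

&lt;p&gt;The same pattern applies when wrapping an agent binary in place of &lt;code&gt;env&lt;/code&gt;; it complements, rather than replaces, the stronger isolation options below.&lt;/p&gt;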
&lt;p&gt;Depending on your threat model and the project you&amp;rsquo;re working on, you may deem the built-in sandboxing of coding agents sufficient, but there are other options available to Ubuntu users that provide different or additional protection&amp;hellip;&lt;/p&gt;
&lt;h4 id="sandbox-with-lxd-containers" class="relative group"&gt;Sandbox with LXD containers &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sandbox-with-lxd-containers" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;Canonical&amp;rsquo;s &lt;a href="https://canonical.com/lxd" target="_blank" rel="noreferrer"&gt;LXD&lt;/a&gt; works out-of-the-box on Ubuntu, and is a great way to sandbox an agent into a disposable environment where the blast radius is limited should the agent make a mistake. My personal workflow is to create an Ubuntu container (or VM) with my project directory mounted. This way, I can edit my code directly on my filesystem with my preferred (already configured) editor, but have the agent run inside the container.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Initialise the container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc init ubuntu:noble dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Mount my project directory into the container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc config device add -q dev datadir disk &lt;span class="nv"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/my-project&amp;#34;&lt;/span&gt; &lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/ubuntu/project
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Start the container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc start dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell inside the container as the &amp;#39;ubuntu&amp;#39; user&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc &lt;span class="nb"&gt;exec&lt;/span&gt; dev -- sudo -u ubuntu -i bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run a command in the container&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc &lt;span class="nb"&gt;exec&lt;/span&gt; dev -- sudo -u ubuntu -i bash -c &lt;span class="s2"&gt;&amp;#34;cd project; claude&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;You can learn more about LXD in the official &lt;a href="https://documentation.ubuntu.com/lxd/stable-5.21/" target="_blank" rel="noreferrer"&gt;documentation&lt;/a&gt; and &lt;a href="https://documentation.ubuntu.com/lxd/stable-5.21/tutorial/first_steps/#first-steps" target="_blank" rel="noreferrer"&gt;tutorial&lt;/a&gt;, as well as specific instructions on &lt;a href="https://ubuntu.com/tutorials/gpu-data-processing-inside-lxd#1-overview" target="_blank" rel="noreferrer"&gt;enabling GPU data processing in containers/VMs&lt;/a&gt;. I&amp;rsquo;ve written &lt;a href="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/" target="_blank" rel="noreferrer"&gt;previously&lt;/a&gt; about my use of LXD in development.&lt;/p&gt;
&lt;p&gt;With LXD, you can choose between running your sandbox as a container or a VM, depending on your project&amp;rsquo;s needs. If I&amp;rsquo;m working on a project that requires Kubernetes or similar, I use a VM, but for lighter projects I use system containers, preferring their lower overhead.&lt;/p&gt;
&lt;h4 id="sandbox-with-lxd-vms" class="relative group"&gt;Sandbox with LXD VMs &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sandbox-with-lxd-vms" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;LXD is best known for its ability to run &amp;ldquo;system containers&amp;rdquo;, which are somewhat analogous to Docker/OCI containers, but rather than being focused on a single application (and dependencies), a system container essentially runs an entire Ubuntu user-space (including &lt;code&gt;systemd&lt;/code&gt;, etc.). Like OCI containers, however, system containers share the kernel with the host.&lt;/p&gt;
&lt;p&gt;In some situations, you may seek more isolation from your host machine by running tools inside a virtual machine with their own kernel. LXD makes this simple - you can follow the same commands as above, but add &lt;code&gt;--vm&lt;/code&gt; to the &lt;code&gt;init&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Initialise the virtual machine&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc init --vm ubuntu:noble dev
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;You can also configure the virtual machine&amp;rsquo;s CPU, memory and disk requirements. A simple example is below:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc init --vm ubuntu:noble dev &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -c limits.cpu&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -c limits.memory&lt;span class="o"&gt;=&lt;/span&gt;8GiB &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -d root,size&lt;span class="o"&gt;=&lt;/span&gt;100GiB
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;You can find more details on instance configuration in the &lt;a href="https://documentation.ubuntu.com/lxd/stable-5.21/howto/instances_configure/" target="_blank" rel="noreferrer"&gt;LXD documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="sandbox-with-multipass" class="relative group"&gt;Sandbox with Multipass &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sandbox-with-multipass" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;&lt;a href="https://multipass.run/" target="_blank" rel="noreferrer"&gt;Multipass&lt;/a&gt; provides on-demand access to Ubuntu VMs from any workstation - whether that workstation is running Linux, macOS or Windows. It is designed to replicate, in a lightweight way, the experience of provisioning a simple Ubuntu VM on a cloud.&lt;/p&gt;
&lt;p&gt;Multipass&amp;rsquo; scope is more limited than LXD&amp;rsquo;s, but for many users it provides a simple on-ramp for development with Ubuntu. Where it lacks advanced features like GPU passthrough, it boasts a simplified CLI and a first-class &lt;a href="https://documentation.ubuntu.com/multipass/latest/reference/gui-client/" target="_blank" rel="noreferrer"&gt;GUI client&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To get started similarly to the LXD example above, try the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Install Multipass&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap install multipass
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch an instance&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass launch noble -n dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Mount your project directory&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass mount ~/my-project dev:/home/ubuntu/project
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell in the instance&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass shell dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run a command in the instance&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass &lt;span class="nb"&gt;exec&lt;/span&gt; dev -- claude
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;You can find more details on how to configure and manage instances &lt;a href="https://documentation.ubuntu.com/multipass/latest/" target="_blank" rel="noreferrer"&gt;in the docs&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="sandbox-with-wsl" class="relative group"&gt;Sandbox with WSL &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sandbox-with-wsl" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h4&gt;&lt;p&gt;If you&amp;rsquo;re on Windows, &lt;a href="https://documentation.ubuntu.com/wsl/stable/tutorials/develop-with-ubuntu-wsl/" target="_blank" rel="noreferrer"&gt;development with WSL&lt;/a&gt; includes first-class &lt;a href="https://documentation.ubuntu.com/wsl/stable/howto/gpu-cuda/" target="_blank" rel="noreferrer"&gt;support for GPU acceleration&lt;/a&gt;, and is even supported for use with the &lt;a href="https://ubuntu.com/blog/accelerate-ai-development-with-ubuntu-and-nvidia-ai-workbench" target="_blank" rel="noreferrer"&gt;NVIDIA AI Workbench&lt;/a&gt;, &lt;a href="https://docs.nvidia.com/nim/wsl2/latest/getting-started.html" target="_blank" rel="noreferrer"&gt;NVIDIA NIM&lt;/a&gt; and &lt;a href="https://learn.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl" target="_blank" rel="noreferrer"&gt;CUDA&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Ubuntu is the default Linux distribution for WSL, and you can find more information about how to set up and use Ubuntu on WSL in &lt;a href="https://documentation.ubuntu.com/wsl/stable/" target="_blank" rel="noreferrer"&gt;our documentation&lt;/a&gt;. WSL benefits from all the same technologies as a &amp;ldquo;regular&amp;rdquo; Ubuntu install, including the ability to use Snaps, Docker and LXD.&lt;/p&gt;
&lt;p&gt;For the enterprise developer, we recently announced &lt;a href="https://canonical.com/blog/canonical-announces-ubuntu-pro-for-wsl" target="_blank" rel="noreferrer"&gt;Ubuntu Pro for WSL&lt;/a&gt;, as well as the ability to manage WSL instances &lt;a href="https://documentation.ubuntu.com/landscape/how-to-guides/wsl-integration/manage-wsl-instances/" target="_blank" rel="noreferrer"&gt;using Landscape&lt;/a&gt;, making it easier to get access to first-class developer tooling with Ubuntu on your corporate machine.&lt;/p&gt;
&lt;h3 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;While opinion remains divided on the value and impact of current AI tooling, its presence in modern development workflows and its demands on underlying compute infrastructure are difficult to ignore.&lt;/p&gt;
&lt;p&gt;Developers who wish to experiment need reliable access to modern hardware, predictable tooling, and strong isolation boundaries. Ubuntu’s role is not to dictate how these tools are used, but to provide a stable and dependable platform on which they can be explored and deployed safely, without compromising security, provenance, or the day-to-day experience of those who choose to opt out.&lt;/p&gt;
&lt;p&gt;In addition to powering development workflows, Ubuntu makes for a dependable production operating system for your workloads. We&amp;rsquo;re building &lt;a href="https://documentation.ubuntu.com/canonical-kubernetes/latest/" target="_blank" rel="noreferrer"&gt;Canonical Kubernetes&lt;/a&gt; with first-class GPU support, &lt;a href="https://canonical.com/mlops/kubeflow" target="_blank" rel="noreferrer"&gt;Kubeflow&lt;/a&gt; and &lt;a href="https://canonical.com/mlops/mlflow" target="_blank" rel="noreferrer"&gt;MLflow&lt;/a&gt; for model training and serving, and a suite of applications like &lt;a href="https://canonical.com/data/postgresql" target="_blank" rel="noreferrer"&gt;PostgreSQL&lt;/a&gt;, &lt;a href="https://canonical.com/data/mysql" target="_blank" rel="noreferrer"&gt;MySQL&lt;/a&gt; and &lt;a href="https://canonical.com/data/opensearch" target="_blank" rel="noreferrer"&gt;OpenSearch&lt;/a&gt;, as well as other data-centric tools such as &lt;a href="https://canonical.com/data/kafka" target="_blank" rel="noreferrer"&gt;Kafka&lt;/a&gt; and &lt;a href="https://canonical.com/data/spark" target="_blank" rel="noreferrer"&gt;Spark&lt;/a&gt; that can be deployed with full &lt;a href="https://ubuntu.com/pro" target="_blank" rel="noreferrer"&gt;Ubuntu Pro&lt;/a&gt; support. Let me know if you&amp;rsquo;d find value in a follow-up post on those topics!&lt;/p&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/addressing-linuxs-missing-pki-infrastructure/73314" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Earlier this year, &lt;a href="https://lwn.net/" target="_blank" rel="noreferrer"&gt;LWN&lt;/a&gt; featured an excellent article titled &amp;ldquo;&lt;a href="https://lwn.net/Articles/1033809/" target="_blank" rel="noreferrer"&gt;Linux&amp;rsquo;s missing CRL infrastructure&lt;/a&gt;&amp;rdquo;. The article highlighted a number of key issues surrounding traditional Public Key Infrastructure (PKI), but critically noted how even the available measures are effectively ignored by the majority of system-level software on Linux.&lt;/p&gt;
&lt;p&gt;One of the motivators for the discussion is that the Online Certificate Status Protocol (OCSP) will cease to be supported by Let&amp;rsquo;s Encrypt. The remaining alternative is to use Certificate Revocation Lists (CRLs), yet there is little or no support for managing (or even querying) these lists in most Linux system utilities.&lt;/p&gt;
&lt;p&gt;To address this, I&amp;rsquo;m happy to share that, in partnership with &lt;a href="https://github.com/rustls/rustls" target="_blank" rel="noreferrer"&gt;rustls&lt;/a&gt; maintainers &lt;a href="https://dirkjan.ochtman.nl/" target="_blank" rel="noreferrer"&gt;Dirkjan Ochtman&lt;/a&gt; and &lt;a href="https://jbp.io/" target="_blank" rel="noreferrer"&gt;Joe Birr-Pixton&lt;/a&gt;, we&amp;rsquo;re beginning development of upki: a universal PKI tool. The project initially aims to close the revocation gap through a combination of a new system utility and, eventually, library support for common TLS/SSL libraries such as &lt;a href="https://openssl-library.org/" target="_blank" rel="noreferrer"&gt;OpenSSL&lt;/a&gt;, &lt;a href="https://gnutls.org/" target="_blank" rel="noreferrer"&gt;GnuTLS&lt;/a&gt; and &lt;a href="https://github.com/rustls/rustls" target="_blank" rel="noreferrer"&gt;rustls&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="the-problem" class="relative group"&gt;The Problem &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-problem" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Online Certificate Authorities responsible for issuing TLS certificates have long had mechanisms for revoking known bad certificates. What constitutes a known bad certificate varies, but generally it means a certificate was issued either in error, or by a malicious actor of some form. There have been two primary mechanisms for this revocation: &lt;a href="https://datatracker.ietf.org/doc/html/rfc5280" target="_blank" rel="noreferrer"&gt;Certificate Revocation Lists&lt;/a&gt; (CRLs) and the &lt;a href="https://datatracker.ietf.org/doc/html/rfc6960" target="_blank" rel="noreferrer"&gt;Online Certificate Status Protocol&lt;/a&gt; (OCSP).&lt;/p&gt;
&lt;p&gt;In July 2024, &lt;a href="https://letsencrypt.org/" target="_blank" rel="noreferrer"&gt;Let’s Encrypt&lt;/a&gt; &lt;a href="https://letsencrypt.org/2024/07/23/replacing-ocsp-with-crls.html" target="_blank" rel="noreferrer"&gt;announced&lt;/a&gt; the deprecation of support for the Online Certificate Status Protocol (OCSP). This wasn&amp;rsquo;t entirely unexpected - the protocol has suffered from privacy defects which leak the browsing habits of users to Certificate Authorities. Various implementations have also suffered reliability issues that forced most implementers to adopt &amp;ldquo;soft-fail&amp;rdquo; policies, rendering the checks largely ineffective.&lt;/p&gt;
&lt;p&gt;The deprecation of OCSP leaves us with CRLs. Both Windows and macOS rely on operating system components to centralise the fetching and parsing of CRLs, but Linux has traditionally delegated this responsibility to individual applications. Browsers such as Mozilla Firefox, Google Chrome and Chromium handle this most effectively, but only by building bespoke infrastructure.&lt;/p&gt;
&lt;p&gt;However, Linux itself has fallen short by not providing consistent revocation checking infrastructure for the rest of userspace - tools such as curl, system package managers and language runtimes lack a unified mechanism to process this data.&lt;/p&gt;
&lt;p&gt;The ideal solution to this problem, which is slowly &lt;a href="https://letsencrypt.org/2025/12/02/from-90-to-45.html" target="_blank" rel="noreferrer"&gt;becoming more prevalent&lt;/a&gt;, is to issue short-lived credentials with an expiration of 10 days or less, largely removing the need for complicated revocation infrastructure. However, the reduction in certificate lifetimes is happening slowly and requires significant automation.&lt;/p&gt;
&lt;h2 id="crlite" class="relative group"&gt;CRLite &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#crlite" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;There are several key challenges with CRLs in practice - the size of the list has grown dramatically as the web has scaled, and one must collate CRLs from all relevant certificate authorities in order to be useful. CRLite was originally proposed by researchers at IEEE S&amp;amp;P and subsequently adopted in Mozilla Firefox. It offers a pragmatic solution to the problem of distributing large CRL datasets to client machines.&lt;/p&gt;
&lt;p&gt;In a recent &lt;a href="https://hacks.mozilla.org/2025/08/crlite-fast-private-and-comprehensive-certificate-revocation-checking-in-firefox/" target="_blank" rel="noreferrer"&gt;blog post&lt;/a&gt;, Mozilla outlined how their CRLite implementation meant that on average users &amp;ldquo;downloaded 300kB of revocation data per day, a 4MB snapshot every 45 days and a sequence of &amp;ldquo;delta-updates&amp;rdquo; in-between&amp;rdquo;, which amounts to CRLite being 1000x more bandwidth-efficient than daily CRL downloads.&lt;/p&gt;
&lt;p&gt;At its core, CRLite is a data structure compressing the full set of web-PKI revocations into a compact, efficiently queryable form. You can find more information about CRLite&amp;rsquo;s design and implementation on &lt;a href="https://blog.mozilla.org/security/tag/crlite/" target="_blank" rel="noreferrer"&gt;Mozilla&amp;rsquo;s Security Blog&lt;/a&gt;.&lt;/p&gt;
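&lt;p&gt;To make that concrete, here&amp;rsquo;s a minimal, illustrative sketch of the filter-cascade technique that underpins CRLite. This is not Mozilla&amp;rsquo;s implementation - the hashing scheme, parameters and names are invented for demonstration:&lt;/p&gt;

```python
import hashlib

def _bit_positions(item, level, size, k=3):
    # Derive k pseudo-random bit positions for an item at a cascade level.
    positions = []
    for i in range(k):
        digest = hashlib.sha256(f"{level}:{i}:{item}".encode()).digest()
        positions.append(int.from_bytes(digest[:8], "big") % size)
    return positions

class Bloom:
    def __init__(self, size):
        self.size = size
        self.bits = bytearray(size)

    def add(self, item, level):
        for p in _bit_positions(item, level, self.size):
            self.bits[p] = 1

    def contains(self, item, level):
        return all(self.bits[p] for p in _bit_positions(item, level, self.size))

def build_cascade(revoked, valid, size=4096):
    # Level 0 encodes the revoked set; each further level encodes the
    # previous level's false positives, until none remain.
    cascade, include, exclude = [], set(revoked), set(valid)
    level = 0
    while include:
        bloom = Bloom(size)
        for cert in include:
            bloom.add(cert, level)
        false_positives = {c for c in exclude if bloom.contains(c, level)}
        cascade.append(bloom)
        include, exclude = false_positives, include
        level += 1
    return cascade

def is_revoked(cascade, cert):
    # Walk the cascade: a miss at an even level means "not revoked",
    # a miss at an odd level means "revoked".
    for level, bloom in enumerate(cascade):
        if not bloom.contains(cert, level):
            return level % 2 == 1
    return len(cascade) % 2 == 1
```

&lt;p&gt;Levels are added until no false positives remain, which guarantees an exact answer for every certificate the cascade was built from, while the structure stays far smaller than the raw CRL data.&lt;/p&gt;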
&lt;h2 id="introducing-upki" class="relative group"&gt;Introducing upki &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introducing-upki" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Following our work on &lt;a href="https://jnsgr.uk/2025/03/carefully-but-purposefully-oxidising-ubuntu/" target="_blank" rel="noreferrer"&gt;oxidizing Ubuntu&lt;/a&gt;, &lt;a href="https://dirkjan.ochtman.nl/" target="_blank" rel="noreferrer"&gt;Dirkjan&lt;/a&gt; reached out to me with a proposal to introduce a system-level utility backed by CRLite to non-browser users.&lt;/p&gt;
&lt;p&gt;upki will be an open source project, initially packaged for Ubuntu but available to all Linux distributions, and likely portable to other Unix-like operating systems. Written in Rust, upki will support three roles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server-side mirroring tool&lt;/strong&gt;: responsible for downloading and mirroring the CRLite filters provided by Mozilla and serving them to clients, enabling us to operate independent CDN infrastructure for CRLite users. This will insulate upki from changes in the Mozilla backend, and enable standing up an independent data source if required. The server-side tool will manifest as a service that periodically checks the Mozilla Firefox CRLite filters, downloads and validates the files, and serves them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Client-side sync tool&lt;/strong&gt;: run regularly via a systemd timer, network-up events or similar, this tool ensures the contents of the CDN are reflected in the on-disk filter cache. When everything is already up to date, it will use very little bandwidth and CPU.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Client-side query tool&lt;/strong&gt;: a CLI for querying revocation data. This will be useful for monitoring and deployment workflows, as well as for language ecosystems without good C FFI support.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The latter two roles are served by a single Rust binary that runs in different modes depending on how it is invoked. The server-side tool will be a separate binary, since its use will be much less widespread. Under the hood, all of this will be powered by Rust library crates that can be integrated in other projects via crates.io.&lt;/p&gt;
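&lt;p&gt;The single-binary, multiple-modes pattern is a well-established one (busybox is the classic example). Purely as a hypothetical sketch - upki&amp;rsquo;s actual interface has not been published, so every name here is invented - dispatch could look something like this:&lt;/p&gt;

```python
import os

# Hypothetical subcommands - upki's real interface may differ entirely.
def sync(args):
    return "syncing filter cache"

def query(args):
    return f"querying revocation status for {args[0]}"

MODES = {"sync": sync, "query": query}

def dispatch(argv):
    # One binary, several roles: the mode comes either from the name the
    # binary was invoked as (e.g. a symlink named upki-sync), or from the
    # first argument when invoked as plain upki.
    name = os.path.basename(argv[0])
    if name.startswith("upki-") and name[5:] in MODES:
        return MODES[name[5:]](argv[1:])
    if argv[1:] and argv[1] in MODES:
        return MODES[argv[1]](argv[2:])
    return "usage: upki [sync|query] ..."
```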
&lt;p&gt;For the initial release, Canonical will stand up the backend infrastructure required to mirror and serve the CRLite data for upki users, though the backend will be configurable. This prevents unbounded load on Mozilla’s infrastructure and ensures long-term stability even if Firefox’s internal formats evolve.&lt;/p&gt;
&lt;p&gt;&lt;a href="01.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_443927a2cc8ea5be.webp 330w,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_f1c7127e41b7d6cc.webp 660w
,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_705ea1ebe4137e28.webp 1024w
,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_be2b5fbb2881c88a.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1720"
height="1670"
class="mx-auto my-0 rounded-md"
alt="architecture diagram for upki"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_b16265b7d66a056c.png" srcset="https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_77c0dd2534a34637.png 330w,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_b16265b7d66a056c.png 660w
,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_21c7d0a4f341695e.png 1024w
,https://jnsgr.uk/2025/12/addressing-linuxs-missing-pki-infra/01_hu_34470e9d78fe7948.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="ecosystem-compatibility" class="relative group"&gt;Ecosystem Compatibility &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#ecosystem-compatibility" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;So far we&amp;rsquo;ve covered the introduction of a new Rust binary (and crate) for supporting the fetching, serving and querying of CRL data, but that doesn&amp;rsquo;t provide much service to the existing ecosystem of Linux applications and libraries in the problem statement.&lt;/p&gt;
&lt;p&gt;The upki project will also provide a shared object library with a stable ABI that allows C and C-FFI programs to make revocation queries against the contents of the on-disk filter cache.&lt;/p&gt;
&lt;p&gt;Once &lt;code&gt;upki&lt;/code&gt; is released and available, work can begin on integration with existing crypto libraries such as OpenSSL, GnuTLS and rustls. This will be performed through the shared object library, by means of an optional callback mechanism these libraries can use to check revocation status before establishing a connection to a server presenting a given certificate.&lt;/p&gt;
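&lt;p&gt;As a rough illustration of that callback pattern - every name below is invented, and this is not upki&amp;rsquo;s or any TLS library&amp;rsquo;s real API - a library could expose an optional revocation hook along these lines:&lt;/p&gt;

```python
# The library-side half of an optional revocation hook. Nothing here is a
# real API: the names are invented to illustrate the shape of the design.
revocation_callback = None

def register_revocation_callback(cb):
    # Called by the application (or a upki integration shim) at startup.
    global revocation_callback
    revocation_callback = cb

def verify_peer(cert_fingerprint):
    # The TLS library consults the hook, if one is registered, before
    # trusting the peer certificate; with no hook, behaviour is unchanged.
    if revocation_callback is not None and revocation_callback(cert_fingerprint):
        raise ConnectionError(f"certificate {cert_fingerprint} is revoked")
    return True
```

&lt;p&gt;The key design property is that the hook is optional: libraries that never see a registered callback behave exactly as before, so adoption can be incremental.&lt;/p&gt;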
&lt;h2 id="timeline" class="relative group"&gt;Timeline &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#timeline" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;While we&amp;rsquo;ve been discussing this project for a couple of months, ironing out the details of funding and design, work will soon begin on the initial implementation of upki.&lt;/p&gt;
&lt;p&gt;Our aim is to make upki available as an opt-in preview for the release of Ubuntu 26.04 LTS, meaning we&amp;rsquo;ll need to complete the implementation of the server/client functionality, and bootstrap the mirroring/serving infrastructure at Canonical before April 2026.&lt;/p&gt;
&lt;p&gt;In the following Ubuntu release cycle, the run-up to Ubuntu 26.10, we&amp;rsquo;ll aim to ship the tool by default on Ubuntu systems, and begin work on integration with the likes of NSS, OpenSSL, GnuTLS and rustls.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Linux has a clear gap in its handling of revocation data for PKIs. Over the coming months we&amp;rsquo;re hoping to address that gap by developing upki not just for Ubuntu, but for the entire ecosystem. Thanks to Mozilla&amp;rsquo;s work on CRLite, and the expertise of Dirkjan and Joe, we&amp;rsquo;re confident that we&amp;rsquo;ll deliver a resilient and efficient solution that should make a meaningful contribution to systems security across the web.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;d like to do more reading on the subject, I&amp;rsquo;d recommend the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LWN.net:&lt;/strong&gt; &lt;a href="https://lwn.net/Articles/1033809/" target="_blank" rel="noreferrer"&gt;Linux&amp;rsquo;s missing CRL infrastructure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mozilla Security Blog:&lt;/strong&gt; &lt;a href="https://blog.mozilla.org/security/2020/01/09/crlite-part-1-all-web-pki-revocations-compressed/" target="_blank" rel="noreferrer"&gt;CRLite Part 1: All Web PKI Revocations Compressed&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mozilla Security Blog:&lt;/strong&gt; &lt;a href="https://blog.mozilla.org/security/2020/01/09/crlite-part-2-end-to-end-design/" target="_blank" rel="noreferrer"&gt;CRLite Part 2: End-to-End Design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Let’s Encrypt:&lt;/strong&gt; &lt;a href="https://letsencrypt.org/2024/07/23/replacing-ocsp-with-crls.html" target="_blank" rel="noreferrer"&gt;Replacing OCSP with CRLs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IEEE Symposium on Security &amp;amp; Privacy:&lt;/strong&gt; &lt;a href="https://ieeexplore.ieee.org/document/7958572" target="_blank" rel="noreferrer"&gt;CRLite: A Scalable System for Pushing All TLS Revocations to All Browsers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Ubuntu Summit 25.10: Personal Highlights</title><link>https://jnsgr.uk/2025/11/ubuntu-summit-25/</link><pubDate>Sun, 02 Nov 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/11/ubuntu-summit-25/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/ubuntu-summit-25-10-personal-highlights/71509" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I recently had the privilege of attending the &lt;a href="https://ubuntu.com/summit" target="_blank" rel="noreferrer"&gt;Ubuntu Summit 25.10&lt;/a&gt;. Ubuntu Summits have a relatively long history: some years ago Canonical ran the ‘Ubuntu Developer Summit (UDS)’ events, which were recently brought back and reimagined as the ‘Ubuntu Summit’.&lt;/p&gt;
&lt;p&gt;For the most recent Summit, we tried out a new format. We invited a select few folks to come and give talks at our London office, with a small in-person crowd. In addition, the event was livestreamed, and we encouraged people to host &amp;ldquo;watching parties&amp;rdquo; across the world as part of &lt;a href="https://ubuntu.com/community/docs/locos?next=%2Fg1m%2F" target="_blank" rel="noreferrer"&gt;Ubuntu Local Communities (LoCos)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While Ubuntu may feature in the name, the event does not require talks to be centred on Ubuntu. In fact, it aims to draw contributions from our partners and from right across the open source community, whether or not the content is relevant to Ubuntu or Canonical - it&amp;rsquo;s designed to be a showcase for the very best of open source, and this year I felt that the talks were of a particularly high calibre.&lt;/p&gt;
&lt;p&gt;In this post I&amp;rsquo;ll highlight some of my favourite talks, in no particular order! If any of these catch your interest, you can see &lt;a href="https://discourse.ubuntu.com/t/ubuntu-summit-25-10-timetable/65271" target="_blank" rel="noreferrer"&gt;when they were aired&lt;/a&gt; and catch up on the &lt;a href="https://www.youtube.com/live/bEEamxJ60aI" target="_blank" rel="noreferrer"&gt;Day 1&lt;/a&gt; and &lt;a href="https://www.youtube.com/live/WvNgMEumSoA" target="_blank" rel="noreferrer"&gt;Day 2&lt;/a&gt; streams.&lt;/p&gt;
&lt;h2 id="doom-in-space" class="relative group"&gt;DOOM in Space &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#doom-in-space" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="04.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_65dfe195c8d0716f.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_9322be6f635856bb.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_9172933e609edc31.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_8b2cd4b205915e58.webp 1280w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1280"
height="720"
class="mx-auto my-0 rounded-md"
alt="opening slide for doom in space talk"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_2091739036a71cb.png" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_b9533f51793b46ca.png 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_2091739036a71cb.png 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/04_hu_9ae56e6f039ce272.png 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/04.png 1280w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;What a way to kick off the Summit! &lt;a href="https://discourse.ubuntu.com/t/doom-in-space/67019" target="_blank" rel="noreferrer"&gt;DOOM in Space&lt;/a&gt; was a talk given by &lt;a href="https://olafurw.com/aboutme/" target="_blank" rel="noreferrer"&gt;Ólafur Waage&lt;/a&gt;, who introduced himself as a &amp;ldquo;professional keyboard typist&amp;rdquo;!&lt;/p&gt;
&lt;p&gt;The talk was immediately after Mark Shuttleworth&amp;rsquo;s opening remarks, and covered his journey in getting DOOM to run on the European Space Agency&amp;rsquo;s &lt;a href="https://en.wikipedia.org/wiki/OPS-SAT" target="_blank" rel="noreferrer"&gt;OPS-SAT&lt;/a&gt; satellite. DOOM has famously been ported to &lt;a href="https://en.wikipedia.org/wiki/List_of_Doom_ports" target="_blank" rel="noreferrer"&gt;many devices&lt;/a&gt;, though some were only questionably &amp;ldquo;running&amp;rdquo; the game.&lt;/p&gt;
&lt;p&gt;Ólafur covered how he became involved in the project, and the unique approach the team needed to take to guarantee success, since they would only get a very limited amount of time to conduct their &amp;ldquo;experiment&amp;rdquo; on the satellite.&lt;/p&gt;
&lt;p&gt;Of particular note was the work done to integrate imagery from the OPS-SAT&amp;rsquo;s onboard camera into the game, which involved some clever reassigning of colors in the game&amp;rsquo;s original palette to more faithfully represent the imagery taken from the camera in-game.&lt;/p&gt;
&lt;h2 id="infrastructure-wide-profiling-of-nvidia-cuda" class="relative group"&gt;Infrastructure-Wide Profiling of Nvidia CUDA &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#infrastructure-wide-profiling-of-nvidia-cuda" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="03.jpeg"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_c19c7f7a82ce28ec.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_3289cc8a3c6ace68.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_63ea02edaca8b2f8.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_47b446e14afc4dc7.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1600"
height="900"
class="mx-auto my-0 rounded-md"
alt="opening slide for profiling talk"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_a857ebebb22203c0.jpeg" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_ee47e997a3027b8c.jpeg 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_a857ebebb22203c0.jpeg 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_a29a7d039dd13cc1.jpeg 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/03_hu_cb8dc774efe7bf9f.jpeg 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://discourse.ubuntu.com/t/infrastructure-wide-profiling-of-nvidia-cuda/67248" target="_blank" rel="noreferrer"&gt;This talk&lt;/a&gt; was given by &lt;a href="https://github.com/brancz" target="_blank" rel="noreferrer"&gt;Frederic Branczyk&lt;/a&gt;, CEO and Founder of &lt;a href="https://polarsignals.com" target="_blank" rel="noreferrer"&gt;Polar Signals&lt;/a&gt;. Canonical has partnered with Polar Signals a couple of times in recent years. They were part of our journey to &lt;a href="https://ubuntu.com/blog/ubuntu-performance-engineering-with-frame-pointers-by-default" target="_blank" rel="noreferrer"&gt;enabling frame pointers by default&lt;/a&gt; on Ubuntu, and many of our teams have been using their zero-instrumentation &lt;a href="https://github.com/parca-dev/parca-agent" target="_blank" rel="noreferrer"&gt;eBPF profiler&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While CPU profiling has been commonplace for developers for many years, giving the ability to analyse CPU and memory-bound workloads, profiling GPU workloads has been less prominent, and is particularly difficult in production.&lt;/p&gt;
&lt;p&gt;Polar Signals advocate for &amp;ldquo;continuous profiling&amp;rdquo;, which means running a profiler at all times, on all nodes, in production. The benefit of this is that when an issue occurs, you don&amp;rsquo;t have to set up a profiler and try to reproduce the issue - you already have the data. It also negates the uncertainty of the impact a profiler might have on the code during reproduction. This would have been difficult with traditional profiling tools, but with technologies like &lt;a href="https://ebpf.io/" target="_blank" rel="noreferrer"&gt;eBPF&lt;/a&gt;, the overhead of the profiler is incredibly low compared to the potential performance gains from acting on the data it produces.&lt;/p&gt;
&lt;p&gt;In this talk, Frederic outlined the work they have done bringing infrastructure-wide profiling of CUDA workloads into Polar Signals Cloud. Their approach combines the &lt;a href="https://docs.nvidia.com/cupti/" target="_blank" rel="noreferrer"&gt;CUPTI profiling API&lt;/a&gt; with &lt;a href="https://docs.ebpf.io/linux/concepts/usdt/" target="_blank" rel="noreferrer"&gt;USDT&lt;/a&gt; probes and eBPF in a pipeline, relying upon the ability to inject a small library into CUDA workloads via the &lt;code&gt;CUDA_INJECTION64_PATH&lt;/code&gt; environment variable, without modifying the workload itself.&lt;/p&gt;
&lt;p&gt;You can see more details &lt;a href="https://www.polarsignals.com/blog/posts/2025/10/22/gpu-profiling" target="_blank" rel="noreferrer"&gt;on their website&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="inference-snaps" class="relative group"&gt;Inference Snaps &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#inference-snaps" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="02.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_1d018afaaf936f4c.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_633333fcdd4ed333.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_9d1bbcb6c1e5044a.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_ca0ce363eac0f419.webp 1280w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1280"
height="720"
class="mx-auto my-0 rounded-md"
alt="opening slide for inference snaps talk"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_d3e5811b82e6933.png" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_ebbc2e9e26d4db3.png 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_d3e5811b82e6933.png 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/02_hu_711ebc18bd1dda97.png 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/02.png 1280w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This talk served as the first &lt;a href="https://canonical.com/blog/canonical-releases-inference-snaps" target="_blank" rel="noreferrer"&gt;public announcement&lt;/a&gt; of Inference Snaps from Canonical, which represent a few months of work combining many of the new technologies behind Snaps.&lt;/p&gt;
&lt;p&gt;As large language models continue to gain pace along with the rest of the AI field, silicon manufacturers are increasingly including dedicated acceleration hardware in commodity CPUs and GPUs, as well as shipping dedicated accelerators for some workloads.&lt;/p&gt;
&lt;p&gt;AI models often need to be tuned in some way in order to work optimally - for example via &lt;a href="https://huggingface.co/docs/optimum/en/concept_guides/quantization" target="_blank" rel="noreferrer"&gt;quantisation&lt;/a&gt;, which aims to reduce the computational and memory costs of running inference on a given model.&lt;/p&gt;
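&lt;p&gt;As a toy example of the idea - real quantisation schemes are considerably more sophisticated, often working per-tensor or per-channel - symmetric 8-bit quantisation maps each float weight to a small integer plus a shared scale factor:&lt;/p&gt;

```python
def quantize_int8(weights):
    # Map floats into [-127, 127] using a single shared scale factor,
    # trading a little precision for a 4x smaller footprint vs float32.
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(values, scale):
    # Recover approximate float weights from the quantised integers.
    return [v * scale for v in values]
```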
&lt;p&gt;Inference snaps provide a hassle-free mechanism for users to obtain the &amp;ldquo;famous model&amp;rdquo; they want to work with while automatically receiving a version of that model optimised for the silicon in their machine, removing the need to spend hours on HuggingFace trying to identify the correct model for their hardware.&lt;/p&gt;
&lt;p&gt;Using our extensive partner network, we&amp;rsquo;ll continue to work with multiple silicon vendors to ensure that models are available for the latest hardware as it drops, and provide a consistent experience to Ubuntu users who wish to work with AI.&lt;/p&gt;
&lt;p&gt;Find out more in the &lt;a href="https://canonical.com/blog/canonical-releases-inference-snaps" target="_blank" rel="noreferrer"&gt;announcement&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="nøughty-linux-ubuntus-stability-meets-nixpkgs-freshness" class="relative group"&gt;Nøughty Linux: Ubuntu’s Stability Meets Nixpkgs’ Freshness &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#n%c3%b8ughty-linux-ubuntus-stability-meets-nixpkgs-freshness" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="05.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_a99f9dc0efe25bc0.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_dc0c16f39603b598.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_40086b0b15441a95.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_d2c99ab7d8057151.webp 1280w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1280"
height="720"
class="mx-auto my-0 rounded-md"
alt="opening slide for noughty linux talk"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_61b22107d4c49f5e.png" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_448ff89cbcd9feba.png 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_61b22107d4c49f5e.png 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/05_hu_fc9324d6dc71ff1c.png 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/05.png 1280w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This &lt;a href="https://discourse.ubuntu.com/t/noughty-linux-ubuntus-stability-meets-nixpkgs-freshness/69962" target="_blank" rel="noreferrer"&gt;talk&lt;/a&gt; was a bit of a guilty pleasure for me! Delivered by &lt;a href="https://wimpysworld.com/" target="_blank" rel="noreferrer"&gt;Martin Wimpress (wimpy)&lt;/a&gt;, the audience were shown how they could take a stock Ubuntu Server deployment, and use a collection of scripts to layer a cutting-edge GUI stack on top using &lt;a href="https://github.com/NixOS/nixpkgs" target="_blank" rel="noreferrer"&gt;Nixpkgs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Wimpy outlined his motivation as wanting to rely upon the stable kernel and hardware support offered by Ubuntu, but wanting to be more experimental with his desktop environment and utilities - preferring a tiling window management experience.&lt;/p&gt;
&lt;p&gt;Having spent some years on NixOS, Wimpy was recently required to run a security &amp;ldquo;agent&amp;rdquo; for work, which was very difficult to enable on NixOS, but worked out of the box on Ubuntu. Recognising the need to make the switch, he was reluctant to move away from the workflow he&amp;rsquo;d built so much muscle-memory around - and so &lt;a href="https://noughtylinux.org/" target="_blank" rel="noreferrer"&gt;Nøughty Linux&lt;/a&gt; was born!&lt;/p&gt;
&lt;p&gt;Nøughty Linux is not a Linux distribution, rather a set of configurations for an Ubuntu Server machine. It utilises &lt;a href="https://github.com/soupglasses/nix-system-graphics" target="_blank" rel="noreferrer"&gt;&lt;code&gt;nix-system-graphics&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://github.com/numtide/system-manager" target="_blank" rel="noreferrer"&gt;&lt;code&gt;system-manager&lt;/code&gt;&lt;/a&gt; and is actually &lt;em&gt;very&lt;/em&gt; similar to a configuration I ran in my own &lt;a href="https://github.com/jnsgruk/nixos-config" target="_blank" rel="noreferrer"&gt;nixos-config&lt;/a&gt; repository for my laptop for a while - though Wimpy has chased down significantly more of the papercuts than I did!&lt;/p&gt;
&lt;h2 id="are-we-stuck-with-the-same-desktop-ux-forever" class="relative group"&gt;Are we stuck with the same Desktop UX forever? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#are-we-stuck-with-the-same-desktop-ux-forever" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="06.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_9509dcaa896a87c.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_cd096a2b6be3a7cf.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_e7aaa31012bfd7fe.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_8caa86dfbc373634.webp 1280w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1280"
height="720"
class="mx-auto my-0 rounded-md"
alt="opening slide for desktop ux talk"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_652830ca100ce6c5.png" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_942ca798315d8db4.png 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_652830ca100ce6c5.png 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/06_hu_63850a6cf7ed9e47.png 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/06.png 1280w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://jenson.org/" target="_blank" rel="noreferrer"&gt;Scott Jenson&lt;/a&gt; delivered an incredibly engaging &lt;a href="https://discourse.ubuntu.com/t/are-we-stuck-with-the-same-desktop-ux-forever/67253" target="_blank" rel="noreferrer"&gt;talk&lt;/a&gt; in which he posited that desktop user experience has somewhat stagnated, and worse that many of the patterns we&amp;rsquo;ve become used to on the desktop are antiquated and unergonomic.&lt;/p&gt;
&lt;p&gt;The crux of the talk was a call to focus on user &lt;em&gt;experience&lt;/em&gt; rather than user &lt;em&gt;interfaces&lt;/em&gt; - challenging developers to think about how people learn, and how desktops could benefit more from design affordances by rethinking critical elements such as window management and text editing.&lt;/p&gt;
&lt;p&gt;Using his years of experience at Apple, Symbian and Google, Scott delivered one of the most engaging conference talks I&amp;rsquo;ve seen, and I thoroughly recommend watching it on our YouTube channel!&lt;/p&gt;
&lt;h2 id="honorable-mentions" class="relative group"&gt;Honorable Mentions &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#honorable-mentions" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;In addition to the talks above, it was a delight to meet &lt;a href="https://cs.ru.nl/~M.Schoolderman/" target="_blank" rel="noreferrer"&gt;Mark Schoolderman&lt;/a&gt; from the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation&lt;/a&gt; in-person, who led the work on &lt;a href="https://github.com/trifectatechfoundation/sudo-rs" target="_blank" rel="noreferrer"&gt;&lt;code&gt;sudo-rs&lt;/code&gt;&lt;/a&gt; as part of our &amp;ldquo;Oxidising Ubuntu&amp;rdquo; story, and interesting to hear about the value the project derived from Ubuntu&amp;rsquo;s &lt;a href="https://documentation.ubuntu.com/project/MIR/main-inclusion-review/" target="_blank" rel="noreferrer"&gt;Main Inclusion Review&lt;/a&gt; process as part of landing &lt;code&gt;sudo-rs&lt;/code&gt; in &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Equally, I was delighted that &lt;a href="https://github.com/kaplun" target="_blank" rel="noreferrer"&gt;Samuele Kaplun&lt;/a&gt; from &lt;a href="https://proton.me/" target="_blank" rel="noreferrer"&gt;Proton&lt;/a&gt; could join us to talk about the work we&amp;rsquo;ve been doing together on bringing first-class Snap packages for &lt;a href="https://proton.me/mail" target="_blank" rel="noreferrer"&gt;Proton Mail&lt;/a&gt;, &lt;a href="https://protonvpn.com/?ref=pme_lp_b2c_proton_submenu" target="_blank" rel="noreferrer"&gt;Proton VPN&lt;/a&gt;, &lt;a href="https://proton.me/pass" target="_blank" rel="noreferrer"&gt;Proton Pass&lt;/a&gt; and &lt;a href="https://proton.me/authenticator" target="_blank" rel="noreferrer"&gt;Proton Authenticator&lt;/a&gt; to the &lt;a href="https://snapcraft.io/publisher/proton-ag" target="_blank" rel="noreferrer"&gt;Snap store&lt;/a&gt;, and their reasons for choosing Snaps, adventures with &lt;a href="https://snapcraft.io/docs/snap-confinement" target="_blank" rel="noreferrer"&gt;confinement&lt;/a&gt;, and more.&lt;/p&gt;
&lt;p&gt;It was also great to see &lt;a href="https://www.craigloewen.com/" target="_blank" rel="noreferrer"&gt;Craig Loewen&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/clintrutkas/" target="_blank" rel="noreferrer"&gt;Clint Rutkas&lt;/a&gt; present on their &lt;a href="https://discourse.ubuntu.com/t/engineering-wsl-in-the-open-a-deep-dive-into-open-sourcing-wsl-at-microsoft/67022" target="_blank" rel="noreferrer"&gt;journey&lt;/a&gt; open sourcing the &lt;a href="https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux" target="_blank" rel="noreferrer"&gt;Windows Subsystem For Linux (WSL)&lt;/a&gt;, which represents a growing proportion of Ubuntu users, and a key bridge to open source development for many.&lt;/p&gt;
&lt;p&gt;Finally, thank you to &lt;a href="https://github.com/utkarsh2102" target="_blank" rel="noreferrer"&gt;Utkarsh&lt;/a&gt; for this wonderful slide as part of his talk on Ubuntu Snapshot Releases:&lt;/p&gt;
&lt;p&gt;&lt;a href="01.jpg"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_33528e98771d0bee.webp 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_3200c3c97ea3fdc8.webp 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_bdcdf80fde7402d2.webp 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_dee895df12e2c378.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1445"
height="813"
class="mx-auto my-0 rounded-md"
alt="a slide depicting my profile picture, but with laser eyes and the title &amp;ldquo;violence&amp;rdquo;"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_25071a7fcfa9dc2a.jpg" srcset="https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_aa0d0f0d8dcb5517.jpg 330w,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_25071a7fcfa9dc2a.jpg 660w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_f6f1d1cda3bd05f2.jpg 1024w
,https://jnsgr.uk/2025/11/ubuntu-summit-25/01_hu_41b986c0c48d6e4d.jpg 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="conclusion" class="relative group"&gt;Conclusion &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#conclusion" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Overall, I found the Ubuntu Summit 25.10 a really enjoyable event, with talks that were uniformly high in quality, charisma and creativity. I&amp;rsquo;m pleased that Canonical has broadened the Summit&amp;rsquo;s reach and I hope it continues to serve as a platform to showcase the very best open source innovation.&lt;/p&gt;
&lt;p&gt;Until next time!&lt;/p&gt;</description></item><item><title>Ubuntu Engineering in 2025: A Retrospective</title><link>https://jnsgr.uk/2025/10/ubuntu-25/</link><pubDate>Thu, 09 Oct 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/10/ubuntu-25/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/ubuntu-25-10-a-retrospective/69127" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="ubuntu-2510-a-retrospective" class="relative group"&gt;Ubuntu 25.10: A Retrospective &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#ubuntu-2510-a-retrospective" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;In February this year, I published &lt;a href="https://discourse.ubuntu.com/t/engineering-ubuntu-for-the-next-20-years/55000" target="_blank" rel="noreferrer"&gt;Engineering Ubuntu For The Next 20 Years&lt;/a&gt;, which was something of a manifesto I pledged to enact in the design, build and release of Ubuntu. This week, we released Ubuntu 25.10 Questing Quokka, which was the first full engineering cycle under this new manifesto, and it seems like a good time to reflect on what we achieved in each category, as well as highlight some of the more impactful changes that have just landed in Ubuntu.&lt;/p&gt;
&lt;p&gt;In that first article, I outlined four themes for Ubuntu Engineering at Canonical to focus on: Communication, Automation, Process and Modernisation.&lt;/p&gt;
&lt;h3 id="communication" class="relative group"&gt;Communication &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#communication" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;A notable improvement throughout this engineering cycle has been the frequency with which the teams at Canonical have written about their work, often in some detail. Many of these posts can be found under the &lt;a href="https://discourse.ubuntu.com/tag/blog" target="_blank" rel="noreferrer"&gt;blog tag&lt;/a&gt;, which had never been used until around six months ago, and now sees a couple of new posts per week outlining the work people are doing toward these themes.&lt;/p&gt;
&lt;p&gt;I stated that I consider documentation a key part of our communication strategy, and this last six months has seen some of the most substantial changes to Ubuntu documentation in many years. The &lt;a href="https://documentation.ubuntu.com/project/" target="_blank" rel="noreferrer"&gt;Ubuntu Project Docs&lt;/a&gt; project was started in May 2025, and is quickly becoming the single documentation hub that a current or potential Ubuntu contributor needs to understand how, why and when to do their job. Similarly, the &lt;a href="https://documentation.ubuntu.com/ubuntu-for-developers/" target="_blank" rel="noreferrer"&gt;Ubuntu for Developers&lt;/a&gt; documentation was created to illuminate a path for developers across numerous languages on Ubuntu.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s important for us to celebrate such efforts, but also to remember that this is only the start! In order for these efforts to remain useful, both our internal teams and our community must continue to engage with them - adding, refining and pruning content as necessary. As the sun-setting of wiki.ubuntu.com approaches, it&amp;rsquo;s imperative that these new documentation sites continue to get the attention they need.&lt;/p&gt;
&lt;p&gt;Lots of the changes we&amp;rsquo;ve made in the last cycle have attracted attention from online blogs, news outlets, YouTubers, etc. Part of the challenge with such changes is &amp;ldquo;owning the narrative&amp;rdquo; and ensuring that legitimate concerns are heard (and taken into account), but also that there are appropriate responses to uncertainty, without getting drawn into unproductive discussions.&lt;/p&gt;
&lt;p&gt;Finally, the transition to &lt;a href="https://ubuntu.com/community/docs/communications/matrix" target="_blank" rel="noreferrer"&gt;Matrix&lt;/a&gt; as the default synchronous communication means for the project has, in my opinion, made it easier than ever to get in touch with our community of experts - whether it be for support, or to start a journey for contribution to Ubuntu.&lt;/p&gt;
&lt;h3 id="automation" class="relative group"&gt;Automation &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#automation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The largest item we took on here was in pursuit of the &lt;a href="https://discourse.ubuntu.com/t/61876" target="_blank" rel="noreferrer"&gt;monthly snapshot releases&lt;/a&gt;. This went much better than I expected, and to some extent covers off the &amp;ldquo;Process&amp;rdquo; theme as well as &amp;ldquo;Automation&amp;rdquo;, but through a combination of studying our process and whittling it down as lean as we could, and beginning to automate more of the process, the team were able to release four snapshot releases before the 25.10 Beta.&lt;/p&gt;
&lt;p&gt;The scale of the automation efforts was relatively limited this cycle, but the automation of release testing has really accelerated in the past few months. The vast majority of the &lt;a href="https://github.com/canonical/ubuntu-gui-testing/tree/main/tests" target="_blank" rel="noreferrer"&gt;test cases&lt;/a&gt; that qualify an Ubuntu Desktop ISO for release are now fully automated, and the &lt;a href="https://github.com/canonical/yarf" target="_blank" rel="noreferrer"&gt;same framework&lt;/a&gt; that makes this possible was also used to develop a suite of tests for &lt;a href="https://discourse.ubuntu.com/t/tpm-fde-progress-for-ubuntu-25-10/65146" target="_blank" rel="noreferrer"&gt;TPM FDE&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Work was also done on our &lt;a href="https://discourse.ubuntu.com/t/crafting-your-software/64809" target="_blank" rel="noreferrer"&gt;craft tools&lt;/a&gt; to improve the experience of the &lt;code&gt;test&lt;/code&gt; sub-command in build tools like &lt;code&gt;snapcraft&lt;/code&gt;, &lt;code&gt;rockcraft&lt;/code&gt; and &lt;code&gt;charmcraft&lt;/code&gt; - all of which will have a trickle-down effect on the upcoming &lt;code&gt;debcraft&lt;/code&gt;, and make it trivial to include many new kinds of tests in our packaging workflows.&lt;/p&gt;
&lt;p&gt;Behind the scenes, every team in Ubuntu Engineering at Canonical has been writing charms that make the underlying infrastructure behind Ubuntu more portable, resilient and scalable. This includes services like &lt;a href="https://manpages.ubuntu.com/" target="_blank" rel="noreferrer"&gt;Ubuntu Manpages&lt;/a&gt;, &lt;a href="https://autopkgtest.ubuntu.com/" target="_blank" rel="noreferrer"&gt;autopkgtest&lt;/a&gt;, &lt;a href="https://errors.ubuntu.com/" target="_blank" rel="noreferrer"&gt;error-tracker&lt;/a&gt;, and a staging deployment of &lt;a href="https://temporal.io" target="_blank" rel="noreferrer"&gt;Temporal&lt;/a&gt; to enable the next phase of our release automation.&lt;/p&gt;
&lt;h3 id="process" class="relative group"&gt;Process &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#process" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;This item was probably where the least concrete progress was made, though I probably could have predicted that. Many of the processes in the Ubuntu project serve to ensure that we ship resilient software, and don&amp;rsquo;t break users - so changing them in a hurry is not generally a good idea.&lt;/p&gt;
&lt;p&gt;That said, there was some good progress on the &lt;a href="https://documentation.ubuntu.com/project/MIR/main-inclusion-review/#mir-process-overview" target="_blank" rel="noreferrer"&gt;Main Inclusion Review&lt;/a&gt; (MIR) process, whose team documentation was moved into the &lt;a href="https://documentation.ubuntu.com/project" target="_blank" rel="noreferrer"&gt;Ubuntu Project Docs&lt;/a&gt; after a thorough review, and the &lt;a href="https://documentation.ubuntu.com/project/how-ubuntu-is-made/processes/stable-release-updates/" target="_blank" rel="noreferrer"&gt;Stable Release Updates&lt;/a&gt; (SRU) team are in the process of the same transition. Moving and re-reviewing the documentation is essentially the first step of the process improvement I was seeking: understanding where we are!&lt;/p&gt;
&lt;p&gt;Internally, we&amp;rsquo;ve been piloting a new process for onboarding &lt;a href="https://documentation.ubuntu.com/project/who-makes-ubuntu/developers/dmb-index/#the-uploader-s-journey" target="_blank" rel="noreferrer"&gt;Ubuntu Developers&lt;/a&gt; that sees engineers start by working toward gaining upload rights for a single package, but has a complete curriculum that can take them through to Core Developer status. Details of this should be released in the coming months, outlining a clear and well-trodden journey for new contributors. Much of this material already existed, but the team have worked on polishing it, and making it clearer how the process works from end to end.&lt;/p&gt;
&lt;p&gt;The next step for each of these processes is measurement. We&amp;rsquo;ve begun instrumenting these processes to understand where the most time is spent so we can use that information to guide improvements and streamline processes in future cycles, and even set &lt;a href="https://en.wikipedia.org/wiki/Service-level_objective" target="_blank" rel="noreferrer"&gt;Service Level Objectives&lt;/a&gt; (SLOs) against those timelines.&lt;/p&gt;
&lt;h3 id="modernisation" class="relative group"&gt;Modernisation &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#modernisation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Much of what I’ve already described could be considered modernisation, but from a technical standpoint the most obvious candidate here was the &amp;ldquo;&lt;a href="https://discourse.ubuntu.com/t/carefully-but-purposefully-oxidising-ubuntu/56995" target="_blank" rel="noreferrer"&gt;Oxidising Ubuntu&lt;/a&gt;&amp;rdquo; effort, which has seen us replace numerous core utilities in Ubuntu 25.10 with modern Rust rewrites.&lt;/p&gt;
&lt;p&gt;We began this effort in close collaboration with the &lt;a href="https://uutils.github.io/" target="_blank" rel="noreferrer"&gt;uutils&lt;/a&gt; project and the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation&lt;/a&gt;. The former is the maintainer of a Rust &lt;code&gt;coreutils&lt;/code&gt; rewrite, and the latter the maintainer of &lt;code&gt;sudo-rs&lt;/code&gt;, which we &lt;a href="https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-in-ubuntu-25-10/60583" target="_blank" rel="noreferrer"&gt;made the default&lt;/a&gt; in 25.10. The technical impact of these changes in defaults will only truly be known once Ubuntu 25.10 is &amp;ldquo;out there&amp;rdquo;, but I&amp;rsquo;m pleased with how we approached the shift. In both cases, we contacted the upstreams in good time to ascertain their view on their projects&amp;rsquo; readiness, then agreed funding to ensure they had the financial support they needed to land changes in support of Ubuntu, and then worked closely with them throughout the cycle to solve various performance and implementation issues we discovered along the way.&lt;/p&gt;
&lt;p&gt;As it stands today, &lt;code&gt;sudo-rs&lt;/code&gt; is the default &lt;code&gt;sudo&lt;/code&gt; implementation on Ubuntu 25.10, and uutils&amp;rsquo; &lt;code&gt;coreutils&lt;/code&gt; has &lt;em&gt;mostly&lt;/em&gt; replaced the GNU implementation, with a &lt;a href="https://git.launchpad.net/ubuntu/&amp;#43;source/coreutils-from/tree/debian/coreutils-from-uutils.links" target="_blank" rel="noreferrer"&gt;few exceptions&lt;/a&gt;, many of which will be resolved by releases in the coming weeks. These reversions to the existing implementations demonstrate that stability and resilience are more important than &amp;ldquo;hype&amp;rdquo; in our approach: I expect us to have completed the migration during the next cycle, but not before the tools are ready.&lt;/p&gt;
&lt;p&gt;Following the &lt;a href="https://discourse.ubuntu.com/t/spec-switch-to-dracut/54776" target="_blank" rel="noreferrer"&gt;&amp;ldquo;Switch to Dracut&amp;rdquo; specification&lt;/a&gt;, Ubuntu Desktop 25.10 will use &lt;a href="https://dracut-ng.github.io/dracut-ng/" target="_blank" rel="noreferrer"&gt;Dracut&lt;/a&gt; as its default initrd infrastructure (replacing initramfs-tools). Dracut uses systemd in the initrd and supports new features such as Bluetooth and NVMe over Fabrics (NVMe-oF). Ubuntu Server installations will continue using &lt;code&gt;initramfs-tools&lt;/code&gt; until &lt;a href="https://bugs.launchpad.net/ubuntu/&amp;#43;source/dracut/&amp;#43;bug/2125790" target="_blank" rel="noreferrer"&gt;remaining hooks are ported&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For each of these changes (&lt;code&gt;coreutils&lt;/code&gt;, &lt;code&gt;sudo-rs&lt;/code&gt; and &lt;code&gt;dracut&lt;/code&gt;) the previous implementations will remain supported for now, with well-documented instructions on the reversion of each change for those who run into unavoidable issues - though we expect this to be a very small number of cases.&lt;/p&gt;
&lt;h2 id="whats-next" class="relative group"&gt;What&amp;rsquo;s Next? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#whats-next" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Well&amp;hellip; more of the same! We intend to carry on with the increased cadence of written updates, so keep an eye out for those.&lt;/p&gt;
&lt;p&gt;We have some exciting announcements to make over the coming weeks, including support for more modern micro-architectural variants (like &lt;code&gt;amd64v3&lt;/code&gt;), better system-wide handling of revoked TLS certificates, updates on our Debcraft package for a more modern packaging experience and an effort to update many of our tools from &amp;ldquo;behind the scenes&amp;rdquo; using a combination of Rust and Go.&lt;/p&gt;
&lt;p&gt;My final words are to thank all of those who have driven these efforts. I&amp;rsquo;ll omit the long list of names, but there have been countless examples of people stepping up substantially to deliver these efforts - without whom we&amp;rsquo;d have made a lot less progress.&lt;/p&gt;
&lt;p&gt;Well done, and let&amp;rsquo;s make Resolute Raccoon an LTS to remember - for all the &lt;em&gt;right&lt;/em&gt; reasons!&lt;/p&gt;</description></item><item><title>The Immutable Linux Paradox</title><link>https://jnsgr.uk/2025/09/immutable-linux-paradox/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/09/immutable-linux-paradox/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/the-immutable-linux-paradox/66456" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Immutable Linux distributions have been around since the early 2000s, but adoption has significantly accelerated in the last five years. Mainstream operating systems (OSes) such as &lt;a href="https://www.apple.com/macos" target="_blank" rel="noreferrer"&gt;macOS&lt;/a&gt;, &lt;a href="https://www.android.com/intl/en_uk/" target="_blank" rel="noreferrer"&gt;Android&lt;/a&gt;, &lt;a href="https://chromeos.google/intl/en_uk/" target="_blank" rel="noreferrer"&gt;ChromeOS&lt;/a&gt; and &lt;a href="https://www.apple.com/ios" target="_blank" rel="noreferrer"&gt;iOS&lt;/a&gt; have all embraced similar principles, reflecting a growing trend toward resilience, longevity, and maintainability as core ideals of OS development.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://ubuntu.com/core" target="_blank" rel="noreferrer"&gt;Ubuntu Core&lt;/a&gt; has been at the forefront of this movement for IoT, appliances and edge deployments, with work ongoing to release a &amp;ldquo;Core Desktop&amp;rdquo; experience. Other projects such as &lt;a href="https://nixos.org/" target="_blank" rel="noreferrer"&gt;NixOS&lt;/a&gt;, &lt;a href="https://fedoraproject.org/atomic-desktops/silverblue/" target="_blank" rel="noreferrer"&gt;Fedora Silverblue&lt;/a&gt; and &lt;a href="https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/introducing-image-mode-for-rhel_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems" target="_blank" rel="noreferrer"&gt;Red Hat image mode&lt;/a&gt; are gaining adoption, alongside more specialised immutable distributions such as &lt;a href="https://store.steampowered.com/steamos" target="_blank" rel="noreferrer"&gt;SteamOS&lt;/a&gt; and &lt;a href="https://www.talos.dev/" target="_blank" rel="noreferrer"&gt;Talos&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This post explores how different Linux distributions achieve immutability, the trade-offs, and why you should give it a try!&lt;/p&gt;
&lt;h2 id="what-is-an-immutable-linux-distribution" class="relative group"&gt;What is an immutable Linux distribution? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#what-is-an-immutable-linux-distribution" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;The key principle of an immutable OS is that the core system is unchangeable at runtime.&lt;/p&gt;
&lt;p&gt;Every OS installation has at least one filesystem that stores system software, user software, and user data. Immutable OSes must cleanly separate &amp;ldquo;system&amp;rdquo; and &amp;ldquo;user&amp;rdquo; software and data, such that regular user interactions cannot compromise the integrity of the OS.&lt;/p&gt;
&lt;p&gt;Immutable deployments are often separated into three layers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Base OS&lt;/strong&gt; - immutable core, updated only through controlled mechanisms&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Applications&lt;/strong&gt; - user applications, often delivered in containerised formats such as &lt;a href="https://snapcraft.io/docs" target="_blank" rel="noreferrer"&gt;Snap&lt;/a&gt;, &lt;a href="https://flatpak.org/" target="_blank" rel="noreferrer"&gt;Flatpak&lt;/a&gt;, &lt;a href="https://appimage.org/" target="_blank" rel="noreferrer"&gt;AppImage&lt;/a&gt;, &lt;a href="https://github.com/Containerpak/cpak" target="_blank" rel="noreferrer"&gt;cpak&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;User data&lt;/strong&gt; - writable and persistent, independent of OS updates or rollbacks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Immutable systems use atomic, transactional updates: each update is applied as a single, indivisible operation that either wholly succeeds, or fails completely and triggers an automated rollback to the previous known-good state.&lt;/p&gt;
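&lt;p&gt;The core mechanism is usually an &amp;ldquo;A/B&amp;rdquo; switch: the new system tree is staged completely off to the side, then activated in one atomic step. A minimal sketch of the idea using a symlink flip (file and path names here are purely illustrative):&lt;/p&gt;

```shell
# Sketch of the "stage, then switch atomically" pattern used by
# image-based updaters. The new tree is fully prepared before a
# single rename(2) flips the "current" pointer, so a crash
# mid-update leaves the old system intact.
set -eu
root=$(mktemp -d)
mkdir "$root/tree-a" "$root/tree-b"
echo "release 1" > "$root/tree-a/os-release"
echo "release 2" > "$root/tree-b/os-release"

ln -s "tree-a" "$root/current"            # deploy release 1
cat "$root/current/os-release"            # prints: release 1

ln -s "tree-b" "$root/current.new"        # stage release 2 fully...
mv -T "$root/current.new" "$root/current" # ...then switch in one rename
cat "$root/current/os-release"            # prints: release 2
```

&lt;p&gt;Real implementations like &lt;code&gt;ostree&lt;/code&gt; operate on whole bootable filesystem trees and bootloader entries rather than a single symlink, but the either-old-or-new property is the same.&lt;/p&gt;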
&lt;h2 id="why-immutability" class="relative group"&gt;Why immutability? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#why-immutability" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;The major benefit of an immutable OS is &lt;em&gt;resilience&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Immutable OSes make it easier to reproduce systems with a given configuration, which is particularly useful in scale-out use-cases such as cloud or IoT.&lt;/p&gt;
&lt;p&gt;Traditional package managers often maintain a database of installed packages, consisting of those included in the base OS, and those explicitly installed by the user, and their dependencies. The package manager &lt;em&gt;doesn&amp;rsquo;t&lt;/em&gt; have a clear notion of which packages make up the &amp;ldquo;core system&amp;rdquo;, and which are &amp;ldquo;optional&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;Over time, this can cause &amp;ldquo;configuration drift&amp;rdquo;: a package is explicitly installed by a user, used for a while, and then removed - but without its dependencies being removed. This leaves the system in a different, and somewhat undefined, state from the one it was in before the package was installed.&lt;/p&gt;
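&lt;p&gt;As a toy illustration of that drift (the package names are hypothetical, and the &amp;ldquo;package database&amp;rdquo; is just a directory of marker files):&lt;/p&gt;

```shell
# Simulate drift: installing "myapp" pulls in dependency "libfoo";
# later, only "myapp" is removed, so the installed set no longer
# matches the pristine system.
set -eu
db=$(mktemp -d)
touch "$db/base-files"            # the freshly installed system
pristine=$(ls "$db")

touch "$db/myapp" "$db/libfoo"    # user installs myapp + dependency
rm "$db/myapp"                    # ...and later removes only myapp

[ "$(ls "$db")" = "$pristine" ] || echo "state has drifted"
```

&lt;p&gt;On Debian-based systems, &lt;code&gt;apt autoremove&lt;/code&gt; mitigates this particular case, but the broader point stands: the package database records what &lt;em&gt;is&lt;/em&gt; installed, not what the &amp;ldquo;core system&amp;rdquo; &lt;em&gt;should be&lt;/em&gt;.&lt;/p&gt;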
&lt;p&gt;Immutable OS concepts often improve traditional OS security too. In most implementations, the core OS files are mounted read-only such that users &lt;em&gt;cannot&lt;/em&gt; make changes - which also raises the bar for malicious modifications. When combined with technologies such as secure boot and confinement, immutable OSes can dramatically reduce the attack surface of a machine.&lt;/p&gt;
&lt;p&gt;Finally, convenience! Immutable OSes often include recovery or rollback features, which enable users to &amp;ldquo;undo&amp;rdquo; a bad system change, reverting to a previous known-good revision.&lt;/p&gt;
&lt;h2 id="the-immutability-paradox" class="relative group"&gt;The immutability paradox &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-immutability-paradox" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;In reality, no general-purpose operating system is fully immutable.&lt;/p&gt;
&lt;p&gt;There is always persistent, user-writable storage - because without this there would be a huge limitation on usefulness! Similarly, how can a system be truly immutable, yet still support software updates?&lt;/p&gt;
&lt;p&gt;The terms &amp;ldquo;immutable&amp;rdquo; and &amp;ldquo;stateless&amp;rdquo; are often conflated - when in reality neither is an excellent term for describing what has become widely known as &amp;ldquo;immutable OSes&amp;rdquo;. This was explored in some depth in &lt;a href="https://blog.verbum.org/2020/08/22/immutable-%E2%86%92-reprovisionable-anti-hysteresis/" target="_blank" rel="noreferrer"&gt;this blog post&lt;/a&gt;, which proposes terms such as &amp;ldquo;image based&amp;rdquo; and &amp;ldquo;fully managed&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;By definition, changes to configuration, the installation of applications and the use of temporary runtime storage are all violations of immutability, and thus immutability concepts must be applied in some sort of layering system.&lt;/p&gt;
&lt;p&gt;Striking the balance between &amp;lsquo;true&amp;rsquo; immutability and user experience is one of the hardest challenges in immutable OS design. A system that is too rigid can be difficult to manage and use, appearing inflexible to end users.&lt;/p&gt;
&lt;p&gt;A common pattern is to run an immutable desktop OS and use virtualisation or containerisation technologies (e.g. &lt;a href="https://canonical.com/lxd" target="_blank" rel="noreferrer"&gt;LXD&lt;/a&gt;, &lt;a href="https://podman.io/" target="_blank" rel="noreferrer"&gt;Podman&lt;/a&gt;, &lt;a href="https://containertoolbx.org/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;toolbx&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://distrobox.it/" target="_blank" rel="noreferrer"&gt;Distrobox&lt;/a&gt;) to create mutable environments in which to work on projects. This results in a very stable workstation that benefits from immutability, with the flexibility of a traditional mutable OS where it&amp;rsquo;s needed.&lt;/p&gt;
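&lt;p&gt;For example, with Distrobox and a container runtime available on the host, a mutable Ubuntu development environment is a couple of commands away (a sketch only - the image name and packages are just examples):&lt;/p&gt;

```shell
# Create and enter a mutable Ubuntu container on an immutable host.
# Requires distrobox and a container runtime (e.g. podman) on the host.
distrobox create --name dev --image ubuntu:24.04
distrobox enter dev

# Inside the container, apt works as on a regular Ubuntu system,
# while the host OS remains untouched:
sudo apt update
sudo apt install -y build-essential
```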
&lt;h2 id="approaches-to-immutability" class="relative group"&gt;Approaches to immutability &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#approaches-to-immutability" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Different distributions solve the immutability challenge in different ways. In this section we&amp;rsquo;ll explore the four different approaches of &lt;code&gt;ostree&lt;/code&gt; based distributions, &lt;code&gt;bootc&lt;/code&gt; based distributions, NixOS and Ubuntu Core.&lt;/p&gt;
&lt;h3 id="fedora-silverblue--coreos--endlessos-ostree" class="relative group"&gt;Fedora Silverblue / CoreOS / EndlessOS (&lt;code&gt;ostree&lt;/code&gt;) &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#fedora-silverblue--coreos--endlessos-ostree" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;&lt;a href="https://fedoraproject.org/atomic-desktops/silverblue/" target="_blank" rel="noreferrer"&gt;Fedora Silverblue&lt;/a&gt; and &lt;a href="https://fedoraproject.org/coreos/" target="_blank" rel="noreferrer"&gt;Fedora CoreOS&lt;/a&gt; are also popular choices for those exploring immutable OSes. The two share a lot of underlying technology with Silverblue targeting desktop use cases, and CoreOS targeting server deployments.&lt;/p&gt;
&lt;p&gt;Both are based on &lt;a href="https://ostreedev.github.io/ostree/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;ostree&lt;/code&gt;&lt;/a&gt;, which provides:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;tools that combine a &amp;lsquo;git-like&amp;rsquo; model for committing and downloading bootable filesystem trees, along with a layer for deploying them and managing the bootloader configuration.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Silverblue and CoreOS actually rely on &lt;a href="https://coreos.github.io/rpm-ostree/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;rpm-ostree&lt;/code&gt;&lt;/a&gt;, a &amp;ldquo;hybrid image/package manager&amp;rdquo; which combines RPM packaging technology with &lt;code&gt;ostree&lt;/code&gt; to manage deployments.&lt;/p&gt;
&lt;p&gt;The update mechanism involves switching the filesystem to track a different remote &amp;ldquo;ref&amp;rdquo;, which is analogous to a git &lt;a href="https://git-scm.com/book/ms/v2/Git-Internals-Git-References" target="_blank" rel="noreferrer"&gt;ref&lt;/a&gt;.&lt;/p&gt;
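&lt;p&gt;As a rough sketch (ref names vary by release and edition, so the ref below is illustrative), inspecting the current deployment and switching refs looks something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Show the booted deployment and the remote ref it tracks
rpm-ostree status

# Rebase to a different remote ref, then reboot into the new tree
rpm-ostree rebase fedora:fedora/43/x86_64/silverblue
systemctl reboot
&lt;/code&gt;&lt;/pre&gt;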
&lt;p&gt;&lt;a href="https://www.endlessos.org/" target="_blank" rel="noreferrer"&gt;EndlessOS&lt;/a&gt; is based on &lt;a href="https://www.debian.org/" target="_blank" rel="noreferrer"&gt;Debian&lt;/a&gt;, but uses &lt;code&gt;ostree&lt;/code&gt; to achieve immutability. EndlessOS is a desktop experience designed more for the &amp;ldquo;average user&amp;rdquo; and focuses on providing a reliable system that works well in low-bandwidth or offline situations.&lt;/p&gt;
&lt;p&gt;Users often use Flatpak to install graphical user applications atop the immutable base, or a user-space package manager such as &lt;a href="https://brew.sh/" target="_blank" rel="noreferrer"&gt;brew&lt;/a&gt; for other utilities.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ostree&lt;/code&gt; based distributions also support &amp;ldquo;&lt;a href="https://docs.fedoraproject.org/en-US/fedora-silverblue/getting-started/#package-layering" target="_blank" rel="noreferrer"&gt;package layering&lt;/a&gt;&amp;rdquo; which enables adding packages to the base system without fetching a whole new filesystem ref, but does require the system to be rebooted before the package is persistently available. The documentation notes that this approach is to be used &amp;ldquo;sparingly&amp;rdquo;, and that users should prefer using Flatpak or &lt;a href="https://containertoolbx.org/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;toolbx&lt;/code&gt;&lt;/a&gt; to access additional packages.&lt;/p&gt;
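&lt;p&gt;For illustration, layering and removing a package looks like this (the package name is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Layer an extra package onto the base image; the change persists
# across updates, but only takes effect after a reboot
rpm-ostree install htop

# Remove a previously layered package
rpm-ostree uninstall htop
&lt;/code&gt;&lt;/pre&gt;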
&lt;h3 id="rhel-image-mode-bootc" class="relative group"&gt;RHEL &amp;ldquo;Image Mode&amp;rdquo; (&lt;code&gt;bootc&lt;/code&gt;) &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#rhel-image-mode-bootc" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;&lt;code&gt;bootc&lt;/code&gt; based distributions use an alternate approach, packaging the base system into OCI containers (commonly referred to as Docker containers). Atomicity and transactionality are achieved by using container images to deliver the entire core system, and rebooting into a new revision.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/introducing-image-mode-for-rhel_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems" target="_blank" rel="noreferrer"&gt;RHEL Image Mode&lt;/a&gt; uses &lt;a href="https://bootc-dev.github.io/bootc/intro.html" target="_blank" rel="noreferrer"&gt;&lt;code&gt;bootc&lt;/code&gt;&lt;/a&gt;. This technology capitalises on the success of OCI containers as a transport and delivery mechanism for software by packing an entire OS base image into a single container, including the kernel image.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;bootc&lt;/code&gt; project builds on &lt;a href="https://ostreedev.github.io/ostree/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;ostree&lt;/code&gt;&lt;/a&gt;, but where &lt;code&gt;ostree&lt;/code&gt; never delivered an opinionated &amp;ldquo;install mechanism&amp;rdquo;, &lt;code&gt;bootc&lt;/code&gt; does. The content of a &lt;code&gt;bootc&lt;/code&gt; image is an &lt;code&gt;ostree&lt;/code&gt; filesystem.&lt;/p&gt;
&lt;p&gt;Installing new system packages generally means building a new base image, downloading that image and rebooting into it with a command such as &lt;code&gt;bootc switch &amp;lt;image reference&amp;gt;&lt;/code&gt;.&lt;/p&gt;
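&lt;p&gt;A minimal sketch of that workflow, assuming a hypothetical registry at &lt;code&gt;registry.example.com&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Containerfile: derive a custom OS image from a bootc base image
#   FROM quay.io/fedora/fedora-bootc:41
#   RUN dnf -y install htop

# Build and push the image like any other OCI container
podman build -t registry.example.com/os/custom:latest .
podman push registry.example.com/os/custom:latest

# Point the booted system at the new image, then reboot into it
sudo bootc switch registry.example.com/os/custom:latest
sudo systemctl reboot
&lt;/code&gt;&lt;/pre&gt;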
&lt;p&gt;As with &lt;code&gt;ostree&lt;/code&gt; based distributions, users often use Flatpak to install graphical user applications atop the immutable base, or a user-space package manager such as &lt;a href="https://brew.sh/" target="_blank" rel="noreferrer"&gt;brew&lt;/a&gt; for other utilities.&lt;/p&gt;
&lt;h3 id="nixos" class="relative group"&gt;NixOS &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#nixos" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The Nix project first appeared in 2003. &lt;a href="https://nixos.org/" target="_blank" rel="noreferrer"&gt;NixOS&lt;/a&gt; is built on top of the Nix package manager, using it to manage both packages &lt;em&gt;and&lt;/em&gt; system configuration.&lt;/p&gt;
&lt;p&gt;NixOS defines the entire system through a declarative configuration. Changes are applied by &amp;ldquo;rebuilding&amp;rdquo; the system configuration, which produces a new &amp;ldquo;generation&amp;rdquo; that can be rolled back.&lt;/p&gt;
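&lt;p&gt;For illustration, the rebuild and rollback cycle from the command line looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Build and activate a new generation from the system configuration
sudo nixos-rebuild switch

# Activate the previous generation again if something broke
sudo nixos-rebuild switch --rollback

# List the generations known to the system profile
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
&lt;/code&gt;&lt;/pre&gt;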
&lt;p&gt;Nix packages, and therefore NixOS, eschew the traditional &lt;a href="https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard" target="_blank" rel="noreferrer"&gt;Unix FHS&lt;/a&gt; in favour of the Nix &amp;ldquo;store&amp;rdquo; and a collection of symlinks and wrappers managed by Nix. Only the Nix package manager can write to the store.&lt;/p&gt;
&lt;p&gt;The Nix store also (mostly) enables the building and switching of generations without a reboot. Updates are atomic: new generations must build completely before they can be activated. The &lt;a href="https://github.com/nix-community/home-manager" target="_blank" rel="noreferrer"&gt;&lt;code&gt;home-manager&lt;/code&gt;&lt;/a&gt; project extends these concepts to the user environment and dotfile management.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/nix-community/impermanence" target="_blank" rel="noreferrer"&gt;&lt;code&gt;impermanence&lt;/code&gt;&lt;/a&gt; project requires that every persistent directory is explicitly labelled, or else it&amp;rsquo;s deleted on every reboot, forcing the base OS to be rebuilt from the Nix store and system configuration - essentially &amp;ldquo;enforcing&amp;rdquo; core system immutability between reboots. This was inspired by blog posts &amp;ldquo;&lt;a href="https://grahamc.com/blog/erase-your-darlings/" target="_blank" rel="noreferrer"&gt;Erase Your Darlings&lt;/a&gt;&amp;rdquo; and &amp;ldquo;&lt;a href="https://elis.nu/blog/2020/05/nixos-tmpfs-as-root/" target="_blank" rel="noreferrer"&gt;NixOS tmpfs as root&lt;/a&gt;&amp;rdquo;, which are worth a read, too!&lt;/p&gt;
&lt;h3 id="ubuntu-core" class="relative group"&gt;Ubuntu Core &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#ubuntu-core" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Ubuntu Core achieves immutability by packaging every component (kernel, base system, applications) as Snaps.&lt;/p&gt;
&lt;p&gt;Snap &lt;a href="https://snapcraft.io/docs/snap-confinement" target="_blank" rel="noreferrer"&gt;confinement&lt;/a&gt; enforces isolation, and &lt;code&gt;snapd&lt;/code&gt; manages transactional updates and rollbacks. The system is designed for reliability, fleet management, and modular upgrades, making it well-suited for IoT and soon, desktop use.&lt;/p&gt;
&lt;p&gt;The key &lt;a href="https://documentation.ubuntu.com/core/explanation/core-elements/inside-ubuntu-core/" target="_blank" rel="noreferrer"&gt;components&lt;/a&gt; of an Ubuntu Core deployment are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Gadget snap&lt;/strong&gt;: provides boot assets, including board specific binaries and data (bootloader, device tree, etc.)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kernel snap&lt;/strong&gt;: kernel image and associated modules, along with initial ramdisk for system initialisation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Base snap&lt;/strong&gt;: execution environment in which applications run - includes &amp;ldquo;base&amp;rdquo; Ubuntu LTS packages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;System snaps&lt;/strong&gt;: packages critical to system function such as &lt;a href="https://documentation.ubuntu.com/core/explanation/system-snaps/network-manager/" target="_blank" rel="noreferrer"&gt;Network-Manager&lt;/a&gt;, &lt;a href="https://documentation.ubuntu.com/core/explanation/system-snaps/bluetooth/" target="_blank" rel="noreferrer"&gt;bluez&lt;/a&gt;, pulseaudio, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application snaps&lt;/strong&gt;: define the functionality of the system, &lt;a href="https://snapcraft.io/docs/snap-confinement" target="_blank" rel="noreferrer"&gt;confined to a sandbox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Snapd&lt;/strong&gt;: manages updates, rollbacks and snapshotting/restoring of user data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In a Core Desktop installation, the desktop environment (GNOME, Plasma, etc.), display manager, and login manager would all be delivered as &amp;ldquo;system snaps&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;Snap &lt;a href="https://snapcraft.io/docs/snap-confinement" target="_blank" rel="noreferrer"&gt;confinement&lt;/a&gt; ensures packages cannot incorrectly interact with the underlying system or user data without explicit approval. In an Ubuntu Core deployment, this notion is extended to every component of the OS, offering a straightforward yet powerful way to manage risk for each system component.&lt;/p&gt;
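&lt;p&gt;The transactional update and rollback behaviour is visible from the &lt;code&gt;snap&lt;/code&gt; CLI; for example (the snap name is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Refresh a snap to a newer revision; the previous revision is retained
sudo snap refresh firefox

# Roll back to the previously installed revision
sudo snap revert firefox
&lt;/code&gt;&lt;/pre&gt;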
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Immutable Linux distributions approach the immutability paradox differently. We explored four different approaches here, and you can learn more about other approaches taken by the likes of &lt;a href="https://microos.opensuse.org/" target="_blank" rel="noreferrer"&gt;SUSE MicroOS&lt;/a&gt; (filesystem based immutability) and &lt;a href="https://vanillaos.org/" target="_blank" rel="noreferrer"&gt;Vanilla OS&lt;/a&gt; (uses &lt;a href="https://github.com/Vanilla-OS/ABRoot" target="_blank" rel="noreferrer"&gt;ABRoot&lt;/a&gt;) in this &lt;a href="https://dataswamp.org/~solene/2023-07-12-intro-to-immutable-os.html" target="_blank" rel="noreferrer"&gt;excellent blog post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Ubuntu Core focuses on transactional packaging and a clean separation of system &amp;amp; user data. &lt;code&gt;bootc&lt;/code&gt;-based systems take a full image-based approach, while NixOS offers extreme flexibility through declarative configuration, but at the cost of complexity.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;ve yet to try an immutable Linux distribution, I&amp;rsquo;d recommend giving it a go. Whether you prioritise simplicity, security or declarative control, there&amp;rsquo;s almost certainly an immutable Linux distribution that fits your needs.&lt;/p&gt;</description></item><item><title>Crafting Your Software</title><link>https://jnsgr.uk/2025/07/crafting-your-software/</link><pubDate>Mon, 21 Jul 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/07/crafting-your-software/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/crafting-your-software/64809" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Packaging software is notoriously tricky. Every language, framework, and build system has its quirks, and the variety of artifact types — from Debian packages to OCI images and cloud images — only adds to the complexity.&lt;/p&gt;
&lt;p&gt;Over the past decade, Canonical has been refining a family of tools called “crafts” to tame this complexity and make building, testing, and releasing software across ecosystems much simpler.&lt;/p&gt;
&lt;p&gt;The journey began on 23rd June 2015 when the first commit was made to &lt;a href="https://github.com/canonical/snapcraft" target="_blank" rel="noreferrer"&gt;Snapcraft&lt;/a&gt;, the tool used to build Snap packages. For years, Snapcraft was &lt;em&gt;the only&lt;/em&gt; craft in our portfolio, but in the last five years, we’ve generalized much of what we learned about building, testing, and releasing software into a number of &amp;ldquo;crafts&amp;rdquo; for building different artifact types.&lt;/p&gt;
&lt;p&gt;Last month, I &lt;a href="https://jnsgr.uk/2025/06/introducing-debcrafters/" target="_blank" rel="noreferrer"&gt;outlined&lt;/a&gt; Canonical&amp;rsquo;s plan to build &lt;code&gt;debcraft&lt;/code&gt; as a next-generation way to build Debian packages. In this post I&amp;rsquo;ll talk about what exactly &lt;em&gt;makes&lt;/em&gt; a craft, and why you should bother learning to use them.&lt;/p&gt;
&lt;h2 id="software-build-lifecycle" class="relative group"&gt;Software build lifecycle &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#software-build-lifecycle" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;At the heart of all our crafts is &lt;a href="https://canonical-craft-parts.readthedocs-hosted.com/latest/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;craft-parts&lt;/code&gt;&lt;/a&gt;, which according to the &lt;a href="https://canonical-craft-parts.readthedocs-hosted.com/latest/" target="_blank" rel="noreferrer"&gt;documentation&lt;/a&gt; &amp;ldquo;provides a mechanism to obtain data from different sources, process it in various ways, and prepare a filesystem sub-tree suitable for packaging&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;Put simply, &lt;code&gt;craft-parts&lt;/code&gt; gives developers consistent tools to fetch, build, and prepare software from any ecosystem for packaging into various formats.&lt;/p&gt;
&lt;h3 id="lifecycle-stages" class="relative group"&gt;Lifecycle stages &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#lifecycle-stages" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Every part has a minimum of four lifecycle stages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PULL&lt;/code&gt;: source code or binary artifacts, along with dependencies are pulled from various sources&lt;/li&gt;
&lt;li&gt;&lt;code&gt;BUILD&lt;/code&gt;: software is built automatically by a &lt;code&gt;plugin&lt;/code&gt;, or a set of custom steps defined by the developer&lt;/li&gt;
&lt;li&gt;&lt;code&gt;STAGE&lt;/code&gt;: select outputs from the &lt;code&gt;BUILD&lt;/code&gt; phase are copied to a unified staging area for all parts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PRIME&lt;/code&gt;: files from the staging area are copied to the priming area for use in the final artifact.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;STAGE&lt;/code&gt; and &lt;code&gt;PRIME&lt;/code&gt; steps are similar, except that &lt;code&gt;PRIME&lt;/code&gt; only happens after &lt;em&gt;all&lt;/em&gt; parts of the build are staged. Additionally, &lt;code&gt;STAGE&lt;/code&gt; gives parts the opportunity to build or supply dependencies for other parts that might not be required in the final artifact.&lt;/p&gt;
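&lt;p&gt;In a part definition, this distinction surfaces as the &lt;code&gt;stage&lt;/code&gt; and &lt;code&gt;prime&lt;/code&gt; keys, which filter the files carried forward at each step. A hedged sketch (the part name and file paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;my-app:
  plugin: go
  source: .
  # Copy only the built binary into the shared staging area
  stage:
    - bin/my-app
  # Exclude staged files that shouldn&amp;#39;t reach the final artifact
  prime:
    - -bin/my-app.debug
&lt;/code&gt;&lt;/pre&gt;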
&lt;h3 id="lifecycle-in-the-cli" class="relative group"&gt;Lifecycle in the CLI &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#lifecycle-in-the-cli" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The lifecycle stages aren’t just in the build recipe, they’re also first-class citizens in each craft’s CLI, thanks to the &lt;a href="https://github.com/canonical/craft-cli" target="_blank" rel="noreferrer"&gt;craft-cli&lt;/a&gt; library. This ensures a consistent command-line experience across all craft tools.&lt;/p&gt;
&lt;p&gt;Take the following examples:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run the full process including PULL, BUILD, STAGE, PRIME and then pack the final artifact&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;snapcraft pack
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;charmcraft pack
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rockcraft pack
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run the process up to the end of the STAGE step&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rockcraft stage
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run the process up to the PRIME step&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;charmcraft prime
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This design feature supports a smoother iterative development and debugging workflow for building and testing software artifacts.&lt;/p&gt;
&lt;h3 id="part-definition" class="relative group"&gt;Part definition &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#part-definition" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The &lt;code&gt;parts&lt;/code&gt; of a build vary in complexity - some require two-three trivial lines, others require detailed specification of dependencies, build flags, environment variables and steps. The best way to understand the flexibility of this system is by looking at some examples.&lt;/p&gt;
&lt;p&gt;First, consider this (annotated) example from my &lt;a href="https://github.com/jnsgruk/icloudpd-snap/blob/beb2c7d2539547dfff5d4fd99687573d75597633/snap/snapcraft.yaml" target="_blank" rel="noreferrer"&gt;icloudpd snap&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;icloudpd&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Use the &amp;#39;python&amp;#39; plugin to build the&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# software. This takes care of identifying&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Python package dependencies, building the wheel&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# and ensuring the project&amp;#39;s dependencies are staged&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# appropriately.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;python&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Fetch the project from Github, using the tag the matches&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# the version of the project.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;https://github.com/icloud-photos-downloader/icloud_photos_downloader&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source-tag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v$SNAPCRAFT_PROJECT_VERSION&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;git&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This spec is everything needed to fetch, build and stage the important bits required to run the software - in this case a Python wheel and its dependencies.&lt;/p&gt;
&lt;p&gt;Some projects might require more set up, perhaps an additional package is required or a specific version of a dependency is needed. Let&amp;rsquo;s take a look at a slightly more complex example taken from my &lt;a href="https://github.com/jnsgruk/zinc-k8s-operator/blob/5516be2c50e52b33742c674f266c8dfca55e6edf/rockcraft.yaml#L90C3-L100C20" target="_blank" rel="noreferrer"&gt;zinc-k8s-operator&lt;/a&gt; project:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;kube-log-runner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Use the &amp;#39;go&amp;#39; plugin to build the software.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;go&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Fetch the source code from Git at the &amp;#39;v0.17.0&amp;#39; tag.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;https://github.com/kubernetes/release&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;git&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source-tag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;v0.17.8&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Change to the specified sub-directory for the build.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;source-subdir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;images/build/go-runner&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Install the following snaps in the build environment.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-snaps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;go/1.20/stable&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Set the following environment variables in the build&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# environment.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;GOOS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;linux&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This instructs &lt;code&gt;rockcraft&lt;/code&gt; to fetch a Git repository at a particular tag, change into the sub-directory &lt;code&gt;images/build/go-runner&lt;/code&gt;, then build the software using the &lt;code&gt;go&lt;/code&gt; plugin. It also specifies that the build requires the &lt;code&gt;go&lt;/code&gt; snap from the &lt;code&gt;1.20/stable&lt;/code&gt; track, and sets some environment variables. That&amp;rsquo;s a lot of result for not much YAML. The end result is a single binary that&amp;rsquo;s &amp;ldquo;staged&amp;rdquo; and ready to be placed (in this case) into a &lt;a href="https://documentation.ubuntu.com/rockcraft/en/latest/explanation/rocks/" target="_blank" rel="noreferrer"&gt;Rock&lt;/a&gt; (Canonical&amp;rsquo;s name for OCI images).&lt;/p&gt;
&lt;p&gt;And the best part: this exact definition can be used in a &lt;code&gt;rockcraft.yaml&lt;/code&gt; when building a Rock, a &lt;code&gt;snapcraft.yaml&lt;/code&gt; when building a Snap, a &lt;code&gt;charmcraft.yaml&lt;/code&gt; when building a Charm, etc.&lt;/p&gt;
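&lt;p&gt;As a sketch, a hypothetical part like the following could be dropped unchanged into any of those recipes (the repository URL and tag are illustrative):&lt;/p&gt;

```yaml
# A hypothetical part, valid in rockcraft.yaml, snapcraft.yaml
# or charmcraft.yaml alike. The source URL and tag are made up.
parts:
  hello:
    plugin: go
    source: https://github.com/example/hello.git
    source-tag: v1.0.0
    # Pull the Go toolchain from the snap store during the build.
    build-snaps:
      - go/1.20/stable
```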
&lt;p&gt;The plugin system is extensive: at the time of writing there are &lt;a href="https://canonical-craft-parts.readthedocs-hosted.com/latest/reference/plugins/" target="_blank" rel="noreferrer"&gt;22 supported plugins&lt;/a&gt;, including &lt;code&gt;go&lt;/code&gt;, &lt;code&gt;maven&lt;/code&gt;, &lt;code&gt;uv&lt;/code&gt;, &lt;code&gt;meson&lt;/code&gt; and more. If your build system of choice isn&amp;rsquo;t supported you can specify manual steps, giving you as much flexibility as you need:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;span class="lnt"&gt;21
&lt;/span&gt;&lt;span class="lnt"&gt;22
&lt;/span&gt;&lt;span class="lnt"&gt;23
&lt;/span&gt;&lt;span class="lnt"&gt;24
&lt;/span&gt;&lt;span class="lnt"&gt;25
&lt;/span&gt;&lt;span class="lnt"&gt;26
&lt;/span&gt;&lt;span class="lnt"&gt;27
&lt;/span&gt;&lt;span class="lnt"&gt;28
&lt;/span&gt;&lt;span class="lnt"&gt;29
&lt;/span&gt;&lt;span class="lnt"&gt;30
&lt;/span&gt;&lt;span class="lnt"&gt;31
&lt;/span&gt;&lt;span class="lnt"&gt;32
&lt;/span&gt;&lt;span class="lnt"&gt;33
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;wasi-sdk&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# There is no appropriate plugin for this part, so set&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# it to &amp;#39;nil&amp;#39; and we&amp;#39;ll specify our own build process&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# using &amp;#39;override-build&amp;#39;.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nil&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# In this recipe, a previous part named &amp;#39;clang&amp;#39; is&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# required to build before attempting to build this&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# part.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;after&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;clang&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Specify any `apt` packages required in the build&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# environment.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-packages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;wget&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Set some environment variables for the build&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# environment.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;WASI_BRANCH&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;15&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;WASI_RELEASE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;15.0&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Define how to pull the software manually.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;override-pull&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; ROOT=https://github.com/WebAssembly/wasi-sdk/releases/download/wasi-sdk-$WASI_BRANCH
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; wget $ROOT/wasi-sysroot-$WASI_RELEASE.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; wget $ROOT/libclang_rt.builtins-wasm32-wasi-$WASI_RELEASE.tar.gz&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Define how to &amp;#39;build&amp;#39; the software manually&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;override-build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; craftctl default
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; tar -C $CRAFT_STAGE -xf wasi-sysroot-$WASI_RELEASE.tar.gz
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; tar -C $CRAFT_STAGE/usr/lib/clang/* -xf libclang_rt.builtins-wasm32-wasi-$WASI_RELEASE.tar.gz&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Don&amp;#39;t prime anything for inclusion in the&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# final artifact; this part is only used for&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# another part&amp;#39;s build process.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;override-prime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Here, multiple stages of the lifecycle are overridden using &lt;code&gt;override-pull&lt;/code&gt;, &lt;code&gt;override-build&lt;/code&gt; and &lt;code&gt;override-prime&lt;/code&gt;, and we see &lt;code&gt;craftctl default&lt;/code&gt; for the first time, which instructs the craft to do whatever it would have done prior to being overridden, while allowing the developer to provide additional steps either before or after the default actions.&lt;/p&gt;
&lt;h2 id="isolated-build-environments" class="relative group"&gt;Isolated build environments &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#isolated-build-environments" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Even once a recipe for building software is defined, preparing machines to build software can be painful. Different major versions of the same OS might have varying package availability, your team might run completely different operating systems, and you might have limited image availability in your CI environment.&lt;/p&gt;
&lt;p&gt;The crafts solve this with build &amp;ldquo;backends&amp;rdquo;. Currently the crafts can use &lt;a href="https://canonical.com/lxd" target="_blank" rel="noreferrer"&gt;LXD&lt;/a&gt; or &lt;a href="https://canonical.com/multipass" target="_blank" rel="noreferrer"&gt;Multipass&lt;/a&gt; to create isolated build environments, which makes them work nicely on Linux, macOS and Windows. This functionality is handled automatically through the &lt;a href="https://canonical-craft-providers.readthedocs-hosted.com/en/latest/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;craft-providers&lt;/code&gt;&lt;/a&gt; library, which provides uniform interfaces for creating build environments, configuring base images and executing builds.&lt;/p&gt;
&lt;p&gt;This means if you can run &lt;code&gt;snapcraft pack&lt;/code&gt; on your machine, your teammates can also run the same command without worrying about installing the right dependencies or polluting their machines with software and temporary files that might result from the build.&lt;/p&gt;
&lt;p&gt;One of my favourite features of this setup is the ability to drop into a shell inside the build environment automatically on a few different conditions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Drop into a shell if any part of the build fails.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;snapcraft pack --debug
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Drop into a shell after the build stage.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;rockcraft build --shell-after
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Drop to a shell in lieu of the prime stage.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;snapcraft prime --shell
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This makes troubleshooting a failing build much simpler, while allowing the developer to maintain a clean separation between the build environment and their local machine. Should the build environment ever become polluted, or otherwise difficult to work with, you can always start from a clean slate with &lt;code&gt;snapcraft|rockcraft|charmcraft clean&lt;/code&gt;. Each build machine is constructed from a cached &lt;code&gt;build-base&lt;/code&gt;, which contains all the baseline packages required by the craft - so recreating the build environment for a specific project only requires that base to be cloned and augmented with project-specific dependencies, keeping the process fast.&lt;/p&gt;
&lt;h2 id="saving-space" class="relative group"&gt;Saving space &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#saving-space" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;When packaging any kind of software, a common concern is the size of the artifact. This might be because you&amp;rsquo;re building an OCI-image that is pulled thousands of times a day as part of a major SaaS deployment, or maybe it&amp;rsquo;s a Snap for an embedded device running &lt;a href="https://ubuntu.com/core" target="_blank" rel="noreferrer"&gt;Ubuntu Core&lt;/a&gt; with a limited flash. In the container world, &amp;ldquo;&lt;a href="https://github.com/GoogleContainerTools/distroless" target="_blank" rel="noreferrer"&gt;distroless&lt;/a&gt;&amp;rdquo; became a popular way to solve this problem - essentially popularising the practice of shipping the barest minimum in a container image, eschewing much of the traditional Unix FHS.&lt;/p&gt;
&lt;p&gt;The parts mechanism has provided a way of &amp;ldquo;filtering&amp;rdquo; what is staged or primed into a final artifact from the start, which already gave developers autonomy to choose exactly what went into their builds.&lt;/p&gt;
&lt;p&gt;In addition to this, Canonical built &amp;ldquo;&lt;a href="https://documentation.ubuntu.com/chisel/en/latest/tutorial/getting-started/" target="_blank" rel="noreferrer"&gt;chisel&lt;/a&gt;&amp;rdquo;, which extends the distroless concept beyond containers to any kind of artifact. With &lt;code&gt;chisel&lt;/code&gt;, developers can slice out just the binaries, libraries, and configuration files they need from the Ubuntu Archive, enabling ultra-small packages without losing the robustness of Ubuntu’s ecosystem.&lt;/p&gt;
&lt;p&gt;We later launched &lt;a href="https://ubuntu.com/blog/chiseled-ubuntu-containers-openjre" target="_blank" rel="noreferrer"&gt;Chiseled JRE&lt;/a&gt; containers, and there are numerous other Rocks that utilise &lt;code&gt;chisel&lt;/code&gt; to strike a balance between shipping &lt;em&gt;tiny&lt;/em&gt; container images and benefiting from the huge selection and quality of software in the Ubuntu Archive.&lt;/p&gt;
&lt;p&gt;Because the crafts are all built on a common platform, they now all have the ability to use &amp;ldquo;slices&amp;rdquo; from &lt;a href="https://github.com/canonical/chisel-releases" target="_blank" rel="noreferrer"&gt;chisel-releases&lt;/a&gt;, which enables a greater range of use-cases where artifact size is a primary concern. Slices are community maintained, and specified in simple-to-understand YAML files. You can see the list of available slices for the most recent Ubuntu release (25.04 Plucky Puffin) &lt;a href="https://github.com/canonical/chisel-releases/tree/ubuntu-25.04/slices" target="_blank" rel="noreferrer"&gt;on GitHub&lt;/a&gt;, and further documentation on slices and how they&amp;rsquo;re used in the &lt;a href="https://documentation.ubuntu.com/chisel/en/latest/explanation/mode-of-operation/" target="_blank" rel="noreferrer"&gt;Chisel docs&lt;/a&gt;.&lt;/p&gt;
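&lt;p&gt;As an illustrative sketch, a part in a &lt;code&gt;rockcraft.yaml&lt;/code&gt; can stage slices instead of whole packages using the &lt;code&gt;package_slice&lt;/code&gt; naming convention (the slice names below must exist in chisel-releases for the chosen base):&lt;/p&gt;

```yaml
# Sketch: stage chisel slices rather than entire .deb packages.
parts:
  runtime-deps:
    plugin: nil
    stage-packages:
      - ca-certificates_data   # only the certificate data slice
      - libc6_libs             # only the shared libraries from libc6
```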
&lt;h2 id="multi-architecture-builds" class="relative group"&gt;Multi-architecture builds &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#multi-architecture-builds" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Ubuntu supports six major architectures at the time of writing (&lt;code&gt;amd64&lt;/code&gt;, &lt;code&gt;arm64&lt;/code&gt;, &lt;code&gt;armhf&lt;/code&gt;, &lt;code&gt;ppc64le&lt;/code&gt;, &lt;code&gt;s390x&lt;/code&gt;, &lt;code&gt;riscv64&lt;/code&gt;), and all of our crafts have first-class support for each of them. This functionality is provided primarily by the &lt;a href="https://github.com/canonical/craft-platforms" target="_blank" rel="noreferrer"&gt;craft-platforms&lt;/a&gt; library, and supported by the &lt;a href="https://github.com/canonical/craft-grammar" target="_blank" rel="noreferrer"&gt;craft-grammar&lt;/a&gt; library, which enables more complex definitions where builds may have different steps or requirements for different architectures.&lt;/p&gt;
&lt;p&gt;At a high-level, each artifact defines which architectures or platforms it is built &lt;em&gt;for&lt;/em&gt;, and which it is built &lt;em&gt;on&lt;/em&gt;. These are often, but not always, the same. For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;platforms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;amd64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This is shorthand for &amp;ldquo;build the project on &lt;code&gt;amd64&lt;/code&gt; for &lt;code&gt;amd64&lt;/code&gt;&amp;rdquo;, but in a different example taken from a &lt;code&gt;charmcraft.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;platforms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;all&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="l"&gt;amd64]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="l"&gt;all]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;In this case the software is built on &lt;code&gt;amd64&lt;/code&gt;, but can run on any of the supported architectures - this is common with pure-Python wheels, &lt;code&gt;bash&lt;/code&gt; scripts and other interpreted code that doesn&amp;rsquo;t link platform-specific libraries.&lt;/p&gt;
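&lt;p&gt;In the other direction, a &lt;code&gt;platforms&lt;/code&gt; stanza can declare that an artifact for one architecture may be built on another - a sketch:&lt;/p&gt;

```yaml
# Sketch: build natively for amd64, and allow the arm64 artifact
# to be built on either an amd64 (cross-build) or arm64 machine.
platforms:
  amd64:
    build-on: [amd64]
    build-for: [amd64]
  arm64:
    build-on: [amd64, arm64]
    build-for: [arm64]
```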
&lt;p&gt;In some build processes, the process or dependencies might differ per-architecture, which is where &lt;code&gt;craft-grammar&lt;/code&gt; comes in, enabling expressions such as (taken from &lt;a href="https://github.com/canonical/mesa-core22/blob/86060bf66e70d0f5d421fe818d61cdc0f18f9b31/snap/snapcraft.yaml#L265C3-L280C46" target="_blank" rel="noreferrer"&gt;GitHub&lt;/a&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;fit-image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# ...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;build-packages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# ...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;wget&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;libjson-c-dev:${CRAFT_ARCH_BUILD_FOR}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;libcryptsetup-dev:${CRAFT_ARCH_BUILD_FOR}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Only use the following build packages when building for armhf&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;to armhf&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;binutils-arm-linux-gnueabi&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;gcc-arm-linux-gnueabihf&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;pkgconf:armhf&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# When building for arm64, use a different set&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;to arm64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Dependencies for building *for* arm64 *on* amd64!&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;on amd64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;gcc-aarch64-linux-gnu&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;pkgconf:arm64&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;on arm64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;gcc&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Being able to define how to build on different architectures is only half of the battle, though. It&amp;rsquo;s one thing to define &lt;em&gt;how&lt;/em&gt; to build software on an &lt;code&gt;s390x&lt;/code&gt; machine but few developers have mainframes handy to actually &lt;em&gt;run&lt;/em&gt; the build! This is where the crafts&amp;rsquo; &lt;code&gt;remote-build&lt;/code&gt; capability comes in. The &lt;code&gt;remote-build&lt;/code&gt; command sends builds to Canonical&amp;rsquo;s build farm, which has native support for all of Ubuntu&amp;rsquo;s supported architectures. This is built into all of our crafts, and is triggered with &lt;code&gt;snapcraft remote-build&lt;/code&gt;, &lt;code&gt;rockcraft remote-build&lt;/code&gt;, etc.&lt;/p&gt;
&lt;p&gt;Remote builds are a lifeline for publishers and communities who need to reach a larger audience, but can&amp;rsquo;t necessarily get their own build farm together. One example of this is &lt;a href="https://snapcrafters.org/" target="_blank" rel="noreferrer"&gt;Snapcrafters&lt;/a&gt;, a community-driven organisation that packages popular software as Snaps, who use &lt;code&gt;remote-build&lt;/code&gt; to drive multi-architecture builds from &lt;a href="https://github.com/snapcrafters/ci" target="_blank" rel="noreferrer"&gt;GitHub Actions&lt;/a&gt; as part of their publishing workflow (as seen &lt;a href="https://github.com/snapcrafters/helm/actions/runs/16166314558" target="_blank" rel="noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://github.com/snapcrafters/terraform/actions/runs/15607983328" target="_blank" rel="noreferrer"&gt;here&lt;/a&gt; for example).&lt;/p&gt;
&lt;h2 id="unified-testing-framework" class="relative group"&gt;Unified testing framework &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#unified-testing-framework" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Testing is often the missing piece in build tools: developers are forced to rely on separate CI systems or ad-hoc scripts to verify their artifacts. To close this gap, we’re introducing a unified &lt;code&gt;test&lt;/code&gt; sub-command in the crafts.&lt;/p&gt;
&lt;p&gt;We recently added the &lt;code&gt;test&lt;/code&gt; sub-command to our crafts as an experimental (for now!) feature. Under the hood, &lt;code&gt;craft test&lt;/code&gt; will introduce a new lifecycle stage (&lt;code&gt;TEST&lt;/code&gt;). This enables packagers to specify how any type of artifact should be tested, using a common framework shared across the crafts.&lt;/p&gt;
&lt;p&gt;Craft&amp;rsquo;s testing capability is powered by &lt;a href="https://github.com/canonical/spread" target="_blank" rel="noreferrer"&gt;spread&lt;/a&gt;, a convenient full-system test and task distribution tool. Spread was built to simplify the massive number of integration tests run for the &lt;a href="https://github.com/canonical/snapd" target="_blank" rel="noreferrer"&gt;snapd&lt;/a&gt; project. It enables developers to specify tests in a simple language, and distribute them concurrently to any infrastructure they have available.&lt;/p&gt;
&lt;p&gt;This lets a developer define tests and test infrastructure once, making it trivial to run the same tests locally or remotely on cloud infrastructure. This can really speed up development: instead of waiting on CI runners to spin up while iterating, developers can run the very same integration tests locally using &lt;code&gt;craft test&lt;/code&gt;.&lt;/p&gt;
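&lt;p&gt;For a flavour of the format, a spread task is described by a small &lt;code&gt;task.yaml&lt;/code&gt; - the snap name and commands below are illustrative:&lt;/p&gt;

```yaml
# tests/smoke/task.yaml - a minimal, illustrative spread task.
summary: Verify the packaged application runs
prepare: |
  snap install --dangerous ./my-app_1.0_amd64.snap
execute: |
  my-app --version
restore: |
  snap remove my-app
```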
&lt;p&gt;There are lots of fine details to &lt;code&gt;spread&lt;/code&gt;, and the team is working on artifact-specific abstractions for the crafts that will make testing &lt;em&gt;delightful&lt;/em&gt;. Imagine maintaining the Snap for a GUI application, and being able to enact the following workflow:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;span class="lnt"&gt;8
&lt;/span&gt;&lt;span class="lnt"&gt;9
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Pull the repository&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone https://github.com/some-gui-app/snap &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; snap
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Make some changes, perhaps fix a bug&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;vim snap/snapcraft.yaml
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Build the snap, and run the integration tests.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# These tests might include spinning up a headless&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# graphical VM, which actually installs and runs&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# the snap, and interacts with it&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;snapcraft &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;By integrating a common testing tool into the build tooling, the Starcraft team will be able to curate unique testing experiences for each kind of artifact. A snap might need a headless graphical VM, whereas an OCI image simply requires a container runtime, but the &lt;code&gt;spread&lt;/code&gt; underpinnings allow a common test-definition language for each.&lt;/p&gt;
&lt;p&gt;There are a couple of examples of this in the wild already:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Install charmcraft&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap install --classic charmcraft
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Clone the repo&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone https://github.com/jnsgruk/zinc-k8s-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; zinc-k8s-operator
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# List the available tests&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;charmcraft &lt;span class="nb"&gt;test&lt;/span&gt; --list lxd:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Run the integration testing suite, spinning up&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# a small VM, inside which is a full Kubernetes&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# instance, with a Juju controller bootstrapped.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# From here the charm will be deployed and tested to&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# ensure its integrations with the observability&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# stack and ingress charms are functioning correctly.&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;charmcraft &lt;span class="nb"&gt;test&lt;/span&gt; -v lxd:ubuntu-24.04:tests/spread/observability-relations:juju_3_6
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The test above is powered by this &lt;a href="https://github.com/jnsgruk/zinc-k8s-operator/blob/main/spread.yaml" target="_blank" rel="noreferrer"&gt;spread.yaml&lt;/a&gt;, and this &lt;a href="https://github.com/jnsgruk/zinc-k8s-operator/blob/5516be2c50e52b33742c674f266c8dfca55e6edf/tests/spread/observability-relations/task.yaml" target="_blank" rel="noreferrer"&gt;test definition&lt;/a&gt;. With a little bit of &lt;a href="https://github.com/jnsgruk/zinc-k8s-operator/blob/5516be2c50e52b33742c674f266c8dfca55e6edf/.github/workflows/build-and-test.yaml#L80-L129" target="_blank" rel="noreferrer"&gt;work&lt;/a&gt;, it&amp;rsquo;s also possible to integrate &lt;code&gt;spread&lt;/code&gt; with GitHub matrix actions, giving you one GitHub job per &lt;code&gt;spread&lt;/code&gt; test - as seen &lt;a href="https://github.com/jnsgruk/zinc-k8s-operator/actions/runs/15638336939" target="_blank" rel="noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
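The matrix generation itself is straightforward: the craft's `--list` output is one spread job per line, and that list just needs reshaping into the JSON a GitHub Actions matrix expects. A hedged bash sketch, where `list_jobs` stands in for the real `charmcraft test --list lxd:` call and the job names are examples:

```shell
#!/usr/bin/env bash
# Sketch: reshape a spread job list into a GitHub Actions matrix payload.
# `list_jobs` stands in for `charmcraft test --list lxd:` (one job per line);
# the job names below are illustrative.
list_jobs() {
  printf '%s\n' \
    "lxd:ubuntu-24.04:tests/spread/observability-relations:juju_3_6" \
    "lxd:ubuntu-24.04:tests/spread/smoke:juju_3_6"
}

matrix='{"spread-job":['
sep=''
for job in $(list_jobs); do
  matrix="${matrix}${sep}\"${job}\""
  sep=','
done
matrix="${matrix}]}"

# In a workflow this would be exposed to later jobs, e.g.:
#   echo "matrix=${matrix}" >> "$GITHUB_OUTPUT"
echo "$matrix"
```

A downstream job can then consume it with `fromJSON` in its `strategy.matrix`, yielding one GitHub job per spread test.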
&lt;p&gt;You can see a similar example in our &lt;a href="https://github.com/canonical/postgresql-snap/tree/7e6ee6d3148c20309cc7067dc40520e208f862e5/spread/tests" target="_blank" rel="noreferrer"&gt;PostgreSQL Snap test suite&lt;/a&gt;, and we&amp;rsquo;ll be adding more and more of this kind of test across our Rock, Snap, Charm, Image and Deb portfolio.&lt;/p&gt;
&lt;p&gt;There is work to do, but I&amp;rsquo;m really excited about bringing a common testing framework to the crafts which should make the testing of all kinds of artifacts more consistent and easier to integrate across teams and systems.&lt;/p&gt;
&lt;h2 id="crafting-the-crafts" class="relative group"&gt;Crafting the crafts &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#crafting-the-crafts" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;As the portfolio expanded from &lt;code&gt;snapcraft&lt;/code&gt;, to &lt;code&gt;charmcraft&lt;/code&gt;, to &lt;code&gt;rockcraft&lt;/code&gt; and is now expanding further to &lt;code&gt;debcraft&lt;/code&gt; and &lt;code&gt;imagecraft&lt;/code&gt; it was clear that we&amp;rsquo;d need a way to make it easy to build crafts for different artifacts, while being rigorous about consistency across the tools. A couple of years ago, the team built the &lt;a href="https://github.com/canonical/craft-application" target="_blank" rel="noreferrer"&gt;craft-application&lt;/a&gt; base library, which now forms the foundation of all our crafts.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;craft-application&lt;/code&gt; library combines many of the existing libraries that were in use across the crafts (listed below), providing a consistent base upon which artifact-specific logic can be built. This allows craft developers to spend less time implementing CLI details, &lt;code&gt;parts&lt;/code&gt; lifecycles and store interactions, and more time on curating a great experience for the maintainers of their artifact type.&lt;/p&gt;
&lt;p&gt;For the curious, &lt;code&gt;craft-application&lt;/code&gt; builds upon the following libraries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-archives" target="_blank" rel="noreferrer"&gt;craft-archives&lt;/a&gt;: manages interactions with &lt;code&gt;apt&lt;/code&gt; package repositories&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-cli" target="_blank" rel="noreferrer"&gt;craft-cli&lt;/a&gt;: CLI client builder that follows the Canonical&amp;rsquo;s CLI guidelines&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-parts" target="_blank" rel="noreferrer"&gt;craft-parts&lt;/a&gt;: obtain, process, and organize data sources into deployment-ready filesystems.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-grammar" target="_blank" rel="noreferrer"&gt;craft-grammar&lt;/a&gt;: advanced description grammar for parts&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-providers" target="_blank" rel="noreferrer"&gt;craft-providers&lt;/a&gt;: interface for instantiating and executing builds for a variety of target environments&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-platforms" target="_blank" rel="noreferrer"&gt;craft-platforms&lt;/a&gt;: manage target platforms and architectures for craft applications&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-store" target="_blank" rel="noreferrer"&gt;craft-store&lt;/a&gt;: manage interactions with Canonical&amp;rsquo;s software stores&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/craft-artifacts" target="_blank" rel="noreferrer"&gt;craft-artifacts&lt;/a&gt;: pack artifacts for craft applications&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="examples-and-docs" class="relative group"&gt;Examples and docs &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#examples-and-docs" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Before I leave you, I wanted to reference a few &lt;code&gt;*craft.yaml&lt;/code&gt; examples, and link to the documentation for each of the crafts, where you&amp;rsquo;ll find the canonical (little c!) truth on each tool.&lt;/p&gt;
&lt;p&gt;You can find documentation for the crafts below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://documentation.ubuntu.com/snapcraft/stable/" target="_blank" rel="noreferrer"&gt;Snapcraft docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://canonical-charmcraft.readthedocs-hosted.com/stable/" target="_blank" rel="noreferrer"&gt;Charmcraft docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://documentation.ubuntu.com/rockcraft/en/stable/" target="_blank" rel="noreferrer"&gt;Rockcraft docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://canonical-robotics.readthedocs-hosted.com/en/latest/tutorials/" target="_blank" rel="noreferrer"&gt;Robotics / Snapcraft tutorial&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And some example recipes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Snap: &lt;code&gt;icloudpd&lt;/code&gt; - &lt;a href="https://github.com/jnsgruk/icloudpd-snap/blob/main/snap/snapcraft.yaml" target="_blank" rel="noreferrer"&gt;snapcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Snap: &lt;code&gt;parca-agent&lt;/code&gt; - &lt;a href="https://github.com/parca-dev/parca-agent/blob/main/snap/snapcraft.yaml" target="_blank" rel="noreferrer"&gt;snapcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Snap: &lt;code&gt;signal-desktop&lt;/code&gt; - &lt;a href="https://github.com/snapcrafters/signal-desktop/blob/candidate/snap/snapcraft.yaml" target="_blank" rel="noreferrer"&gt;snapcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Charm: &lt;code&gt;ubuntu-manpages-operator&lt;/code&gt; - &lt;a href="https://github.com/canonical/ubuntu-manpages-operator/blob/main/charmcraft.yaml" target="_blank" rel="noreferrer"&gt;charmcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rock: &lt;code&gt;grafana&lt;/code&gt; - &lt;a href="https://github.com/canonical/grafana-rock/blob/main/11.4.0/rockcraft.yaml" target="_blank" rel="noreferrer"&gt;rockcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Rock: &lt;code&gt;temporal-server&lt;/code&gt; - &lt;a href="https://github.com/canonical/temporal-rocks/blob/main/temporal-server/1.23.1/rockcraft.yaml" target="_blank" rel="noreferrer"&gt;rockcraft.yaml&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;The craft ecosystem provides developers with a rigorous, consistent and pleasant experience for building many kinds of artifacts. At the moment, we support Snaps, Rocks and Charms but we&amp;rsquo;re actively developing crafts for Debian packages, cloud images and more.The basic build process, &lt;code&gt;parts&lt;/code&gt; ecosystem and foundations of the crafts are &amp;ldquo;battle tested&amp;rdquo; at this point, and I&amp;rsquo;m excited to see how the experimental &lt;code&gt;craft test&lt;/code&gt; commands shape up across the crafts.&lt;/p&gt;
&lt;p&gt;One of the killer features of the crafts is the ability to reuse part definitions across different artifacts - which makes the payoff for learning the &lt;code&gt;parts&lt;/code&gt; language very high - it&amp;rsquo;s a skill you&amp;rsquo;ll be able to use to build Snaps, Rocks, Charms, VM Images and soon Debs!&lt;/p&gt;
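To illustrate that reuse (the part name and source URL below are invented for the example), the same part stanza can be dropped unchanged into a `snapcraft.yaml`, `rockcraft.yaml` or `charmcraft.yaml`:

```yaml
# The same part definition is valid across the crafts
# (part name and source URL are illustrative)
parts:
  my-app:
    plugin: go
    source: https://github.com/example/my-app.git
    build-snaps:
      - go
```

The surrounding file differs per artifact type, but the part itself - plugin, source, build dependencies - carries over verbatim.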
&lt;p&gt;If I look at ecosystems like Debian, where tooling like &lt;code&gt;autopkgtest&lt;/code&gt; is the standard, I think &lt;code&gt;debcraft test&lt;/code&gt; will offer an intuitive entrypoint and encourage more testing, and the same is true of Snaps, both graphical and command-line.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s all for now!&lt;/p&gt;</description></item><item><title>Introducing Debcrafters</title><link>https://jnsgr.uk/2025/06/introducing-debcrafters/</link><pubDate>Mon, 30 Jun 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/06/introducing-debcrafters/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/63674" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Earlier this year, Canonical&amp;rsquo;s Ubuntu Engineering organisation gained a new team, seeded with some of our most prolific contributors to Ubuntu. Debcrafters is a new team dedicated to the maintenance of the Ubuntu Archive.&lt;/p&gt;
&lt;p&gt;The team&amp;rsquo;s primary goal is to maintain the health of the Ubuntu Archive, but its unique construction aims to attract a broad range of Linux distribution expertise; contributors to distributions like Debian, Arch Linux, NixOS and others are encouraged to join the team, and will even get paid to contribute one day per week to those projects to foster learning and idea sharing.&lt;/p&gt;
&lt;h3 id="bootstrapping-the-team" class="relative group"&gt;Bootstrapping the team &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#bootstrapping-the-team" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The Debcrafters team is a global team. We have a squad in the Americas, a squad in EMEA and will have a squad in APAC. At present, we&amp;rsquo;ve staffed the AMER and EMEA teams with existing Canonical employees from our Foundations, Desktop, Server and Public Cloud teams. Each team currently has a manager, and four engineers.&lt;/p&gt;
&lt;p&gt;The team comprises Debian Developers, Stable Release Updates (SRU) team members and archive administrators, and began working together for the first time at our recent Engineering Sprint in Frankfurt held in early May 2025.&lt;/p&gt;
&lt;h3 id="mission" class="relative group"&gt;Mission &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#mission" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The Debcrafters&amp;rsquo; primary mission is to maintain the health of the Ubuntu Archive.&lt;/p&gt;
&lt;p&gt;This team will take the lead on syncing &amp;amp; merging packages from Debian, reviewing proposed migration issues, upstreaming Ubuntu deltas, and taking ownership of major transitions, such as &lt;code&gt;glibc&lt;/code&gt; upgrades and past examples like the &lt;code&gt;t64&lt;/code&gt; and &lt;code&gt;python3&lt;/code&gt; transitions.&lt;/p&gt;
&lt;p&gt;They&amp;rsquo;ll manage the scheduling, triggering and reporting of archive test rebuilds, which we conduct when making major changes to critical packages. We did this when we enabled frame pointers by default, and when we switched &lt;code&gt;coreutils&lt;/code&gt; to the &lt;code&gt;uutils&lt;/code&gt; implementation in Ubuntu 25.10.&lt;/p&gt;
&lt;p&gt;They&amp;rsquo;ll be responsible for the evolution and maintenance of the &lt;code&gt;autopkgtest&lt;/code&gt; infrastructure for Ubuntu, as well as taking an instrumental role in introducing more distro-scale integration tests.&lt;/p&gt;
&lt;p&gt;They&amp;rsquo;ll work on improving the reporting and dashboarding of the Ubuntu Archive, its contributors and status, as well as taking a broader interest in shaping the tools we use to build and shape Ubuntu.&lt;/p&gt;
&lt;p&gt;What sets this team apart from the likes of Desktop, Server and Foundations is the range of packages they will work on. Members of the Debcrafters team will move thousands of packages every cycle - many of which they will not be intimately familiar with, but will use their growing distro maintenance and packaging skills to perform maintenance where there is no other clear or present owner.&lt;/p&gt;
&lt;h3 id="tools--processes" class="relative group"&gt;Tools &amp;amp; processes &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#tools--processes" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;One of the key goals in my first &lt;a href="https://jnsgr.uk/2025/02/engineering-ubuntu-for-the-next-20-years/" target="_blank" rel="noreferrer"&gt;post&lt;/a&gt; was to modernise the contribution experience for Ubuntu Developers by focusing on tools and processes.&lt;/p&gt;
&lt;p&gt;The Debian project recently adopted &lt;a href="https://wiki.debian.org/tag2upload" target="_blank" rel="noreferrer"&gt;tag2upload&lt;/a&gt;, which allows Debian Developers to use &lt;a href="https://packages.debian.org/search?keywords=git-debpush" target="_blank" rel="noreferrer"&gt;git-debpush&lt;/a&gt; to push a signed &lt;code&gt;git&lt;/code&gt; tag when uploading packages. While we’re not following that exact path, we share many of the same goals and intentions.&lt;/p&gt;
&lt;p&gt;For some time Ubuntu Developers have been able to use &lt;a href="https://canonical-git-ubuntu.readthedocs-hosted.com/en/latest/" target="_blank" rel="noreferrer"&gt;&lt;code&gt;git-ubuntu&lt;/code&gt;&lt;/a&gt; as part of their development workflow, which aims to provide &amp;ldquo;unified git-based workflows for the development of Ubuntu source packages&amp;rdquo;. This project brought us closer to our desired experience, but still needs work to achieve our complete vision. I&amp;rsquo;d like to put more emphasis on the experience we provide for &lt;em&gt;testing&lt;/em&gt; packages, as well as signing, uploading and releasing packages.&lt;/p&gt;
&lt;p&gt;In the coming weeks our Starcraft team (responsible for &lt;a href="https://github.com/canonical/snapcraft" target="_blank" rel="noreferrer"&gt;Snapcraft&lt;/a&gt;, &lt;a href="https://github.com/canonical/rockcraft" target="_blank" rel="noreferrer"&gt;Rockcraft&lt;/a&gt;, &lt;a href="https://github.com/canonical/charmcraft" target="_blank" rel="noreferrer"&gt;Charmcraft&lt;/a&gt;) will begin prototyping &lt;code&gt;debcraft&lt;/code&gt;, which will (in time) become the de facto method for creating, testing and uploading packages to the Ubuntu archive.&lt;/p&gt;
&lt;p&gt;The first prototype of &lt;code&gt;debcraft&lt;/code&gt; will focus on unifying the current workflow adopted by most Ubuntu Developers at Canonical. It will wrap existing tools (such as &lt;code&gt;git-ubuntu&lt;/code&gt;, &lt;code&gt;lintian&lt;/code&gt;, &lt;code&gt;autopkgtest&lt;/code&gt;) to provide familiar, streamlined commands such as &lt;code&gt;debcraft pack&lt;/code&gt;, &lt;code&gt;debcraft lint&lt;/code&gt; and &lt;code&gt;debcraft test&lt;/code&gt;. Uploading packages, and a more native &amp;ldquo;craft&amp;rdquo; experience for constructing packages will come later.&lt;/p&gt;
&lt;p&gt;Details will make their way into the new &lt;a href="https://canonical-ubuntu-project.readthedocs-hosted.com/" target="_blank" rel="noreferrer"&gt;Ubuntu Project Docs&lt;/a&gt; throughout the course of the 25.10 Questing Quokka cycle, including the newly renovated &amp;ldquo;Ubuntu Packaging Guide&amp;rdquo;, which will aim to provide a &amp;ldquo;one ring to rule them all&amp;rdquo; approach to documenting how to package software for Ubuntu.&lt;/p&gt;
&lt;h3 id="attracting-contributors" class="relative group"&gt;Attracting contributors &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#attracting-contributors" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;While the team has been seeded with seasoned Ubuntu contributors, one of the primary goals of the team is to grow the contributor base across generations.&lt;/p&gt;
&lt;p&gt;One of the sub-teams is currently leading the roll out of a new contributor journey that will soon be publicly available. This process lays out the journey from complete beginner to &amp;ldquo;Core Dev&amp;rdquo;, stopping off at &amp;ldquo;Package Maintainer&amp;rdquo;, &amp;ldquo;Package Set Maintainer&amp;rdquo;, &amp;ldquo;&lt;a href="https://canonical-ubuntu-project.readthedocs-hosted.com/reference/glossary/#term-MOTU" target="_blank" rel="noreferrer"&gt;MOTU&lt;/a&gt;&amp;rdquo;, etc. along the way. The process also aims to help candidates prepare for Developer Membership Board interviews.&lt;/p&gt;
&lt;p&gt;Whether you&amp;rsquo;re a junior engineer just graduating from University, or you&amp;rsquo;re a seasoned Linux contributor elsewhere in the Linux ecosystem, the Debcrafters team is an excellent place to learn software packaging skills and contribute to the world&amp;rsquo;s most deployed Linux distribution.&lt;/p&gt;
&lt;h3 id="contribution-beyond-ubuntu" class="relative group"&gt;Contribution beyond Ubuntu &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#contribution-beyond-ubuntu" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The Debcrafters&amp;rsquo; primary commitment is to Ubuntu, but we recognise the enormous value in collaborating with other distributions. Many of the hard lessons I&amp;rsquo;ve personally learned resulted from contributing to NixOS and building Snaps. Packaging is a complex and ever-changing discipline, and other distributions are facing many of the complex problems we are - often with different or novel approaches to solving them.&lt;/p&gt;
&lt;p&gt;In recognition of this, we&amp;rsquo;re actively seeking maintainers from other distributions - be that Debian, Arch, NixOS, Guix, Fedora, Universal Blue or any other - packaging and distribution engineering skills are often common across distributions, and we believe that Ubuntu can benefit from broader perspectives, while contributing back to the wider ecosystem of distributions in the process.&lt;/p&gt;
&lt;p&gt;The Debcrafters must spend the majority of their work time on Ubuntu, but they will be encouraged to spend a day per week contributing to other distributions to gain understanding, and bring fresh perspectives to Ubuntu (and the reverse, hopefully!). This will be structured as a &lt;em&gt;literal&lt;/em&gt; day per week, agreed with the team management - for example &amp;ldquo;I work on NixOS on Tuesdays&amp;rdquo;.&lt;/p&gt;
&lt;h3 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Canonical has launched a new team, the Debcrafters, who are dedicated to maintaining the very core of Ubuntu: the archive. This team has a global footprint, and deep expertise in software packaging drawn from across the Linux ecosystem. They&amp;rsquo;ll lead transitions, improve tooling improvements and strengthen our distribution testing infrastructure.&lt;/p&gt;
&lt;p&gt;Whether you&amp;rsquo;re an experienced Debian Developer, a maintainer from another Linux distribution or a new engineer starting your career in open source, Debcrafters offers a unique opportunity to learn, grow, and contribute to the world’s most widely deployed Linux distribution.&lt;/p&gt;</description></item><item><title>Supercharging Ubuntu Releases: Monthly Snapshots &amp; Automation</title><link>https://jnsgr.uk/2025/05/supercharging-ubuntu-releases/</link><pubDate>Thu, 29 May 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/05/supercharging-ubuntu-releases/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/61876" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Ubuntu has shipped on a predictable, six-month cadence for two decades. Twenty years ago, the idea of releasing an entire distribution every six months was considered forward looking, bold and even difficult. Things have changed since then: software engineering has evolved as a practice, and the advent of both rolling-release distributions like Arch Linux, and more recently image-based immutable distributions such as Universal Blue have meant that other projects with similar goals have adopted vastly different release models with some desirable properties.&lt;/p&gt;
&lt;p&gt;My goal over the coming months is to build a release process that takes advantage of modern release engineering practices, while retaining the resilience and stability of our six-monthly releases. We&amp;rsquo;ll introduce significantly more automated testing, and ensure that the release process is transparent, repeatable and executable in a much shorter and well-known timeframe with little to no human intervention.&lt;/p&gt;
&lt;p&gt;This journey will also create space for better system-wide testing, earlier detection of regressions, and a more productive collaboration with our community.&lt;/p&gt;
&lt;h2 id="monthly-snapshot-releases" class="relative group"&gt;Monthly Snapshot Releases &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#monthly-snapshot-releases" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Starting in May 2025, we&amp;rsquo;re introducing monthly snapshot releases for Ubuntu.&lt;/p&gt;
&lt;p&gt;Ubuntu is not &amp;ldquo;moving to monthly releases&amp;rdquo; or adopting a rolling release model; we&amp;rsquo;re committed to our six-monthly releases with a Long Term Support (LTS) release every two years. That doesn&amp;rsquo;t mean that our release process should be exempt from the same scrutiny that the rest of our engineering processes are subject to.&lt;/p&gt;
&lt;p&gt;Today the Ubuntu Release process is the product of twenty years of evolution: it safeguards Ubuntu releases with a wealth of checks and balances, but is a largely manual process requiring significant human involvement.&lt;/p&gt;
&lt;p&gt;The Ubuntu Release Team is a crowd of seasoned Ubuntu veterans who have been steadily releasing Ubuntu for many years. Many of this team are community members, some are or have been employed by Canonical in the past. More recently we have established the Canonical Ubuntu Release Management Team - a relatively new team at Canonical who&amp;rsquo;ll be collaborating with the Ubuntu Release Team to develop the new process.&lt;/p&gt;
&lt;p&gt;To aid the Canonical team in their understanding of the existing processes, and the immovable requirements that sit beneath it, we&amp;rsquo;re introducing monthly snapshot releases for Ubuntu. These will not be fully-fledged releases of Ubuntu, but rather curated, testable milestones from our development stream. For the 25.10 (Questing Quokka) cycle, you can expect the following &lt;a href="https://discourse.ubuntu.com/t/questing-quokka-release-schedule/36462" target="_blank" rel="noreferrer"&gt;release schedule&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;May 29, 2025&lt;/strong&gt;: Questing Quokka - Snapshot 1&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;June 26, 2025&lt;/strong&gt;: Questing Quokka - Snapshot 2&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;July 31, 2025&lt;/strong&gt;: Questing Quokka - Snapshot 3&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;August 28, 2025&lt;/strong&gt;: Questing Quokka - Snapshot 4&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;September 18, 2025&lt;/strong&gt;: Questing Quokka - Beta&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;October 9, 2025&lt;/strong&gt;: Questing Quokka - Final Release&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This doesn&amp;rsquo;t mean you&amp;rsquo;ll start seeing Ubuntu versions off the six-month cadence. There will be no Ubuntu 25.07 or 25.08, etc. The monthly snapshots are exactly that: snapshots of the development of Ubuntu 25.10. Snapshots are not meant for production use, but they will help the release team move away from deep institutional knowledge and toward clean, well-documented, automated workflows that are transparent, repeatable and testable.&lt;/p&gt;
&lt;p&gt;With our current model, failure modes are not detected until they&amp;rsquo;re urgent and blocking an imminent release. The team conducts rigorous retrospectives on each release, but in my opinion it&amp;rsquo;s hard to meaningfully evolve such a process when it&amp;rsquo;s only exercised every six months. The monthly snapshots will create opportunities for us to test, understand and improve the process.&lt;/p&gt;
&lt;h2 id="embracing-automation" class="relative group"&gt;Embracing Automation &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#embracing-automation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;One of the most valuable outcomes of this journey will be the opportunity to automate more of the process, freeing up time for the team to focus on more strategic tasks. Releasing a distribution is a complex process requiring coordination across architectures, images, mirrors, websites, testing infrastructure and even partner agreements. This also makes it hard to place a traditional CI tool at the heart of the process. As much as I like Github Actions, I think we&amp;rsquo;d quickly get lost trying to release Ubuntu with such a system, notwithstanding the fact that we&amp;rsquo;d lose control of the underlying infrastructure that releases Ubuntu.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been exploring the world of Durable Execution, which according to &lt;a href="https://restate.dev/what-is-durable-execution/" target="_blank" rel="noreferrer"&gt;restate.dev&lt;/a&gt; is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;the practice of making code execution persistent, so that services recover automatically from crashes and restore the results of already completed operations and code blocks without re-executing them.&lt;/p&gt;
&lt;/blockquote&gt;
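&lt;p&gt;To make that definition concrete, here is a minimal, stdlib-only Python sketch of the journaling idea at the heart of durable execution: completed steps are persisted, so a re-run restores their recorded results instead of executing them again. The file name and step semantics here are purely illustrative; a real engine like Temporal persists far more (inputs, timers, retry state) and does so robustly.&lt;/p&gt;

```python
import json
from pathlib import Path

# Hypothetical persistence location for this sketch.
JOURNAL = Path("release-journal.json")

def load_journal() -> dict:
    """Restore the record of already-completed steps, if any."""
    return json.loads(JOURNAL.read_text()) if JOURNAL.exists() else {}

def run_step(name: str, fn, journal: dict):
    """Execute a step at most once; on re-runs, replay the recorded result."""
    if name in journal:
        return journal[name]  # already completed: restore, don't re-execute
    result = fn()
    journal[name] = result
    JOURNAL.write_text(json.dumps(journal))  # persist before moving on
    return result
```

&lt;p&gt;If the process crashes halfway through a sequence of &lt;code&gt;run_step&lt;/code&gt; calls, restarting it skips straight past everything the journal already contains - which is exactly the &amp;ldquo;recover automatically&amp;rdquo; behaviour described above.&lt;/p&gt;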
&lt;p&gt;At Canonical, we&amp;rsquo;ve adopted &lt;a href="https://temporal.io/" target="_blank" rel="noreferrer"&gt;Temporal&lt;/a&gt; in a few of our products and in many of our business processes. Temporal is a durable execution product that enables developers to solve complex distributed problems without needing to be deep distributed systems experts. It&amp;rsquo;s a framework for composing tasks into workflows, with first-class primitives for dealing with failures, retries, exponential back-off and other concepts that enable the building of long-running, complex workflows.&lt;/p&gt;
&lt;p&gt;Having spent some time with Temporal myself, and watched other teams adopt it, I think it&amp;rsquo;s a great fit for engineering our next-generation release process. I want our engineers to focus on the logic of the release process, not the infrastructure behind it, and Temporal should enable them to do just that. The Temporal &lt;a href="https://temporal.io/" target="_blank" rel="noreferrer"&gt;homepage&lt;/a&gt; sums it up nicely:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Write code as if failure doesn’t exist&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Temporal &lt;a href="https://docs.temporal.io/evaluate/understanding-temporal#workflow" target="_blank" rel="noreferrer"&gt;workflows&lt;/a&gt; and &lt;a href="https://docs.temporal.io/evaluate/understanding-temporal#activities" target="_blank" rel="noreferrer"&gt;activities&lt;/a&gt; can be written in many languages - and particularly in Python and Go. My expectation is that Go will prove to be an excellent fit for our process: it&amp;rsquo;s a fast and productive language that specialises in concurrency and asynchronous network operations, and has a powerful standard library containing much of the functionality we&amp;rsquo;ll need to build our new release process.&lt;/p&gt;
&lt;p&gt;To take an overly simplistic view of how I expect this to go: we&amp;rsquo;ll take our existing release checklist, write a Go function for each step with some &lt;a href="https://docs.temporal.io/develop/go/testing-suite" target="_blank" rel="noreferrer"&gt;tests&lt;/a&gt;, and compose them together into one or more Temporal workflows that represent the full release process. This will take time, but it will enable us to demonstrate incremental progress toward a fully-automated process over the coming cycles.&lt;/p&gt;
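&lt;p&gt;As a rough illustration of the shape of that composition, here is a small sketch (in Python rather than Go, purely for brevity): one function per checklist step, composed in order, with the retry and exponential back-off behaviour that a durable execution engine would normally provide for free. This is not the Temporal SDK, and the step names are invented - it only illustrates the concept.&lt;/p&gt;

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Retry a flaky step with exponential back-off - a crude stand-in
    for the retry primitives an engine like Temporal provides."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

def release_workflow(steps, base_delay=1.0):
    """Run each (name, fn) checklist step in order, retrying failures."""
    results = {}
    for name, fn in steps:
        results[name] = with_retries(fn, base_delay=base_delay)
    return results
```

&lt;p&gt;A workflow would then look something like &lt;code&gt;release_workflow([("build-isos", build_isos), ("publish-mirrors", publish_mirrors)])&lt;/code&gt;, where both the step names and functions are hypothetical stand-ins for real checklist items.&lt;/p&gt;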
&lt;p&gt;By making this move, not only will we make the process quicker, but also more &lt;a href="https://docs.temporal.io/develop/go/observability" target="_blank" rel="noreferrer"&gt;observable&lt;/a&gt;, &lt;a href="https://docs.temporal.io/develop/go/testing-suite" target="_blank" rel="noreferrer"&gt;testable&lt;/a&gt;, reliable and easier to understand for everyone, not just the release team.&lt;/p&gt;
&lt;h2 id="improving-test-coverage" class="relative group"&gt;Improving Test Coverage &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#improving-test-coverage" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;One area I&amp;rsquo;d like to improve as a side-effect of this work is more full-system integration testing. Packages in the Ubuntu archive generally enjoy good coverage through a suite of &lt;a href="https://autopkgtest.ubuntu.com/" target="_blank" rel="noreferrer"&gt;autopkgtest&lt;/a&gt; tests, and there are numerous other places where integration tests are run on Ubuntu. With our traditional six-monthly cadence, full end-to-end testing of ISOs and the installer typically ramps up close to release time when changes are fewer (and riskier) and time is short.&lt;/p&gt;
&lt;p&gt;With the introduction of monthly snapshots, we can integrate installer testing, full-disk encryption testing, graphical application testing and more as a regular, automated part of the release pipeline - not just as part of the development pipeline of each individual package. This means we should catch regressions earlier and surface more edge cases to be resolved before release.&lt;/p&gt;
&lt;p&gt;One of the most important parts of increasing our testing culture is to make it clear where and how to contribute tests to Ubuntu. The easier we make it to write and contribute tests, the more tests we&amp;rsquo;re likely to add to the suite. We&amp;rsquo;re doing some work on this in parallel which will likely turn into a blog post of its own in the coming months.&lt;/p&gt;
&lt;p&gt;In our current process, we have a heroic group of volunteers who kindly spend hours on our behalf testing the various flavours - exercising all the possible install paths and validating that what is about to be published is fit for purpose. I&amp;rsquo;d like to ensure that our volunteers&amp;rsquo; time is spent as productively and rewardingly as possible. I think we can automate much of this testing, allowing them to focus on the more complex and nuanced aspects of each release and raising the quality of Ubuntu across all the flavours.&lt;/p&gt;
&lt;h2 id="whats-next" class="relative group"&gt;What&amp;rsquo;s Next? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#whats-next" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;We’re starting by modeling the current release process as it is. Once we&amp;rsquo;ve validated our assumptions about the current process, we’ll layer in improvements by reducing manual gates, parallelising independent steps, introducing more testing, and exercising the process each month to test (and measure) any improvements we&amp;rsquo;ve made.&lt;/p&gt;
&lt;p&gt;My ultimate goal is a release system that’s incredibly &amp;ldquo;boring&amp;rdquo;: transparent, predictable, observable, and easy to reason about (even when things go wrong).&lt;/p&gt;
&lt;p&gt;The new, fully-automated process will likely take several months to complete. When we think we&amp;rsquo;re done, we&amp;rsquo;ll do a release that runs both processes in parallel to ensure we get the outcome we expect before finally sunsetting the old process.&lt;/p&gt;
&lt;p&gt;We’ll be building this work in the open (and &lt;a href="https://canonical.com/careers" target="_blank" rel="noreferrer"&gt;hiring&lt;/a&gt;!) so if you’ve used Temporal in similar contexts, or are curious about contributing to this effort, we’d love to hear from you.&lt;/p&gt;
&lt;p&gt;Until next time!&lt;/p&gt;</description></item><item><title>Adopting sudo-rs By Default in Ubuntu 25.10</title><link>https://jnsgr.uk/2025/05/adopting-sudo-rs-by-default-in-ubuntu/</link><pubDate>Tue, 06 May 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/05/adopting-sudo-rs-by-default-in-ubuntu/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/60583" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Following on from &lt;a href="https://jnsgr.uk/2025/03/carefully-but-purposefully-oxidising-ubuntu/" target="_blank" rel="noreferrer"&gt;Carefully But Purposefully Oxidising Ubuntu&lt;/a&gt;, Ubuntu will be the first major Linux distribution to adopt &lt;code&gt;sudo-rs&lt;/code&gt; as the default implementation of &lt;code&gt;sudo&lt;/code&gt;, in partnership with the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The change will be effective from the release of Ubuntu 25.10. You can see the Trifecta Tech Foundation&amp;rsquo;s announcement &lt;a href="https://trifectatech.org/blog/memory-safe-sudo-to-become-the-default-in-ubuntu/" target="_blank" rel="noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-is-sudo-rs" class="relative group"&gt;What is &lt;code&gt;sudo-rs&lt;/code&gt;? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#what-is-sudo-rs" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;code&gt;sudo-rs&lt;/code&gt; is a reimplementation of the traditional &lt;code&gt;sudo&lt;/code&gt; tool, written in Rust. It’s being developed by the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation (TTF)&lt;/a&gt;, a nonprofit focused on building secure, open source infrastructure components. The project is part of the Trifecta Tech Foundation&amp;rsquo;s &lt;a href="https://trifectatech.org/initiatives/privilege-boundary/" target="_blank" rel="noreferrer"&gt;Privilege Boundary initiative&lt;/a&gt;, which aims to handle privilege escalation with memory-safe alternatives.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;sudo&lt;/code&gt; command has long served as the de facto means of privilege escalation on Linux. As described in the &lt;a href="https://jnsgr.uk/2025/03/carefully-but-purposefully-oxidising-ubuntu/" target="_blank" rel="noreferrer"&gt;original post&lt;/a&gt;, Rust provides strong guarantees against certain classes of memory-safety issues, which is pivotal for components at the privilege boundary.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;sudo-rs&lt;/code&gt; team is collaborating with &lt;a href="https://www.millert.dev/" target="_blank" rel="noreferrer"&gt;Todd Miller&lt;/a&gt;, who’s maintained the original &lt;code&gt;sudo&lt;/code&gt; for over thirty years. &lt;code&gt;sudo-rs&lt;/code&gt; should not be considered a fork in the road, but rather a handshake across generations of secure systems. Throughout the development of &lt;code&gt;sudo-rs&lt;/code&gt;, the TTF team have also made contributions to enhance the original &lt;code&gt;sudo&lt;/code&gt; implementation.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;sudo-rs&lt;/code&gt; project is designed to be a drop-in replacement for the original tool. For the vast majority of users, the upgrade should be completely transparent to their workflow. That said, &lt;code&gt;sudo-rs&lt;/code&gt; is not a &amp;ldquo;blind&amp;rdquo; reimplementation. The developers are taking a &amp;ldquo;less is more&amp;rdquo; approach. This means that some features of the original &lt;code&gt;sudo&lt;/code&gt; may not be reimplemented if they serve only niche use-cases, or practices now considered &amp;ldquo;outdated&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;Erik Jonkers, Chair of the Trifecta Tech Foundation, explains:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;While no piece of software - in any language - is flawless, we believe the transition to Rust in systems programming is a vital step forward, it is very exciting to see Ubuntu committing to &lt;code&gt;sudo-rs&lt;/code&gt; and taking the lead in moving the needle.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="sponsoring-mainstream-adoption" class="relative group"&gt;Sponsoring Mainstream Adoption &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#sponsoring-mainstream-adoption" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Leading the mainstream adoption of a replacement to such a universally understood tool comes with responsibility. Before committing to ship &lt;code&gt;sudo-rs&lt;/code&gt; in Ubuntu 26.04 LTS, we&amp;rsquo;ll test the transition in Ubuntu 25.10. We&amp;rsquo;re also sponsoring the development of some specific items, which has manifested as &lt;a href="https://trifectatech.org/initiatives/workplans/sudo-rs/#current-work" target="_blank" rel="noreferrer"&gt;Milestone 5&lt;/a&gt; in the upstream project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Coarse-grained shell escape prevention (NOEXEC) on Linux (See &lt;a href="https://github.com/trifectatechfoundation/sudo-rs/pull/1073" target="_blank" rel="noreferrer"&gt;PR #1073&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;The ability to control AppArmor profiles (First &lt;a href="https://github.com/trifectatechfoundation/sudo-rs/pull/1067" target="_blank" rel="noreferrer"&gt;PR #1067&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;sudoedit&lt;/code&gt; implementation&lt;/li&gt;
&lt;li&gt;Support for Linux Kernels older than version 5.9&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The final item may seem out of place, but because Ubuntu 20.04 LTS is still in support, without this work there could be situations where &lt;code&gt;sudo&lt;/code&gt; fails to function if, for example, a 26.04 LTS OCI container was run on a 20.04 LTS host!&lt;/p&gt;
&lt;p&gt;The team have also already &lt;a href="https://github.com/trifectatechfoundation/sudo-rs/pull/1079" target="_blank" rel="noreferrer"&gt;begun work&lt;/a&gt; on ensuring that the test-suite is as compatible as possible with Ubuntu, to ensure any issues are caught early.&lt;/p&gt;
&lt;p&gt;This isn’t just about shipping a new binary. It’s about setting a direction. We&amp;rsquo;re not abandoning C, or even rewriting all the utilities ourselves, but by choosing to replace one of the most security-critical tools in the system with a memory-safe alternative, we&amp;rsquo;re making a statement: resilience and sustainability are not optional in the future of open infrastructure.&lt;/p&gt;
&lt;h2 id="progress-on-coreutils" class="relative group"&gt;Progress on &lt;code&gt;coreutils&lt;/code&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#progress-on-coreutils" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Since the initial announcement, we&amp;rsquo;ve been working hard to more clearly define a plan for the migration to uutils &lt;code&gt;coreutils&lt;/code&gt; in 25.10 and beyond. Similarly to our engagement with the Trifecta Tech Foundation, we&amp;rsquo;re also sponsoring the uutils project to ensure that some key gaps are closed before we ship 25.10. The sponsorship will primarily cover the development of SELinux support for common commands such as &lt;code&gt;mv&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;cp&lt;/code&gt;, etc.&lt;/p&gt;
&lt;p&gt;The first step toward developing SELinux support was to &lt;a href="https://github.com/uutils/coreutils/pull/7440/files" target="_blank" rel="noreferrer"&gt;add support for automated testing in GitHub Actions&lt;/a&gt;; since then, the maintainers have begun work on the actual implementation.&lt;/p&gt;
&lt;p&gt;The other feature we&amp;rsquo;re sponsoring is internationalisation support. At present, some of the utility implementations (such as &lt;code&gt;sort&lt;/code&gt;) have an incomplete understanding of locales, and therefore may yield unexpected results. We expect that these two features should land in time for us to ship in 25.10, and we&amp;rsquo;ll continue to work with the uutils project throughout the 26.04 LTS cycle to close any remaining gaps we identify in the interim release.&lt;/p&gt;
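&lt;p&gt;To illustrate why locale handling matters for a utility like &lt;code&gt;sort&lt;/code&gt;, here is a small Python example contrasting plain code-point ordering (roughly what you get under &lt;code&gt;LC_ALL=C&lt;/code&gt;) with a deliberately naive collation key. The key below is only an illustration; real locale collation (e.g. glibc&amp;rsquo;s &lt;code&gt;LC_COLLATE&lt;/code&gt; tables) is considerably richer.&lt;/p&gt;

```python
import unicodedata

def naive_collation_key(s: str) -> str:
    """Crude stand-in for locale-aware collation: strip accents and ignore
    case. Real locales apply far richer, language-specific rules."""
    decomposed = unicodedata.normalize("NFD", s)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.casefold()

words = ["Zebra", "apple", "Äpfel"]

# Code-point ordering: uppercase sorts before lowercase, accents sort last.
print(sorted(words))                           # ['Zebra', 'apple', 'Äpfel']

# Locale-ish ordering: 'Äpfel' collates alongside 'apple', case is ignored.
print(sorted(words, key=naive_collation_key))  # ['Äpfel', 'apple', 'Zebra']
```

&lt;p&gt;A &lt;code&gt;sort&lt;/code&gt; implementation that only understands the first ordering will surprise users whose locale expects the second - which is exactly the class of gap this sponsorship aims to close.&lt;/p&gt;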
&lt;p&gt;One of the major concerns outlined in Julian&amp;rsquo;s post is about binary size. We&amp;rsquo;ve got a few tricks we can play here to get the size down, and there is already some conversation started &lt;a href="https://salsa.debian.org/rust-team/debcargo-conf/-/merge_requests/895" target="_blank" rel="noreferrer"&gt;upstream in Debian&lt;/a&gt; on how that might be achieved. There are also security implications, such as AppArmor’s lack of support for multi-call binaries. We’re currently working with the respective upstreams to discuss addressing this systematically, though in the interim we may need to build small wrapper binaries to enable compatibility with existing AppArmor profiles from the start.&lt;/p&gt;
&lt;h2 id="migration-mechanics" class="relative group"&gt;Migration Mechanics &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#migration-mechanics" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Julian Klode &lt;a href="https://discourse.ubuntu.com/t/migration-to-rust-coreutils-in-25-10/59708" target="_blank" rel="noreferrer"&gt;posted recently&lt;/a&gt; on the Ubuntu Discourse outlining the packaging plan that will enable us both to migrate transparently to uutils &lt;code&gt;coreutils&lt;/code&gt;, but also provide a convenient means for users to opt-out and switch back to GNU &lt;code&gt;coreutils&lt;/code&gt; if they wish, or if they identify a gap in the new implementation. I expect this will be rare, but we want to make sure it&amp;rsquo;s as easy as possible to revert, and will be documenting this in detail before release.&lt;/p&gt;
&lt;p&gt;Replacing coreutils isn&amp;rsquo;t as simple as swapping binaries. As an &lt;code&gt;Essential&lt;/code&gt; package, its replacement must work immediately upon unpacking without relying on maintainer scripts, and without conflicting files across packages. To solve this, we’re introducing new &lt;code&gt;coreutils-from-uutils&lt;/code&gt; and &lt;code&gt;coreutils-from-gnu&lt;/code&gt; packages, as well as &lt;code&gt;coreutils-from&lt;/code&gt; itself. For all the gory details, see the &lt;a href="https://discourse.ubuntu.com/t/migration-to-rust-coreutils-in-25-10/59708" target="_blank" rel="noreferrer"&gt;Discourse post&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;The packaging work required to switch to &lt;code&gt;sudo-rs&lt;/code&gt; is somewhat less complicated than with &lt;code&gt;coreutils&lt;/code&gt;. The package is already available in Ubuntu (which you can still test on Ubuntu 24.04, 24.10 and 25.04 with &lt;a href="https://github.com/jnsgruk/oxidizr" target="_blank" rel="noreferrer"&gt;oxidizr&lt;/a&gt;!), but unlike &lt;code&gt;coreutils&lt;/code&gt;, &lt;code&gt;sudo&lt;/code&gt; is not an &lt;code&gt;Essential&lt;/code&gt; package, so we&amp;rsquo;ll be able to make use of the Debian &lt;a href="https://wiki.debian.org/DebianAlternatives" target="_blank" rel="noreferrer"&gt;alternatives&lt;/a&gt; system for the transition.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Things are progressing nicely. We’ve established strong, productive relationships and are sponsoring work upstream to make these transitions viable.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve got a strategy for migrating the default implementation of &lt;code&gt;coreutils&lt;/code&gt; and &lt;code&gt;sudo&lt;/code&gt; in Ubuntu 25.10 which will enable a seamless revert in cases where that is desired. While &lt;code&gt;sudo-rs&lt;/code&gt; will be the default in 25.10, the original &lt;code&gt;sudo&lt;/code&gt; will remain available for users who need it, and we’ll be gathering feedback to ensure a smooth transition before the 26.04 LTS.&lt;/p&gt;
&lt;p&gt;Additionally, we&amp;rsquo;ve begun investigating the feasibility of providing &lt;a href="https://sequoia-pgp.org/" target="_blank" rel="noreferrer"&gt;SequoiaPGP&lt;/a&gt; and using it in APT instead of GnuPG. SequoiaPGP is a new OpenPGP library with a focus on safety and correctness, written in Rust. The GnuPG maintainers have recently forked the OpenPGP standard and are no longer compliant with it. Sequoia provides a modern alternative to GnuPG with strict behavior, and is already used in various other systems. More details to follow!&lt;/p&gt;
&lt;p&gt;Stay tuned!&lt;/p&gt;</description></item><item><title>Revitalising Ubuntu Project Documentation</title><link>https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/</link><pubDate>Tue, 01 Apr 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/58694" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Back in February I &lt;a href="https://jnsgr.uk/2025/02/engineering-ubuntu-for-the-next-20-years/" target="_blank" rel="noreferrer"&gt;wrote some thoughts&lt;/a&gt; on Ubuntu&amp;rsquo;s documentation and its role within the community. For a mostly-online software community, documentation is one of our most critical forms of communication.&lt;/p&gt;
&lt;p&gt;In the last two years there has been lots of focus on the technical aspects of our documentation (how-to guides, tutorials, etc.), but I&amp;rsquo;d like to focus more on what I&amp;rsquo;m calling the &amp;ldquo;Ubuntu Project Documentation&amp;rdquo; over the coming months.&lt;/p&gt;
&lt;p&gt;Documentation isn&amp;rsquo;t only about technical how-to guides and tutorials, nor is it only about troubleshooting or satisfying particular use-cases. Our documentation can set the tone for the project, give a means for the community to state an intent, and guide both current and future contributors in their daily work.&lt;/p&gt;
&lt;p&gt;Ubuntu has a lot of documentation, most of which has grown organically over the last 20 years, but it&amp;rsquo;s not always easy to find or understand. Our documentation should illuminate and inspire a path to contribution. It should provide direction and clarity on complex issues, reference on technology and past decisions, and precision in the execution of process.&lt;/p&gt;
&lt;p&gt;Our project documentation should detail what makes Ubuntu happen. How are decisions made? What are the teams contributing to Ubuntu? How are those teams appointed? What are their responsibilities? If you&amp;rsquo;re on the Main Inclusion Review (MIR) team and you&amp;rsquo;re assigned a package to review, what steps should you take? How does package sponsorship work, and who should you contact if you&amp;rsquo;re stuck? How are the Access Control Lists (ACLs) updated for packages and package sets, and who can make those changes? What does the journey look like from first time package bug-fixer to Ubuntu Core Developer?&lt;/p&gt;
&lt;p&gt;These are all examples of questions that we, the collective conscious of Ubuntu, know the answers to, yet it is still difficult to find up-to-date answers to these questions, often requiring input from some of our busiest and most knowledgeable contributors to settle discussions and answer basic queries.&lt;/p&gt;
&lt;p&gt;If a potential contributor identifies a bug in a package, there should be one authoritative source of information on where the package source can be located, how it can be pulled, built and tested, and how to work with a sponsor to land changes. Such a process is satisfying for contributors, making it more likely they&amp;rsquo;ll stay engaged, and therefore benefits the distribution&amp;rsquo;s longevity and sustainability.&lt;/p&gt;
&lt;p&gt;Answering questions and mentoring people will remain a central part of our community&amp;rsquo;s role, but many of the first questions asked could be serviced by better documentation.&lt;/p&gt;
&lt;h2 id="the-challenge" class="relative group"&gt;The Challenge &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-challenge" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Much of the required content already exists. The venerable &lt;a href="https://wiki.ubuntu.com/" target="_blank" rel="noreferrer"&gt;Ubuntu Wiki&lt;/a&gt; was the go-to destination for such documentation, but has become outdated both technologically and in the content it serves. This degradation gained pace as we diversified the number of destinations that documentation could live: the Wiki, Discourse, Github, Launchpad, etc.&lt;/p&gt;
&lt;p&gt;The Ubuntu Community team have made significant efforts over the past months to centralise the documentation for &lt;a href="https://ubuntu.com/community/membership" target="_blank" rel="noreferrer"&gt;membership&lt;/a&gt;, our &lt;a href="https://ubuntu.com/community/ethos/code-of-conduct" target="_blank" rel="noreferrer"&gt;code of conduct&lt;/a&gt; and project &lt;a href="https://ubuntu.com/community/governance" target="_blank" rel="noreferrer"&gt;governance&lt;/a&gt;. I also called out the renewed &lt;a href="https://documentation.ubuntu.com/sru/en/latest/" target="_blank" rel="noreferrer"&gt;Stable Release Update (SRU)&lt;/a&gt; documentation in my first post for having made its first steps toward a new and improved structure.&lt;/p&gt;
&lt;p&gt;These examples prove we have all the skills we need to write &lt;em&gt;excellent&lt;/em&gt; documentation. Throughout the 25.10 cycle, I intend to put some focus on this, consolidating as much of our content as possible into modern formats and thereby making it more accessible.&lt;/p&gt;
&lt;p&gt;In doing this work, I hope to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Illustrate contributor journeys across disciplines&lt;/li&gt;
&lt;li&gt;Create resilience in the project by reducing the &amp;ldquo;&lt;a href="https://en.wikipedia.org/wiki/Bus_factor" target="_blank" rel="noreferrer"&gt;bus factor&lt;/a&gt;&amp;rdquo;&lt;/li&gt;
&lt;li&gt;Increase the accessibility and ergonomics of our documentation&lt;/li&gt;
&lt;li&gt;Enable more efficient, asynchronous collaboration on a wide range of tasks&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="end-goal" class="relative group"&gt;End Goal &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#end-goal" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;To quote the &lt;a href="https://canonical.com/documentation" target="_blank" rel="noreferrer"&gt;Canonical.com&lt;/a&gt; page on Documentation Practice:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;we have embarked on a comprehensive, long-term project to transform documentation. Our aim is to create and maintain documentation product and practice that will represent a standard of excellence. We want documentation to be the best it possibly can be.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;At the heart of this mission is &lt;a href="https://diataxis.fr/" target="_blank" rel="noreferrer"&gt;Diátaxis&lt;/a&gt;: a way of thinking about documentation. Diátaxis &amp;ldquo;prescribes approaches to content, architecture and form that emerge from a systematic approach to understanding the needs of documentation users&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;You&amp;rsquo;ll have seen Diátaxis in use across many of our product documentation pages: the &lt;a href="https://documentation.ubuntu.com/juju/3.6/" target="_blank" rel="noreferrer"&gt;Juju docs&lt;/a&gt;, the &lt;a href="https://maas.io/docs" target="_blank" rel="noreferrer"&gt;MAAS docs&lt;/a&gt;, the &lt;a href="https://documentation.ubuntu.com/pebble/" target="_blank" rel="noreferrer"&gt;Pebble docs&lt;/a&gt;, the &lt;a href="https://documentation.ubuntu.com/rockcraft/en/latest/" target="_blank" rel="noreferrer"&gt;Rockcraft docs&lt;/a&gt; and many more.&lt;/p&gt;
&lt;p&gt;Most of those existing sites are specific - they document a particular &lt;em&gt;product&lt;/em&gt; or &lt;em&gt;ecosystem&lt;/em&gt; which neatly scopes the documentation structure, but the Diátaxis framework can also be used to bring structure, precision and clarity to the documentation of the Ubuntu project as a whole.&lt;/p&gt;
&lt;p&gt;Earlier this month I surveyed the various documentation sites in use by Canonical and the Ubuntu Community, and settled on three common themes around which we will structure our renewed project documentation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Governance&lt;/strong&gt;: in which membership, code of conduct, team structures, communication practices, delegation, mission, software licensing and 3rd-party software guidelines will be documented.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Develop Ubuntu&lt;/strong&gt;: documentation for current and aspiring Ubuntu developers, including how to package software for Ubuntu, how to merge packages from Debian, how to sponsor packages, how to use &lt;code&gt;git-ubuntu&lt;/code&gt; and conduct &amp;ldquo;&lt;a href="https://wiki.ubuntu.com/PlusOneMaintenanceTeam" target="_blank" rel="noreferrer"&gt;+1 Maintenance&lt;/a&gt;&amp;rdquo;, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Archive Administration&lt;/strong&gt;: the nuts and bolts of managing Ubuntu&amp;rsquo;s prolific software repositories: how to manage seeds, configure phased updates, conduct an MIR, run an SRU process, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These categories were not immediately obvious, and they&amp;rsquo;re not necessarily mutually exclusive, but they fell out quite naturally when trying to logically organise our existing content.&lt;/p&gt;
&lt;p&gt;During the process, I came up with this rough sketch:&lt;/p&gt;
&lt;p&gt;&lt;a href="02.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_33eb0915229d8456.webp 330w,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_7a7a7ec846696071.webp 660w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_201b198e9ec6a925.webp 1024w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_f16f633e9202adc.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1456"
height="1445"
class="mx-auto my-0 rounded-md"
alt="an outline of how our Ubuntu Project documentation might be structured"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_b6a78828bc273f24.png" srcset="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_66850f0a42a7fa52.png 330w,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_b6a78828bc273f24.png 660w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_6fc297bdd4b822b0.png 1024w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/02_hu_5724714e3f85011e.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This illustrates how multiple categories of documentation from different corners of Ubuntu might come together in a single landing page. To give an idea of how we might further break down existing content by type, then category:&lt;/p&gt;
&lt;p&gt;&lt;a href="03.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_461980ebada284ec.webp 330w,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_ea902b706ac7a8c7.webp 660w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_983d860a85a74b80.webp 800w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_983d860a85a74b80.webp 800w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="800"
height="910"
class="mx-auto my-0 rounded-md"
alt="an outline of how our Ubuntu Project documentation TOC might be structured"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_ce266968f87a5ceb.png" srcset="https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_774bfd2e138d3eb3.png 330w,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03_hu_ce266968f87a5ceb.png 660w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03.png 800w
,https://jnsgr.uk/2025/04/revitalising-ubuntu-project-documentation/03.png 800w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This may not be the final structure, but it&amp;rsquo;s indicative of how we can use Diátaxis to break down large bodies of documentation into smaller, more digestible and more ergonomic pieces.&lt;/p&gt;
&lt;h2 id="the-plan" class="relative group"&gt;The Plan &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#the-plan" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;During the Ubuntu 25.10 cycle, we&amp;rsquo;ll be dedicating two of our &lt;a href="https://canonical.com/documentation/work-and-careers" target="_blank" rel="noreferrer"&gt;Technical Authors&lt;/a&gt; to make this happen. One of these authors has been largely responsible for overhauling the &lt;a href="https://documentation.ubuntu.com/server/" target="_blank" rel="noreferrer"&gt;Ubuntu Server docs&lt;/a&gt;, but both are very familiar with Diátaxis and the tooling we using to deliver documentation.&lt;/p&gt;
&lt;p&gt;Throughout this process, we&amp;rsquo;ll likely come across outdated, poorly reviewed or incorrect documentation, but as we consolidate, we can note where this has happened and add it to our backlog to fix.&lt;/p&gt;
&lt;p&gt;Perhaps we&amp;rsquo;ll find items which lend themselves to inclusion in the &lt;a href="https://canonical.com/documentation/open-documentation-academy" target="_blank" rel="noreferrer"&gt;Canonical Open Documentation Academy&lt;/a&gt;, or maybe we&amp;rsquo;ll need to reach out to some of our less active community members for clarification, but once the structure is in place we&amp;rsquo;ll at least have a place to collaborate.&lt;/p&gt;
&lt;p&gt;Once the transition is complete, there will be an authoritative source for project documentation that is easy to navigate, easy to contribute to and with a well-defined review process that encourages progress over gatekeeping.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Documentation is the backbone of a thriving open-source community, guiding contributors, setting expectations, and ensuring long-term sustainability.&lt;/p&gt;
&lt;p&gt;While Ubuntu has extensive documentation, much of it is scattered, outdated, or difficult to navigate. By leveraging the Diátaxis framework, we aim to bring structure, clarity, and accessibility to Ubuntu Project Documentation. Our focus will be on governance, development, and archive administration, ensuring that key processes and responsibilities are well-documented and easy to follow.&lt;/p&gt;
&lt;p&gt;With dedicated technical authors and community collaboration, the Ubuntu 25.10 cycle will mark a significant step toward making our documentation searchable, structured, and sustainable.&lt;/p&gt;
&lt;p&gt;I hope this effort will empower contributors, reduce reliance on institutional knowledge, and create a more resilient project for the next generation of Ubuntu developers and users.&lt;/p&gt;</description></item><item><title>Carefully But Purposefully Oxidising Ubuntu</title><link>https://jnsgr.uk/2025/03/carefully-but-purposefully-oxidising-ubuntu/</link><pubDate>Wed, 12 Mar 2025 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2025/03/carefully-but-purposefully-oxidising-ubuntu/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/carefully-but-purposefully-oxidising-ubuntu/56995" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Last month I published &lt;a href="https://jnsgr.uk/2025/02/engineering-ubuntu-for-the-next-20-years/" target="_blank" rel="noreferrer"&gt;Engineering Ubuntu For The Next 20 Years&lt;/a&gt;, which outlines four key themes for how I intend to evolve Ubuntu in the coming years. In this post, I&amp;rsquo;ll focus on &amp;ldquo;Modernisation&amp;rdquo;. There are many areas we could look to modernise in Ubuntu: we could focus on the graphical shell experience, the virtualisation stack, core system utilities, default shell utilities, etc.&lt;/p&gt;
&lt;p&gt;Over the years, projects like GNU Coreutils have been instrumental in shaping the Unix-like experience that Ubuntu and other Linux distributions ship to millions of users. According to the GNU &lt;a href="https://www.gnu.org/software/coreutils/" target="_blank" rel="noreferrer"&gt;website&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The GNU Core Utilities are the basic file, shell and text manipulation utilities of the GNU operating system. These are the core utilities which are expected to exist on every operating system.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This package provides utilities which have become synonymous with Linux to many - the likes of &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;cp&lt;/code&gt;, and &lt;code&gt;mv&lt;/code&gt;. In recent years, there has been an &lt;a href="https://uutils.github.io/" target="_blank" rel="noreferrer"&gt;effort&lt;/a&gt; to reimplement this suite of tools in Rust, with the goal of reaching 100% compatibility with the existing tools. Similar projects, like &lt;a href="https://github.com/trifectatechfoundation/sudo-rs" target="_blank" rel="noreferrer"&gt;sudo-rs&lt;/a&gt;, aim to replace key security-critical utilities with more modern, memory-safe alternatives.&lt;/p&gt;
&lt;p&gt;Starting with Ubuntu 25.10, my goal is to adopt some of these modern implementations as the default. My immediate goal is to make uutils&amp;rsquo; coreutils implementation the default in Ubuntu 25.10, and subsequently in our next Long Term Support (LTS) release, Ubuntu 26.04 LTS, if the conditions are right.&lt;/p&gt;
&lt;h2 id="but-why" class="relative group"&gt;But… why? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#but-why" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Performance is a frequently cited rationale for &amp;ldquo;Rewrite it in Rust&amp;rdquo; projects. While performance is high on my list of priorities, it&amp;rsquo;s not the primary driver behind this change. These utilities are at the heart of the distribution - and it&amp;rsquo;s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.&lt;/p&gt;
&lt;p&gt;The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?&lt;/p&gt;
&lt;p&gt;I recently read an &lt;a href="https://smallcultfollowing.com/babysteps/blog/2025/03/10/rust-2025-intro/" target="_blank" rel="noreferrer"&gt;article&lt;/a&gt; about targeting foundational software with Rust in 2025. Among other things, the article asserts that &amp;ldquo;foundational software needs performance, reliability — and productivity&amp;rdquo;. If foundational software fails, so do all of the other layers built on top. If foundational packages have performance bottlenecks, they become a floor on the performance achievable by the layers above.&lt;/p&gt;
&lt;p&gt;Ubuntu powers millions of devices around the world, from servers in your data centre to safety-critical autonomous systems, so it behooves us to be absolutely certain we&amp;rsquo;re shipping the most resilient and trustworthy software we can.&lt;/p&gt;
&lt;p&gt;There are lots of ways to achieve this: we can provide &lt;a href="https://canonical.com/blog/12-year-lts-for-kubernetes" target="_blank" rel="noreferrer"&gt;long term support for projects like Kubernetes&lt;/a&gt;, we can &lt;a href="https://canonical.com/blog/canonicals-commitment-to-quality-management" target="_blank" rel="noreferrer"&gt;assure the code we write&lt;/a&gt;, and we can &lt;a href="https://canonical.com/blog/canonical-achieves-iso-21434-certification" target="_blank" rel="noreferrer"&gt;strive to achieve compliance with safety-centric standards&lt;/a&gt;, but we can also ship software with the values of safety, soundness, correctness and resilience at its core.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s not to throw shade on the existing implementations, of course. Many of these tools have been stable for many years, quietly improving performance and fixing bugs. A lovely side benefit of working on newer implementations is that it &lt;a href="https://ferrous-systems.com/blog/testing-sudo-rs/" target="_blank" rel="noreferrer"&gt;sometimes facilitates&lt;/a&gt; improvements in the original upstream projects, too!&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve written about my desire to increase the number of Ubuntu contributors, and I think projects like this will help. Rust may present a steeper learning curve than C in some ways, but by providing such a strong framework around the use of memory it also lowers the chances that a contributor accidentally commits potentially unsafe code.&lt;/p&gt;
&lt;h2 id="introducing-oxidizr" class="relative group"&gt;Introducing &lt;code&gt;oxidizr&lt;/code&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introducing-oxidizr" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I did my homework before writing this post. I wanted to see how easy it was for me to live with these newer implementations and get a sense of their readiness for prime-time within the distribution. I also wanted a means of toggling between implementations so that I could easily switch back should I run into incompatibilities - and so &lt;a href="https://github.com/jnsgruk/oxidizr" target="_blank" rel="noreferrer"&gt;&lt;code&gt;oxidizr&lt;/code&gt;&lt;/a&gt; was born!&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;oxidizr&lt;/code&gt; is a command-line utility for managing system experiments that replace traditional Unix utilities with modern Rust-based alternatives on Ubuntu systems.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;code&gt;oxidizr&lt;/code&gt; utility enables you to quickly swap newer implementations of certain packages in and out with &lt;em&gt;relatively&lt;/em&gt; low risk. It has the notion of &lt;em&gt;Experiments&lt;/em&gt;, where each experiment is a package, already in the archive, that can be swapped in as an alternative to the default.&lt;/p&gt;
&lt;p&gt;Version &lt;a href="https://github.com/jnsgruk/oxidizr/releases/tag/v1.0.0" target="_blank" rel="noreferrer"&gt;1.0.0&lt;/a&gt; supports the following experiments:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/uutils/coreutils" target="_blank" rel="noreferrer"&gt;uutils coreutils&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/uutils/findutils" target="_blank" rel="noreferrer"&gt;uutils findutils&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/uutils/diffutils" target="_blank" rel="noreferrer"&gt;uutils diffutils&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/trifectatechfoundation/sudo-rs" target="_blank" rel="noreferrer"&gt;sudo-rs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="how-does-it-work" class="relative group"&gt;How does it work? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#how-does-it-work" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Each experiment is subtly different since the paths of the utilities being replaced vary, but the process for enabling an experiment is generally:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the alternative package (e.g. &lt;code&gt;apt install rust-coreutils&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;For each binary shipped in the new package:
&lt;ul&gt;
&lt;li&gt;Look up the default path for that utility (e.g. &lt;code&gt;which date&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Back up that file (e.g. &lt;code&gt;cp /usr/bin/date /usr/bin/.date.oxidizr.bak&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Symlink the new implementation in place (e.g. &lt;code&gt;ln -s /usr/bin/coreutils /usr/bin/date&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There is also the facility to &amp;ldquo;disable&amp;rdquo; an experiment, which does the reverse of the sequence above:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For each binary shipped in the new package:
&lt;ul&gt;
&lt;li&gt;Look up the default path for the utility (e.g. &lt;code&gt;which date&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Check for and restore any backed-up versions (e.g. &lt;code&gt;cp /usr/bin/.date.oxidizr.bak /usr/bin/date&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Uninstall the package (e.g. &lt;code&gt;apt remove rust-coreutils&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thereby returning the system to its original state! The tool is covered by a suite of integration tests that illustrate this behaviour, which you can find &lt;a href="https://github.com/jnsgruk/oxidizr/tree/ca955677b4f5549e5d7f06726f5c5cf1846fe448/tests" target="_blank" rel="noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;
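&lt;p&gt;The enable and disable sequences above can be sketched in shell. This is a minimal illustration run against a scratch directory rather than &lt;code&gt;/usr/bin&lt;/code&gt; - not &lt;code&gt;oxidizr&lt;/code&gt;&amp;rsquo;s actual code - and the file names and contents are stand-ins:&lt;/p&gt;

```shell
# Sketch of the swap logic: a scratch directory stands in for /usr/bin,
# "date" for the utility being replaced, and "coreutils" for the
# multi-call Rust replacement binary. Contents are illustrative only.
workdir="$(mktemp -d)"
printf 'original date\n' > "$workdir/date"
printf 'rust multi-call\n' > "$workdir/coreutils"

# Enable: back up the original, then symlink the replacement in its place
cp "$workdir/date" "$workdir/.date.oxidizr.bak"
ln -sf "$workdir/coreutils" "$workdir/date"

# Disable: remove the symlink and restore the backed-up original
rm "$workdir/date"
cp "$workdir/.date.oxidizr.bak" "$workdir/date"
```

&lt;p&gt;The real tool additionally installs and removes the replacement package via &lt;code&gt;apt&lt;/code&gt;, as described above.&lt;/p&gt;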
&lt;h3 id="get-started" class="relative group"&gt;Get started &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#get-started" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;blockquote&gt;
&lt;p&gt;⚠️ WARNING ⚠️: &lt;code&gt;oxidizr&lt;/code&gt; is an experimental tool to play with alternatives to foundational system utilities. It may cause a loss of data, or prevent your system from booting, so use with caution!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There are a couple of ways to get &lt;code&gt;oxidizr&lt;/code&gt; on your system. If you already use &lt;code&gt;cargo&lt;/code&gt;, you can do the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cargo install --git https://github.com/jnsgruk/oxidizr
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Otherwise, you can download and install binary releases from &lt;a href="https://github.com/jnsgruk/oxidizr/releases" target="_blank" rel="noreferrer"&gt;Github&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Download version 1.0.0 and extract to /usr/bin/oxidizr&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -sL &lt;span class="s2"&gt;&amp;#34;https://github.com/jnsgruk/oxidizr/releases/download/v1.0.0/oxidizr_Linux_&lt;/span&gt;&lt;span class="k"&gt;$(&lt;/span&gt;uname -m&lt;span class="k"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.tar.gz&amp;#34;&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; sudo tar -xvzf - -C /usr/bin oxidizr
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Once installed you can invoke &lt;code&gt;oxidizr&lt;/code&gt; to selectively enable/disable experiments. The default set of experiments in &lt;code&gt;v1.0.0&lt;/code&gt; is &lt;code&gt;rust-coreutils&lt;/code&gt; and &lt;code&gt;sudo-rs&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Enable default experiments&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo oxidizr &lt;span class="nb"&gt;enable&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Disable default experiments&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo oxidizr disable
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Enable just coreutils&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo oxidizr &lt;span class="nb"&gt;enable&lt;/span&gt; --experiments coreutils
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Enable all experiments without prompting with debug logging enabled&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo oxidizr &lt;span class="nb"&gt;enable&lt;/span&gt; --all --yes -v
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Disable all experiments without prompting&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo oxidizr disable --all --yes
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The tool should work on all versions of Ubuntu after 24.04 LTS - though the &lt;code&gt;diffutils&lt;/code&gt; experiment is only available from Ubuntu 24.10 onward.&lt;/p&gt;
&lt;p&gt;The tool itself is stable and well covered with unit and integration tests, but nonetheless I&amp;rsquo;d urge you to start with a test virtual machine or a machine that &lt;em&gt;isn&amp;rsquo;t&lt;/em&gt; your production workstation or server! I&amp;rsquo;ve been running the &lt;code&gt;coreutils&lt;/code&gt; and &lt;code&gt;sudo-rs&lt;/code&gt; experiments for around 2 weeks now on my Ubuntu 24.10 machines and haven&amp;rsquo;t had many issues (more on that below…).&lt;/p&gt;
&lt;h2 id="how-to-help" class="relative group"&gt;How to Help &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#how-to-help" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;If you&amp;rsquo;re interested in helping out on this mission, then I&amp;rsquo;d encourage you to play with the packages, either by installing them yourself or using &lt;code&gt;oxidizr&lt;/code&gt;. Reply to the Discourse post with your experiences, file bugs and perhaps even dedicate some time to the relevant upstream projects to help with resolving bugs, implementing features or improving documentation, depending on your skill set.&lt;/p&gt;
&lt;p&gt;You can also join us to discuss on our &lt;a href="https://ubuntu.com/community/communications/matrix/onboarding" target="_blank" rel="noreferrer"&gt;Matrix instance&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="next-steps" class="relative group"&gt;Next Steps &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#next-steps" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Earlier this week, I met with &lt;a href="https://github.com/sylvestre" target="_blank" rel="noreferrer"&gt;@sylvestre&lt;/a&gt; to discuss my proposal to make uutils coreutils the default in Ubuntu 25.10. I was pleased to hear that he feels the project is ready for that level of exposure, so now we just need to work out the specifics. The Ubuntu Foundations team is already working up a plan for next cycle.&lt;/p&gt;
&lt;p&gt;There will certainly be a few rough edges we&amp;rsquo;ll need to work out. In my testing, for example, the only incompatibility I&amp;rsquo;ve come across is that the &lt;code&gt;update-initramfs&lt;/code&gt; script for Ubuntu uses &lt;code&gt;cp -Z&lt;/code&gt; to preserve &lt;code&gt;selinux&lt;/code&gt; labels when copying files. The &lt;code&gt;cp&lt;/code&gt;, &lt;code&gt;mv&lt;/code&gt; and &lt;code&gt;ls&lt;/code&gt; commands from uutils &lt;a href="https://github.com/uutils/coreutils/issues/2404" target="_blank" rel="noreferrer"&gt;don&amp;rsquo;t yet support&lt;/a&gt; the &lt;code&gt;-Z&lt;/code&gt; flag, but I think we&amp;rsquo;ve worked out a way to unblock that work going forward, both in the upstream and in the next release of Ubuntu.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m going to do some more digging on &lt;a href="https://github.com/trifectatechfoundation/sudo-rs" target="_blank" rel="noreferrer"&gt;&lt;code&gt;sudo-rs&lt;/code&gt;&lt;/a&gt; over the coming weeks, with a view to assessing a similar transition.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I&amp;rsquo;m really excited to see so much investment in the foundational utilities behind Linux. The uutils project seems to be picking up speed after their recent &lt;a href="https://fosdem.org/2025/schedule/event/fosdem-2025-6196-rewriting-the-future-of-the-linux-essential-packages-in-rust-/" target="_blank" rel="noreferrer"&gt;appearance at FOSDEM 2025&lt;/a&gt;, with efforts ongoing to rework &lt;a href="https://github.com/uutils/procps" target="_blank" rel="noreferrer"&gt;procps&lt;/a&gt;, &lt;a href="https://github.com/uutils/util-linux" target="_blank" rel="noreferrer"&gt;util-linux&lt;/a&gt; and more.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;sudo-rs&lt;/code&gt; project is now maintained by the &lt;a href="https://trifectatech.org/" target="_blank" rel="noreferrer"&gt;Trifecta Tech Foundation&lt;/a&gt;, who are focused on &amp;ldquo;open infrastructure software in the public interest&amp;rdquo;. Their &lt;a href="https://github.com/trifectatechfoundation/zlib-rs" target="_blank" rel="noreferrer"&gt;&lt;code&gt;zlib-rs&lt;/code&gt;&lt;/a&gt; recently released v0.4.2, which appears to now be &lt;a href="https://trifectatech.org/blog/zlib-rs-is-faster-than-c/" target="_blank" rel="noreferrer"&gt;the fastest API-compatible zlib implementation&lt;/a&gt;. They&amp;rsquo;re also behind the &lt;a href="https://github.com/pendulum-project" target="_blank" rel="noreferrer"&gt;Pendulum Project&lt;/a&gt; and &lt;a href="https://github.com/pendulum-project/ntpd-rs" target="_blank" rel="noreferrer"&gt;&lt;code&gt;ntpd-rs&lt;/code&gt;&lt;/a&gt; for memory-safe time synchronisation.&lt;/p&gt;
&lt;p&gt;With Ubuntu, we&amp;rsquo;re in a position to drive awareness and adoption of these modern equivalents by making them either trivially available, or the default implementation for the world&amp;rsquo;s most deployed Linux distribution.&lt;/p&gt;
&lt;p&gt;We will need to do so carefully, and be willing to scale back our ambition where appropriate to avoid diluting the promise of stability and reliability that the Ubuntu LTS releases have become known for, but I&amp;rsquo;m confident that we can make progress on these topics over the coming months.&lt;/p&gt;
&lt;p&gt;This article was originally posted &lt;a href="https://discourse.ubuntu.com/t/engineering-ubuntu-for-the-next-20-years/55000" target="_blank" rel="noreferrer"&gt;on the Ubuntu Discourse&lt;/a&gt;, and is reposted here. I welcome comments and further discussion in that thread.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I&amp;rsquo;ve been a VP Engineering at Canonical for 3 years now, building &lt;a href="https://juju.is" target="_blank" rel="noreferrer"&gt;Juju&lt;/a&gt; and our catalog of &lt;a href="https://charmhub.io/" target="_blank" rel="noreferrer"&gt;charms&lt;/a&gt;. In the last week of January, I was appointed the VP Engineering for Ubuntu at Canonical, where I will now oversee the Ubuntu Foundations, Server and Desktop teams.&lt;/p&gt;
&lt;p&gt;Over the past 20 years, Ubuntu has become synonymous with &amp;ldquo;Linux&amp;rdquo; to many people. I fondly remember receiving my first Ubuntu CD in the post, shortly after my own Linux journey began in 2003 with booting Knoppix on a school computer. Throughout my career, Linux and open source have been prominent features that I&amp;rsquo;m very proud of. In the past few years I&amp;rsquo;ve made contributions to Ubuntu, Arch Linux, and more recently NixOS.&lt;/p&gt;
&lt;p&gt;Ubuntu&amp;rsquo;s recent 20 year milestone is a timely reminder to pause and reflect on what made Ubuntu so exciting, so successful and so captivating to the Linux community. In 2004, the idea of releasing an operating system every six months was laughed off by many, but has now become the norm. Ubuntu builds upon Debian, aiming to bring the latest and very best open source had to offer to the masses. In the past 10 years, we&amp;rsquo;ve seen huge shifts in the way software is delivered - the success of large-scale cloud based operations necessitated a shift towards more automated testing, releasing and monitoring, and as the open source community around these projects grew, we had to evolve our ways of thinking, designing and communicating about software.&lt;/p&gt;
&lt;h2 id="four-key-themes" class="relative group"&gt;&lt;strong&gt;Four Key Themes&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#four-key-themes" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;As I step into this new role, I&amp;rsquo;ve reflected on how we can steer the engineering efforts behind Ubuntu. I&amp;rsquo;ve anchored this vision around four themes: Communication, Automation, Process and Modernisation.&lt;/p&gt;
&lt;h3 id="communication" class="relative group"&gt;&lt;strong&gt;Communication&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#communication" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Communication is a central component of a distributed workforce - whether that workforce is employed by Canonical, members of our community or contributors from our partners. Ubuntu has relied for many years on mailing lists and IRC. These platforms enabled global teams to collaborate for years, and have been invaluable to the community. In 2025 we&amp;rsquo;re fortunate to have a wealth of communications platforms at our disposal, but we must use these tools strategically to avoid fragmentation.&lt;/p&gt;
&lt;p&gt;On Jan 29 2025, the Ubuntu developer mailing list &lt;a href="https://lists.ubuntu.com/archives/ubuntu-devel-announce/2025-January/001365.html" target="_blank" rel="noreferrer"&gt;announced&lt;/a&gt; that the primary means of communication for Ubuntu developers will be the Ubuntu Community Matrix server. Matrix provides a rich, modern communications medium that is familiar to the next generation of engineers and tinkerers, who will be central to the continued progression of Ubuntu and open source. We&amp;rsquo;re in good company on Matrix, with many other Linux distributions and projects maintaining a presence on the platform. The recent &lt;a href="https://fridge.ubuntu.com/2024/12/08/ubuntu-forums-migration/" target="_blank" rel="noreferrer"&gt;migration&lt;/a&gt; of Ubuntu Forums to the Ubuntu Discourse further consolidates the range of platforms we use to connect with one another.&lt;/p&gt;
&lt;p&gt;To effect much of the change I&amp;rsquo;m describing in this post, we will need community support. I&amp;rsquo;ll be encouraging the leads of our internal teams in Ubuntu Foundations, Server and Desktop to be more forthcoming and regular with public updates that will serve two purposes: to share our intentions, progress and dreams for Ubuntu, but also to collaborate on refining our vision, ensuring we deliver a platform that is not just exciting, but &lt;em&gt;relevant&lt;/em&gt; for many years to come.&lt;/p&gt;
&lt;p&gt;Documentation is a critical form of communication. Our documentation enables our current users, but also illuminates the path for new contributors. Such documentation does exist, but much of it is fragmented across different platforms, duplicated, contradictory, or simply difficult to find. As a company, and as a community, we must focus on ensuring both existing and potential contributors have access to the information they need on conventions, tools and processes. A good example of where this has already happened is the &lt;a href="https://documentation.ubuntu.com/sru" target="_blank" rel="noreferrer"&gt;SRU documentation&lt;/a&gt;, which was recently rebuilt in line with our documentation &lt;a href="https://canonical.com/documentation" target="_blank" rel="noreferrer"&gt;practices&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="automation" class="relative group"&gt;&lt;strong&gt;Automation&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#automation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Delivering a Linux distribution is a monumental task. With tens of thousands of packages across multiple architectures, the workload can be overwhelming - leaving little room for innovation until the foundational work is done. We&amp;rsquo;re fortunate to benefit from the diligent work done by the Debian community, yet there is a huge amount of work that goes into each Ubuntu release. One of our primary tasks as a distribution is package maintenance. While some may see this as menial or repetitive, it remains critical to the future of Ubuntu, and is a valuable specialist skill in its own right.&lt;/p&gt;
&lt;p&gt;Software packaging is a complex and constantly evolving topic. Ubuntu relies heavily on a blend of Debian packages and our own Snap packaging format. Debian packaging was revolutionary - responsible for huge advancements in the way we thought about delivering software - but as things have moved on, some of those tools and practices are beginning to show their age.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to focus on enriching our build process with modern ideals and processes for automating the version bumps, testing, performance benchmarking and releasing of packages in the archive. High complexity tasks are error-prone and, without sufficient automation, risk becoming overly dependent on a few skilled individuals. We have the same challenge with Snaps, but they benefit from significantly more modern tooling as a consequence of the observations made about Debian packaging over many years.&lt;/p&gt;
&lt;p&gt;The goal of this theme is not just to automate as much as possible (thereby increasing our collective capacity), but also to simplify processes where we can. Much of Ubuntu&amp;rsquo;s build process &lt;em&gt;is&lt;/em&gt; automated, but those systems are disparate and often opaque to all but our most experienced contributors.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been inspired by how the NixOS community manages packaging. Every single package for the distro is represented as text files, in a &lt;a href="https://github.com/NixOS/nixpkgs" target="_blank" rel="noreferrer"&gt;single Git repository&lt;/a&gt;, with a universally observable continuous integration and integration testing pipeline (&lt;a href="https://wiki.nixos.org/wiki/Hydra" target="_blank" rel="noreferrer"&gt;Hydra&lt;/a&gt;) that performs version bumps and simple maintenance tasks semi-autonomously. While this model carries its own challenges, there is something alluring about the transparency and accessibility of the systems that assemble, test and deliver software to their users.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://universal-blue.org/" target="_blank" rel="noreferrer"&gt;Universal Blue&lt;/a&gt;, and by extension &lt;a href="https://projectbluefin.io/" target="_blank" rel="noreferrer"&gt;Project Bluefin&lt;/a&gt;, are recent additions to the Linux ecosystem that benefited from thinking hard about the tooling they use to build their distribution. They&amp;rsquo;ve centered their process around tools with which their cloud-native audience are already familiar.&lt;/p&gt;
&lt;p&gt;My suggestion is not to imitate these projects, rather that the open source community is at its strongest when we collaborate and learn from one another. I think we can take inspiration from those surrounding us, and use that to inform our plans for Ubuntu&amp;rsquo;s future.&lt;/p&gt;
&lt;h3 id="process" class="relative group"&gt;&lt;strong&gt;Process&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#process" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;Process is closely tied to automation, but is frequently viewed negatively in software engineering, carrying connotations of bureaucracy and slowdowns. In my experience, a well-designed process empowers people to enact changes with confidence.&lt;/p&gt;
&lt;p&gt;Ubuntu is built by all of us, in many countries and across all timezones. Concise, well-defined, lightweight processes promote autonomy and reduce uncertainty - enabling people to unblock themselves. Ubuntu is no stranger to process: the &lt;a href="https://canonical-ubuntu-packaging-guide.readthedocs-hosted.com/en/latest/explanation/main-inclusion-review/" target="_blank" rel="noreferrer"&gt;Main Inclusion Review (MIR)&lt;/a&gt;, the aforementioned &lt;a href="https://canonical-sru-docs.readthedocs-hosted.com/en/latest/" target="_blank" rel="noreferrer"&gt;Stable Release Updates (SRU)&lt;/a&gt; process, the &lt;a href="https://forum.snapcraft.io/t/process-for-aliases-auto-connections-and-tracks/455" target="_blank" rel="noreferrer"&gt;process&lt;/a&gt; for Snap store requests and many more have contributed to the success of Ubuntu, setting clear guardrails for contributors and ensuring we work to common standards.&lt;/p&gt;
&lt;p&gt;My goal over the coming months is to work with you, the people behind Ubuntu, to identify which of these processes still serve us, and which need revising to simplify our work while maintaining our dedication to stability. I&amp;rsquo;ll consolidate the definitions of these processes, make them searchable, peer-reviewable, and more discoverable. Examples of where this has worked well are the &lt;a href="https://github.com/golang/proposal" target="_blank" rel="noreferrer"&gt;Go proposal&lt;/a&gt; process, and the &lt;a href="https://eips.ethereum.org/" target="_blank" rel="noreferrer"&gt;Ethereum Improvement Proposal&lt;/a&gt; process - both of which make it trivial to create, track and discuss proposals across the breadth of their respective projects.&lt;/p&gt;
&lt;p&gt;If you submit an MIR, or work on an SRU, it should be trivial to understand the status of that request, and to communicate with the team executing that process where needed. If you&amp;rsquo;re interested in joining our community, it should be simple to get a sense of what is changing across the project, and where you might be able to help.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to tackle these problems and make these processes as transparent as possible.&lt;/p&gt;
&lt;h3 id="modernisation" class="relative group"&gt;&lt;strong&gt;Modernisation&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#modernisation" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p&gt;The world of computing has evolved dramatically in the last 20 years, and I’m proud that Ubuntu has continually adapted and thrived. In Linux alone there have been huge changes to what is considered &amp;ldquo;normal&amp;rdquo; for a Linux machine. Whether it be the introduction of `systemd`, the advent of languages with a focus on memory safety, the huge growth in virtualisation and containerisation technology, or even the introduction of Rust into the Linux kernel itself - the foundations of our distribution must be constantly assessed against the needs of our users.&lt;/p&gt;
&lt;p&gt;I was proud to see the &lt;a href="https://discourse.ubuntu.com/t/kernel-version-selection-for-ubuntu-releases/47007?u=d0od" target="_blank" rel="noreferrer"&gt;announcement&lt;/a&gt; last year that the Ubuntu Kernel team committed to shipping the very latest kernels in new versions of Ubuntu, wherever they possibly can. Even if that means shipping a kernel that&amp;rsquo;s in the release candidate phase, the team will stand by that kernel and continue to support it through the Ubuntu release&amp;rsquo;s life. While this could appear cavalier at a glance, what it represents is a willingness to rise to the challenge of shipping the very best of open source to our users. I&amp;rsquo;d like to see more of this. Ubuntu is a flagship Linux distribution and a starting point for many; we must ensure that our users are presented with the very best our community has to offer - even if that means a bit more hustle in the early days of a given release. This is of particular importance for our Long Term Support releases, which are relied upon by governments, financial institutions, educational establishments, nonprofits and many others for years after the initial release date.&lt;/p&gt;
&lt;p&gt;We should look deeply at the tools we ship with Ubuntu by default - selecting for tools that have resilience, performance and maintainability at their core. There are countless examples in the open source community of tools being re-engineered and re-imagined using tools and practices that have only relatively recently become available. Some of my personal favourites include command-line utilities such as &lt;a href="https://github.com/eza-community/eza" target="_blank" rel="noreferrer"&gt;eza&lt;/a&gt;, &lt;a href="https://github.com/sharkdp/bat" target="_blank" rel="noreferrer"&gt;bat&lt;/a&gt;, and &lt;a href="https://helix-editor.com/" target="_blank" rel="noreferrer"&gt;helix&lt;/a&gt;, the new &lt;a href="https://ghostty.org/" target="_blank" rel="noreferrer"&gt;ghostty&lt;/a&gt; terminal emulator, and more foundational projects such as the &lt;a href="https://uutils.github.io/" target="_blank" rel="noreferrer"&gt;uutils&lt;/a&gt; rewrite of &lt;a href="https://github.com/uutils/coreutils" target="_blank" rel="noreferrer"&gt;coreutils in Rust&lt;/a&gt;. These projects are at varying levels of maturity, but each has demonstrated a vision for a more modern Unix-like experience that emphasises resilience, performance and usability.&lt;/p&gt;
&lt;p&gt;Another example of this is our work on &lt;a href="https://ubuntu.com/blog/tpm-backed-full-disk-encryption-is-coming-to-ubuntu" target="_blank" rel="noreferrer"&gt;TPM-backed full disk encryption&lt;/a&gt;, a project which promises encryption of our users&amp;rsquo; data with no degradation to their user experience. This feature relies upon cryptographic hardware and techniques that have only recently become available to us, but enable us to deliver the potent combination of security &lt;em&gt;and&lt;/em&gt; usability to our users.&lt;/p&gt;
&lt;h2 id="delivering-features" class="relative group"&gt;&lt;strong&gt;Delivering Features&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#delivering-features" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;What I&amp;rsquo;ve shared so far is a high-level overview, and many of the points under the four themes will take time to implement, with most appearing as a series of gradual improvements. You might be wondering whether we&amp;rsquo;ll focus on the latest trends and features, or prioritise that bug you reported.&lt;/p&gt;
&lt;p&gt;While focusing on the latest trends or a single breakthrough feature can yield short-term progress, embracing these principles will create the space for sustained, impactful innovation.&lt;/p&gt;
&lt;p&gt;That said, I’ve also been working on a list of incremental features and improvements that we can deliver in the coming months to enhance the Ubuntu experience. You’ll hear more from me and the team leads regularly as we share updates and progress.&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;&lt;strong&gt;Summary&lt;/strong&gt; &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;I’m incredibly excited to embark on this journey, and consider it a privilege to serve in this role. Together with the Ubuntu community, Canonical engineers, and our partners, we will build an open-source platform that enables the next 20 years of innovation in computing.&lt;/p&gt;
&lt;p&gt;If you have ideas for the future of Ubuntu, or something in this post has resonated with you and you want to be involved either as a community member, or perhaps a future employee of Canonical, I&amp;rsquo;d love to hear from you.&lt;/p&gt;</description></item><item><title>Workstation VMs with LXD &amp; Multipass</title><link>https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/</link><pubDate>Tue, 25 Jun 2024 00:00:00 +0000</pubDate><guid>https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/</guid><description>&lt;h2 id="introduction" class="relative group"&gt;Introduction &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#introduction" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Over the years, I&amp;rsquo;ve used countless tools for creating virtual machines - often just for short periods of time when testing new software, trying out a new desktop environment, or creating a more isolated development environment. I&amp;rsquo;ve gone from just using the venerable &lt;a href="https://www.qemu.org/" target="_blank" rel="noreferrer"&gt;qemu&lt;/a&gt; at the command line, to full-blown desktop applications like &lt;a href="https://www.virtualbox.org/" target="_blank" rel="noreferrer"&gt;Virtualbox&lt;/a&gt;, to using &lt;a href="https://virt-manager.org/" target="_blank" rel="noreferrer"&gt;virt-manager&lt;/a&gt; with &lt;a href="https://libvirt.org/" target="_blank" rel="noreferrer"&gt;libvirt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When I joined Canonical back in March 2021, I&amp;rsquo;d hardly used &lt;a href="https://canonical.com/lxd" target="_blank" rel="noreferrer"&gt;LXD&lt;/a&gt;, and I hadn&amp;rsquo;t ever used &lt;a href="https://multipass.run" target="_blank" rel="noreferrer"&gt;Multipass&lt;/a&gt;. Since then, they&amp;rsquo;ve both become indispensable parts of my workflow, so I thought I&amp;rsquo;d share why I like them, and how I use each of them in my day to day work.&lt;/p&gt;
&lt;p&gt;I work for Canonical, and am therefore invested in the success of their products, but at the time of writing I&amp;rsquo;m not responsible for either LXD or Multipass, and this post represents my honest opinions as a user of the products, and nothing more.&lt;/p&gt;
&lt;p&gt;&lt;a href="01.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_ac2017cf2c80d4cb.webp 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_443f9e818d7af594.webp 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_b3508bb5f485f9b9.webp 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_fd05a169716c075c.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1482"
height="1228"
class="mx-auto my-0 rounded-md"
alt="lxd ui showing multiple vms and containers"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_d1f4c9075b739493.png" srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_dc6b3729927da789.png 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_d1f4c9075b739493.png 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_f89640c0ea781c0f.png 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/01_hu_9dbb5a7a3c150bd8.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="installation--distribution" class="relative group"&gt;Installation / Distribution &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#installation--distribution" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Both &lt;a href="https://snapcraft.io/lxd" target="_blank" rel="noreferrer"&gt;LXD&lt;/a&gt; and &lt;a href="https://snapcraft.io/multipass" target="_blank" rel="noreferrer"&gt;Multipass&lt;/a&gt; are available as &lt;a href="https://snapcraft.io" target="_blank" rel="noreferrer"&gt;snap packages&lt;/a&gt;, and that&amp;rsquo;s the most supported and recommended route for installation. LXD is available in the repos of a few other Linux distributions (including &lt;a href="https://search.nixos.org/options?channel=24.05&amp;amp;from=0&amp;amp;size=50&amp;amp;sort=relevance&amp;amp;type=packages&amp;amp;query=virtualisation.lxd." target="_blank" rel="noreferrer"&gt;NixOS&lt;/a&gt;, &lt;a href="https://wiki.archlinux.org/title/LXD" target="_blank" rel="noreferrer"&gt;Arch Linux&lt;/a&gt;), but the snap package also works great on Arch, Fedora, etc. I personally ran Multipass and LXD as &lt;a href="https://wiki.archlinux.org/title/Snap" target="_blank" rel="noreferrer"&gt;snaps on Arch Linux&lt;/a&gt; for a couple of years without issue.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;d like to follow along with the commands in this post, you can get set up like so:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;span class="lnt"&gt;8
&lt;/span&gt;&lt;span class="lnt"&gt;9
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap install lxd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo lxd init --minimal
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# If you&amp;#39;d like to use LXD/LXC commands without sudo&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# run the following command and logout/login:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# sudo usermod -aG lxd $USER&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap install multipass
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Early on in my journey with NixOS, I &lt;a href="https://github.com/NixOS/nixpkgs/pull/214193" target="_blank" rel="noreferrer"&gt;packaged&lt;/a&gt; Multipass for Nix. I still maintain (and use!) the NixOS module. This was my first ever contribution to NixOS &amp;ndash; a fairly colourful review process to say the least&amp;hellip;&lt;/p&gt;
&lt;p&gt;The result is that you can use something like the following in your configuration, and have multipass be available to you after a &lt;code&gt;nixos-rebuild switch&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-nix" data-lang="nix"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;virtualisation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;multipass&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;LXD has been maintained in NixOS for many years now - and around this time last year I &lt;a href="https://github.com/NixOS/nixpkgs/pull/241314" target="_blank" rel="noreferrer"&gt;added support&lt;/a&gt; for the LXD UI. The screenshots you see throughout this post are all from LXD UI running on a NixOS machine using the following configuration:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-nix" data-lang="nix"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;virtualisation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lxd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;zfsSupport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;ui&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;networking&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;firewall&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;trustedInterfaces&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;lxdbr0&amp;#34;&lt;/span&gt; &lt;span class="p"&gt;];&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="ubuntu-on-demand-with-multipass" class="relative group"&gt;Ubuntu on-demand with Multipass &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#ubuntu-on-demand-with-multipass" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href="https://multipass.run/" target="_blank" rel="noreferrer"&gt;Multipass&lt;/a&gt; is designed to provide simple on-demand access to Ubuntu VMs from any workstation - whether that workstation is running Linux, macOS or Windows. It is designed to replicate, in a lightweight way, the experience of provisioning a simple Ubuntu VM on a cloud.&lt;/p&gt;
&lt;p&gt;Multipass makes use of whichever hypervisor is most appropriate on a given platform. On Linux it can use QEMU, LXD or libvirt as backends; on Windows it can use Hyper-V or Virtualbox; and on macOS it can use QEMU or Virtualbox. Multipass refers to these backends as &lt;a href="https://multipass.run/docs/driver" target="_blank" rel="noreferrer"&gt;drivers&lt;/a&gt;.&lt;/p&gt;
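&lt;p&gt;As a quick illustration (see the drivers documentation linked above), the active driver can be inspected and switched via the &lt;code&gt;local.driver&lt;/code&gt; setting - stop any running instances before switching:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Show which hypervisor backend Multipass is currently using
multipass get local.driver

# Switch to the LXD backend (on a Linux host)
multipass set local.driver=lxd&lt;/code&gt;&lt;/pre&gt;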
&lt;p&gt;Multipass&amp;rsquo; scope is relatively limited, but in my opinion that&amp;rsquo;s what makes it so delightful to use. Once installed, the basic operation of Multipass couldn&amp;rsquo;t be simpler:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;span class="lnt"&gt;21
&lt;/span&gt;&lt;span class="lnt"&gt;22
&lt;/span&gt;&lt;span class="lnt"&gt;23
&lt;/span&gt;&lt;span class="lnt"&gt;24
&lt;/span&gt;&lt;span class="lnt"&gt;25
&lt;/span&gt;&lt;span class="lnt"&gt;26
&lt;/span&gt;&lt;span class="lnt"&gt;27
&lt;/span&gt;&lt;span class="lnt"&gt;28
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;❯ multipass shell
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Launched: primary
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Mounted &lt;span class="s1"&gt;&amp;#39;/home/jon&amp;#39;&lt;/span&gt; into &lt;span class="s1"&gt;&amp;#39;primary:Home&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Welcome to Ubuntu 24.04 LTS &lt;span class="o"&gt;(&lt;/span&gt;GNU/Linux 6.8.0-35-generic x86_64&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Documentation: https://help.ubuntu.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Management: https://landscape.canonical.com
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; * Support: https://ubuntu.com/pro
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; System information as of Tue Jun &lt;span class="m"&gt;25&lt;/span&gt; 11:17:55 BST &lt;span class="m"&gt;2024&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; System load: 0.4 Processes: &lt;span class="m"&gt;132&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Usage of /: 38.9% of 3.80GB Users logged in: &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Memory usage: 31% IPv4 address &lt;span class="k"&gt;for&lt;/span&gt; ens3: 10.93.253.20
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; Swap usage: 0%
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Expanded Security Maintenance &lt;span class="k"&gt;for&lt;/span&gt; Applications is not enabled.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;3&lt;/span&gt; updates can be applied immediately.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;1&lt;/span&gt; of these updates is a standard security update.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;To see these additional updates run: apt list --upgradable
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Enable ESM Apps to receive additional future security updates.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;See https://ubuntu.com/esm or run: sudo pro status
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ubuntu@primary:~$
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This one command will create the &lt;code&gt;primary&lt;/code&gt; instance if it doesn&amp;rsquo;t already exist, start it, and drop you into a &lt;code&gt;bash&lt;/code&gt; shell - normally in under a minute.&lt;/p&gt;
&lt;p&gt;Multipass has a neat trick: it bundles a reverse SSHFS server that enables easy mounting of the host&amp;rsquo;s home directory into the VM. This happens by default for the &lt;code&gt;primary&lt;/code&gt; instance. As a result, the instance I created above has my home directory mounted at &lt;code&gt;/home/ubuntu/Home&lt;/code&gt; - making it trivial to jump between editing code/files on my host and in the VM. I find this really useful - I can edit files on my workstation in my own editor, using my Yubikey to sign and push commits without having to worry about complicated provisioning or passthrough to the VM, and any files resulting from a build process on my workstation are instantly available in the VM for testing.&lt;/p&gt;
&lt;p&gt;Multipass instances can be customised a little. You won&amp;rsquo;t find complicated features like PCI-passthrough, but basic parameters can be tweaked. The commands I usually run for setting up a development machine when I&amp;rsquo;m working on Juju/Charms are:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Create a machine named &amp;#39;dev&amp;#39; with 16 cores, 40GiB RAM and 100GiB disk&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass launch noble -n dev -c &lt;span class="m"&gt;16&lt;/span&gt; -m 40G -d 100G
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Mount my home directory into the VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass mount /home/jon dev:/home/ubuntu/Home
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell in the VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass shell dev
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Once you&amp;rsquo;re done with an instance, you can remove it like so:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass remove dev
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass purge
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Multipass does have some more interesting features, though most of my usage is represented above. One feature that might be of more interest for macOS or Windows users is &lt;a href="https://multipass.run/docs/using-aliases" target="_blank" rel="noreferrer"&gt;aliases&lt;/a&gt;. This feature enables you to alias local commands to their counterparts in a Multipass VM, meaning, for example, that every time you run &lt;code&gt;docker&lt;/code&gt; on your Mac, the command is actually executed inside the Multipass VM:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Example of mapping the local `mdocker` command -&amp;gt; `docker` in&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# the multipass VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;multipass &lt;span class="nb"&gt;alias&lt;/span&gt; dev:docker mdocker
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Multipass will launch the latest Ubuntu LTS by default, but there are a number of other images available - including some &amp;ldquo;appliance&amp;rdquo; images for applications like Nextcloud, Mosquitto, etc.&lt;/p&gt;
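&lt;p&gt;Any image shown by &lt;code&gt;multipass find&lt;/code&gt; can be launched by passing its name to &lt;code&gt;multipass launch&lt;/code&gt; - a rough sketch, with illustrative instance names:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;# Launch an Ubuntu Core 22 instance
multipass launch core22 -n core-test
# Launch the Nextcloud appliance image
multipass launch appliance:nextcloud -n nextcloud
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;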
&lt;p&gt;There is also the concept of &lt;a href="https://multipass.run/docs/blueprint" target="_blank" rel="noreferrer"&gt;Blueprints&lt;/a&gt;, which are essentially recipes for virtual machines with a given purpose. These are curated partly by the Multipass team, and partly by the community. A blueprint enables the author to specify cores, memory, disk, cloud-init data, aliases, health checks and more. The recipes themselves are maintained &lt;a href="https://github.com/canonical/multipass-blueprints/tree/main/v1" target="_blank" rel="noreferrer"&gt;on GitHub&lt;/a&gt;, and you can see the list of available images/blueprints using &lt;code&gt;multipass find&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;span class="lnt"&gt;21
&lt;/span&gt;&lt;span class="lnt"&gt;22
&lt;/span&gt;&lt;span class="lnt"&gt;23
&lt;/span&gt;&lt;span class="lnt"&gt;24
&lt;/span&gt;&lt;span class="lnt"&gt;25
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;❯ multipass find
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Image Aliases Version Description
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;core core16 &lt;span class="m"&gt;20200818&lt;/span&gt; Ubuntu Core &lt;span class="m"&gt;16&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;core18 &lt;span class="m"&gt;20211124&lt;/span&gt; Ubuntu Core &lt;span class="m"&gt;18&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;core20 &lt;span class="m"&gt;20230119&lt;/span&gt; Ubuntu Core &lt;span class="m"&gt;20&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;core22 &lt;span class="m"&gt;20230717&lt;/span&gt; Ubuntu Core &lt;span class="m"&gt;22&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;20.04 focal &lt;span class="m"&gt;20240612&lt;/span&gt; Ubuntu 20.04 LTS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;22.04 jammy &lt;span class="m"&gt;20240614&lt;/span&gt; Ubuntu 22.04 LTS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;23.10 mantic &lt;span class="m"&gt;20240619&lt;/span&gt; Ubuntu 23.10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;24.04 noble,lts &lt;span class="m"&gt;20240622&lt;/span&gt; Ubuntu 24.04 LTS
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;daily:24.10 oracular,devel &lt;span class="m"&gt;20240622&lt;/span&gt; Ubuntu 24.10
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;appliance:adguard-home &lt;span class="m"&gt;20200812&lt;/span&gt; Ubuntu AdGuard Home Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;appliance:mosquitto &lt;span class="m"&gt;20200812&lt;/span&gt; Ubuntu Mosquitto Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;appliance:nextcloud &lt;span class="m"&gt;20200812&lt;/span&gt; Ubuntu Nextcloud Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;appliance:openhab &lt;span class="m"&gt;20200812&lt;/span&gt; Ubuntu openHAB Home Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;appliance:plexmediaserver &lt;span class="m"&gt;20200812&lt;/span&gt; Ubuntu Plex Media Server Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Blueprint Aliases Version Description
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;anbox-cloud-appliance latest Anbox Cloud Appliance
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;charm-dev latest A development and testing environment &lt;span class="k"&gt;for&lt;/span&gt; charmers
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker 0.4 A Docker environment with Portainer and related tools
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;jellyfin latest Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube latest minikube is &lt;span class="nb"&gt;local&lt;/span&gt; Kubernetes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ros-noetic 0.1 A development and testing environment &lt;span class="k"&gt;for&lt;/span&gt; ROS Noetic.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ros2-humble 0.1 A development and testing environment &lt;span class="k"&gt;for&lt;/span&gt; ROS &lt;span class="m"&gt;2&lt;/span&gt; Humble.
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The team also recently introduced the ability to &lt;a href="https://multipass.run/docs/snapshot" target="_blank" rel="noreferrer"&gt;snapshot&lt;/a&gt; virtual machines, though I must confess I&amp;rsquo;ve not tried it out in anger yet.&lt;/p&gt;
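&lt;p&gt;Going by the linked docs, the snapshot workflow looks roughly like the following - a sketch I haven&amp;rsquo;t exercised much myself, with illustrative instance and snapshot names:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;# Snapshots are taken against a stopped instance
multipass stop dev
multipass snapshot dev
# List snapshots across instances
multipass list --snapshots
# Restore the instance from the (default-named) snapshot
multipass restore dev.snapshot1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;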
&lt;h2 id="lxd-for-vms" class="relative group"&gt;LXD… for VMs? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#lxd-for-vms" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;For many people, LXD is a container manager - and indeed for many years it could &amp;ldquo;only&amp;rdquo; manage containers. LXD was built for running &amp;ldquo;system containers&amp;rdquo;, as opposed to &amp;ldquo;application containers&amp;rdquo; like Docker/Podman (or Kubernetes). Running a container with LXD is more similar to to running a container with &lt;code&gt;systemd-nspawn&lt;/code&gt;, but with the added bonus that it can &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/clustering/" target="_blank" rel="noreferrer"&gt;cluster&lt;/a&gt; across machines, &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/authentication/" target="_blank" rel="noreferrer"&gt;authenticate against different identity backends&lt;/a&gt;, and manage more sophisticated &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/explanation/storage/" target="_blank" rel="noreferrer"&gt;storage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Because LXD manages system containers, each container gets its own &lt;code&gt;systemd&lt;/code&gt;, and behaves more like a &amp;lsquo;lightweight VM&amp;rsquo; sharing the host&amp;rsquo;s kernel. This turns out to be a very interesting property for people who want to get some of the benefits of containerisation (i.e. higher workload density, easier snapshotting, migration, etc.) with more legacy applications that might struggle to run effectively in application containers.&lt;/p&gt;
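&lt;p&gt;This is easy to see for yourself: launch a system container and inspect PID 1 inside it - a quick sketch, with an illustrative container name:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;# Launch an Ubuntu 24.04 LTS system container
lxc launch ubuntu:noble ct
# The container boots its own systemd as PID 1...
lxc exec ct -- ps -p 1 -o comm=
# ...and services are managed with systemctl as usual
lxc exec ct -- systemctl is-system-running
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;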
&lt;p&gt;But this post is about virtual machines. Since the 4.0 LTS release, LXD has also supported running VMs with &lt;code&gt;qemu&lt;/code&gt;. The API for launching a container is identical to that for launching a virtual machine. Better still, Canonical provides images for lots of different Linux distributions, and even desktop variants of some images - meaning you can quickly get up and running with a wide range of distributions, for example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch a Ubuntu 24.04 LTS VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc launch ubuntu:noble ubuntu --vm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell inside the VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc &lt;span class="nb"&gt;exec&lt;/span&gt; ubuntu bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch a Fedora 40 VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc launch images:fedora/40 fedora --vm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell inside the VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc &lt;span class="nb"&gt;exec&lt;/span&gt; fedora bash
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch an Arch Linux VM (doesn&amp;#39;t support secure boot yet)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc launch images:archlinux arch --vm -c security.secureboot&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a shell inside the VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc &lt;span class="nb"&gt;exec&lt;/span&gt; arch bash
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;You can get a full list of virtual machine images like so:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc image ls images: --format&lt;span class="o"&gt;=&lt;/span&gt;compact &lt;span class="p"&gt;|&lt;/span&gt; grep VIRTUAL-MACHINE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="lxd-desktop-vms" class="relative group"&gt;LXD Desktop VMs &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#lxd-desktop-vms" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Another neat trick for LXD is desktop virtual machines. These are launched with curated images that drop you into a minimal desktop environment that&amp;rsquo;s configured to automatically login. This has to be one of my favourite features of LXD!&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch a Ubuntu 24.04 LTS desktop VM and get a console&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc launch images:ubuntu/24.04/desktop ubuntu --vm --console&lt;span class="o"&gt;=&lt;/span&gt;vga
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;&lt;a href="02.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_8603299554659120.webp 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_cd8f409b5cd6a090.webp 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_2b99d6ccc9b2e8cf.webp 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_3046a3ace3f05d97.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1329"
height="1053"
class="mx-auto my-0 rounded-md"
alt="gnome desktop from ubuntu 24.04 lts running in spice viewer"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_96b6de961228d3be.png" srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_2030a0ace072ea60.png 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_96b6de961228d3be.png 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_a851dd3d61af648c.png 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/02_hu_4082323e1227c560.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The guest is pre-configured to work correctly with SPICE, so that means clipboard integration, automatic resizing with the viewer window, USB redirection, etc. The same also works for other distros, as before:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;span class="lnt"&gt;8
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Launch an Arch desktop VM&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc launch images:archlinux/desktop-gnome arch --vm &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -c limits.cpu&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -c limits.memory&lt;span class="o"&gt;=&lt;/span&gt;16GiB &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; -c security.secureboot&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Get a console using a separate command (if preferred!)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc console --type&lt;span class="o"&gt;=&lt;/span&gt;vga arch
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="lxd-ui-" class="relative group"&gt;LXD UI 😍 &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#lxd-ui-" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Back in June 2023, Canonical announced early access to the LXD graphical user interface &lt;a href="https://ubuntu.com/blog/lxd_ui" target="_blank" rel="noreferrer"&gt;on their blog&lt;/a&gt;. The LXD UI is now generally available and enabled by default from LXD 5.21 onwards - though you can find instructions for enabling it on earlier versions in the &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/howto/access_ui/" target="_blank" rel="noreferrer"&gt;docs&lt;/a&gt;. The summary is:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-shell" data-lang="shell"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;lxc config &lt;span class="nb"&gt;set&lt;/span&gt; core.https_address :8443
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo snap &lt;span class="nb"&gt;set&lt;/span&gt; lxd ui.enable&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo systemctl reload snap.lxd.daemon
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;In my opinion, the LXD UI is one of the best ways - if not &lt;em&gt;the best&lt;/em&gt; - to interact with a hypervisor yet. As a full-stack web application, it is independent of the various GUI toolkits on Linux and, provided the cluster is reachable over the network, can be accessed in the same way from Windows, macOS and Linux.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve used other hypervisors with web UIs, particularly Proxmox, and I&amp;rsquo;ve found the experience with LXD UI to be very smooth, even from the early days. The UI can walk you through the creation and management of VMs, containers, storage and networking. The UI can also give you a nice concise summary of each instance (below is the summary of the VM created using the command in the last section):&lt;/p&gt;
&lt;p&gt;&lt;a href="03.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_4108535946a3ab6c.webp 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_c7c9035457f7a91f.webp 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_9eb32f970711e1e.webp 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_c342df0b23541960.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1622"
height="1273"
class="mx-auto my-0 rounded-md"
alt="lxd ui showing a virtual machine instance summary"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_93da003b5711f596.png" srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_49034627176ef198.png 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_93da003b5711f596.png 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_39241ec8979da25f.png 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/03_hu_5d19c34fb51d36dd.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;One of my favourite features is the web-based SPICE console for desktop VMs, which combined with the management features makes it trivial to stand up a desktop VM and start testing:&lt;/p&gt;
&lt;p&gt;&lt;a href="04.png"&gt;
&lt;figure&gt;
&lt;picture
class="mx-auto my-0 rounded-md"
&gt;
&lt;source
srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_5c51dcd053016602.webp 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_86621a60933e6ff1.webp 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_f027f814146e4f12.webp 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_fd0a904398b17f9a.webp 1320w
"
sizes="100vw"
type="image/webp"
/&gt;
&lt;img
width="1622"
height="1273"
class="mx-auto my-0 rounded-md"
alt="lxd ui showing a web-based spice console with a gnome desktop running on arch linux inside"
loading="lazy" decoding="async"
src="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_58e6d8ef56bc8dd1.png" srcset="https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_135013a8349dbd57.png 330w,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_58e6d8ef56bc8dd1.png 660w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_8d768e9c0a20a841.png 1024w
,https://jnsgr.uk/2024/06/desktop-vms-lxd-multipass/04_hu_d72fa6b816d14c2a.png 1320w
"
sizes="100vw"
/&gt;
&lt;/picture&gt;
&lt;/figure&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="why-both" class="relative group"&gt;Why both? &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#why-both" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;By now you&amp;rsquo;ve probably realised that LXD can do everything Multipass can do, and give much more flexibility - and that&amp;rsquo;s true. LXD is a full-featured hypervisor which supports much more sophisticated networking, &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/reference/devices/#devices" target="_blank" rel="noreferrer"&gt;PCI-passthrough&lt;/a&gt;, clustering, integration with enterprise identity providers, observability through Prometheus &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/metrics/" target="_blank" rel="noreferrer"&gt;metrics&lt;/a&gt; and &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/howto/logs_loki/" target="_blank" rel="noreferrer"&gt;Loki log-forwarding&lt;/a&gt;, etc.&lt;/p&gt;
&lt;p&gt;Multipass is small, lean and very easy to configure. If I just want a quick command-line only Ubuntu VM to play with, I still find &lt;code&gt;multipass shell&lt;/code&gt; to be most convenient - especially with the automatic home directory mounting.&lt;/p&gt;
&lt;p&gt;When I want to work with desktop VMs, interact with non-Ubuntu distributions, or work more closely with hardware, I use LXD. I was already a bit of a closet LXD fan, having previously described it as a &amp;ldquo;secret weapon&amp;rdquo; for Canonical, but since the introduction of the LXD UI, I&amp;rsquo;m a fully paid-up member of the LXD fan club 😉&lt;/p&gt;
&lt;h2 id="summary" class="relative group"&gt;Summary &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#summary" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;As I mentioned in the opening paragraphs - both LXD and Multipass have become central to a lot of my technical workflows. The reason I packaged Multipass for NixOS, was that I wanted to dive into daily-driving NixOS, but not without Multipass! In my opinion, the LXD UI is one of the most polished experiences for managing containers and VMs on Linux, and I&amp;rsquo;m really excited for what that team cooks up next.&lt;/p&gt;</description></item></channel></rss>