Heads Up: cirrus-ci is shutting down June 1st
Hi,
As the subject says, cirrus-ci, which cfbot uses to run CI and that one can
(for now) enable on one's own repository, is shutting down.
https://cirruslabs.org/ buries the lede a bit, but further down it says:
"Cirrus CI will shut down effective Monday, June 1, 2026."
I can't say I'm terribly surprised, they had been moving a lot slower in the
last few years.
The shutdown window is pretty short, so we'll have to do something soon. Glad
that it didn't happen a few months ago, putting the shutdown before the
feature freeze. This is probably close to the least bad time it could happen
with a short window.
I think having cfbot and CI that one could run on one's own repository, without
sending a mail to the community, has improved the development process a lot.
So clearly we're going to have to do something. I certainly could not have
done stuff like AIO without it.
I'd be interested in feedback about how high folks value different aspects:
1) CI software can be self hosted
E.g. to prevent at least the cfbot case from being unpredictably abandoned
again.
2) CI software is open source
E.g. out of a principled stance, or control concerns.
3) CI runs quickly
This matters e.g. for accepting running in containers and whether it's
crucial to be able to have our images with everything pre-installed.
4) CI tests as many operating systems as possible
A lot of systems just support linux, plenty support macos, some support
windows. Barely any support anything beyond that.
5) CI can be enabled on one's own repositories
Cfbot obviously allows everyone to get patches tested in some way, but sending
patch sets to the list just to get a CI run gets noisy quite fast.
There are plenty of open source CI solutions, but clearly it's not viable
for everyone to set that up for themselves. Plenty of providers do allow doing
so, but the overlap of this, open source (2), and multiple platforms (4) is
small, if it exists at all.
6) There need to be free credits for running at least some CI on one's own
repository
This makes the overlapping constraints mentioned in 5) even smaller.
There are several platforms that do provide a decent amount of CI for a
monthly charge of < 10 USD.
7) Provide CI compute for "well known contributors" for free in their own
repositories
An alternative to 6) - with some CI solutions - can be to add folks to some
team that allows them to use community resources (which so far have been
donated). The problem with that is that it's administratively annoying,
because one does need to be careful, or CI will be used to do
cryptocurrency mining or such within a few days.
For some context about how much CI we have been running, here's the daily
average for cfbot and postgres/postgres CI:
- 1464 core hours (full cores, not SMT), all CI jobs use 4 cores
- 396 core hours of which were windows (visible due to the licensing cost)
- 40 GB of artifacts
- 83 GB of artifacts downloaded externally
- doesn't include macos, which I can't track as easily, due to it running on
self-hosted runners rather than on GCP, which provided the above numbers
Greetings,
Andres Freund
On Fri, Apr 10, 2026 at 8:55 AM Andres Freund <andres@anarazel.de> wrote:
4) CI tests as many operating systems as possible
A lot of system just support linux, plenty support macos, some support
windows. Barely any support anything beyond that.
Nested virtualisation to the rescue?
On Thu, 9 Apr 2026 at 22:55, Andres Freund <andres@anarazel.de> wrote:
I'd be interested in feedback about how high folks value different aspects:
My thoughts below.
1) CI software can be self hosted
E.g. to prevent at least the cfbot case from being unpredictably abandoned
again.
Low, I personally don't want to manage self hosting it. Self-hosted
software can just as easily be abandoned. i.e. I'd expect GitHub
Actions to outlive underfunded open source CI software.
2) CI software is open source
E.g. out of a principled stance, or control concerns.
I don't care
3) CI runs quickly
This matters e.g. for accepting running in containers and whether it's
crucial to be able to have our images with everything pre-installed.
Important. We should definitely be able to pre-install stuff for most
OSes. I think running stuff in containers would be fine.
4) CI tests as many operating systems as possible
I think we at minimum need linux+macos+windows. Windows is by far the
system that fails most often for me. BSDs would be good, but they
often tend to be fine if macos and linux work. Personally, I think
on Cirrus the BSDs didn't meet a useful signal-to-flakiness-noise
ratio (i.e. they tended to mostly break randomly for me).
5) CI can be enabled on one's own repositories
...
6) There need to be free credits for running at least some CI on one's own
repository
...
7) Provide CI compute for "well known contributors" for free in their own
repositories
I would say it's a hard requirement that people can run CI without
spamming the list. I don't think that necessarily has to be in
someone's own repository, e.g. having a way for committers to grant
some limited number (e.g. 100) of CI hours to someone submitting
their first patch seems fairly low risk. If they continue contributing
they can receive a recurring number of hours or unlimited access.
Hi,
On Fri, 10 Apr 2026 at 14:31, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Thu, 9 Apr 2026 at 22:55, Andres Freund <andres@anarazel.de> wrote:
I'd be interested in feedback about how high folks value different aspects:
My thoughts below.
I agree with all of Jelte's points except:
4) CI tests as many operating systems as possible
I think we at minimum need linux+macos+windows. Windows is by far the
system that fails most often for me. BSDs would be good, but they
often tend to be fine if macos and linux work. Personally, I think
on Cirrus the BSDs didn't meet a useful signal-to-flakiness-noise
ratio (i.e. they tended to mostly break randomly for me).
I think BSDs are quite capable of catching issues that others can't
catch. That has at least been my experience with OpenBSD.
However, I agree that OpenBSD and NetBSD tasks are flaky; I think that
is mostly because we generate these VM images from scratch (i.e. other
operating systems' VM images were already available on GCP). I don't
think FreeBSD is flaky.
--
Regards,
Nazir Bilal Yavuz
Microsoft
Hi!
On Thu, Apr 9, 2026 at 11:55 PM Andres Freund <andres@anarazel.de> wrote:
The shutdown window is pretty short, so we'll have to do something soon. Glad
that it didn't happen a few months ago, putting the shutdown before the
feature freeze. This is probably close to the least bad time it could happen
with a short window.
+1
I'd be interested in feedback about how high folks value different aspects:
...
It's hard for me to judge priorities, but I have a proposal on how we
can try to handle this.
Migrate to Open Source CI software, and run it on (cheap) cloud + get
sponsorship to cover the migration cost. This should protect us from
disasters like this. In the worst case we would need to look for a
different cloud or a different sponsor.
Provide a CI workflow for GitHub Actions on our repository. This
wouldn't provide the plurality of platforms that we have now, but at
least everybody can get some basic CI coverage for free.
What do you think?
------
Regards,
Alexander Korotkov
Supabase
On Fri, Apr 10, 2026 at 3:23 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:
On Thu, Apr 9, 2026 at 11:55 PM Andres Freund <andres@anarazel.de> wrote:
...
Migrate to Open Source CI software, and run it on (cheap) cloud + get
sponsorship to cover the migration cost.
Sorry, I meant sponsorship to cover the cloud cost.
------
Regards,
Alexander Korotkov
Supabase
On 09.04.26 22:55, Andres Freund wrote:
I'd be interested in feedback about how high folks value different aspects:
1) CI software can be self hosted
E.g. to prevent at least the cfbot case from being unpredictably abandoned
again.
2) CI software is open source
E.g. out of a principled stance, or control concerns.
I think we should work toward that in the long run. Open-source
software should also have an open-source (and distributed, and
privacy-respecting, and reusable, etc.) development process.
In the short run, meaning something that is plausible to get ready
between now and June/July, using some stopgap from an existing
established provider (such as GH actions) would probably be better.
3) CI runs quickly
This matters e.g. for accepting running in containers and whether it's
crucial to be able to have our images with everything pre-installed.
4) CI tests as many operating systems as possible
A lot of system just support linux, plenty support macos, some support
windows. Barely any support anything beyond that.
5) CI can be enabled on one's own repositories
Cfbot obviously allows everyone to test patches some way, but sending patch
sets to the list just to get a CI run obviously gets noisy quite fast.
There are plenty of open source CI solutions, but clearly it's not viable
for everyone to set that up for themselves. Plenty providers do allow doing
so, but the overlap of this, open source (2), multiple platforms (4) is
small if it exists.
This is the most important one, for me.
I think it would be even more useful if one could run the whole thing,
or most of the thing, locally. I mean, I can run all kinds of VMs
locally, all the pieces of this already exist. But it needs some
integration to build the images locally, and then run the build and test
processes in these images. This wouldn't cover everything (e.g., you can't
virtualize macOS unless on macOS, IIRC), but I shouldn't really need to
push my code half-way around the world just to do a build run on NetBSD.
This could be someone's $season of code project.
7) Provide CI compute for "well known contributors" for free in their own
repositories
An alternative to 6) - with some CI solutions - can be to add folks to some
team that allows them to use community resources (which so far have been
donated). The problem with that is that it's administratively annoying,
because one does need to be careful, or CI will be used to do
cryptocurrency mining or such within a few days.
In a way, well known contributors can fend for themselves. We want to
get as many new or occasional contributors to run this so that the
patches build and test successfully before anyone else has to look at them.
On Fri, Apr 10, 2026 at 01:31:38PM +0200, Jelte Fennema-Nio wrote:
On Thu, 9 Apr 2026 at 22:55, Andres Freund <andres@anarazel.de> wrote:
I'd be interested in feedback about how high folks value different aspects:
My thoughts below.
1) CI software can be self hosted
E.g. to prevent at least the cfbot case from being unpredictably abandoned
again.
Low, I personally don't want to manage self hosting it. Self-hosted
software can just as easily be abandoned. i.e. I'd expect GitHub
Actions to outlive underfunded open source CI software.
Uh, I actually think the opposite. While proprietary software doesn't
disappear, it seems to become obsolete (underfunded development) or
prohibitively expensive sooner than open source. The database industry
has certainly shown that in the past 30 years. Also, four months ago
GitHub wanted to charge for self-hosted actions, which supports
"prohibitively expensive":
https://www.reddit.com/r/devops/comments/1po8hj5/github_actions_introducing_a_perminute_fee_for/
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
Do not let urgent matters crowd out time for investment in the future.
On 09/04/2026 23:55, Andres Freund wrote:
As the subject says, cirrus-ci, which cfbot uses to run CI and that one can
(for now) enable on one's own repository, is shutting down.
https://cirruslabs.org/ buries the lede a bit, but further down it says:
"Cirrus CI will shut down effective Monday, June 1, 2026."
I can't say I'm terribly surprised, they had been moving a lot slower in the
last few years.
Darn, I liked Cirrus CI. One reason being precisely that it has been
stable, i.e. moved slowly, for years :-).
I think having cfbot and CI that one could run on ones own repository, without
sending a mail to the community, has improved the development process a lot.
So clearly we're going to have to do something. I certainly could not have
done stuff like AIO without it.
+1. I rely heavily on cirrus CI nowadays to validate before I push.
I'd be interested in feedback about how high folks value different aspects:
1) CI software can be self hosted
E.g. to prevent at least the cfbot case from being unpredictably abandoned
again.2) CI software is open source
E.g. out of a principled stance, or control concerns.
These probably go together.
I think it's important that you can self-host. Even with cirrus-ci I
actually wished there was an easy way to run the jobs locally. I don't
know how often I'd really do it, but especially developing and testing
the ci yaml files is painful when you can't run it locally.
3) CI runs quickly
This matters e.g. for accepting running in containers and whether it's
crucial to be able to have our images with everything pre-installed.
Pretty important. "quickly" is pretty subjective though, I'm not sure
what number to put to it. Cirrus-CI has felt fast enough.
4) CI tests as many operating systems as possible
A lot of system just support linux, plenty support macos, some support
windows. Barely any support anything beyond that.
Windows support is pretty important as it's different enough from
others. Macos is definitely good to have too. For others, we have the
buildfarm.
5) CI can be enabled on one's own repositories
Cfbot obviously allows everyone to test patches some way, but sending patch
sets to the list just to get a CI run obviously gets noisy quite fast.
There are plenty of open source CI solutions, but clearly it's not viable
for everyone to set that up for themselves. Plenty providers do allow doing
so, but the overlap of this, open source (2), multiple platforms (4) is
small if it exists.
This is important. I run the CI as part of development on my own
branches all the time.
If it's easy to self-host, that might cover it.
6) There need to be free credits for running at least some CI on one's own
repository
This makes the overlapping constraints mentioned in 5) even smaller.
There are several platforms that do provide a decent amount of CI for a
monthly charge of < 10 USD.
Not important. For running on one's own repository, it's totally
reasonable that you pay for it yourself. Especially if you can self-host
for free.
7) Provide CI compute for "well known contributors" for free in their own
repositories
An alternative to 6) - with some CI solutions - can be to add folks to some
team that allows them to use community resources (which so far have been
donated). The problem with that is that it's administratively annoying,
because one does need to be careful, or CI will be used to do
cryptocurrency mining or such within a few days.
Not important. Active contributors can easily pay for what they use, or
self-host.
- Heikki
On 4/10/26 06:29, Thomas Munro wrote:
On Fri, Apr 10, 2026 at 8:55 AM Andres Freund <andres@anarazel.de> wrote:
4) CI tests as many operating systems as possible
A lot of systems just support linux, plenty support macos, some support
windows. Barely any support anything beyond that.
Nested virtualisation to the rescue?
I used this to migrate our FreeBSD tests [1] and it worked out OK. The
only downside is it doesn't look like you can split out steps so all the
commands end up logged together.
Regards,
-David
[1]: https://github.com/pgbackrest/pgbackrest/blob/main/.github/workflows/test.yml#L148
Hi,
I've started thinking about moving away from GitHub Actions myself, and was wondering what else was out there that fulfills a bunch of these needs. Feedback I got and some brief research turned up Woodpecker CI [0].
[0]: https://woodpecker-ci.org/
On Apr 13, 2026, at 04:34, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
These probably go together.
I think it's important that you can self-host. Even with cirrus-ci I actually wished there was an easy way to run the jobs locally. I don't know how often I'd really do it, but especially developing and testing the ci yaml files is painful when you can't run it locally.
While Woodpecker promotes its Docker images, esp. for integration with Codeberg and other Forgejo services, it's a Go app, so it compiles for quite a lot of platforms, and has a "local mode" in which, from what I understand, you can run it on whatever trusted hardware you'd like.
So if we have, say, a Mac Mini plus an arm and amd system capable of virtualizing Linux, BSD, etc., perhaps we’d be able to get the coverage we need and host the results in a self-hosted Woodpecker service?
As I say, I’ve just started to kind of cast about for alternatives, so don’t know a lot about it myself, but on the surface it looks promising.
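For reference, the local mode is driven by Woodpecker's CLI. A minimal sketch of what that might look like (untested; the pipeline contents below are hypothetical, but `woodpecker-cli exec` is the documented command for running a pipeline without a server):

```shell
# .woodpecker.yml - a hypothetical minimal build pipeline. Steps run in
# Docker containers by default; a "local" backend that runs directly on
# the host also exists.
cat > .woodpecker.yml <<'EOF'
steps:
  build:
    image: debian:stable
    commands:
      - apt-get update && apt-get install -y build-essential
      - gcc --version
EOF

# Run the pipeline on this machine, no Woodpecker server needed.
woodpecker-cli exec .woodpecker.yml
```

That would at least address the pain of iterating on CI YAML without pushing anywhere.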
Best,
David
On Mon, Apr 13, 2026 at 11:53 PM David Steele <david@pgbackrest.org> wrote:
On 4/10/26 06:29, Thomas Munro wrote:
Nested virtualisation to the rescue?
I used this to migrate our FreeBSD tests [1] and it worked out OK. The
only downside is it doesn't look like you can split out steps so all the
commands end up logged together.
    - name: Run Test
      uses: cross-platform-actions/action@v1.0.0
      with:
        operating_system: freebsd
        version: ${{matrix.os.version}}
        run: |
          uname -a
          sudo pkg update && sudo pkg upgrade -y libiconv && sudo pkg install -y bash git postgresql-libpqxx pkgconf libxml2 gmake perl5 libyaml p5-YAML-LibYAML rsync meson
          cd .. && perl ${GITHUB_WORKSPACE?}/test/test.pl --vm-max=2 --no-coverage --no-valgrind --module=command --test=backup --test=info --test=archive-push
Nice!
I guess the problems with this are:
1. It has to install the packages every time because it's not yet
using a pre-prepared image.
2. It has no ccache memory.
3. It has lost all that user-friendly stuff like artefact
archival/browsing, core file debugging etc.
4. IIRC the log URLs are not "public", you have to be logged
into an account to view them.
(That 4th point was one of Cirrus's unique advantages at the time we
selected it. We wanted to be able to share URLs for discussion on the
mailing list without requiring everyone to be a GitHub user.)
Perhaps for point 1, we could publish fast-start qemu images for
Debian, FreeBSD, NetBSD, OpenBSD*. The pg-vm-images repo that Andres
and Bilal maintain currently uploads images to Google Cloud's image
repository where Cirrus VMs can boot from them, but it could instead
publish qemu images to our own public URLs. I'm picturing a bunch of
images available as
https://ci.postgresql.org/images/qemu/arm64/{freebsd-15,debian-13,...}-with-postgresql-dependencies.img.
The point of per-arch variants would be to match common hosts for fast
kernel hypervisor support, e.g. on a Mac you want arm64, though you
could still run amd64 slowly if you need to. Could even do ppc and
riscv with emulation.
Qemu images should hopefully be usable in many different environments:
1. We could run them locally with some one-button command, and also
have images you can log into and hack on if you want.
2. We could run all of them or just the license-encumbered ones on
public clouds (not through a CI service) with some one-button command,
if you have an account.
3. We could use them in people's private GitHub/GitLab/... accounts
as you showed, just add
image_url=https://ci.postgresql.org/images/qemu/....
4. Cfbot could do any of those things, not sure what would be best.
For the license-encumbered OSes, we could at least make disk images or
archives containing a MacPorts installation or
bunch-of-installed-libraries-for-Windows, but not including the OS.
Just mount/unpack as /opt or C:\pg-packages or whatever, I guess, if
you can figure out how to get a VM running ... somewhere. Perhaps
there is some way to make project-owned resources (MacMinis, Windows
VMs) available to our community too, but IDK how that would work.
Some random half-baked thoughts about the ccache, browsing, etc problems:
1. Local qemu: we could use overlay images so that your downloaded
copy of X-with-postgres-dependencies.img remains read-only. Create a
new empty overlay image for each clean run, and if you need to inspect
logs, core files, you can just log in before the next run wipes it.
2. Local qemu: we could mount a separate disk image as /cache that
survives between runs and can be wiped any time.
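Concretely, 1 and 2 could be something like this (just a sketch; the image URL follows the hypothetical naming scheme above, and accelerator/machine flags vary by host):

```shell
# One-time: fetch the read-only base image (hypothetical URL, per the
# naming imagined above) and create a persistent cache disk for ccache.
curl -LO https://ci.postgresql.org/images/qemu/arm64/freebsd-15-with-postgresql-dependencies.img
qemu-img create -f qcow2 cache.qcow2 20G

# Per run: a throwaway overlay keeps the downloaded base image pristine;
# delete and recreate it for each clean run.
qemu-img create -f qcow2 \
    -b freebsd-15-with-postgresql-dependencies.img -F raw run.qcow2

# Boot with the overlay as the root disk and the cache disk attached,
# to be mounted as /cache inside the guest.
# (-accel hvf on macOS; use -accel kvm on Linux.)
qemu-system-aarch64 -M virt -accel hvf -cpu host -smp 4 -m 4G \
    -drive file=run.qcow2,if=virtio \
    -drive file=cache.qcow2,if=virtio \
    -nographic
```

Wiping the cache is then just deleting cache.qcow2, and inspecting a failed run is just booting the leftover overlay again.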
3. Public CI system like GitHub actions: I suppose we could run our
own ccache, artefact, log hosting service that it could push to...
that was something I already wondered about under Cirrus due to
various disk space and retention problems... but I'm quite hesitant to
get tangled up in running "public" services and unsure how you'd
control access.
I would at least like to think about trying to make cfbot
capitalism-proof. I may be underestimating the difficulty, but I keep
wondering if cfbot should at least be able to do everything itself,
with some combination of local qemu, qemu-on-project-Mac-fleet, and
public cloud VMs controlled directly. It doesn't really *need* to
depend on ephemeral venture capital-powered CI companies, it was just
nice to make it use the exact same CI setup as you could use for
yourself in your GitHub account. I'm imagining that it would still
push branches to GitHub, since that's a nice interface to browse code
on, and I suppose it might even be possible to publish our own
minimalist GitHub plugin that allows cfbot to push its green/red
result indicators to it since that's clearly something that external
providers can do (as well as pushing them to the commitfest UI as
now). But if you clicked them, you'd be taken to a really primitive
cfbot web interface where you could browse logs and artefacts retained
for N days. In other words, an extremely cut down and limited
"let's-make-our-own-CI" project, which doesn't have to tackle the much
harder "let's-make-our-own-semi-public-CI-platform" project. I like
the idea of at least having such a mode as an insurance policy anyway,
but I'm not sure what nitty gritty details might make it hard to pull
off... In this thought experiment, people could continue to work
separately on making personal CI work in various ways, GitHub, GitLab,
whatever else, and local, and all ways of doing it would be using the
same scripts and VM images.
* ... and AFAIK we could add illumos to the set if we wanted, in the
past a couple of us tried to get that going but ran into ... I think
it was driver problems? ... when using GCP VMs, but it definitely
works in qemu VMs as that cross-platform-actions project shows. Every
OS project makes sure it can boot in qemu. Even AIX can boot in qemu,
if you have a license.