This is necessary for the NixOS EFI boot machinery to pick up the new
derivation when switching to it; otherwise the machine will not boot
into the newly switched configuration.
Change-Id: I8b18956d2afeea09c38462f09a00c345cf86f80d
Apparently, at least parts of the M610 (e.g. the iDRAC) index exact byte
offsets in the FRU EEPROM instead of parsing it, and were thus reading
our FRU's manufacturer/product name incorrectly. This fixes that.
Change-Id: I18d62ea79df7b7bf30cec3251da2c32d25b73507
This fixes CVE-2021-3450 and CVE-2021-3449.
Deployed on prod:
$ kubectl -n nginx-system exec nginx-ingress-controller-5c69c5cb59-2f8v4 -- openssl version
OpenSSL 1.1.1k 25 Mar 2021
Change-Id: I7115fd2367cca7b687c555deb2134b22d19a291a
This is a first pass at a Bazel remote cache. It notably does not yet
implement authentication, upload limits, or garbage collection.
We won't be deploying it to prod until these are done.
Change-Id: I70a89dbe8b3ec933b2ce82e234a969e8337ba1d9
This adds github.com/minio/minio-go, a library that can be used to
access S3-like storage, e.g. our own radosgw. It's significantly lighter
than the entire Go AWS SDK, and also seems to be more idiomatic.
Change-Id: I1e18c7665b58480fb72e789692aa7f37816cd28f
Gerrit 3.3.1 seems to have introduced a bug which breaks the reviewers
column in the dashboard: https://bugs.chromium.org/p/gerrit/issues/detail?id=13899
This adds an override of gerrit.war to our Docker containers. The .war
is pulled over HTTP. It has been manually built by q3k from a source
checkout. The details on how this was done are in the WORKSPACE
http_file archive.
Once 3.3.3 lands we should get rid of it.
Change-Id: I8b64103cb87d8b185ff35165695a18cb19fea523
Stopgap until we finish b/3; we need to deploy some changes on it
without rebooting into newer nixpkgs.
Change-Id: Ic2690dfcb398a419338961c8fcbc7e604298977a
This brings oodviewer into k0.
oodviewer started as a py2/flask script running on q3k's personal infra,
which is now being turned down.
This is a rewrite of that script into similarly mediocre Go, conforming
to the exact same mediocre JSON API and spartan HTML interface.
This also deploys it into k0 in the oodviewer-prod namespace. It's
already running, but the 'oodviewer.q3k.me' TTL has to expire before it
begins handling traffic.
Change-Id: Ieef1b0f8f0c60e6fa5dbe7701e0a07a4257f99ce
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.
I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running full nodes like that without an underlying request makes for a
terrible draining experience).
We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?
Followup: b.hswaw.net/29
Already deployed to production.
Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1