This removes the need to source env.{sh,fish} when working with hscloud.
This is done by:
1. Implementing a Go library to reliably detect the location of the
active hscloud checkout (a sketch follows this list). This in turn is
enabled by Bazel now setting BUILD_WORKSPACE_DIRECTORY for anything
invoked via `bazel run`.
2. Creating a tool `hscloud`, with a command `hscloud workspace` that
returns the workspace path.
3. Wrapping this tool to be accessible from Python and Bash.
4. Bumping all users of hscloud_root to use either the Go library or
one of the two implemented wrappers.
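A minimal sketch of the detection logic from point 1; the package name
and function signature here are hypothetical, not the actual library's
API:

    package workspace

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Root returns the path of the active hscloud checkout.
    func Root() (string, error) {
        // Under `bazel run`, Bazel sets BUILD_WORKSPACE_DIRECTORY to
        // the root of the workspace the command was invoked from.
        if p := os.Getenv("BUILD_WORKSPACE_DIRECTORY"); p != "" {
            return p, nil
        }
        // Otherwise, walk up from the current directory until we find
        // the WORKSPACE file.
        dir, err := os.Getwd()
        if err != nil {
            return "", err
        }
        for {
            if _, err := os.Stat(filepath.Join(dir, "WORKSPACE")); err == nil {
                return dir, nil
            }
            parent := filepath.Dir(dir)
            if parent == dir {
                return "", fmt.Errorf("not inside an hscloud checkout")
            }
            dir = parent
        }
    }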
We also drive-by replace tools/install.sh with a proper sh_binary, and
make it yell at people if it isn't being run as `bazel run
//tools:install`.
Finally, we drive-by delete cluster/tools/nixops.sh, which was never used.
Change-Id: I7873714319bfc38bbb930b05baa605c5aa36470a
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1169
Reviewed-by: informatic <informatic@hackerspace.pl>
This release removes Let's Encrypt DST Root CA X3 pinning and adds
dynamic secret key generation.
Deployed to production on 2021/10/09
Change-Id: I2b88dc9ab6b67d1c3af277d673702c6a1b3188db
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1161
Reviewed-by: q3k <q3k@hackerspace.pl>
Let's make things simpler and just build/run stuff that we deem
critical.
Change-Id: I356efaac4c8af276aaaa0a141a70f35da19c6957
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1166
Reviewed-by: q3k <q3k@hackerspace.pl>
This makes the hscloud readTree object available as follows in NixOS
modules:
    { config, pkgs, workspace, ... }: {
      environment.systemPackages = [
        workspace.hswaw.laserproxy
      ];
    }
Change-Id: I9c8146f5156ffe5d06cb8408a2ce632657990d59
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1164
Reviewed-by: q3k <q3k@hackerspace.pl>
This had bitrotted at some point. Now it's all freshened up.
Change-Id: Ia7df1ccd9b39d9180131452e9bf18d0fb8fa50d5
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1163
Reviewed-by: informatic <informatic@hackerspace.pl>
You can test this using:
    bazel run '@io_filippo_age//cmd/age'
The same target can now be used in data dependencies for secretstore
(you'll need to hardcode the runfile path, or use some
Bazel-runfile-resolving library for Python).
This required adding a few dependencies to
third_party/go/repositories.bzl, but also moving golang.org/x/crypto
from that file into WORKSPACE, before gazelle_deps gets loaded (as the
version requested by gazelle_deps is too old). We also moved shlex,
which shouldn't have been in WORKSPACE, into
third_party/go/repositories.bzl.
Otherwise, this was just a few small deps: a bumped golang.org/x/crypto,
a new golang.org/x/term, and a new filippo.io/edwards25519. Hooray for
low-dependency code.
Change-Id: I0e684d88efffde13a3b4e253860aabcb35a3c94d
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1158
Reviewed-by: patryk <patryk@hackerspace.pl>
riot-web containers are no longer published.
We should also adjust our internal naming for the Matrix web client from
riot to something more generic at some point.
Change-Id: Ice85af3ae29b587c13a3ba27d13c9bd655d7fcfd
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1145
Reviewed-by: informatic <informatic@hackerspace.pl>
This implements media-repo-proxy, a lil' bit of Go to make our
infrastructure work with matrix-media-repo's concept of Host headers.
For some reason, MMR really wants Host: hackerspace.pl instead of Host:
matrix.hackerspace.pl. We'd fix that in their code, but with no tests
and with complex config reload logic it looks very daunting. We'd just
fix that in our Ingress, but that's not easy (no per-rule host
overrides).
So, we commit a tiny little itty bitty war crime and implement a piece
of Go code that serves as a rewriter for this.
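A hedged sketch of the idea in Go (the backend address/port and listen
port below are illustrative assumptions, not the deployed config):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Assumed backend address of matrix-media-repo.
        backend, err := url.Parse("http://media-repo.matrix.svc.k0.hswaw.net:8000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)
        director := proxy.Director
        proxy.Director = func(r *http.Request) {
            director(r)
            // Rewrite the Host header to the one MMR insists on.
            r.Host = "hackerspace.pl"
        }
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }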
This works, tested on boston:
    $ curl -H "Host: matrix.hackerspace.pl" 10.10.12.46:8080/_matrix/media/r0/download/hackerspace.pl/EwVBulPgCWDWNGMKjcOKGGbk | file -
    /dev/stdin: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 650x300, components 3
(this address is media-repo.matrix.svc.k0.hswaw.net)
But hey, at least it has tests.
Change-Id: Ib6af1988fe8e112c9f3a5577506b18b48d80af62
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1143
Reviewed-by: q3k <q3k@hackerspace.pl>
With this we can use Ceph's multi-site support to easily migrate to our
new k0 Ceph cluster.
This migration was done by using radosgw-admin to rename the existing
realm/zonegroup to the new names (hscloud and eu), and then reworking
the jsonnet so that the Rook operator would effectively do nothing.
It sounds weird that creating a bunch of CRs like
Object{Realm,ZoneGroup,Zone} would be a no-op for the operator,
but that's how Rook works - a CephObjectStore generally creates
everything that the above CRs would create too, but implicitly. Adding
the extra CRs just allows specifying extra settings, like names.
(it wasn't fully a no-op, as the rgw daemon is parametrized by
realm/zonegroup/zone names, so that had to be restarted)
We also make the radosgw serve under object.ceph-eu.hswaw.net, which
allows us to start using a zonegroup URL right away instead of the
zone-only URL.
Change-Id: I4dca55a705edb3bd28e54f50982c85720a17b877
This enables radosgw wherever OSDs are. This should be fast and works
for us because we have few OSD hosts.
Change-Id: I4ed014d2790d6c02a2ba8e775aaa1846032dee1e
This is needed to get Rook to talk to an external Ceph 16/Pacific
cluster.
This is mostly a bunch of CRD/RBAC changes. Most notably, we yeet our
own CRD rewrite and just slurp in upstream CRD defs.
Change-Id: I08e7042585722ae4440f97019a5212d6cf733fcc
Some tools were taken from the "host" shell/PATH, which crashed in
certain cases due to libc incompatibility.
Fixes b/50
Change-Id: Ie94e2c064afff6d5aa782f70e0a024365079e4c7
Ceph CRD updates would fail with:
    ERROR Error updating customresourcedefinitions cephclusters.ceph.rook.io: expected kind, but got map
This wasn't just https://github.com/bitnami/kubecfg/issues/259. We pull
in the 'solution' from Pulumi
(https://github.com/pulumi/pulumi-kubernetes/pull/622), which just
retries the update as a JSON update instead, and that seems to have
worked.
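The shape of that retry, sketched with client-go's dynamic client (the
actual kubecfg plumbing differs; package and function names are made up,
and this assumes a cluster-scoped resource such as a CRD):

    package kubeutil

    import (
        "context"
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    func update(ctx context.Context, c dynamic.Interface, gvr schema.GroupVersionResource, obj *unstructured.Unstructured) error {
        _, err := c.Resource(gvr).Update(ctx, obj, metav1.UpdateOptions{})
        if err == nil {
            return nil
        }
        // Retry by re-submitting the object as a JSON merge patch,
        // sidestepping the code path that trips over the CRD.
        data, merr := json.Marshal(obj)
        if merr != nil {
            return fmt.Errorf("marshaling %q: %w", obj.GetName(), merr)
        }
        if _, perr := c.Resource(gvr).Patch(ctx, obj.GetName(), types.MergePatchType, data, metav1.PatchOptions{}); perr != nil {
            // Wrap both errors so the caller can see what happened.
            return fmt.Errorf("updating %q: update: %v; patch: %w", obj.GetName(), err, perr)
        }
        return nil
    }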
We also add some better error return wrapping, which I used to debug
this issue properly.
Oof.
Change-Id: I2007a7857e44128d74760174b61b59efa58e9cbc
This was to be used by a Ceph CRD bump, but we ended up using upstream
YAML instead. It's a useful change regardless.
I really should document this and write some tests.
Change-Id: I27ce94c6ebe50a4a93baa83418e8d40004755231
First pass at a non-rook-managed Ceph cluster. We call it k0 instead of
ceph-waw4, as we are now pretty much sure that we will always have a
one-kube-cluster-to-one-ceph-cluster correspondence, with different Ceph
pools for different media kinds (if at all).
For now this has one mon and spinning rust OSDs. This can be iterated on
to make it less terrible with time.
See b/6 for more details.
Change-Id: Ie502a232c700af93f33fcad9fa1c57058161aa11