Some tools were picked up from the "host" shell/PATH and crashed in
certain cases due to libc incompatibility.
Fixes b/50
Change-Id: Ie94e2c064afff6d5aa782f70e0a024365079e4c7
Ceph CRD updates would fail with:
ERROR Error updating customresourcedefinitions cephclusters.ceph.rook.io: expected kind, but got map
This wasn't just https://github.com/bitnami/kubecfg/issues/259. We pull
in the 'solution' from Pulumi
(https://github.com/pulumi/pulumi-kubernetes/pull/622), which simply
retries the failed update as a JSON update instead, and that seems to
have worked.
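The shape of that workaround is roughly the following sketch (assuming
a recent client-go dynamic client; the function name, the error-string
match and the merge-patch fallback are illustrative, not lifted from
kubecfg or Pulumi):

    // Illustrative only: try a normal update first; on the "expected kind,
    // but got map" failure, re-serialize the desired object to JSON and
    // retry by sending that JSON as a merge patch.
    package main

    import (
        "context"
        "encoding/json"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    func updateWithJSONFallback(ctx context.Context, c dynamic.Interface, gvr schema.GroupVersionResource, obj *unstructured.Unstructured) error {
        ri := c.Resource(gvr).Namespace(obj.GetNamespace())
        _, err := ri.Update(ctx, obj, metav1.UpdateOptions{})
        if err == nil || !strings.Contains(err.Error(), "expected kind, but got map") {
            return err
        }
        // Fall back to pushing the desired object as raw JSON.
        data, jerr := json.Marshal(obj.Object)
        if jerr != nil {
            return jerr
        }
        _, err = ri.Patch(ctx, obj.GetName(), types.MergePatchType, data, metav1.PatchOptions{})
        return err
    }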
We also add better wrapping of returned errors, which I used to debug
this issue properly.
Oof.
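The wrapping meant here is plain Go %w-style annotation, so the
underlying apiserver error stays inspectable; below is a made-up
example of the style, not code from this change:

    package main

    import (
        "errors"
        "fmt"
    )

    // doUpdate stands in for the real update call.
    func doUpdate(name string) error { return errors.New("expected kind, but got map") }

    // updateCRD adds context to the failure while keeping the cause
    // reachable via errors.Is / errors.Unwrap.
    func updateCRD(name string) error {
        if err := doUpdate(name); err != nil {
            return fmt.Errorf("updating customresourcedefinitions %s: %w", name, err)
        }
        return nil
    }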
Change-Id: I2007a7857e44128d74760174b61b59efa58e9cbc
This was to be used by a Ceph CRD bump, but we ended up using upstream
YAML instead. It's a useful change regardless.
I really should document this and write some tests.
Change-Id: I27ce94c6ebe50a4a93baa83418e8d40004755231
First pass at a non-Rook-managed Ceph cluster. We call it k0 instead of
ceph-waw4, as we are now fairly sure that we will always have a
one-kube-cluster-to-one-ceph-cluster correspondence, with different
Ceph pools for different media kinds (if at all).
For now this has one mon and spinning-rust OSDs. This can be iterated
on over time to make it less terrible.
See b/6 for more details.
Change-Id: Ie502a232c700af93f33fcad9fa1c57058161aa11
This now has a zero diff against prod.
The location fields in CephCluster.storage.nodes seem to have been
removed from the CRD at some point. It's not clear how the CRUSH tree
gets populated now, but it has been working like this for a while
already. The same goes for CephObjectStore.gateway.type.
The Rook Operator has been zero-scaled for a while now due to b/6.
Change-Id: I30a836f273f4c1529f60fa9297c96b7aac412f59
For a while now we've had spurious diffs against Ceph on k0 because of
a ClusterRole with an aggregationRule.
The way these behave is that the configured object has an empty rules
list and instead sets an aggregationRule, which combines other existing
ClusterRoles into that ClusterRole. The control plane then populates
the rules field when the object is read/acted on, which caused us to
always see a diff between the configured and live versions of that
ClusterRole.
This hacks together a hardcoded fix for this particular behaviour.
Porting kubecfg over to SSA would probably also fix this, but that's
too much work for now.
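The workaround is along these lines (a sketch of the idea only, with a
made-up function name and placement, not the actual kubecfg change):
before diffing, drop the server-populated rules from any live
ClusterRole that carries an aggregationRule.

    package main

    import (
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    )

    // stripAggregatedRules drops .rules from aggregated ClusterRoles, since
    // the control plane fills that field in on read and it would otherwise
    // always diff against the (empty) configured rules.
    func stripAggregatedRules(live *unstructured.Unstructured) {
        if live.GetKind() != "ClusterRole" {
            return
        }
        if _, found, _ := unstructured.NestedMap(live.Object, "aggregationRule"); found {
            unstructured.RemoveNestedField(live.Object, "rules")
        }
    }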
Change-Id: I357c1417d4023691e5809f1af23f58f364353388
This moves the diff-and-activate logic from cluster/nix/provision.nix
into ops/{provision,machines}.nix, which can be used for both cluster
machines and bgpwtf machines.
The provisioning scripts now live per NixOS config, and anything under
ops.machines.$fqdn now has a .passthru.hscloud.provision derivation
which is that script. When run, it will attempt to deploy onto the
target machine.
There's also a top-level tool at `ops.provision` which builds all
configurations/machines and can be invoked with a machine name/FQDN to
run the corresponding provisioning script.
clustercfg is changed to use the new provisioning logic.
Change-Id: I258abce9e8e3db42af35af102f32ab7963046353
Looks like the .ml DNS servers are currently down, and this repository
import path is deprecated anyway. Really, we should bump Kubernetes...
Change-Id: I3e0c834a49ccf1111b9412371489bae5f80ff6ab
This fixes some issues with buildFHSUserEnv on newer NixOS versions,
where tools from /run/current-system/sw/bin/* would want a newer glibc
than the one available in the FHSUserEnv. Whoops.
Change-Id: I5ed741b6d7979eb288fe6f88984bc5e6d0bdb923