This unforks benji back into upstream. The old fork didn't support a new
authentication method on Ceph, and we don't have multiple clusters
anymore (so we don't need the functionality of the fork).
Change-Id: Ie79313b2321ca2e22ad2874b75a71385af95105f
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1321
Reviewed-by: informatic <informatic@hackerspace.pl>
This is a chonky refactor that gets rid of the previous cluster-centric
defs-* plain nix file setup.
Now, nodes are configured individually in plain nixos modules, and are
provided a view of all other nodes in the 'machines' attribute. Cluster
logic is moved into modules which inspect this array to find other nodes
within the same cluster.
Kubernetes options are not fully clusterified yet (ie., they are still
hardcoded to only provide the 'k0' cluster), but that can be fixed later.
The Ceph machinery is a good example of how that can be done.
The new NixOS configs are zero-diff against prod. While this is done
mostly by keeping the logic as-is, we had to keep a few newly discovered
'bugs' around by adding some temporary options which keep things as they
are. These will be removed in a future CL, which will then introduce a
diff (but no functional changes, hopefully).
We also remove the nix eval from clustercfg as it was not used anymore
(basically since we refactored certs at some point).
Change-Id: Id79772a96249b0e6344046f96f9c2cb481c4e1f4
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1322
Reviewed-by: informatic <informatic@hackerspace.pl>
Prompted by a power failure on bc01n0{1,2}, we migrate away from at
least one of them onto another server.
We also fix up the startup join parameter to not include the node itself
(which is not necessary, but a nice thing to have nonetheless).
Since bc01n01 was the initial node of the cluster, we also disable the
init job for k0 (which we don't care about anyway).
Change-Id: I3406471c0f9542e9d802d39138e400b5a5e74794
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1176
Reviewed-by: q3k <q3k@hackerspace.pl>
This removes the need to source env.{sh,fish} when working with hscloud.
This is done by:
1. Implementing a Go library to reliably detect the location of the
active hscloud checkout. That in turn is enabled by
BUILD_WORKSPACE_DIRECTORY being now a thing in Bazel.
2. Creating a tool `hscloud`, with a command `hscloud workspace` that
returns the workspace path.
3. Wrapping this tool to be accessible from Python and Bash.
4. Bumping all users of hscloud_root to use either the Go library or
one of the two implemented wrappers.
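To illustrate point 1, here is a minimal sketch of how such detection can
work. The package name, function name and the use of the WORKSPACE file
as a checkout marker are assumptions for illustration; the actual Go
library may differ.

    package workspace

    import (
        "errors"
        "os"
        "path/filepath"
    )

    // Get returns the absolute path of the active hscloud checkout. Under
    // `bazel run`, Bazel exposes the invoking workspace via
    // BUILD_WORKSPACE_DIRECTORY; otherwise we walk up from the current
    // directory looking for a WORKSPACE file.
    func Get() (string, error) {
        if p := os.Getenv("BUILD_WORKSPACE_DIRECTORY"); p != "" {
            return p, nil
        }
        dir, err := os.Getwd()
        if err != nil {
            return "", err
        }
        for {
            if _, err := os.Stat(filepath.Join(dir, "WORKSPACE")); err == nil {
                return dir, nil
            }
            parent := filepath.Dir(dir)
            if parent == dir {
                return "", errors.New("not inside an hscloud checkout")
            }
            dir = parent
        }
    }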
We also drive-by replace tools/install.sh with a proper sh_binary, and
make it yell at people if it isn't being run as `bazel run
//tools:install`.
Finally, we also drive-by delete cluster/tools/nixops.sh which was never used.
Change-Id: I7873714319bfc38bbb930b05baa605c5aa36470a
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1169
Reviewed-by: informatic <informatic@hackerspace.pl>
With this we can use Ceph's multi-site support to easily migrate to our
new k0 Ceph cluster.
This migration was done by using radosgw-admin to rename the existing
realm/zonegroup to the new names (hscloud and eu), and then reworking
the jsonnet so that the Rook operator would effectively do nothing.
It sounds weird that creating a bunch of CRs like
Object{Realm,ZoneGroup,Zone} would be a no-op for the operator,
but that's how Rook works - a CephObjectStore generally creates
everything that the above CRs would create too, but implicitly. Adding
the extra CRs just allows specifying extra settings, like names.
(it wasn't fully a no-op, as the rgw daemon is parametrized by
realm/zonegroup/zone names, so that had to be restarted)
We also make the radosgw serve under object.ceph-eu.hswaw.net, which
allows us to right away start using a zonegroup URL instead of the
zone-only URL.
Change-Id: I4dca55a705edb3bd28e54f50982c85720a17b877
This enables radosgw wherever OSDs are. This should be fast and works
for us because we have few OSD hosts.
Change-Id: I4ed014d2790d6c02a2ba8e775aaa1846032dee1e
This is needed to get Rook to talk to an external Ceph 16/Pacific
cluster.
This is mostly a bunch of CRD/RBAC changes. Most notably, we yeet our
own CRD rewrite and just slurp in upstream CRD defs.
Change-Id: I08e7042585722ae4440f97019a5212d6cf733fcc
Ceph CRD updates would fail with:
ERROR Error updating customresourcedefinitions cephclusters.ceph.rook.io: expected kind, but got map
This wasn't just https://github.com/bitnami/kubecfg/issues/259 . We pull
in the 'solution' from Pulumi
(https://github.com/pulumi/pulumi-kubernetes/pull/622) which just
retries the update via a JSON update instead, and that seems to have
worked.
We also add some better error return wrapping, which I used to debug
this issue properly.
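As a reminder of what 'better error return wrapping' means here, a small
self-contained Go sketch of the generic pattern (not the actual kubecfg
code; the CRD name is taken from the error above):

    package main

    import (
        "errors"
        "fmt"
    )

    var errServer = errors.New("expected kind, but got map")

    // updateCRD stands in for the real update call; here it always fails.
    func updateCRD(name string) error {
        return errServer
    }

    func main() {
        name := "cephclusters.ceph.rook.io"
        if err := updateCRD(name); err != nil {
            // Wrapping with %w keeps the original error available to
            // errors.Is/As while adding the context needed for debugging.
            wrapped := fmt.Errorf("updating customresourcedefinition %q: %w", name, err)
            fmt.Println(wrapped)
            fmt.Println(errors.Is(wrapped, errServer)) // true
        }
    }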
Oof.
Change-Id: I2007a7857e44128d74760174b61b59efa58e9cbc
First pass at a non-rook-managed Ceph cluster. We call it k0 instead of
ceph-waw4, as we are by now pretty much sure that we will always have a
one-kube-cluster-to-one-ceph-cluster correspondence, with different Ceph
pools for different media kinds (if at all).
For now this has one mon and spinning rust OSDs. This can be iterated on
to make it less terrible with time.
See b/6 for more details.
Change-Id: Ie502a232c700af93f33fcad9fa1c57058161aa11
This now has a zero diff against prod.
location fields in CephCluster.storage.nodes seem to have been removed
from the CRD at some point. Not sure how the CRUSH tree now gets
populated, but whatever, it's been working like this for a while
already. Same for CephObjectStore.gateway.type.
The Rook Operator has been zero-scaled for a while now due to b/6.
Change-Id: I30a836f273f4c1529f60fa9297c96b7aac412f59
For a while now we've had spurious diffs against Ceph on k0 because of
a ClusterRole with an aggregationRule.
The way these behave is that the config object has an empty rule list,
and instead populates an aggregationRule which combines other existing
ClusterRoles into that ClusterRole. The control plane then populates the
rule field when the object is read/acted on, which caused us to always
see a diff between the configured and live versions of that ClusterRole.
This hacks together a hardcoded fix for this particular behaviour.
Porting kubecfg over to SSA would probably also fix this - but that's
too much work for now.
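The gist of the workaround, as a hedged sketch operating on an
unstructured (map-based) object - the actual kartongips change may be
structured differently:

    package diffhack

    // stripAggregatedRules drops the server-populated .rules field from a
    // live ClusterRole that carries an aggregationRule, so that comparing
    // it against the configured object (which has an empty rule list) no
    // longer yields a spurious diff.
    func stripAggregatedRules(obj map[string]interface{}) {
        if obj["kind"] != "ClusterRole" {
            return
        }
        if _, ok := obj["aggregationRule"]; ok {
            delete(obj, "rules")
        }
    }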
Change-Id: I357c1417d4023691e5809f1af23f58f364353388
This moves the diff-and-activate logic from cluster/nix/provision.nix
into ops/{provision,machines}.nix that can be used for both cluster
machines and bgpwtf machines.
The provisioning scripts now live per-NixOS-config, and anything under
ops.machines.$fqdn now has a .passthru.hscloud.provision derivation
which is that script. When run, it will attempt to deploy onto the
target machine.
There's also a top-level tool at `ops.provision` which builds all
configurations / machines and can be called with the machine name/fqdn
to call the corresponding provisioner script.
clustercfg is changed to use the new provisioning logic.
Change-Id: I258abce9e8e3db42af35af102f32ab7963046353
This annotation is used to permit routes defined by regexes instead of
simple prefix matching. This is used by our synapse deployment for
routing incoming HTTP requests to different Synapse components.
I've stumbled upon this while deploying a new Matrix/Synapse instance.
This hasn't been a problem yet because the existing ingresses for Matrix
deployments predate admitomatic.
Change-Id: I821e58b214450ccf0de22d2585c3b0d11fbe71c0
This implements the main identd service that will run on our production
hosts. It's comparatively small, as most of the functionality is
implemented in //cluster/identd/ident and //cluster/identd/kubenat.
Change-Id: I1861fe7c93d105faa19a2bafbe9c85fe36502f73
This is a high-level wrapper for querying identd, and uses IdentError to
carry errors received from the server.
Change-Id: I6444a67117193b97146ffd1548151cdb234d47b5
This is the first pass at an ident protocol client. In the end, we want
to implement an ident protocol server for our in-cluster identd, but
starting out with a client helps me get familiar with the protocol,
and will allow the server implementation to be tested against the
client.
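For flavour, here is what a minimal RFC 1413 query looks like in Go. This
is purely illustrative and not the actual //cluster/identd/ident API; the
host and ports are made up.

    package main

    import (
        "bufio"
        "fmt"
        "net"
        "os"
        "time"
    )

    // query asks the ident service on host about the TCP connection whose
    // port on that host is theirPort and whose port on our side is ourPort.
    // It returns the raw response line, e.g. "6113,22:USERID:UNIX:q3k".
    func query(host string, theirPort, ourPort int) (string, error) {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "113"), 5*time.Second)
        if err != nil {
            return "", err
        }
        defer conn.Close()
        if _, err := fmt.Fprintf(conn, "%d,%d\r\n", theirPort, ourPort); err != nil {
            return "", err
        }
        return bufio.NewReader(conn).ReadString('\n')
    }

    func main() {
        resp, err := query("example.com", 6113, 22)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(resp)
    }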
Change-Id: Ic37b84577321533bab2f2fbf7fb53409a5defb95
These can be used by production jobs to get the source port of the
client connecting over HTTP. A followup CR implements just that.
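A rough sketch of what 'getting the source port' looks like from a
production job's point of view - a hypothetical HTTP handler, not the
kubenat API itself:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // r.RemoteAddr is the "ip:port" of the direct peer; the source
        // port recovered here is what an ident lookup would be keyed on.
        host, port, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            http.Error(w, "cannot parse remote address", http.StatusInternalServerError)
            return
        }
        fmt.Fprintf(w, "peer %s, source port %s\n", host, port)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }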
Change-Id: Ic8e29eaf806bb196d8cfcfb604ff66ae4d0d166a
This emits short-lived user credentials for a `dev-user` in crdb-waw1
any time someone prodaccesses.
Change-Id: I0266a05c1f02225d762cfd2ca61976af0658639d
DeveloperCredentialsLocation used to glog.Exitf instead of returning an
error, and a consumer (prodaccess) used to not check the return code.
Bad refactor?
Change-Id: I6c2d05966ba6b3eb300c24a51584ccf5e324cd49
This fixes CVE-2021-3450 and CVE-2021-3449.
Deployed on prod:
$ kubectl -n nginx-system exec nginx-ingress-controller-5c69c5cb59-2f8v4 -- openssl version
OpenSSL 1.1.1k 25 Mar 2021
Change-Id: I7115fd2367cca7b687c555deb2134b22d19a291a
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.
I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us from
wasting resources (k0 started migrating pods to other nodes, and running
full nodes like that without an underlying request makes for a terrible
draining experience).
We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?
Followup: b.hswaw.net/29
Already deployed to production.
Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
This removes Docker and docker-shim from our production kubernetes, and
moves over to containerd/CRI. Docker support within Kubernetes was
always slightly shitty, and as of 1.20 the integration is deprecated
upstream ahead of removal. CRI/containerd/runc is pretty much the new standard.
Change-Id: I98c89d5433f221b5fe766fcbef261fd72db530fe
This is an attempt to see how well we do without rules_nixpkgs.
rules_nixpkgs has the following problems:
- complicates our build system significantly (generated external
repository indirection for picking local/nix python and go)
- creates builds that cannot run on production (as they are tainted by
/nix/store libraries)
- is not a full solution to the bazel hermeticity problem anyway, and
we'll have to tackle that some other way (eg. by introducing proper
C++ cross-compilation toolchains and building everything from C,
including Python and Go)
Instead of rules_nixpkgs, we ship a shell.nix file, so NixOS users can
just:
jane@hacker:~/hscloud $ nix-shell
hscloud-build-chrootenv:jane@hacker:~/hscloud$ prodaccess
This shell.nix is in a way nicer, as it gives you all the tools
needed to access production straight away.
Change-Id: Ieceb5ae0fb4d32e87301e5c99416379cedc900c5
This unifies nixpkgs with the one defined in //default.nix and makes it
possible to use readTree to build the provisioners:
nix-build -A cluster.nix.provision
result/bin/provision
Change-Id: I68dd70b9c8869c7c0b59f5007981eac03667b862
This will grant anything bound to system:admin-namespaces (eg. personal-*
namespaces, per-namespace extra admin access like matrix-0x3c) the
ability to create and update Ingresses.
Change-Id: I522896ebe290fe982d6fe46b7b1d604d22b4f72c
This turns admitomatic into a standalone service that can be used as
an admission controller.
I've tested this E2E on a local k3s server, and have some early test
code for that - but that'll land in a follow-up CR, as it first needs
to be cleaned up.
Change-Id: I46da0fc49f9d1a3a1a96700a36deb82e5057249b
This gives us nearly everything required to run the admission
controller. In addition to checking for allowed domains, we also do some
nginx-ingress-controller security checks.
Change-Id: Ib187de6d2c06c58bd8c320503d4f850df2ec8abd
This is the beginning of a validating admission controller which we will
use to permit end-users access to manage Ingresses.
This first pass implements an ingressFilter, which is the main structure
through which allowed namespace/DNS combinations are defined. The
interface is currently via a test, but in the future this will likely be
configured via a command line, or via a serialized protobuf config.
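To give an idea of the shape of the thing, an illustrative stand-in (the
real ingressFilter and its field/method names differ):

    package admit

    import "strings"

    // ingressFilter maps a namespace to the DNS suffixes whose Ingresses
    // that namespace may claim.
    type ingressFilter struct {
        allowed map[string][]string
    }

    // allows returns true if the given namespace may create an Ingress
    // rule for the given host.
    func (f *ingressFilter) allows(namespace, host string) bool {
        for _, suffix := range f.allowed[namespace] {
            if host == suffix || strings.HasSuffix(host, "."+suffix) {
                return true
            }
        }
        return false
    }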
Change-Id: I22dbed633ea8d8e1fa02c2a1598f37f02ea1b309
This change reflects the current production state.
Upgrade was done by going through the following versions:
19.1.0 -> 19.2.12 -> 20.1.10 -> 20.2.4
Change-Id: I8b33b8116363f1a918423fd18ba3d1b5c910851c
It reached the stage of being crapped out so much that the OSDs' spurious
IOPS killed the performance of disks colocated on the same M610 RAID
controllers. This made etcd _very_ slow, to the point of churning
through re-elections due to timeouts.
etcd/apiserver latencies, observe the difference at ~15:38:
https://object.ceph-waw3.hswaw.net/q3k-personal/4fbe8d4cfc8193cad307d487371b4e44358b931a7494aa88aff50b13fae9983c.png
I moved gerrit/* and matrix/appservice-irc-freenode PVCs to ceph-waw3 by
hand. The rest were non-critical so I removed them, they can be
recovered from benji backups if needed.
Change-Id: Iffbe87aefc06d8324a82b958a579143b7dd9914c
More as-builts. This has already been bumped. Had to coax ceph-waw2 to
upgrade despite the fact that it's horribly broken.
Change-Id: Ia762f5d7d88d6420c2fc25cf199037cbccde0cb3
This is after the monster^Wrook outage of the week two weeks ago caused
by bc01n03 dying.
Plan is to migrate ceph-waw3 to be external, yeet ceph-waw2, and extend
crdb-waw1 to another node.
Change-Id: I133af3b1171fea383b45bf06c51e48a5c40341e4
This disables DHCP on all k0 nodes. This change has been tentatively
deployed to bc01n01 (which is cordoned off in kube), and I will deploy
it to the rest of k0 machines once merged.
Change-Id: I96253a9d0acedb4512c877c64174992ffdb43d58
These tests are broken as they depend on some test data that we
currently don't have in hscloud. They should be fixed ASAP.
Change-Id: I2571c2958cb84e145a7e3a44171685ecf43cf499
This forks bitnami/kubecfg into kartongips. The rationale is that we
want to implement hscloud-specific functionality that wouldn't really be
upstreamable into kubecfg (like secret support, multi-cluster support).
We forked off from github.com/q3k/kubecfg at commit b6817a94492c561ed61a44eeea2d92dcf2e6b8c0.
Change-Id: If5ba513905e0a86f971576fe7061a471c1d8b398
We want to be able to scrape controller-manager and scheduler metrics
into Prometheus. For that, each of them needs to:
1) listen on a secure port
2) have authn enabled
With this, any k8s user with the right permissions (and a bearer token
or TLS certificate) can come in and access metrics over a node's public
IP address. Access without a certificate/token gets thrown into the
system:anonymous user, which has no access to any API.
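For example, something like the following can scrape the
controller-manager by hand. The port (10257 is the upstream default, with
10259 for the scheduler), the node name and the env variable are
assumptions, and a real scraper should verify the serving certificate
instead of skipping verification.

    package main

    import (
        "crypto/tls"
        "io"
        "net/http"
        "os"
    )

    func main() {
        token := os.Getenv("K8S_TOKEN")
        client := &http.Client{Transport: &http.Transport{
            // For brevity only - verify the serving cert in real use.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        req, err := http.NewRequest("GET", "https://bc01n01.hswaw.net:10257/metrics", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)
        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body)
    }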
Change-Id: I267680f92f748ba63b6762e6aaba3c417446e50b
This notably fixes the annoying loopback issues that prevented hosts
from accessing externalIP services with externalTrafficPolicy: local
from nodes that weren't running the service.
Which means, hopefully, no more registry pull failures when
nginx-ingress gets misplaced!
Change-Id: Id4923fd0fce2e28c31a1e65518b0e984165ca9ec
This has been deployed to k0 nodes.
Current state of cluster certificates:
cluster/certs/ca-etcd.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-etcdpeer.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kube.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kubefront.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kube-prodvider.cert
Not After : Sep 1 21:30:00 2021 GMT
cluster/certs/etcd-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/etcd-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/etcd-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcd-calico.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcd-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/etcd-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/etcd-kube.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcdpeer-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/etcdpeer-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/etcdpeer-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcdpeer-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/etcdpeer-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/etcd-root.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-apiserver.cert
Not After : Oct 3 15:26:00 2021 GMT
cluster/certs/kube-controllermanager.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kubefront-apiserver.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-kubelet-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/kube-kubelet-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/kube-kubelet-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-kubelet-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/kube-kubelet-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/kube-proxy.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-scheduler.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-serviceaccounts.cert
Not After : Mar 28 15:15:00 2021 GMT
Change-Id: I94030ce78c10f7e9a0c0257d55145ef629195314
This prevents metallb routes being announced from all peers to our ToR,
thereby preventing issues with traffic hitting services with
externalTrafficPolicy: local.
There still is the from-host loopback issue, but that will be fixed by
upgrading to kube 1.15.
Change-Id: Ifc9964b46840aee82d99f0b6550188550e46fe04
This fixes compatibility with prodaccess tools built with Go 1.15, which
introduced 'X.509 CommonName deprecation' [1].
[1] - https://golang.org/doc/go1.15#commonname
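The underlying requirement, sketched in Go (not prodvider's actual code):
certificates verified by Go 1.15+ clients must carry the hostname in a
Subject Alternative Name, not only in the CommonName.

    package pki

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    func serverTemplate(hostname string) *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: hostname},
            // Go 1.15 ignores the CN during hostname verification, so the
            // name must also be present as a SAN.
            DNSNames:    []string{hostname},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(30 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }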
Change-Id: I228cde3e5651a3e36f527783f2ccb4a2f6b7a8e3
This will be, at some point, a script to run on Gerrit presubmit (ie.
right before merge).
For now, you can manually run it to ensure that Everything At Least
Kinda Works.
Change-Id: I28b305fa81a4ca4a8e94ce4daa06fe9ae0184fe8
Previously, we had the following setup:
.-----------.
| ..... |
.-----------.-|
| dcr01s24 | |
.-----------.-| |
| dcr01s22 | | |
.---|-----------| |-'
.--------. | |---------. | |
| dcsw01 | <----- | metallb | |-'
'--------' |---------' |
'-----------'
Ie., each metallb on each node directly talked to dcsw01 over BGP to
announce ExternalIPs to our L3 fabric.
Now, we rejigger the configuration to instead have Calico's BIRD
instances talk BGP to dcsw01, and have metallb talk locally to Calico.
.-------------------------.
| dcr01s24 |
|-------------------------|
.--------. |---------. .---------. |
| dcsw01 | <----- | Calico |<--| metallb | |
'--------' |---------' '---------' |
'-------------------------'
This makes Calico announce our pod/service networks into our L3 fabric!
Calico and metallb talk to each other over 127.0.0.1 (they both run with
Host Networking), but that requires one side to flip to passive mode. We
chose to do that with Calico, by overriding its BIRD config and
special-casing any 127.0.0.1 peer to enable passive mode.
We also override Calico's Other Bird Template (bird_ipam.cfg) to fiddle
with the kernel programming filter (ie. to-kernel-routing-table filter),
where we disable programming unreachable routes. This is because routes
coming from metallb have their next-hop set to 127.0.0.1, which makes
bird mark them as unreachable. Unreachable routes in the kernel will
break local access to ExternalIPs, eg. registry access from containerd.
All routes pass through without route reflectors or a full mesh, as we
use eBGP over private ASNs in our fabric.
We also have to make Calico aware of metallb pools - otherwise, routes
announced by metallb end up being filtered by Calico.
This is all mildly hacky. Here's hoping that Calico will be able to some
day gain metallb-like functionality, ie. IPAM for
externalIPs/LoadBalancers/...
There seems, however, to be one problem with this change (but I'm not
fixing it yet as it's not critical): metallb would previously only
announce IPs from nodes that were serving that service. Now, however,
the Calico internal mesh makes those appear from every node. This can
probably be fixed by disabling local meshing and enabling route reflection
on dcsw01 (to recreate the mesh by routing through dcsw01). Or, maybe by
some more hacking of the Calico BIRD config :/.
Change-Id: I3df1f6ae7fa1911dd53956ced3b073581ef0e836
We just had an outage seemingly caused by N-I-C sending tons of traffic
to gitea, which in turn caused N-I-C to balloon in memory/CPU usage.
I haven't debugged the cause of this traffic, but I have disabled the
gitea TCP forward to Stop The Bleeding.
This change reflects ad-hoc production changes.
Change-Id: I37e11609f408fa3e3fbfafafba44dc83149b90a9
- we update NixOS to 20.09pre
- we fix an ACME option that's now required
- we switch from systemd-timesyncd to chrony (as timesyncd took a long
time to sync clocks after restart, leading to MON_CLOCK_SKEW errors
from ceph)
This has been deployed in production.
Change-Id: Ibfcd41567235bae3e3d8abeeed61f4694ae614ad
This adds a mod proxy system, called, well, modproxy.
It sits between Factorio server instances and the Factorio mod portal,
allowing for arbitrary mod download without needing the servers to know
Factorio credentials.
Change-Id: I7bc405a25b6f9559cae1f23295249f186761f212
ceph-waw2 currently has some production issues [1] which have started to
cause write failures in the registry. The registry is the only user of
ceph-waw2's affected pool, so we reduce the dumpster fire blast radius
by moving it over to ceph-waw3.
This has already been deployed and data has been migrated over (via
s3cmd sync), and the migration has been verified (by a push and pull,
and pull of an older image).
[1] - pgs stuck inactive in the object storage pool
Change-Id: I26789b52008bb7be953954ec3fd3dd727ac15347
In addition to k8s certificates, prodaccess now issues HSPKI
certificates, with DN=$username.sso.hswaw.net. These are installed into
XDG_CONFIG_HOME (or the OS equivalent).
//go/pki will now automatically attempt to load these certificates. This
means you can now run any pki-dependent tool without -hspki_disable, and
get automatic mTLS!
Change-Id: I5b28e193e7c968d621bab0d42aabd6f0510fed6d
instead of Python packages
As usual with Python sadness, the @pydeps wheels are built on the bazel
host, so stuffing them inside a container_image (or py_image) will cause
new and unexpected kinds of misery.
Change-Id: Id4e4d53741cf2da367f01aa15c21c133c5cf0dba
"Anyone can pull all images" rule did only match on anonymous users. Now
it should match all users, including authenticated ones.
Change-Id: I2205299093feca51f30526ba305eadbaa0a68ecb
We would like gitea to have its ssh server exposed on TCP port 22 on the
same address as its web interface. We would also still like to use all
the automation around ingresses already in place (like cert-manager
integration).
To solve this, we create an additional LoadBalancer service for
nginx-ingress-controller and set up a special tcp-services forwarding rule
to pass port 22 traffic to the gitea-prod/gitea service, like we already do
in the case of gerrit.
Change-Id: I5bfc901ebe858464f8e9c2f3b2216b254ccd6c4d
This turns the existing script into a proper sh_binary, and injects
dependencies (kubectl and jq) as deps into it.
This change also pulls in BUILDfiles for jq, and a dep (oniguruma) into
//third_party, and adds buildable external repositories for them.
The jq/oniguruma BUILDfiles are lifted from
https://github.com/attilaolah/bazel-tools/.
Change-Id: If2e548bd60a8fd34e4f3be767ae59c6b2f2286d9
It was getting large and unwieldy (to the point where kubecfg was slow).
In this change, we:
- move the Cluster function to cluster.libsonnet
- move the Cluster instantiation into k0.libsonnet
- shuffle some fields around to make sure things are well split between
k0-specific and general cluster configs.
- add 'view' files that build on 'cluster.libsonnet' to allow rendering
either the entire k0 state, or some subsets (for speed)
- update the documentation, with some drive-by small fixes and reindentation
Change-Id: I4b8d920b600df79100295267efe21b8c82699d5b
We're not using them for anything. Initially they were going to be used
for nixops, but nixops is not very good, so let's just drop them.
We still have a Nix dependency for clustercfg.py when provisioning
nodes, but rules_nix/nixpkgs in WORKSPACE were unrelated to that.
Change-Id: I28c249507d1be9c5dbbd1ee764deccd9ab038549
We handwavingly plan on implementing monitoring as a two-tier system:
- a 'global' component that is responsible for global aggregation,
long-term storage and alerting.
- multiple 'per-cluster' components, that collect metrics from
Kubernetes clusters and export them to the global component.
In addition, several lower tiers (collected by per-cluster components)
might also be implemented in the future - for instance, specific to some
subprojects.
Here we start sketching out some basic jsonnet structure (currently all
in a single file, with little parametrization) and a cluster-level
prometheus server that scrapes Kubernetes Node and cAdvisor metrics.
This review is mostly to get this committed as early as possible, and to
make sure that the little existing Prometheus scrape configuration is
sane.
Change-Id: If37ac3b1243b8b6f464d65fee6d53080c36f992c
This kills two birds with one stone:
- update the secretstore tool to be slightly smarter about secrets, to
the point where we can now just point it at a secret directory and
ask it to 'sync' all secrets in there
- run the new fancy sync command on all keys to update them, which
is a follow-up to gerrit/328.
Change-Id: I0eec4a3e8afcd9481b0b248154983aac25657c40
This was an attempt to make new calico nodes use a full FQDN. However,
this change seemingly also makes the calico control plane use the FQDN
for all existing nodes, as such breaking CNI for new pods.
We revert this change, thereby keeping all calico nodes names as
hostnames. We could fix this by editing /var/lib/calico/nodename on
hosts to FQDNs, but it might not be worth the effort.
See https://github.com/projectcalico/calico/issues/1093 for more
context.
Change-Id: I52bfb00f604053d57d3009aebd6c50db7dc74f58
We still use etcd as the data store (and as such didn't set up k8s CRDs
for Calico), but that's okay for now.
Change-Id: If6d66f505c6b40f2646ffae7d33d0d641d34a963
This previously allowed all namespace admins (ie. personal-$user namespace
users) to create any sort of object they wanted within that namespace.
This could've been exploited to allow creation of a RoleBinding that
would then allow to bind a serviceaccount to the insecure
podsecuritypolicy, thereby allowing escalation to root on nodes.
As far as I've checked, this hasn't been exploited, and the access to
the k8s cluster has so far also been limited to trusted users.
This has been deployed to production.
Change-Id: Icf8747d765ccfa9fed843ec9e7b0b957ff27d96e
This bumps Rook/Ceph. The new resources (mostly RBAC) come from
following https://rook.io/docs/rook/v1.1/ceph-upgrade.html .
It's already deployed on production. The new CSI driver has not been
tested, but the old flexvolume-based provisioners still work. We'll
migrate when Rook offers a nice solution for this.
We've hit a kubecfg bug that does not allow controlling the CephCluster
CRD directly anymore (I had to apply it via kubecfg show / kubectl apply
-f instead). This might be due to our bazel/prod k8s version mismatch,
or it might be related to https://github.com/bitnami/kubecfg/issues/259.
Change-Id: Icd69974b294b823e60b8619a656d4834bd6520fd
This is hackdoc, a documentation rendering tool for monorepos.
This is the first code iteration, that can only serve from a local git
checkout.
The code is incomplete, and is WIP.
Change-Id: I68ef7a991191c1bb1b0fdd2a8d8353aba642e28f
This makes clustercfg ensure certificates are valid for at least 30
days, and renew them otherwise.
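The check itself is simple; expressed in Go for illustration (clustercfg
is Python, and the certificate path below is just an example):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // needsRenewal reports whether a certificate expires within 30 days.
    func needsRenewal(cert *x509.Certificate) bool {
        return time.Until(cert.NotAfter) < 30*24*time.Hour
    }

    func main() {
        data, err := os.ReadFile("cluster/certs/kube-apiserver.cert")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("not a PEM certificate")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("needs renewal:", needsRenewal(cert))
    }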
We use this to bump all the certs that were about to expire in a week.
They are now valid until 2021.
There are still some certs that expire in 2020. We need to figure out a
better story for this, especially as the next expiry is 2021 - today's
prod rollout was somewhat disruptive (basically this was done by a full
cluster upgrade-like rollout flow, via clustercfg).
We also drive-by bump the number of mons in ceph-waw3 to 3, as it should
be (this gets rid of a nasty SPOF that would've bitten us during this
upgrade otherwise).
Change-Id: Iee050b1b9cba4222bc0f3c7bce9e4cf9b25c8bdc
In preparation for updating to 1.1.0, which will be much more involved.
Also fix a typo in registry.libsonnet, whoops.
Change-Id: I7668bf53c7580f99fdf56fe6227f04a468f8de50
This reflects current production. This needs to get bumped up to 3 at some point as otherwise we lose HA for this cluster.
Change-Id: Ie5937e6a216b635ecbc4c82ecd182a410167c3f8
We change the existing behaviour (copy files & run nixos-rebuild switch)
to something closer to nixops-style. This now means that admins
provisioning machines need Nix installed locally, but that's probably an okay
choice to make.
The upside of this approach is that it's easier to debug and test
derivations, as all data is local to the repo and the workstation, and
deploying just means copying a configuration closure and switching the
system to it. At some point we should even be able to run the entire
cluster within a set of test VMs.
We also bump the kubernetes control plane to 1.14. Kubelets are still at
1.13 and their upgrade is coming up today too.
Change-Id: Ia9832c47f258ee223d93893d27946d1161cc4bbd
This needed an upstream change to allow only some pools to be backed up,
otherwise benji would crash when stumbling upon the first PVC from a
pool that wasn't backed by the ceph cluster it was acting upon.
Change-Id: I52bf163c16352cb59fdd3dbdd576145ce1dbac03
This time from a bare hscloud checkout to make sure _nothing_ is fucked
up.
This causes no change remotely, just makes the repo reflect reality.
Change-Id: Ie8db01300771268e0371c3cdaf1930c8d7cbfb1a
This productionizes smsgw.
We also add some jsonnet machinery to provide a unified service for Go
micro/mirkoservices.
This machinery provides all the nice stuff:
- a deployment
- a service for all your types of ports
- TLS certificates for HSPKI
We also update and test hspki for a new name scheme.
Change-Id: I292d00f858144903cbc8fe0c1c26eb1180d636bc
In https://gerrit.hackerspace.pl/c/hscloud/+/70 we accidentally
introduced a split-horizon DNS situation:
- k0.hswaw.net from the Internet resolves to nodes running the k8s API
servers, and as such can serve API server traffic
- k0.hswaw.net from the cluster returned no results
This broke prodvider in two ways:
- it dialed the API servers at k0.hswaw.net
- even after the endpoint was moved to
kubernetes.default.svc.k0.hswaw.net, the apiserver cert didn't cover
that
Thus, not only did we have to change the prodvider endpoint, but also
the apiserver certs, to cover this new name.
I'm not sure this should be the target fix. I think at some point we
should only start referring to in-cluster services via their full (or
cluster.local) names, but right now k0.hswaw.net is an exception and as
such remains split-horizon, and we have no way to access the internal services from
the outside just yet.
However, getting prodvider to work is important enough that this fix is
IMO good enough for now.
Change-Id: I13d0681208c66f4060acecc78b7ae14b8f8d7125
This way kubernetes consumers don't have to import anything from
cluster/, hopefully.
We also create a small abstraction for local additions for
kube.libsonnet without having to modify upstream.
Change-Id: I209095781f91c8867250a647fe944370cddd67d0
This means that in addition to services being discoverable the 'classic'
way:
<svcname>.<namespace>.svc.cluster.local
They are now discoverable as:
<svcname>.<namespace>.svc.<fqdn>
For instance, on k0 you can now internally resolve:
$ kubectl run --rm -it foo --image=nixery.dev/shell/dnsutils bash
bash-4.4# dig +short coffee-svc.default.svc.k0.hswaw.net
10.10.12.192
Change-Id: Ie6875b54ed6358f30f888ca0cd96e011520ace20
Every benji backup seems to cycle blocks (eg. delete some and recreate
them).
Since wasabi has a minimum billing retention policy of 90 days, this
means that every object uploaded and then deleted an hour later still
costs us.
Currently we seem to be storing around 200G of data in wasabi for Benji
but already have 600G of deleted objects. This is suboptimal.
This change has already been deployed on production.
Change-Id: I67302d23a1c45974fb5d51ec9a8cff28260830dc
rules_pip has a new version [1] of their rule system, incompatible with the
version we used, that fixes a bunch of issues, notably:
- explicit tagging of repositories for PY2/PY3/PY23 support
- removal of dependency on host pip (in exchange for having to vendor
wheels)
- higher quality tooling for locking
We update to the newer version of pip_rules, rename the external
repository to pydeps and move requirements.txt, the lockfile and the
newly vendored wheels to third_party/, where they belong.
[1] - https://github.com/apt-itude/rules_pip/issues/16
Change-Id: I1065ee2fc410e52fca2be89fcbdd4cc5a4755d55
Some pods got evicted. Some of them broke.
- postgres in matrix and nginx in internet because of the new policies
(chown issues)
- cas proxy in matrix because apparently the image was not reuploaded
to the registry after ceph-waw1 died, and another node didn't have it
- registry because it had a weak image pin and downgraded to some
broken version on another node
Change-Id: I836036872629843c8ede1b7f67982112c90d71f0
Here we introduce benji [1], a backup system based on backy2. It lets us
backup Ceph RBD objects from Rook into Wasabi, our offsite S3-compatible
storage provider.
Benji runs as a k8s CronJob, every hour at 42 minutes. It does the
following:
- runs benji-pvc-backup, which iterates over all PVCs in k8s, and backs
up their respective PVs to Wasabi
- runs benji enforce, marking backups outside our backup policy [2] as
to be deleted
- runs benji cleanup, to remove unneeded backups
- runs a custom script to backup benji's sqlite3 database into wasabi
(unencrypted, but we're fine with that - as the metadata only contains
image/pool names, thus Ceph PV and pool names)
[1] - https://benji-backup.me/index.html
[2] - latest3,hours48,days7,months12, which means the latest 3 backups,
then one backup for the next 48 hours, then one backup for the next
7 days, then one backup for the next 12 months, for a total of 65
backups (deduplicated, of course)
We also drive-by update some docs (make them more clearly separated into
user/admin docs).
Change-Id: Ibe0942fd38bc232399c0e1eaddade3f4c98bc6b4
This port was leaking kubelet state, including information on running
pods. No secrets were leaked (if they were not text-pasted into
env/args), but this still shouldn't be available.
As far as I can tell, nothing depends on this port, other than some
enterprise load balancers that require HTTP for node 'health' checks.
Change-Id: I9549b73e0168fe3ea4dce43cbe8fdc2ca4575961
Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the kubernetes cluster.
Currently, all users get a personal-$username namespace in which they
have administrative rights. Otherwise, they get no access.
In addition, we define a static CRB to allow some admins access to
everything. In the future, this will be more granular.
We also update relevant documentation.
Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
We just got this email:
We've been working with Jetstack, the authors of cert-manager, on a
series of fixes to the client. Cert-manager sometimes falls into a
traffic pattern where it sends really excessive traffic to Let's
Encrypt's servers, continuously. To mitigate this, we plan to start
blocking all traffic from cert-manager versions less than 0.8.0 (the
current semver minor release), as of November 1, 2019. Please upgrade
all of your cert-manager instances before then.
We're sending this email because this is the contact address of your
cert-manager instance at:
185.236.240.37 .
Version 0.8.0 is much better but we still observe excessive traffic in
some cases. We're working with Jetstack to improve these cases. As new
versions of cert-manager are released, we will add the non-current
versions to our block list after 3 months. We strongly encourage
cert-manager users to stay up-to-date with new versions.
Also, there is an opportunity to help both Jetstack and Let's Encrypt.
Once you've upgraded, please check the logs for your cert-manager
instances from time to time. Are they making excessive requests to Let's
Encrypt (more than, say, 10 per day over multiple days)? If so, please
share details at https://github.com/jetstack/cert-manager/issues/1948 .
Thanks,
Let's Encrypt Team
Change-Id: Ic7152150ac1c96941423878c6d4b6209e07429cf
We accidentally created crdb-waw2 in
https://gerrit.hackerspace.pl/c/hscloud/+/2.
We remove it now and also backport a manual change that makes the
crdb-waw1 service public via a LoadBalancer.
Change-Id: I3bbd6f01b82c6efa458cc44776f086ba36e9f20c
Nixops requires nix_rules, which in turn requires a working nix
installation.
When we split tools/install.sh into tools/install.sh and
cluster/tools/install.sh [1], we accidentally made the latter always install
all cluster tools, including nixops - even if the install.sh script
detected that the system does not have Nix installed.
[1] - https://gerrit.hackerspace.pl/c/hscloud/+/81
Change-Id: Ib5357cfe125f1393b395b28062787f3f0091f549
This makes the registry automatically part of the cluster
infrastructure.
Tested by running kubecfg diff, no diffs (apart from out-of-date ACLs)
found.
Change-Id: Ic0635e789cf3fb851f410bcf2865326f1fa87545
python_rules is completely broken when it comes to py2/py3 support.
Here, we replace it with native python rules from new Bazel versions [1] and rules_pip for PyPI dependencies [2].
rules_pip is somewhat little known and experimental, but it seems to work much better than what we had previously.
We also unpin rules_docker and fix .bazelrc to force Bazel into Python 2 mode - hopefully, this repo will now work
fine under operating systems where `python` is python2 (as the standard dictates).
[1] - https://docs.bazel.build/versions/master/be/python.html
[2] - https://github.com/apt-itude/rules_pip
Change-Id: Ibd969a4266db564bf86e9c96275deffb9610dd44
IP addresses are not necessary in the topology definitions of a
cockroach cluster.
They were mis-committed leftovers from trying to run the cluster on
DaemonSets with hostNetworking: true.
Change-Id: I4ef1f6ed9a745efc6b05846bc13aba9d1f8dc7c8
This prevents a bug where kubecfg fails to update the client pod when
running a cluster/kube/cluster.jsonnet update. The pod update is
attempted because of runtime/intent differences at serviceAccounts
specification, which causes kubecfg to see a diff, which causes it to
attempt and update, which causes kube-apiserver to reject the change
(because pods are immutable), which causes kubecfg to fail.
Change-Id: I20b0ecbb264213a2eb483d475c7683b4965c82be
We move away from the StatefulSet based deployment to manually starting
a deployment per intended node. This allows us to pin individual
instances of Cockroach to particular nodes, so that they stay
co-located with their data.
We refactor this library to:
- support multiple databases, but with a strong suggestion of having
one per k8s cluster
- drop the database creation logic
- redo naming (allowing for two options: multiple clusters per
namespace or an exclusive namespace for the cluster)
- unhardcode dns names