Also make dataplane-only nodes actually work:
- make kube-proxy use the same package as kubelet
- disable firewall
Change-Id: I7babbb749656e6f75151c8eda6e3f09f3c6bff5f
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1686
Reviewed-by: q3k <q3k@hackerspace.pl>
This is a mega-change, but attempting to split this up further is
probably not worth the effort.
Summary:
1. Bump up bazel, rules_go, and others.
2. Switch to new go target naming (bye bye go_default_library)
3. Move Go deps to go.mod/go.sum and use make gazelle to generate BUILD
   files from that (see the sketch below)
4. Bump up Python deps a bit
And also whatever was required to actually get things to work - loads of
small useless changes.
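The new dependency flow is roughly the following (just a sketch of the
workflow implied by point 3, assuming make gazelle wraps gazelle):
$ go mod tidy    # go.mod/go.sum are now the source of truth for Go deps
$ make gazelle   # regenerate BUILD files / third-party deps from that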
Tested to work on NixOS and Ubuntu 20.04:
$ bazel build //...
$ bazel test //...
Change-Id: I8364bdaa1406b9ae4d0385a6b607f3e7989f98a9
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1583
Reviewed-by: q3k <q3k@hackerspace.pl>
This completes the migration away from the old CA/cert infrastructure.
The tool which was used to generate all these certs will come next. It's
effectively a reimplementation of clustercfg in Go.
We also removed the unused kube-serviceaccounts cert, which was
generated by the old tooling for no good reason (we only need a key for
service accounts, not an actual cert...).
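For context (file names here are illustrative, not our actual paths): a
bare keypair is all Kubernetes needs for service accounts - the API
server only verifies tokens against a public key, and the
controller-manager signs them with the private key:
$ openssl genrsa -out sa.key 2048              # signing key for SA tokens
$ openssl rsa -in sa.key -pubout -out sa.pub   # public key for verification
# kube-apiserver ... --service-account-key-file=sa.pub
# kube-controller-manager ... --service-account-private-key-file=sa.key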
Change-Id: Ied9e5d8fc90c64a6b4b9fdd20c33981410c884b4
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1501
Reviewed-by: q3k <q3k@hackerspace.pl>
This finishes the regeneration of all cluster CAs/certs as never-expiring
ED25519 certs.
We still have leftovers of the old kube CA (and it's still being accepted
by Kubernetes components). Cleaning that up is the next step.
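The openssl equivalent of what was done for each CA looks roughly like
this (the actual certs come from our own tooling, and the subject is
illustrative; X.509 always carries a notAfter, so "never expiring" in
practice means a far-future date):
$ openssl req -x509 -newkey ed25519 -nodes \
      -keyout ca-kube.key -out ca-kube.crt \
      -days 36500 -subj "/CN=kube CA"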
Change-Id: I883f94fd8cef3e3b5feefdf56ee106e462bb04a9
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1500
Reviewed-by: q3k <q3k@hackerspace.pl>
Done:
1. etcd peer CA & certs
2. etcd client CA & certs
3. kube CA (currently all components set to accept both new and old CA,
   new CA called ca-kube-new - see the bundle sketch after the TODO list)
4. kube apiserver
5. kubelet & kube-proxy
6. prodvider intermediate
TODO:
1. kubernetes controller-manager & kubernetes scheduler
2. kubefront CA
3. admitomatic?
4. undo bundle on kube CA components to fully transition away from old
CA
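The "accept both CAs" state from Done item 3 (and the bundle in TODO
item 4) boils down to pointing verifiers at a concatenation of both CA
certs, roughly like this (the bundle file name is illustrative):
$ cat ca-kube.crt ca-kube-new.crt > ca-kube-bundle.crt
# kube-apiserver ... --client-ca-file=.../ca-kube-bundle.crt
# Once everything is reissued from the new CA, the bundle collapses back
# to a single CA cert (TODO item 4).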
Change-Id: If529eeaed9a6a2063bed23c9d81c57b36b9a0115
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1487
Reviewed-by: q3k <q3k@hackerspace.pl>
This has been deployed to k0 nodes.
Current state of cluster certificates:
cluster/certs/ca-etcd.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-etcdpeer.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kube.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kubefront.crt
Not After : Apr 4 17:59:00 2024 GMT
cluster/certs/ca-kube-prodvider.cert
Not After : Sep 1 21:30:00 2021 GMT
cluster/certs/etcd-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/etcd-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/etcd-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcd-calico.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcd-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/etcd-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/etcd-kube.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcdpeer-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/etcdpeer-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/etcdpeer-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/etcdpeer-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/etcdpeer-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/etcd-root.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-apiserver.cert
Not After : Oct 3 15:26:00 2021 GMT
cluster/certs/kube-controllermanager.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kubefront-apiserver.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-kubelet-bc01n01.hswaw.net.cert
Not After : Mar 28 15:53:00 2021 GMT
cluster/certs/kube-kubelet-bc01n02.hswaw.net.cert
Not After : Mar 28 16:45:00 2021 GMT
cluster/certs/kube-kubelet-bc01n03.hswaw.net.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-kubelet-dcr01s22.hswaw.net.cert
Not After : Oct 3 15:33:00 2021 GMT
cluster/certs/kube-kubelet-dcr01s24.hswaw.net.cert
Not After : Oct 3 15:38:00 2021 GMT
cluster/certs/kube-proxy.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-scheduler.cert
Not After : Mar 28 15:15:00 2021 GMT
cluster/certs/kube-serviceaccounts.cert
Not After : Mar 28 15:15:00 2021 GMT
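(The listing above can be reproduced with something like:)
$ for c in cluster/certs/*.crt cluster/certs/*.cert; do \
      echo "$c"; openssl x509 -in "$c" -noout -text | grep 'Not After'; done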
Change-Id: I94030ce78c10f7e9a0c0257d55145ef629195314
This makes clustercfg ensure certificates are valid for at least 30
days, and renew them otherwise.
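The check is conceptually the same as the following (clustercfg
implements it itself; the command is just illustrative):
$ openssl x509 -checkend 2592000 -noout -in cluster/certs/kube-apiserver.cert \
      || echo "expires within 30 days, renewing"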
We use this to bump all the certs that were about to expire in a week.
They are now valid until 2021.
There are still some certs that expire in 2020. We need to figure out a
better story for this, especially as the next expiry is in 2021 - today's
prod rollout was somewhat disruptive (it was basically done via a full
cluster-upgrade-like rollout flow, via clustercfg).
We also drive-by bump the number of mons in ceph-waw3 to 3, as it should
be (this gets rid of a nasty SPOF that would've bitten us during this
upgrade otherwise).
Change-Id: Iee050b1b9cba4222bc0f3c7bce9e4cf9b25c8bdc
This time from a bare hscloud checkout to make sure _nothing_ is fucked
up.
This causes no change remotely, it just makes the repo reflect reality.
Change-Id: Ie8db01300771268e0371c3cdaf1930c8d7cbfb1a
In https://gerrit.hackerspace.pl/c/hscloud/+/70 we accidentally
introduced a split-horizon DNS situation:
- k0.hswaw.net from the Internet resolves to nodes running the k8s API
servers, and as such can serve API server traffic
- k0.hswaw.net from the cluster returned no results
This broke prodvider in two ways:
- it dialed the API servers at k0.hswaw.net
- even after the endpoint was moved to
kubernetes.default.svc.k0.hswaw.net, the apiserver cert didn't cover
that
Thus, we not only had to change the prodvider endpoint, but also change
the apiserver certs to cover this new name.
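After this change the new name should show up among the apiserver cert's
SANs, e.g.:
$ openssl x509 -in cluster/certs/kube-apiserver.cert -noout -text \
      | grep -A1 'Subject Alternative Name'
# should now include kubernetes.default.svc.k0.hswaw.net among the DNS entries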
I'm not sure this should be the target fix. I think at some point we
should start referring to in-cluster services only via their full (or
cluster.local) names, but right now k0.hswaw.net is an exception and as
such remains split-horizon, and we have no way to access the internal
services from the outside just yet.
However, getting prodvider to work is important enough that this fix is
IMO good enough for now.
Change-Id: I13d0681208c66f4060acecc78b7ae14b8f8d7125
Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the kubernetes cluster.
Currently, all users get a personal-$username namespace in which they
have administrative rights. Otherwise, they get no access.
In addition, we define a static CRB to allow some admins access to
everything. In the future, this will be more granular.
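What prodvider sets up per user is roughly equivalent to the following
(the username is illustrative; prodvider creates these objects via the
API itself):
$ kubectl create namespace personal-alice
$ kubectl create rolebinding admin --clusterrole=admin \
      --user=alice --namespace=personal-alice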
We also update relevant documentation.
Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
This pretty large change does the following:
- moves nix from bootstrap.hswaw.net to nix/
- changes clustercfg to use cfssl (see the sketch at the end) and moves it
  to cluster/clustercfg
- changes clustercfg to source information about target location of
certs from nix
- changes clustercfg to push nix config
- changes TLS certs to have more than one CA
- recalculates all TLS certs
(it keeps the old serviceaccounts key, otherwise we end up with
invalid serviceaccounts - the cert doesn't match, but who cares,
it's not used anyway)
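For reference, issuing a cert through cfssl (which clustercfg now drives)
looks roughly like this - file names and profile are illustrative:
$ cfssl gencert -ca=ca-kube.pem -ca-key=ca-kube-key.pem \
      -config=ca-config.json -profile=server kube-apiserver-csr.json \
      | cfssljson -bare kube-apiserver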