We change the existing behaviour (copy files & run nixos-rebuild switch)
to something closer to nixops-style. This means that admin machines used
for provisioning now need Nix installed locally, but that's probably an
okay choice to make.
The upside of this approach is that it's easier to debug and test
derivations, as all data is local to the repo and the workstation, and
deploying just means copying a configuration closure and switching the
system to it. At some point we should even be able to run the entire
cluster within a set of test VMs.
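For illustration, a deploy now boils down to roughly the following (a
sketch only; the Nix attribute and node name are placeholders, not our
actual tooling):
$ nix-build -A cluster.machines.node1 default.nix
$ nix-copy-closure --to root@node1.hswaw.net ./result
$ ssh root@node1.hswaw.net ./result/bin/switch-to-configuration switch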
We also bump the kubernetes control plane to 1.14. Kubelets are still at
1.13 and their upgrade is coming up today too.
Change-Id: Ia9832c47f258ee223d93893d27946d1161cc4bbd
This needed an upstream change to allow only some pools to be backed up,
otherwise benji would crash when stumbling upon the first PVC from a
pool that wasn't backed by the ceph cluster it was acting upon.
Change-Id: I52bf163c16352cb59fdd3dbdd576145ce1dbac03
This time from a bare hscloud checkout to make sure _nothing_ is fucked
up.
This causes no change remotely, just makes the repo reflect reality.
Change-Id: Ie8db01300771268e0371c3cdaf1930c8d7cbfb1a
This productionizes smsgw.
We also add some jsonnet machinery to provide a unified service for Go
micro/mirkoservices.
This machinery provides all the nice stuff:
- a deployment
- a service for all your types of ports
- TLS certificates for HSPKI
We also update and test hspki for a new name scheme.
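Put together, the objects this machinery creates for a service can be
inspected with something like the following (the namespace name is just
a guess, for illustration):
$ kubectl -n smsgw get deployment,service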
Change-Id: I292d00f858144903cbc8fe0c1c26eb1180d636bc
In https://gerrit.hackerspace.pl/c/hscloud/+/70 we accidentally
introduced a split-horizon DNS situation:
- k0.hswaw.net from the Internet resolves to nodes running the k8s API
servers, and as such can serve API server traffic
- k0.hswaw.net from the cluster returned no results
This broke prodvider in two ways:
- it dialed the API servers at k0.hswaw.net
- even after the endpoint was moved to
kubernetes.default.svc.k0.hswaw.net, the apiserver cert didn't cover
that
Thus, not only did we have to change the prodvider endpoint, but we also
had to change the API server certs to cover this new name.
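A sanity check, run from a pod inside the cluster (the port here is
illustrative):
$ dig +short kubernetes.default.svc.k0.hswaw.net
$ echo | openssl s_client -connect kubernetes.default.svc.k0.hswaw.net:443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'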
I'm not sure this should be the target fix. I think at some point we
should start referring to in-cluster services only via their full (or
cluster.local) names, but right now k0.hswaw.net is an exception (and
thus a split-horizon name), and we have no way to access the internal
services from the outside just yet.
However, getting prodvider to work is important enough that this fix is
IMO good enough for now.
Change-Id: I13d0681208c66f4060acecc78b7ae14b8f8d7125
This way kubernetes consumers don't have to import anything from
cluster/, hopefully.
We also create a small abstraction for local additions to
kube.libsonnet without having to modify upstream.
Change-Id: I209095781f91c8867250a647fe944370cddd67d0
This means that in addition to services being discoverable the 'classic'
way:
<svcname>.<namespace>.svc.cluster.local
They are now discoverable as:
<svcname>.<namespace>.svc.<fqdn>
For instance, on k0 you can now internally resolve:
$ kubectl run --rm -it foo --image=nixery.dev/shell/dnsutils bash
bash-4.4# dig +short coffee-svc.default.svc.k0.hswaw.net
10.10.12.192
Change-Id: Ie6875b54ed6358f30f888ca0cd96e011520ace20
Every benji backup seems to cycle blocks (e.g. delete some and recreate
them).
Since Wasabi has a minimum billing retention policy of 90 days, this
means that every object uploaded and then deleted an hour later still
costs us for the full 90 days.
Currently we seem to be storing around 200G of data in Wasabi for Benji
but already have 600G of deleted objects. This is suboptimal.
This change has already been deployed on production.
Change-Id: I67302d23a1c45974fb5d51ec9a8cff28260830dc
rules_pip has a new version [1] of its rule system, incompatible with the
version we used, that fixes a bunch of issues, notably:
- explicit tagging of repositories for PY2/PY3/PY23 support
- removal of dependency on host pip (in exchange for having to vendor
wheels)
- higher quality tooling for locking
We update to the newer version of rules_pip, rename the external
repository to pydeps and move requirements.txt, the lockfile and the
newly vendored wheels to third_party/, where they belong.
[1] - https://github.com/apt-itude/rules_pip/issues/16
Change-Id: I1065ee2fc410e52fca2be89fcbdd4cc5a4755d55
Some pods got evicted. Some of them broke.
- postgres in matrix and nginx in internet because of the new policies
(chown issues)
- cas proxy in matrix because apparently the image was not reuploaded
to the registry after ceph-waw1 died, and another node didn't have it
- registry because it had a weak image pin and downgraded to some
broken version on another node
Change-Id: I836036872629843c8ede1b7f67982112c90d71f0
Here we introduce benji [1], a backup system based on backy2. It lets us
backup Ceph RBD objects from Rook into Wasabi, our offsite S3-compatible
storage provider.
Benji runs as a k8s CronJob, every hour at 42 minutes past the hour. It
does the following (sketched as commands below):
- runs benji-pvc-backup, which iterates over all PVCs in k8s, and backs
up their respective PVs to Wasabi
- runs benji enforce, marking backups outside our backup policy [2]
for deletion
- runs benji cleanup, to remove unneeded backups
- runs a custom script to backup benji's sqlite3 database into wasabi
(unencrypted, but we're fine with that, as the metadata only contains
image/pool names, i.e. Ceph PV and pool names)
[1] - https://benji-backup.me/index.html
[2] - latest3,hours48,days7,months12, which means the latest 3 backups,
then one backup per hour for the next 48 hours, then one per day for the
next 7 days, then one per month for the next 12 months, for a total of
65 backups (deduplicated, of course)
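Roughly, an hourly run amounts to the following (a sketch;
version-selection arguments and the database-upload script are elided,
and exact invocations depend on our wrappers and the benji version):
benji-pvc-backup                              # back up every PVC's PV to Wasabi
benji enforce latest3,hours48,days7,months12  # mark out-of-policy backups for deletion
benji cleanup                                 # drop data no longer referenced by any backup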
We also drive-by update some docs (make them more separated into
user/admin docs).
Change-Id: Ibe0942fd38bc232399c0e1eaddade3f4c98bc6b4
This port was leaking kubelet state, including information on running
pods. No secrets were leaked (unless they were text-pasted into
env/args), but this still shouldn't be available.
As far as I can tell, nothing depends on this port, other than some
enterprise load balancers that require HTTP for node 'health' checks.
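Assuming this refers to the kubelet read-only port (10255 by default;
the node name below is only an example), a quick check that it no longer
answers:
$ curl -m 5 http://somenode.hswaw.net:10255/pods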
Change-Id: I9549b73e0168fe3ea4dce43cbe8fdc2ca4575961
Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the kubernetes cluster.
Currently, all users get a personal-$username namespace in which they
have administrative rights. Otherwise, they get no access.
In addition, we define a static CRB to allow some admins access to
everything. In the future, this will be more granular.
We also update relevant documentation.
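For illustration, everyday usage should look roughly like this
(assuming prodaccess needs no arguments and your SSO username matches
your local $USER):
$ prodaccess
$ kubectl -n personal-$USER get pods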
Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
We just got this email:
We've been working with Jetstack, the authors of cert-manager, on a
series of fixes to the client. Cert-manager sometimes falls into a
traffic pattern where it sends really excessive traffic to Let's
Encrypt's servers, continuously. To mitigate this, we plan to start
blocking all traffic from cert-manager versions less than 0.8.0 (the
current semver minor release), as of November 1, 2019. Please upgrade
all of your cert-manager instances before then.
We're sending this email because this is the contact address of your
cert-manager instance at:
185.236.240.37.
Version 0.8.0 is much better but we still observe excessive traffic in
some cases. We're working with Jetstack to improve these cases. As new
versions of cert-manager are released, we will add the non-current
versions to our block list after 3 months. We strongly encourage
cert-manager users to stay up-to-date with new versions.
Also, there is an opportunity to help both Jetstack and Let's Encrypt.
Once you've upgraded, please check the logs for your cert-manager
instances from time to time. Are they making excessive requests to Let's
Encrypt (more than, say, 10 per day over multiple days)? If so, please
share details at https://github.com/jetstack/cert-manager/issues/1948 .
Thanks,
Let's Encrypt Team
Change-Id: Ic7152150ac1c96941423878c6d4b6209e07429cf
We accidentally created crdb-waw2 in
https://gerrit.hackerspace.pl/c/hscloud/+/2.
We remove it now and also backport a manual change that makes the
crdb-waw1 service public via a LoadBalancer.
Change-Id: I3bbd6f01b82c6efa458cc44776f086ba36e9f20c
Nixops requires nix_rules, which in turn requires a working nix
installation.
When we split tools/install.sh into tools/install.sh and
cluster/tools/install.sh [1], we accidentally made the latter always install
all cluster tools, including nixops - even if the install.sh script
detected that the system does not have Nix installed.
[1] - https://gerrit.hackerspace.pl/c/hscloud/+/81
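The missing check is essentially a guard along these lines (an
illustrative sketch, not the literal script contents; the Bazel target
name is made up):
if command -v nix-build >/dev/null 2>&1; then
    bazel build //cluster/tools:nixops  # hypothetical target name
fi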
Change-Id: Ib5357cfe125f1393b395b28062787f3f0091f549
This makes the registry automatically part of the cluster
infrastructure.
Tested by running kubecfg diff, no diffs (apart from out-of-date ACLs)
found.
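For reference, the check was something along the lines of (using the
cluster config path referenced elsewhere in this repo):
$ kubecfg diff cluster/kube/cluster.jsonnet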
Change-Id: Ic0635e789cf3fb851f410bcf2865326f1fa87545
python_rules is completely broken when it comes to py2/py3 support.
Here, we replace it with native python rules from new Bazel versions [1] and rules_pip for PyPI dependencies [2].
rules_pip is somewhat little known and experimental, but it seems to work much better than what we had previously.
We also unpin rules_docker and fix .bazelrc to force Bazel into Python 2 mode - hopefully, this repo will now work
fine under operating systems where `python` is python2 (as the standard dictates).
[1] - https://docs.bazel.build/versions/master/be/python.html
[2] - https://github.com/apt-itude/rules_pip
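The Python 2 forcing amounts to .bazelrc entries along these lines
(illustrative; the exact flags in this change may differ):
build --host_force_python=PY2
build --python_version=PY2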
Change-Id: Ibd969a4266db564bf86e9c96275deffb9610dd44
IP addresses are not necessary in the topology definitions of a
cockroach cluster.
They were mis-committed leftovers from trying to run the cluster on
DaemonSets with hostNetworking: true.
Change-Id: I4ef1f6ed9a745efc6b05846bc13aba9d1f8dc7c8
This prevents a bug where kubecfg fails to update the client pod when
running a cluster/kube/cluster.jsonnet update. The pod update is
attempted because of runtime/intent differences in the serviceAccount
specification, which causes kubecfg to see a diff, which causes it to
attempt an update, which causes kube-apiserver to reject the change
(because pods are immutable), which causes kubecfg to fail.
Change-Id: I20b0ecbb264213a2eb483d475c7683b4965c82be
We move away from the StatefulSet based deployment to manually starting
a deployment per intended node. This allows us to pin individual
instances of Cockroach to particular nodes, so that they stay
co-located with their data.
We refactor this library to:
- support multiple databases, but with a strong suggestion of having
one per k8s cluster
- drop the database creation logic
- redo naming (allowing for two options: multiple clusters per
namespace or an exclusive namespace for the cluster)
- unhardcode dns names
This pretty large change does the following:
- moves nix from bootstrap.hswaw.net to nix/
- changes clustercfg to use cfssl and moves it to cluster/clustercfg
- changes clustercfg to source information about target location of
certs from nix
- changes clustercfg to push nix config
- changes tls certs to have more than one CA
- recalculates all TLS certs
(it keeps the old serviceaccounts key, otherwise we end up with
invalid serviceaccounts - the cert doesn't match, but who cares,
it's not used anyway)
This is so that Calico starts with the proper subnet. Feeding it just an
IP from the node status means it parses it as a /32 and uses IPIP
tunnels for all connectivity.