ceph-waw2 currently has some production issues [1] which have started to
cause write failures in the registry. The registry is the only user of
ceph-waw2's affected pool, so we reduce the dumpster fire blast radius
by moving it over to ceph-waw3.
This has already been deployed and data has been migrated over (via
s3cmd sync), and the migration has been verified (by a push and pull,
and a pull of an older image).
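For the record, the sync was along these lines (bucket name and s3cmd
config paths here are illustrative, not the exact ones used):
$ s3cmd -c ceph-waw2.cfg sync s3://registry ./registry-dump/
$ s3cmd -c ceph-waw3.cfg sync ./registry-dump/ s3://registry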
[1] - pgs stuck inactive in the object storage pool
Change-Id: I26789b52008bb7be953954ec3fd3dd727ac15347
In addition to k8s certificates, prodaccess now issues HSPKI
certificates, with DN=$username.sso.hswaw.net. These are installed into
XDG_CONFIG_HOME (or the OS equivalent).
//go/pki will now automatically attempt to load these certificates. This
means you can now run any pki-dependent tool without -hspki_disable, and
with automatic mTLS!
Change-Id: I5b28e193e7c968d621bab0d42aabd6f0510fed6d
"Anyone can pull all images" rule did only match on anonymous users. Now
it should match all users, including authenticated ones.
Change-Id: I2205299093feca51f30526ba305eadbaa0a68ecb
We would like gitea to have its ssh server exposed on TCP port 22 on the
same address as its web interface. We would also still like to use all
the automation around ingresses already in place (like cert-manager
integration).
To solve this, we create an additional LoadBalancer service for
nginx-ingress-controller and set up a special tcp-services forwarding
rule to pass port 22 traffic to the gitea-prod/gitea service, like we
already do
in case of gerrit.
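The forwarding rule itself boils down to a tcp-services ConfigMap entry
along these lines (the controller namespace and ConfigMap name are
assumptions here - in hscloud this is generated from jsonnet rather than
patched by hand):
$ kubectl -n nginx-system patch configmap tcp-services \
    --type merge -p '{"data": {"22": "gitea-prod/gitea:22"}}'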
Change-Id: I5bfc901ebe858464f8e9c2f3b2216b254ccd6c4d
It was getting large and unwieldy (to the point where kubecfg was slow).
In this change, we:
- move the Cluster function to cluster.libsonnet
- move the Cluster instantiation into k0.libsonnet
- shuffle some fields around to make sure things are well split between
k0-specific and general cluster configs.
- add 'view' files that build on 'cluster.libsonnet' to allow rendering
either the entire k0 state, or some subsets (for speed)
- update the documentation, drive-by some small fixes and reindentation
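The idea is that you can now render either the whole cluster state or
just a view of it, e.g. (paths and view file names below are made up for
illustration):
$ kubecfg show kube/k0.jsonnet          # render the entire k0 state
$ kubecfg show kube/k0-certs.jsonnet    # render just one subset, faster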
Change-Id: I4b8d920b600df79100295267efe21b8c82699d5b
We handwavingly plan on implementing monitoring as a two-tier system:
- a 'global' component that is responsible for global aggregation,
long-term storage and alerting.
- multiple 'per-cluster' components, that collect metrics from
Kubernetes clusters and export them to the global component.
In addition, several lower tiers (collected by per-cluster components)
might also be implemented in the future - for instance, specific to some
subprojects.
Here we start sketching out some basic jsonnet structure (currently all
in a single file, with little parametrization) and a cluster-level
prometheus server that scrapes Kubernetes Node and cAdvisor metrics.
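Both of these are served by the kubelet; a quick way to eyeball what the
cluster-level prometheus will be scraping (node name is a placeholder):
$ kubectl get --raw /api/v1/nodes/<node>/proxy/metrics | head
$ kubectl get --raw /api/v1/nodes/<node>/proxy/metrics/cadvisor | head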
This review is mostly to get this committed as early as possible, and to
make sure that the little existing Prometheus scrape configuration is
sane.
Change-Id: If37ac3b1243b8b6f464d65fee6d53080c36f992c
This was an attempt to make new calico nodes use a full FQDN. However,
this change seemingly also makes the calico control plane use the FQDN
for all existing nodes, as such breaking CNI for new pods.
We revert this change, thereby keeping all calico node names as
hostnames. We could fix this by editing /var/lib/calico/nodename on
hosts to FQDNs, but it might not be worth the effort.
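For the record, that manual fix would be roughly the following on each
affected host (plus restarting the local calico-node pod afterwards):
$ printf '%s' "$(hostname -f)" | sudo tee /var/lib/calico/nodename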
See https://github.com/projectcalico/calico/issues/1093 for more
context.
Change-Id: I52bfb00f604053d57d3009aebd6c50db7dc74f58
We still use etcd as the data store (and as such didn't set up k8s CRDs
for Calico), but that's okay for now.
Change-Id: If6d66f505c6b40f2646ffae7d33d0d641d34a963
This previously allowed all namespace admins (i.e. personal-$user
namespace users) to create any sort of object they wanted within that
namespace.
This could've been exploited to create a RoleBinding that would then
allow binding a serviceaccount to the insecure podsecuritypolicy,
thereby allowing escalation to root on nodes.
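Roughly (with made-up role and binding names), the problematic pattern
was:
$ kubectl -n personal-$USER create rolebinding psp-escalate \
    --clusterrole=insecure-psp-user --serviceaccount=personal-$USER:default
after which any pod running as that serviceaccount could use the
insecure podsecuritypolicy.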
As far as I've checked, this hasn't been exploited, and the access to
the k8s cluster has so far also been limited to trusted users.
This has been deployed to production.
Change-Id: Icf8747d765ccfa9fed843ec9e7b0b957ff27d96e
This bumps Rook/Ceph. The new resources (mostly RBAC) come from
following https://rook.io/docs/rook/v1.1/ceph-upgrade.html .
It's already deployed on production. The new CSI driver has not been
tested, but the old flexvolume-based provisioners still work. We'll
migrate when Rook offers a nice solution for this.
We've hit a kubecfg bug that does not allow controlling the CephCluster
CRD directly anymore (I had to apply it via kubecfg show / kubectl apply
-f instead). This might be due to our bazel/prod k8s version mismatch,
or it might be related to https://github.com/bitnami/kubecfg/issues/259.
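That workaround amounts to something like (the jsonnet path is a
placeholder):
$ kubecfg show <cluster jsonnet> | kubectl apply -f -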
Change-Id: Icd69974b294b823e60b8619a656d4834bd6520fd
This makes clustercfg ensure certificates are valid for at least 30
days, and renew them otherwise.
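Conceptually, per certificate this boils down to a check like the
following (the file path is a placeholder):
$ openssl x509 -in cert.pem -noout -checkend $((30*24*3600)) || echo renew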
We use this to bump all the certs that were about to expire in a week.
They are now valid until 2021.
There's still some certs that expire in 2020. We need to figure out a
better story for this, especially as the next expiry is 2021 - today's
prod rollout was somewhat disruptive (basically this was done by a full
cluster upgrade-like rollout flow, via clustercfg).
We also drive-by bump the number of mons in ceph-waw3 to 3, as it should
be (this gets rid of a nasty SPOF that would've bitten us during this
upgrade otherwise).
Change-Id: Iee050b1b9cba4222bc0f3c7bce9e4cf9b25c8bdc
In preparation for updating to 1.1.0, which will be much more involved.
Also fix a typo in registry.libsonnet, whoops.
Change-Id: I7668bf53c7580f99fdf56fe6227f04a468f8de50
This reflects current production. This needs to get bumped up to 3 at
some point as otherwise we lose HA for this cluster.
Change-Id: Ie5937e6a216b635ecbc4c82ecd182a410167c3f8
This needed an upstream change to allow only some pools to be backed up,
otherwise benji would crash when stumbling upon the first PVC from a
pool that wasn't backed by the ceph cluster it was acting upon.
Change-Id: I52bf163c16352cb59fdd3dbdd576145ce1dbac03
This productionizes smsgw.
We also add some jsonnet machinery to provide a unified service for Go
micro/mirkoservices.
This machinery provides all the nice stuff:
- a deployment
- a service for all your types of ports
- TLS certificates for HSPKI
We also update and test hspki for a new name scheme.
Change-Id: I292d00f858144903cbc8fe0c1c26eb1180d636bc
In https://gerrit.hackerspace.pl/c/hscloud/+/70 we accidentally
introduced a split-horizon DNS situation:
- k0.hswaw.net from the Internet resolves to nodes running the k8s API
servers, and as such can serve API server traffic
- k0.hswaw.net from the cluster returned no results
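The in-cluster half of that was visible with e.g.:
$ kubectl run --rm -it foo --image=nixery.dev/shell/dnsutils -- \
    dig +short k0.hswaw.net
which returned nothing, while the same query from the Internet returned
the node addresses.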
This broke prodvider in two ways:
- it dialed the API servers at k0.hswaw.net
- even after the endpoint was moved to
kubernetes.default.svc.k0.hswaw.net, the apiserver cert didn't cover
that
Thus, we not only had to change the prodvider endpoint but also the
apiserver certs to cover this new name.
I'm not sure this should be the target fix. I think at some point we
should only start referring to in-cluster services via their full (or
cluster.local) names, but right now k0.hswaw.net is an exception and as
such remains split, and we have no way to access the internal services
from the outside just yet.
However, getting prodvider to work is important enough that this fix is
IMO good enough for now.
Change-Id: I13d0681208c66f4060acecc78b7ae14b8f8d7125
This way kubernetes consumers don't have to import anything from
cluster/, hopefully.
We also create a small abstraction for local additions for
kube.libsonnet without having to modify upstream.
Change-Id: I209095781f91c8867250a647fe944370cddd67d0
This means that in addition to services being discoverable the 'classic'
way:
<svcname>.<namespace>.svc.cluster.local
They are now discoverable as:
<svcname>.<namespace>.svc.<fqdn>
For instance, on k0 you can now internally resolve:
$ kubectl run --rm -it foo --image=nixery.dev/shell/dnsutils bash
bash-4.4# dig +short coffee-svc.default.svc.k0.hswaw.net
10.10.12.192
Change-Id: Ie6875b54ed6358f30f888ca0cd96e011520ace20
Every benji backup seems to cycle blocks (e.g. delete some and recreate
them).
Since wasabi has a minimum billing retention policy of 90 days, this
means that every object uploaded and then deleted an hour later still
costs us.
Currently we seem to be storing around 200G of data in wasabi for Benji
but already have 600G of deleted objects. This is suboptimal.
This change has already been deployed on production.
Change-Id: I67302d23a1c45974fb5d51ec9a8cff28260830dc
Some pods got evicted. Some of them broke.
- postgres in matrix and nginx in internet because of the new policies
(chown issues)
- cas proxy in matrix because apparently the image was not reuploaded
to the registry after ceph-waw1 died, and another node didn't have it
- registry because it had a weak image pin and downgraded to some
broken version on another node
Change-Id: I836036872629843c8ede1b7f67982112c90d71f0
Here we introduce benji [1], a backup system based on backy2. It lets us
backup Ceph RBD objects from Rook into Wasabi, our offsite S3-compatible
storage provider.
Benji runs as a k8s CronJob, every hour at 42 minutes. It does the
following:
- runs benji-pvc-backup, which iterates over all PVCs in k8s, and backs
up their respective PVs to Wasabi
- runs benji enforce, marking backups outside our backup policy [2] as
to be deleted
- runs benji cleanup, to remove unneeded backups
- runs a custom script to backup benji's sqlite3 database into wasabi
(unencrypted, but we're fine with that - as the metadata only contains
image/pool names, i.e. Ceph PV and pool names)
[1] - https://benji-backup.me/index.html
[2] - latest3,hours48,days7,months12, which means the latest 3 backups,
then one backup per hour for the next 48 hours, then one backup per day
for the next 7 days, then one backup per month for the next 12 months,
for a total of 65 backups (deduplicated, of course)
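For reference, a single run is essentially (a sketch, not the exact
invocations baked into the image):
$ benji-pvc-backup
$ benji enforce latest3,hours48,days7,months12 <version filter>
$ benji cleanup
plus the sqlite3 metadata upload.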
We also drive-by update some docs (make them more separated into
user/admin docs).
Change-Id: Ibe0942fd38bc232399c0e1eaddade3f4c98bc6b4
Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the kubernetes cluster.
Currently, all users get a personal-$username namespace in which they
have administrative rights. Otherwise, they get no access.
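From a user's point of view this boils down to:
$ kubectl -n personal-$USER auth can-i create deployment   # yes
$ kubectl -n default auth can-i create deployment          # no
(the latter assuming the user is not covered by the admin CRB described
below)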
In addition, we define a static CRB to allow some admins access to
everything. In the future, this will be more granular.
We also update relevant documentation.
Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
We just got this email:
We've been working with Jetstack, the authors of cert-manager, on a
series of fixes to the client. Cert-manager sometimes falls into a
traffic pattern where it sends really excessive traffic to Let's
Encrypt's servers, continuously. To mitigate this, we plan to start
blocking all traffic from cert-manager versions less than 0.8.0 (the
current semver minor release), as of November 1, 2019. Please upgrade
all of your cert-manager instances before then.
We're sending this email because this is the contact address of your
cert-manager instance at:
185.236.240.37 .
Version 0.8.0 is much better but we still observe excessive traffic in
some cases. We're working with Jetstack to improve these cases. As new
versions of cert-manager are released, we will add the non-current
versions to our block list after 3 months. We strongly encourage
cert-manager users to stay up-to-date with new versions.
Also, there is an opportunity to help both Jetstack and Let's Encrypt.
Once you've upgraded, please check the logs for your cert-manager
instances from time to time. Are they making excessive requests to Let's
Encrypt (more than, say, 10 per day over multiple days)? If so, please
share details at https://github.com/jetstack/cert-manager/issues/1948 .
Thanks,
Let's Encrypt Team
Change-Id: Ic7152150ac1c96941423878c6d4b6209e07429cf
We accidentally created crdb-waw2 in
https://gerrit.hackerspace.pl/c/hscloud/+/2.
We remove it now and also backport a manual change that makes the
crdb-waw1 service public via a LoadBalancer.
Change-Id: I3bbd6f01b82c6efa458cc44776f086ba36e9f20c