Commit Graph

23 Commits (3e5f4382dfe32d422d4e262153bd83346e08c954)

Author SHA1 Message Date
q3k 99b91b11f1 cluster/k0/admitomatic: add .hswaw.net to hswaw-prod namespace
This was preventing certificate refresh in the hswaw-prod mirko ingress.
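
A minimal jsonnet sketch of such a rule, assuming a dns-suffix-to-namespace allow map; the field names here are hypothetical and not necessarily admitomatic's real configuration schema:

    // Hypothetical shape: allow the hswaw-prod namespace to claim
    // ingress hostnames under .hswaw.net. Field names illustrative only.
    admitomatic+: {
        dns+: {
            'hswaw.net': ['hswaw-prod'],  // namespaces allowed this suffix
        },
    },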

Change-Id: I14b18b642a3948a9864e2d9a90b2a2b2c145b9b1
2021-03-28 17:34:34 +00:00
q3k bf266c6aaf cluster/k0: add dns crdb user
In preparation for running PowerDNS on k0.

Change-Id: I853c7465a6a32d02628fa6cfdeb445eb9937b3be
2021-03-17 21:49:00 +00:00
q3k 64de7afe32 cluster/kube/k0: fix syntax errors
This happened in 793ca1b3 and slipped past review.

Change-Id: Ie31f0e1ec03d6e4545d6683b21f528550bf4ef9f
2021-03-17 21:47:51 +00:00
q3k 793ca1b3b2 cluster/kube: limit OSDs in ceph-waw3 to 8GB RAM
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.

I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running nodes that full without an underlying resource request makes for
a terrible draining experience).

We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?

Followup: b.hswaw.net/29

Already deployed to production.
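
With rook, a cap like this is usually expressed as Kubernetes resource
requests/limits on the OSD pods. A minimal jsonnet sketch, assuming a
stock rook CephCluster spec (the surrounding field names are
illustrative):

    // Sketch: cap OSD pods at 8Gi, per the ~1GB RAM per 1TB of storage
    // rule of thumb (6TB drive -> 6Gi, rounded up to 8Gi).
    cephCluster+: {
        spec+: {
            resources+: {
                osd: {
                    requests: { memory: '6Gi' },
                    limits: { memory: '8Gi' },
                },
            },
        },
    },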

Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
2021-03-07 00:09:58 +00:00
q3k 877cf0af26 🅱️
Fixes b/8

Change-Id: I5a5779c3688451d89c0601dc913143d75048c9f6
2021-02-08 15:10:11 +00:00
q3k 3c5d836c56 cluster/kube: deploy admitomatic
This doesn't yet enable a webhook, but deploys admitomatic itself.

Change-Id: Id177bc8841c873031f9c196b8ff3c12dd846ba8e
2021-02-07 19:19:02 +00:00
informatic f4a6a56662 cluster/kube/k0: add issues.hackerspace.pl crdb user
Change-Id: If78f795e0e35360b65c666e6b217037fc34a2ccf
2021-02-01 21:32:25 +01:00
informatic 3b8a43f35d cluster/kube/k0: add issues.hackerspace.pl ceph s3 user
Change-Id: If5eef3404bdc08ded88e46f45bad0f9abcdb0f1c
2021-02-01 21:19:59 +01:00
patryk edf14cc5f4 crdb: replace bc01n03 with dcr01s22, upgrade to v20.2.4
This change reflects the current production state.

Upgrade was done by going through following versions:
19.1.0 -> 19.2.12 -> 20.1.10 -> 20.2.4
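
CockroachDB only supports upgrading one major version at a time, hence
the intermediate hops. In jsonnet terms each hop is just an image bump
rolled through the cluster before the next; a sketch, assuming the image
is derived from a single version field (names illustrative):

    // Sketch: each upgrade step bumps the version and lets all nodes
    // settle before moving on to the next one.
    cockroach+: {
        version:: '20.2.4',
        image:: 'cockroachdb/cockroach:v' + self.version,
    },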

Change-Id: I8b33b8116363f1a918423fd18ba3d1b5c910851c
2021-01-23 23:00:29 +01:00
patryk f3153888a8 cluster/kube: Add k0-cockroach.jsonnet, add Gitea client cert
Change-Id: Ibc5db1b0114b2540b6dc806e75e9a36cf9a3bc50
2021-01-23 15:38:50 +01:00
q3k 61f978a0a0 *: tear down ceph-waw2
It reached the stage of being so crapped out that the OSDs' spurious
IOPS killed the performance of disks colocated on the same M610 RAID
controllers. This made etcd _very_ slow, to the point of churning
through re-elections due to timeouts.

etcd/apiserver latencies, observe the difference at ~15:38:

https://object.ceph-waw3.hswaw.net/q3k-personal/4fbe8d4cfc8193cad307d487371b4e44358b931a7494aa88aff50b13fae9983c.png

I moved gerrit/* and matrix/appservice-irc-freenode PVCs to ceph-waw3 by
hand. The rest were non-critical so I removed them, they can be
recovered from benji backups if needed.

Change-Id: Iffbe87aefc06d8324a82b958a579143b7dd9914c
2021-01-22 16:26:09 +01:00
q3k 3b9ee5f1c0 ceph: bump to 14.2.16
More as-builts. This has already been bumped. Had to coax ceph-waw2 to
upgrade despite the fact that it's horribly broken.
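
With rook, a bump like this is a one-field change; a sketch, assuming a
stock CephCluster spec (surrounding names illustrative):

    // Sketch: bump the Ceph image rook runs for this cluster.
    cephCluster+: {
        spec+: {
            cephVersion+: { image: 'ceph/ceph:v14.2.16' },
        },
    },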

Change-Id: Ia762f5d7d88d6420c2fc25cf199037cbccde0cb3
2021-01-19 21:45:26 +00:00
q3k cf842b0442 k0: reflect reality
This is after the monster^Wrook outage two weeks ago, caused by bc01n03
dying.

Plan is to migrate ceph-waw3 to be external, yeet ceph-waw2, and extend
crdb-waw1 to another node.

Change-Id: I133af3b1171fea383b45bf06c51e48a5c40341e4
2021-01-19 20:08:26 +01:00
patryk cae7cf776f k0: add missing curly brace termination in woju's S3 user name
Change-Id: Ib2752d798f6e23493daee446a834e244f858330e
2020-11-28 14:36:48 +01:00
patryk 34668a5b7b k0: add cz3's personal s3 user
Change-Id: I51ee80eb05c34cfd8b03e15fcaefb5f235587c50
2020-11-28 13:45:25 +01:00
q3k bfe9bb0e3a k0: add woju's personal s3 user
Change-Id: I8ed5bb5428594b74460f1b89185d684cb6c26268
2020-10-27 20:50:50 +01:00
q3k a5ed644980 k0.hswaw.net: pass metallb through Calico
Previously, we had the following setup:

                          .-----------.
                          | .....     |
                        .-----------.-|
                        | dcr01s24  | |
                      .-----------.-| |
                      | dcr01s22  | | |
                  .---|-----------| |-'
    .--------.    |   |---------. | |
    | dcsw01 | <----- | metallb | |-'
    '--------'        |---------' |
                      '-----------'

Ie., each metallb on each node directly talked to dcsw01 over BGP to
announce ExternalIPs to our L3 fabric.

Now, we rejigger the configuration to instead have Calico's BIRD
instances talk BGP to dcsw01, and have metallb talk locally to Calico.

                      .-------------------------.
                      | dcr01s24                |
                      |-------------------------|
    .--------.        |---------.   .---------. |
    | dcsw01 | <----- | Calico  |<--| metallb | |
    '--------'        |---------'   '---------' |
                      '-------------------------'

This makes Calico announce our pod/service networks into our L3 fabric!

Calico and metallb talk to each other over 127.0.0.1 (they both run with
Host Networking), but that requires one side to flip to passive mode. We
chose to do that with Calico, by overriding its BIRD config and
special-casing any 127.0.0.1 peer to enable passive mode.
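
On the Calico side, the node-local metallb speaker can be modelled as a
BGPPeer at 127.0.0.1. A jsonnet sketch of such an object (the ASN is a
placeholder; the passive flag itself lives in the overridden BIRD
template, not in this CRD):

    // Sketch: a Calico BGPPeer for the node-local metallb speaker.
    // Passive mode is enabled separately, in the overridden BIRD
    // config, for any 127.0.0.1 peer.
    {
        apiVersion: 'crd.projectcalico.org/v1',
        kind: 'BGPPeer',
        metadata: { name: 'metallb' },
        spec: {
            peerIP: '127.0.0.1',
            asNumber: 65001,  // placeholder ASN
        },
    }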

We also override Calico's Other Bird Template (bird_ipam.cfg) to fiddle
with the kernel programming filter (ie. to-kernel-routing-table filter),
where we disable programming unreachable routes. This is because routes
coming from metallb have their next-hop set to 127.0.0.1, which makes
bird mark them as unreachable. Unreachable routes in the kernel will
break local access to ExternalIPs, eg. registry access from containerd.
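
The relevant override boils down to a BIRD filter clause that rejects
unreachable routes before kernel programming. A sketch, carried here as
a jsonnet text block (the field name is illustrative):

    // Sketch: BIRD filter fragment keeping unreachable routes
    // (next-hop 127.0.0.1, i.e. metallb's) out of the kernel table.
    birdIpamFilter:: |||
        if ( dest = RTD_UNREACHABLE ) then {
            reject;
        }
    |||,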

All routes pass through without route reflectors or a full mesh, as we
use eBGP over private ASNs in our fabric.

We also have to make Calico aware of metallb pools - otherwise, routes
announced by metallb end up being filtered by Calico.
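
This follows the approach from Calico's MetalLB integration docs: define
a matching IPPool with disabled set, so the routes pass Calico's filters
but Calico never allocates addresses from the range. A sketch (the CIDR
is a placeholder):

    // Sketch: a disabled Calico IPPool covering the metallb range,
    // so metallb-announced routes aren't filtered out by Calico.
    {
        apiVersion: 'crd.projectcalico.org/v1',
        kind: 'IPPool',
        metadata: { name: 'metallb' },
        spec: {
            cidr: '203.0.113.0/24',  // placeholder: the metallb pool
            disabled: true,
        },
    }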

This is all mildly hacky. Here's hoping that Calico will some day gain
metallb-like functionality, ie. IPAM for
externalIPs/LoadBalancers/...

There is, however, one problem with this change (but I'm not fixing it
yet as it's not critical): metallb would previously only announce IPs
from nodes that were actually serving a given service. Now, the Calico
internal mesh makes those IPs appear reachable from every node. This can
probably be fixed by disabling the local mesh and enabling route
reflection on dcsw01 (to recreate mesh routing through dcsw01), or maybe
by some more hacking of the Calico BIRD config :/.

Change-Id: I3df1f6ae7fa1911dd53956ced3b073581ef0e836
2020-09-23 18:55:12 +00:00
q3k 242ec58a33 k0: add waw-hdd-redundant-q3k-3
Change-Id: Id3718877d1e67d48c6726d7649a565db657cfc82
2020-09-20 15:36:24 +00:00
q3k 3d29484ebb k0: move registry to ceph-waw3
ceph-waw2 currently has some production issues [1] which have started to
cause write failures in the registry. The registry is the only user of
ceph-waw2's affected pool, so we reduce the dumpster fire blast radius
by moving it over to ceph-waw3.

This has already been deployed and the data has been migrated over (via
s3cmd sync), and the migration has been verified (by a push and pull,
and a pull of an older image).

[1] - pgs stuck inactive in the object storage pool
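
On the registry side, the move amounts to repointing the s3 storage
driver at ceph-waw3's RGW endpoint. A jsonnet sketch, assuming the
registry config is templated like the rest of the cluster (bucket name
and field nesting are illustrative):

    // Sketch: point the registry's s3 storage driver at ceph-waw3.
    registry+: {
        storage+: {
            s3+: {
                regionendpoint: 'https://object.ceph-waw3.hswaw.net',
                bucket: 'registry',  // illustrative bucket name
            },
        },
    },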

Change-Id: I26789b52008bb7be953954ec3fd3dd727ac15347
2020-08-04 01:36:51 +02:00
q3k 509ab6e29a k0/cockroach: add public DNS entry for cockroach
Change-Id: I934bf348e2165148b515b709e853ab67f039a402
2020-07-30 22:56:30 +02:00
q3k b1aadd88ff k0: add q3k's personal s3 user
Change-Id: I5681774e1dca2cf4a865d9e1a24602ed4334f006
2020-06-24 17:19:36 +00:00
implr d9df5879e3 add radosgw bucket for spark
Change-Id: Id8ea8901ce038ccbf11afabe0e6272c358b32cf2
2020-06-13 21:31:56 +02:00
q3k dbfa988c73 cluster/kube: split up cluster.jsonnet
It was getting large and unwieldy (to the point where kubecfg was slow).
In this change, we:

 - move the Cluster function to cluster.libsonnet
 - move the Cluster instantiation into k0.libsonnet
 - shuffle some fields around to make sure things are well split between
   k0-specific and general cluster configs.
 - add 'view' files that build on 'cluster.libsonnet' to allow rendering
   either the entire k0 state, or some subsets (for speed); see the
   sketch below
 - update the documentation, drive-by some small fixes and reindentation
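
A 'view' file in this scheme is only a line or two; a minimal sketch
(file name and field are illustrative):

    // k0.jsonnet: 'view' rendering the entire k0 state.
    (import 'k0.libsonnet').k0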

Change-Id: I4b8d920b600df79100295267efe21b8c82699d5b
2020-06-13 19:51:58 +02:00