This annotation is used to permit routes defined by regexes instead of
simple prefix matching. This is used by our Synapse deployment for
routing incoming HTTP requests to different Synapse components.
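For illustration, the kind of regex-routed Ingress that needs to be
permitted - a hand-written jsonnet sketch, not our actual Matrix config
(hostname, paths and service names are made up; the nginx-ingress
use-regex annotation shown here is the controller-side switch, separate
from the admitomatic annotation this change adds):

  {
    apiVersion: 'networking.k8s.io/v1',
    kind: 'Ingress',
    metadata: {
      name: 'synapse',
      namespace: 'matrix',
      annotations: {
        // Required for nginx-ingress to interpret paths as regexes.
        'nginx.ingress.kubernetes.io/use-regex': 'true',
      },
    },
    spec: {
      rules: [{
        host: 'matrix.example.org',
        http: {
          paths: [
            {
              // Regex path: send /sync traffic to a dedicated worker.
              path: '/_matrix/client/(r0|v3)/sync',
              pathType: 'ImplementationSpecific',
              backend: {
                service: { name: 'synapse-worker', port: { number: 8008 } },
              },
            },
            {
              // Everything else goes to the main Synapse process.
              path: '/',
              pathType: 'Prefix',
              backend: {
                service: { name: 'synapse-main', port: { number: 8008 } },
              },
            },
          ],
        },
      }],
    },
  }
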
I've stumbled upon this while deploying a new Matrix/Synapse instance.
This hasn't been a problem yet because the existing ingresses for Matrix
deployments predate admitomatic.
Change-Id: I821e58b214450ccf0de22d2585c3b0d11fbe71c0
These can be used by production jobs to get the source port of the
client connecting over HTTP. A followup CR implements just that.
Change-Id: Ic8e29eaf806bb196d8cfcfb604ff66ae4d0d166a
This fixes CVE-2021-3450 and CVE-2021-3449.
Deployed on prod:
$ kubectl -n nginx-system exec nginx-ingress-controller-5c69c5cb59-2f8v4 -- openssl version
OpenSSL 1.1.1k 25 Mar 2021
Change-Id: I7115fd2367cca7b687c555deb2134b22d19a291a
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.
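As a sketch, this roughly translates to the following resource stanza
on the Rook CephCluster object (where exactly this lands in our rook
jsonnet differs; setting the limit equal to the request is an
assumption here, the point is the 8Gi request):

  {
    spec+: {
      resources+: {
        // Applied to every OSD pod.
        osd: {
          requests: { memory: '8Gi' },
          limits: { memory: '8Gi' },
        },
      },
    },
  }
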
I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running nodes that full without any underlying resource requests makes
for a terrible draining experience).
We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?
Followup: b.hswaw.net/29
Already deployed to production.
Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
This will grant any binding to system:admin-namespaces (eg. personal-*
namespaces, per-namespace extra admin access like matrix-0x3c) the
ability to create and update ingresses.
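Concretely, this boils down to a rule along these lines landing in the
role(s) that system:admin-namespaces bindings grant - a sketch, the
exact role layout in our jsonnet may differ:

  {
    // Allow managing Ingress objects in the bound namespace.
    apiGroups: ['extensions', 'networking.k8s.io'],
    resources: ['ingresses'],
    verbs: ['create', 'update'],
  }
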
Change-Id: I522896ebe290fe982d6fe46b7b1d604d22b4f72c
This change reflects the current production state.
The upgrade was done by going through the following versions:
19.1.0 -> 19.2.12 -> 20.1.10 -> 20.2.4
Change-Id: I8b33b8116363f1a918423fd18ba3d1b5c910851c
It reached the stage of being crapped out so much that the OSDs' spurious
IOPS killed the performance of disks colocated on the same M610 RAID
controllers. This made etcd _very_ slow, to the point of churning
through re-elections due to timeouts.
etcd/apiserver latencies, observe the difference at ~15:38:
https://object.ceph-waw3.hswaw.net/q3k-personal/4fbe8d4cfc8193cad307d487371b4e44358b931a7494aa88aff50b13fae9983c.png
I moved gerrit/* and matrix/appservice-irc-freenode PVCs to ceph-waw3 by
hand. The rest were non-critical so I removed them, they can be
recovered from benji backups if needed.
Change-Id: Iffbe87aefc06d8324a82b958a579143b7dd9914c
More as-builts. This has already been bumped. Had to coax ceph-waw2 to
upgrade despite the fact that it's horribly broken.
Change-Id: Ia762f5d7d88d6420c2fc25cf199037cbccde0cb3
This is after the monster^Wrook outage-of-the-week two weeks ago, caused
by bc01n03 dying.
Plan is to migrate ceph-waw3 to be external, yeet ceph-waw2, and extend
crdb-waw1 to another node.
Change-Id: I133af3b1171fea383b45bf06c51e48a5c40341e4
This prevents metallb routes being announced from all peers to our ToR,
thereby preventing issues with traffic hitting services with
externalTrafficPolicy: local.
There still is the from-host loopback issue, but that will be fixed by
upgrading to kube 1.15.
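For reference, the metallb-side mechanism for this is node-selectors on
the BGP peer definition, so that only selected speakers peer with the
ToR. A sketch of that shape (legacy ConfigMap config format, expressed
as the jsonnet object we'd render to YAML; the address, ASNs and labels
are placeholders, and this isn't necessarily the exact way the change
is implemented in our config):

  {
    peers: [{
      'peer-address': '10.0.0.1',  // dcsw01 (placeholder address)
      'peer-asn': 65001,           // placeholder ASNs
      'my-asn': 65002,
      // Limit which nodes establish a session with this peer.
      'node-selectors': [{
        'match-labels': { 'metallb-speaker': 'true' },
      }],
    }],
  }
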
Change-Id: Ifc9964b46840aee82d99f0b6550188550e46fe04
This fixes compatibility with prodaccess tools built with Go 1.15, which
introduced 'X.509 CommonName deprecation' [1].
[1] - https://golang.org/doc/go1.15#commonname
Change-Id: I228cde3e5651a3e36f527783f2ccb4a2f6b7a8e3
Previously, we had the following setup:
            .----------------------------.
            | dcr01s22 (dcr01s24, .....) |
            |----------------------------|
.--------.  |  .---------.               |
| dcsw01 | <---| metallb |               |
'--------'  |  '---------'               |
            '----------------------------'
Ie., each metallb on each node directly talked to dcsw01 over BGP to
announce ExternalIPs to our L3 fabric.
Now, we rejigger the configuration to instead have Calico's BIRD
instances talk BGP to dcsw01, and have metallb talk locally to Calico.
            .----------------------------.
            | dcr01s24                   |
            |----------------------------|
.--------.  |  .--------.   .---------.  |
| dcsw01 | <---| Calico |<--| metallb |  |
'--------'  |  '--------'   '---------'  |
            '----------------------------'
This makes Calico announce our pod/service networks into our L3 fabric!
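On the Calico side, the peering with the ToR is expressed as a global
BGPPeer object, roughly like this (the address and AS number are
placeholders, not our real fabric numbering):

  {
    apiVersion: 'crd.projectcalico.org/v1',
    kind: 'BGPPeer',
    metadata: { name: 'dcsw01' },
    spec: {
      // A global peer: every node's BIRD instance peers with the ToR.
      peerIP: '10.0.0.1',
      asNumber: 65001,
    },
  }
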
Calico and metallb talk to each other over 127.0.0.1 (they both run with
Host Networking), but that requires one side to flip to passive mode. We
chose to do that with Calico, by overriding its BIRD config and
special-casing any 127.0.0.1 peer to enable passive mode.
We also override Calico's Other Bird Template (bird_ipam.cfg) to fiddle
with the kernel programming filter (ie. to-kernel-routing-table filter),
where we disable programming unreachable routes. This is because routes
coming from metallb have their next-hop set to 127.0.0.1, which makes
bird mark them as unreachable. Unreachable routes in the kernel will
break local access to ExternalIPs, eg. registry access from containerd.
All routes pass through without route reflectors or a full mesh, as we
use eBGP over private ASNs in our fabric.
We also have to make Calico aware of metallb pools - otherwise, routes
announced by metallb end up being filtered by Calico.
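One way to do that is to define the metallb address pools as disabled
Calico IPPools - Calico then knows about the range but never assigns
pod IPs from it. A sketch (the CIDR is a placeholder from the
documentation range, not our real pool):

  {
    apiVersion: 'crd.projectcalico.org/v1',
    kind: 'IPPool',
    metadata: { name: 'metallb-public' },
    spec: {
      cidr: '203.0.113.0/24',
      // Known to Calico/BIRD, but never used for pod IPAM.
      disabled: true,
    },
  }
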
This is all mildly hacky. Here's hoping that Calico will some day gain
metallb-like functionality, ie. IPAM for
externalIPs/LoadBalancers/...
There seems to be one problem with this change, however (but I'm not
fixing it yet as it's not critical): metallb would previously only
announce IPs from nodes that were serving that service. Now the Calico
internal mesh makes those appear from every node. This can probably be
fixed by disabling local meshing and enabling route reflection on
dcsw01 (to recreate the mesh routing through dcsw01). Or maybe by some
more hacking of the Calico BIRD config :/.
Change-Id: I3df1f6ae7fa1911dd53956ced3b073581ef0e836
We just had an outage seemingly caused by N-I-C sending tons of traffic
to gitea, which in turn caused N-I-C to balloon in memory/CPU usage.
I haven't debugged the cause of this traffic, but I have disabled the
gitea TCP forward to Stop The Bleeding.
This change reflects ad-hoc production changes.
Change-Id: I37e11609f408fa3e3fbfafafba44dc83149b90a9
This adds a mod proxy system, called, well, modproxy.
It sits between Factorio server instances and the Factorio mod portal,
allowing for arbitrary mod download without needing the servers to know
Factorio credentials.
Change-Id: I7bc405a25b6f9559cae1f23295249f186761f212
ceph-waw2 currently has some production issues [1] which have started to
cause write failures in the registry. The registry is the only user of
ceph-waw2's affected pool, so we reduce the dumpster fire blast radius
by moving it over to ceph-waw3.
This has already been deployed and data has been migrated over (via
s3cmd sync), and the migration has been verified (by a push and pull,
and pull of an older image).
[1] - pgs stuck inactive in the object storage pool
Change-Id: I26789b52008bb7be953954ec3fd3dd727ac15347
In addition to k8s certificates, prodaccess now issues HSPKI
certificates, with DN=$username.sso.hswaw.net. These are installed into
XDG_CONFIG_HOME (or os equiv).
//go/pki will now automatically attempt to load these certificates. This
means you can now run any pki-dependent tool without -hspki_disable, and
with automatic mTLS!
Change-Id: I5b28e193e7c968d621bab0d42aabd6f0510fed6d
"Anyone can pull all images" rule did only match on anonymous users. Now
it should match all users, including authenticated ones.
Change-Id: I2205299093feca51f30526ba305eadbaa0a68ecb
We would like gitea to have its ssh server exposed on TCP port 22 on the
same address as its web interface. We would also still like to use all
the automation around ingresses already in place (like cert-manager
integration).
To solve this, we create an additional LoadBalancer service for
nginx-ingress-controller and set up a special tcp-services forwarding
rule to pass port 22 traffic to the gitea-prod/gitea service, like we
already do in the case of gerrit.
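A sketch of the two pieces involved, in jsonnet (the ConfigMap name,
Service name and controller selector labels are assumed; the port
mapping itself is as described above):

  {
    // tcp-services entry: forward TCP :22 to the gitea SSH port.
    tcpServices: {
      apiVersion: 'v1',
      kind: 'ConfigMap',
      metadata: { name: 'tcp-services', namespace: 'nginx-system' },
      data: {
        '22': 'gitea-prod/gitea:22',
      },
    },
    // Additional LoadBalancer Service on the ingress controller that
    // exposes port 22 alongside HTTP/HTTPS, so gitea's hostname can
    // point at a single address for both web and SSH.
    service: {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: { name: 'nginx-ingress-gitea', namespace: 'nginx-system' },
      spec: {
        type: 'LoadBalancer',
        selector: { app: 'nginx-ingress-controller' },  // assumed label
        ports: [
          { name: 'ssh', port: 22, targetPort: 22 },
          { name: 'http', port: 80, targetPort: 80 },
          { name: 'https', port: 443, targetPort: 443 },
        ],
      },
    },
  }
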
Change-Id: I5bfc901ebe858464f8e9c2f3b2216b254ccd6c4d
It was getting large and unwieldy (to the point where kubecfg was slow).
In this change, we:
- move the Cluster function to cluster.libsonnet
- move the Cluster instantiation into k0.libsonnet
- shuffle some fields around to make sure things are well split between
k0-specific and general cluster configs.
- add 'view' files that build on 'cluster.libsonnet' to allow rendering
either the entire k0 state, or some subsets (for speed) - see the
sketch below
- update the documentation, plus some drive-by small fixes and
reindentation
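A hypothetical 'view' file, just to show the shape (the actual field
names exposed by cluster.libsonnet / k0.libsonnet may differ):

  local k0 = import 'k0.libsonnet';

  // Expose only the cluster-level subset of k0 for kubecfg to render,
  // instead of the entire (slow) state.
  k0.cluster
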
Change-Id: I4b8d920b600df79100295267efe21b8c82699d5b
We handwavingly plan on implementing monitoring as a two-tier system:
- a 'global' component that is responsible for global aggregation,
long-term storage and alerting.
- multiple 'per-cluster' components, that collect metrics from
Kubernetes clusters and export them to the global component.
In addition, several lower tiers (collected by per-cluster components)
might also be implemented in the future - for instance, specific to some
subprojects.
Here we start sketching out some basic jsonnet structure (currently all
in a single file, with little parametrization) and a cluster-level
prometheus server that scrapes Kubernetes Node and cAdvisor metrics.
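The scrape configuration is roughly of this shape (written as the
jsonnet object that gets rendered into prometheus.yml; job names are
arbitrary and relabeling rules are omitted here):

  {
    scrape_configs: [
      {
        // Kubelet /metrics on every node.
        job_name: 'node',
        scheme: 'https',
        kubernetes_sd_configs: [{ role: 'node' }],
        bearer_token_file: '/var/run/secrets/kubernetes.io/serviceaccount/token',
        tls_config: {
          ca_file: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
        },
      },
      {
        // cAdvisor container metrics, also served by the kubelet.
        job_name: 'cadvisor',
        scheme: 'https',
        metrics_path: '/metrics/cadvisor',
        kubernetes_sd_configs: [{ role: 'node' }],
        bearer_token_file: '/var/run/secrets/kubernetes.io/serviceaccount/token',
        tls_config: {
          ca_file: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
        },
      },
    ],
  }
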
This review is mostly to get this committed as early as possible, and to
make sure that the little existing Prometheus scrape configuration is
sane.
Change-Id: If37ac3b1243b8b6f464d65fee6d53080c36f992c