This is quite hacky, but we intend to remove that postgres soon anyway.
The changes to synapse's resource limits reflect the current state of
prod.
Change-Id: Ic7beaa3e7ee378c0e10ba24f9a5a3aee67c2ccf2
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1468
Reviewed-by: q3k <q3k@hackerspace.pl>
A TURN server is required for proper cross-NAT voice/video calls via
Matrix.
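For reference, the knobs involved are the standard synapse TURN options
in homeserver.yaml; a rough sketch of their shape (URIs, secret and
lifetime below are illustrative, not the prod values):

  turn_uris:
    - "turn:turn.hackerspace.pl:3478?transport=udp"
    - "turn:turn.hackerspace.pl:3478?transport=tcp"
  turn_shared_secret: "<pulled in from a secret>"
  turn_user_lifetime: 1h
  turn_allow_guests: true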
Change-Id: I8182292dd8ef30690ae4b9487c22aedcff098710
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1387
Reviewed-by: informatic <informatic@hackerspace.pl>
Presence of an id_token in the IDP token response causes synapse to
demand that jwks_uri be present in the config/metadata (the login flow
fails with a <<Missing "jwks_uri" in metadata>> message).
This behaviour was introduced somewhere between synapse 1.42.0 and
1.56.0.
This is currently not set up correctly on sso.hackerspace.pl (we hand
out HS256 tokens instead of proper RSA ones), so this change makes
synapse fall back to a non-OIDC/plain OAuth2 flow.
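This is not necessarily the exact mechanism used here, but one way to
keep synapse off id_token validation is to read the profile from the
userinfo endpoint instead; a hedged sketch of that config shape (idp
id, endpoints and scopes are illustrative, not the actual prod values):

  oidc_providers:
    - idp_id: sso
      idp_name: "sso.hackerspace.pl"
      discover: false
      issuer: "https://sso.hackerspace.pl"
      client_id: "<client id>"
      client_secret: "<from secret>"
      scopes: ["profile", "email"]  # no "openid", so no id_token comes back
      authorization_endpoint: "https://sso.hackerspace.pl/oauth/authorize"
      token_endpoint: "https://sso.hackerspace.pl/oauth/token"
      userinfo_endpoint: "https://sso.hackerspace.pl/api/1/userinfo"
      user_profile_method: "userinfo_endpoint"  # plain OAuth2-style profile fetch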
Change-Id: I4ff8aa175b4f0bbdcb3ee993b7cbd4545eac561a
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1302
Reviewed-by: informatic <informatic@hackerspace.pl>
Reviewed-by: q3k <q3k@hackerspace.pl>
We ran out of disk space on the old PVC. We made a new one, copied the
data over, and this change points the postgres data mount at the new
PVC.
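The diff itself boils down to repointing the claim in the postgres
pod's volume definition; roughly (both PVC names below are made up for
illustration):

  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: synapse-postgres-new  # previously: synapse-postgres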
Change-Id: Iea4e140680066a3335cc69caf9293093f90bb568
Apart from this change, we also had to manually edit the registration
YAML to add the @libera_ and #libera_ prefixes to the allowlists.
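For context, those allowlists live in the appservice registration's
namespaces section; the manual edit was roughly of this shape (the
regexes and homeserver name are illustrative):

  namespaces:
    users:
      - regex: "@libera_.*:hackerspace.pl"
        exclusive: true
    aliases:
      - regex: "#libera_.*:hackerspace.pl"
        exclusive: true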
Change-Id: If85f58cf3d1291e0bf9099ef13d9397040a47782
This doesn't have to be publicly reachable, as the future
//cluster/identd will dial into the pod directly to access the
appservice's identd.
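Assuming it's the bridge's built-in identd that gets enabled here, the
relevant appservice-irc config is roughly the following (port and bind
address are illustrative):

  ircService:
    ident:
      enabled: true
      port: 1113
      address: "::"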
Change-Id: I139341ead76309a6640eeb9a278462565290dd34
These contain a channel key for a secret channel, which is also why we
had to migrate the appservice-irc config to a secret.
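A sketch of why the config can no longer be plain-text: the channel key
sits inline in the ircService mappings, roughly like this (channel
name, room ID and key are made up):

  ircService:
    servers:
      irc.libera.chat:
        mappings:
          "#example-private-channel":
            roomIds: ["!exampleRoomId:hackerspace.pl"]
            key: "hunter2"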
Change-Id: I92c7cdf9679f65d9e655e22d690cef2e83180135
This allows people to save their NickServ passwords into the bridge's
storage. Obviously nobody should trust us with these, though.
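Under the hood this is appservice-irc's password-store feature, which
is switched on by pointing the config at an encryption key; roughly
(the path is illustrative):

  ircService:
    # RSA private key used to encrypt NickServ passwords saved via !storepass
    passwordEncryptionKeyPath: "/data/passkey.pem"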
Change-Id: I2afe9e5215cd8f7419e9eab8183789df13e21aac
It reached the stage of being so crapped out that the OSDs' spurious
IOPS killed the performance of disks colocated on the same M610 RAID
controllers. This made etcd _very_ slow, to the point of churning
through re-elections due to timeouts.
etcd/apiserver latencies; observe the difference at ~15:38:
https://object.ceph-waw3.hswaw.net/q3k-personal/4fbe8d4cfc8193cad307d487371b4e44358b931a7494aa88aff50b13fae9983c.png
I moved the gerrit/* and matrix/appservice-irc-freenode PVCs to
ceph-waw3 by hand. The rest were non-critical, so I removed them; they
can be recovered from benji backups if needed.
Change-Id: Iffbe87aefc06d8324a82b958a579143b7dd9914c
This is in preparation for bringing up a Matrix server for hsp.sh.
Verified to cause no diff on prod.
Change-Id: Ied2de210692e3ddfdb1d3f37b12893b214c34b0b