Presence of an id_token in the IdP token response causes Synapse to demand
that jwks_uri be present in the config/metadata (the login flow fails with a
<<Missing "jwks_uri" in metadata>> message).
This behaviour was introduced somewhere between 1.42.0 and 1.56.0.
This is currently not set up correctly on sso.hackerspace.pl (we hand
out HS256-signed tokens instead of proper RSA-signed ones), so this change
makes Synapse fall back to the plain (non-OIDC) OAuth2 flow.
Change-Id: I4ff8aa175b4f0bbdcb3ee993b7cbd4545eac561a
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1302
Reviewed-by: informatic <informatic@hackerspace.pl>
Reviewed-by: q3k <q3k@hackerspace.pl>
This change enables experimental message threading support and upgrades
Synapse and Element to their latest stable versions.
Change-Id: I68334982168ffdac98a1602a157be727b04e58d6
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1286
Reviewed-by: informatic <informatic@hackerspace.pl>
Reviewed-by: q3k <q3k@hackerspace.pl>
riot-web containers are no longer published.
We shall also, at some point, readjust our internal naming for the matrix
web client from riot to something more generic.
Change-Id: Ice85af3ae29b587c13a3ba27d13c9bd655d7fcfd
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1145
Reviewed-by: informatic <informatic@hackerspace.pl>
This implements media-repo-proxy, a lil' bit of Go to make our
infrastructure work with matrix-media-repo's concept of Host headers.
For some reason, MMR really wants Host: hackerspace.pl instead of Host:
matrix.hackerspace.pl. We'd fix that in their code, but with no tests
and with complex config reload logic it looks very daunting. We'd just
fix that in our Ingress, but that's not easy (no per-rule host
overrides).
So, we commit a tiny little itty bitty war crime and implement a piece
of Go code that serves as a rewriter for this.
This works, tested on boston:
$ curl -H "Host: matrix.hackerspace.pl" 10.10.12.46:8080/_matrix/media/r0/download/hackerspace.pl/EwVBulPgCWDWNGMKjcOKGGbk | file -
/dev/stdin: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 650x300, components 3
(this address is media-repo.matrix.svc.k0.hswaw.net)
But hey, at least it has tests.
Change-Id: Ib6af1988fe8e112c9f3a5577506b18b48d80af62
Reviewed-on: https://gerrit.hackerspace.pl/c/hscloud/+/1143
Reviewed-by: q3k <q3k@hackerspace.pl>
We ran out of disk space on the old PVC. Made a new one, copied data
over, and this change points the postgres data mount to that new PVC.
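For illustration, the repoint is just a claimName swap in the postgres pod
spec; the volume and PVC names below are hypothetical, not the actual ones:

```yaml
volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: postgres-data-v2  # the new, larger PVC
```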
Change-Id: Iea4e140680066a3335cc69caf9293093f90bb568
Previously: 856b216459 switched to using a
Secret instead of a ConfigMap for appservice-irc. That, however, didn't
update the bootstrap job, which still used the ConfigMap. This fixes
that.
Change-Id: I50f33935691678ce24ecf4e04d7ce1b13c184929
Apart from this, we also had to manually edit the registration yaml to
add @libera_ and #libera_ prefixes to the allowlists.
Change-Id: If85f58cf3d1291e0bf9099ef13d9397040a47782
This doesn't have to be publicly reachable, as the future
//cluster/identd will dial into the pod directly to access the
appservice's identd.
Change-Id: I139341ead76309a6640eeb9a278462565290dd34
These contain a channel key for a secret channel.
We also had to migrate the appservice-irc config to a secret.
Change-Id: I92c7cdf9679f65d9e655e22d690cef2e83180135
This is the case for any IRC server that has ignoreIdleUsersOnStartup
set, because of what seems like an appservice-irc bug.
Change-Id: If5063a3bc2d79c7f2fc79ec7560bf9bfe2b25aba
This allows us to bypass the issue where Kubernetes jobs cannot be
updated once completed, so bumping appservice image versions was
painful.
But really, though, this is probably something that kubecfg/kartongips
should handle.
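One common workaround for the completed-Job immutability problem (a sketch
only, not necessarily what this change does) is to derive the Job's name from
its contents, so an image bump creates a fresh Job rather than mutating the
old one. In jsonnet, roughly:

```jsonnet
// Hypothetical sketch: suffix the Job name with a hash of the image tag, so
// each version bump yields a new, distinctly-named Job object.
local bootstrapJob(name, image) = {
  apiVersion: "batch/v1",
  kind: "Job",
  metadata: {
    name: name + "-" + std.substr(std.md5(image), 0, 8),
  },
  // ... spec elided ...
};
bootstrapJob("appservice-bootstrap", "registry.example/appservice:v2")
```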
Change-Id: I2778c5433f699db89120a3c44e55d2fbe2a10015
This allows people to save their NickServ passwords into the bridge's
storage. Obviously nobody should trust us with those, though.
Change-Id: I2afe9e5215cd8f7419e9eab8183789df13e21aac
This should alleviate the issue of people joining and immediately
getting dropped due to the client limit on bridge restarts.
Change-Id: Ideb13ba9930d565ede728d2750d0c7af04746cf1
Newer alpine edge repos ship a `yq` (4.x) whose command syntax is
incompatible with the 3.x `r`/`w` subcommands our scripts use:
$ kubectl -n matrix-0x3c logs -f appservice-telegram-prod-85d66696c6-9drnl -c generate-config
+ apk add --no-cache yq
fetch https://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
(1/1) Installing yq (4.4.1-r0)
Executing busybox-1.31.1-r21.trigger
ERROR: busybox-1.31.1-r21.trigger: script exited with error 127
OK: 11 MiB in 15 packages
+ cp /config/config.yaml /data/config.yaml
+ yq r /registration/registration.yaml as_token
Error: unknown command "r" for "yq"
Run 'yq --help' for usage.
+ yq w -i /data/config.yaml appservice.as_token
Error: unknown command "w" for "yq"
Run 'yq --help' for usage.
This downgrades back to a working yq.
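For reference, if we ever migrate the script instead of pinning, the failing
v3 invocations from the log above map onto v4 syntax roughly like this (the
`$AS_TOKEN` value is a placeholder; the log elides the written value):

```
# yq 3.x (what the generate-config script expects):
yq r /registration/registration.yaml as_token
yq w -i /data/config.yaml appservice.as_token "$AS_TOKEN"

# yq 4.x equivalents:
yq eval '.as_token' /registration/registration.yaml
yq eval -i ".appservice.as_token = \"$AS_TOKEN\"" /data/config.yaml
```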
Change-Id: Ifc77bcc88156b02f3ec17e6f84c5615149108777
This is used by some external modules (appservices/instance
definitions). In order to reduce the scope of (untested) changes in this
rollout, let's temporarily backport that function into matrix-ng.
Change-Id: Ib1054844391497ef1455b25c7f939c68c628ff09
The matrix-ng split into multiple submodules changes some keys that
might have been used for homeserver/riot configuration customization.
The migration to kube.Namespace.Contain has also changed Deployment
selectors (immutable fields), thus requiring manual removal of these
Deployments first.
This is, as always, documented in lib/matrix-ng.libsonnet header.
Change-Id: I39a745ee27e3c55ec748818b9cf9b4e8ba1d2df5
This is a major revamp of our matrix/synapse deployment as a separate
.libsonnet module.
* synapse version bump to 1.25.0
* riot-web version bump to 1.7.18
* Replaced the synapse migration hack we used to template configuration
  with environment variable substitution done by Kubernetes itself
* Implemented support for OpenID Connect; migration from CAS has been
  verified to work with some additional configuration options
* Moved the homeserver signing key into a k8s Secret, thus making it
  possible to run synapse processes without a single data volume
* Split synapse into a main process, a generic worker, and a media
  repository worker (the latter is the only container using the data
  volume). Both the generic worker and the media repository worker run
  as a single replica until we get proper HTTP routing/loadbalancing
* Riot nginx.conf has been extracted into an external file loaded using
importstr.
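The importstr extraction looks roughly like this (the field and file names
are hypothetical, not the actual ones in lib/matrix-ng.libsonnet):

```jsonnet
// Sketch only: importstr inlines the file verbatim as a string at
// jsonnet evaluation time, instead of templating it in the libsonnet.
{
  riotNginxConf: importstr "riot-nginx.conf",
}
```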
Change-Id: I6c4d34bf41e148a302d1cbe725608a5aeb7b87ba