Apart from this, we also had to manually edit the registration YAML to
add @libera_ and #libera_ prefixes to the allowlists.
Change-Id: If85f58cf3d1291e0bf9099ef13d9397040a47782
This doesn't have to be publicly reachable, as the future
//cluster/identd will dial into the pod directly to access the
appservice's identd.
Change-Id: I139341ead76309a6640eeb9a278462565290dd34
These contain a channel key for a secret channel.
We also had to migrate the appservice-irc config to a secret.
Change-Id: I92c7cdf9679f65d9e655e22d690cef2e83180135
This is the case for any IRC server that has ignoreIdleUsersOnStartup
set, because of what seems like an appservice-irc bug.
Change-Id: If5063a3bc2d79c7f2fc79ec7560bf9bfe2b25aba
This allows us to bypass the issue where Kubernetes Jobs cannot be
updated once completed, which made bumping appservice image versions
painful.
But really, this is probably something that kubecfg/kartongips
should handle.
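For the record, the underlying limitation is that a Job's pod template is immutable once the Job exists, so bumping an image means deleting and recreating the Job. A rough sketch of the manual workaround (the Job name and manifest path are illustrative, not our actual config):

```
# A completed Job's spec.template is immutable, so to change the image
# the Job has to be deleted and recreated:
kubectl -n matrix delete job some-appservice-job
kubectl apply -f job.yaml   # re-create with the bumped image
```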
Change-Id: I2778c5433f699db89120a3c44e55d2fbe2a10015
This allows people to save their NickServ passwords into the bridge's
storage. Obviously nobody should trust us with that, though.
Change-Id: I2afe9e5215cd8f7419e9eab8183789df13e21aac
This should alleviate the issue of people joining and immediately
getting dropped due to the client limit on bridge restarts.
Change-Id: Ideb13ba9930d565ede728d2750d0c7af04746cf1
Newer versions of the Alpine edge repos ship a `yq` (4.x) with an
incompatible command syntax:
$ kubectl -n matrix-0x3c logs -f appservice-telegram-prod-85d66696c6-9drnl -c generate-config
+ apk add --no-cache yq
fetch https://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
(1/1) Installing yq (4.4.1-r0)
Executing busybox-1.31.1-r21.trigger
ERROR: busybox-1.31.1-r21.trigger: script exited with error 127
OK: 11 MiB in 15 packages
+ cp /config/config.yaml /data/config.yaml
+ yq r /registration/registration.yaml as_token
Error: unknown command "r" for "yq"
Run 'yq --help' for usage.
+ yq w -i /data/config.yaml appservice.as_token
Error: unknown command "w" for "yq"
Run 'yq --help' for usage.
This downgrades back to a working yq.
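For future reference, the breakage is yq's v3 to v4 CLI rewrite: the `r`/`w` subcommands were replaced by an `eval` expression syntax. Roughly equivalent invocations, using the paths from the log above (the `"$AS_TOKEN"` value is an assumed placeholder, as the original write command's value is not shown in the log):

```
# yq 3.x syntax, as used by generate-config:
yq r /registration/registration.yaml as_token
yq w -i /data/config.yaml appservice.as_token "$AS_TOKEN"

# yq 4.x equivalents, should we ever upgrade:
yq eval '.as_token' /registration/registration.yaml
yq eval -i ".appservice.as_token = \"$AS_TOKEN\"" /data/config.yaml
```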
Change-Id: Ifc77bcc88156b02f3ec17e6f84c5615149108777
This is used by some external modules (appservices/instance
definitions). In order to reduce scope of (untested) changes in this
rollout, let's temporarily backport that function into matrix-ng.
Change-Id: Ib1054844391497ef1455b25c7f939c68c628ff09
Splitting matrix-ng into multiple submodules changes some keys that
might have been used for homeserver/riot configuration customization.
The migration to kube.Namespace.Contain has also changed Deployment
selectors (immutable fields), which therefore need to be removed
manually first.
This is, as always, documented in lib/matrix-ng.libsonnet header.
Change-Id: I39a745ee27e3c55ec748818b9cf9b4e8ba1d2df5
This is a major revamp of our matrix/synapse deployment as a separate
.libsonnet module.
* synapse version bump to 1.25.0
* riot-web version bump to 1.7.18
* Replaced the synapse migration hack we used to template configuration
with environment variable substitution done by Kubernetes itself
* Implemented support for OpenID Connect, migration from CAS has been
verified to be working with some additional configuration options
* Moved homeserver signing key into k8s secret, thus making it possible
to run synapse processes without a single data volume
* Split synapse into a main process, a generic worker and a media
repository worker (the latter being the only container using the data
volume). Both the generic worker and the media repository worker run as
a single replica until we get proper HTTP routing/loadbalancing
* Riot nginx.conf has been extracted into an external file loaded using
importstr.
Change-Id: I6c4d34bf41e148a302d1cbe725608a5aeb7b87ba
Exposes the /.well-known/matrix/ metadata endpoints on cfg.webDomain
that are required for federation to work properly. This can be enabled
by setting the cfg.wellKnown flag to true.
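For context, these endpoints follow the Matrix spec's well-known discovery format. A minimal sketch of what gets served (hostnames are placeholders, not our actual config):

```
/.well-known/matrix/server:
  {"m.server": "matrix.example.com:443"}

/.well-known/matrix/client:
  {"m.homeserver": {"base_url": "https://matrix.example.com"}}
```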
Change-Id: I097b58efc7442b904a135d4519999e36d155c197
It got so degraded that spurious IOPS from its OSDs killed the
performance of disks colocated on the same M610 RAID controllers. This
made etcd _very_ slow, to the point of churning through re-elections
due to timeouts.
etcd/apiserver latencies; observe the difference at ~15:38:
https://object.ceph-waw3.hswaw.net/q3k-personal/4fbe8d4cfc8193cad307d487371b4e44358b931a7494aa88aff50b13fae9983c.png
I moved gerrit/* and matrix/appservice-irc-freenode PVCs to ceph-waw3 by
hand. The rest were non-critical so I removed them, they can be
recovered from benji backups if needed.
Change-Id: Iffbe87aefc06d8324a82b958a579143b7dd9914c
This is in preparation for spinning up a staging/QA matrix instance,
where the MXID domain is under the control of hscloud machinery (and
not a top-level organizational domain).
Change-Id: I10505615ebb407b3b2eac0c1b87ad5625e2009c0
This is in preparation for bringing up a Matrix server for hsp.sh.
Verified to cause no diff on prod.
Change-Id: Ied2de210692e3ddfdb1d3f37b12893b214c34b0b