Apparently, at least parts of the M610 (e.g. the iDRAC) attempt to index
exact byte offsets in the FRU EEPROM instead of parsing it, and were thus
reading our FRU's manufacturer/product name incorrectly. This fixes that.
Change-Id: I18d62ea79df7b7bf30cec3251da2c32d25b73507
This fixes CVE-2021-3450 and CVE-2021-3449.
Deployed on prod:
$ kubectl -n nginx-system exec nginx-ingress-controller-5c69c5cb59-2f8v4 -- openssl version
OpenSSL 1.1.1k 25 Mar 2021
Change-Id: I7115fd2367cca7b687c555deb2134b22d19a291a
This is a first pass at a Bazel remote cache. It notably does not yet do
any authentication, upload limits or garbage collection.
We won't be deploying it to prod until these are done.
Change-Id: I70a89dbe8b3ec933b2ce82e234a969e8337ba1d9
This adds github.com/minio/minio-go, a library that can be used to
access S3-compatible storage, e.g. our own radosgw. It's significantly
lighter than the entire Go AWS SDK, and also seems more idiomatic.
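
For illustration, a minimal sketch of how the library might be used
against an S3-compatible endpoint. The endpoint and credentials here are
placeholders, not anything from our infrastructure:

```go
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Hypothetical endpoint and keys; radosgw speaks the S3 protocol,
	// so the same client works against it.
	client, err := minio.New("object.example.org", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// List buckets visible to these credentials.
	buckets, err := client.ListBuckets(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range buckets {
		log.Println(b.Name)
	}
}
```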
Change-Id: I1e18c7665b58480fb72e789692aa7f37816cd28f
Gerrit 3.3.1 seems to have introduced a bug which breaks the reviewers
column in the dashboard: https://bugs.chromium.org/p/gerrit/issues/detail?id=13899
This adds an override of gerrit.war to our Docker containers. The .war
is pulled over HTTP. It has been manually built by q3k from a source
checkout. The details on how this was done are in the WORKSPACE
http_file archive.
Once 3.3.3 lands we should get rid of it.
Change-Id: I8b64103cb87d8b185ff35165695a18cb19fea523
Stopgap until we finish b/3; we need to deploy some changes on it
without rebooting into newer nixpkgs.
Change-Id: Ic2690dfcb398a419338961c8fcbc7e604298977a
This brings oodviewer into k0.
oodviewer started as a py2/flask script running on q3k's personal infra,
which is now being turned down.
This is a rewrite of that script into similarly mediocre Go, conforming
to the exact same mediocre JSON API and spartan HTML interface.
This also deploys it into k0, in the oodviewer-prod namespace. It's
already running, but the DNS TTL for 'oodviewer.q3k.me' has to expire
before it begins handling traffic.
Change-Id: Ieef1b0f8f0c60e6fa5dbe7701e0a07a4257f99ce
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.
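
The arithmetic above, restated as a trivial sketch (the function name
and the power-of-two rounding are mine, not anything in the codebase):

```go
package main

import "fmt"

// osdRAMGB applies the rule of thumb from above: ~1 GB of RAM per
// 1 TB of OSD storage, rounded up to the next power of two for headroom.
func osdRAMGB(driveTB int) int {
	ram := driveTB * 1 // 1 GB RAM per TB of storage
	limit := 1
	for limit < ram {
		limit *= 2
	}
	return limit
}

func main() {
	fmt.Println(osdRAMGB(6)) // 6 TB drive -> 8 GB limit
}
```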
I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running full nodes like that without an underlying request makes for a
terrible draining experience).
We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?
Followup: b.hswaw.net/29
Already deployed to production.
Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
This will create the following:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  annotations: {}
  labels:
    name: sso-admins
  name: sso:admins
  namespace: valheim
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:admin-namespace
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: patryk@hackerspace.pl
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: palid@hackerspace.pl
This is not enough to allow palid to use kubecfg (as we use a
secretstore secret in this jsonnet), but it at least allows manually
restarting the server via kubectl, which is needed to update the game.
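
For example, a restart could look something like this (the deployment
name here is a guess; substitute the actual Valheim workload):

```shell
# Hypothetical deployment name — check `kubectl -n valheim get deploy` first.
kubectl -n valheim rollout restart deployment/valheim-server
```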
Change-Id: I6cb42ca87c9a78bbe34957f2c5e23acd2efe3423