hscloud/cluster
Serge Bazanski 793ca1b3b2 cluster/kube: limit OSDs in ceph-waw3 to 8GB RAM
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.
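The rule-of-thumb arithmetic above can be sketched as follows (the round-up-to-a-power-of-two helper is illustrative, not something from this repo):

```python
# Rule of thumb: ~1 GB of OSD RAM per 1 TB of backing storage.
drive_tb = 6
ram_gb = drive_tb * 1  # 6 GB for a 6 TB drive

# Round up to the next power of two for a comfortable limit.
limit_gb = 1
while limit_gb < ram_gb:
    limit_gb *= 2

print(limit_gb)  # -> 8
```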

I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running full nodes like that without an underlying request makes for a
terrible draining experience).
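A memory cap of this kind might look as follows in a Rook `CephCluster` spec. This is a hedged sketch only: the actual change in this repo lives in the jsonnet under `kube/`, and the exact field layout depends on the Rook version in use.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ceph-waw3
spec:
  resources:
    # Cap each OSD at 8 GiB. Setting a matching request means the
    # scheduler accounts for the memory instead of overcommitting
    # the node, which is the draining problem described above.
    osd:
      requests:
        memory: "8Gi"
      limits:
        memory: "8Gi"
```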

We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?

Followup: b.hswaw.net/29

Already deployed to production.

Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
2021-03-07 00:09:58 +00:00
| Path | Last commit | Date |
|---|---|---|
| admitomatic | cluster/admitomatic: allow whitelist-source-range | 2021-02-07 23:35:28 +00:00 |
| certs | cluster: add admitomatic CA/certificate | 2021-02-06 17:18:58 +00:00 |
| clustercfg | cluster/nix: integrate with readtree | 2021-02-14 14:46:07 +00:00 |
| doc | *: docs pass | 2021-03-06 22:21:28 +00:00 |
| kube | cluster/kube: limit OSDs in ceph-waw3 to 8GB RAM | 2021-03-07 00:09:58 +00:00 |
| nix | cluster: disable nginx/acme | 2021-02-15 22:14:41 +01:00 |
| prodaccess | *: developer machine HSPKI credentials | 2020-08-01 17:15:52 +02:00 |
| prodvider | prodvider: fix build after k8s update, add to CI presubmit | 2020-11-27 09:43:47 +00:00 |
| secrets | cluster: add admitomatic CA/certificate | 2021-02-06 17:18:58 +00:00 |
| tools | RFC: *: move away from rules_nixpkgs | 2021-02-15 22:11:35 +01:00 |
| hackdoc.toml | *: docs pass | 2021-03-06 22:21:28 +00:00 |
| README.md | *: docs pass | 2021-03-06 22:21:28 +00:00 |

Cluster Docs Home

Documentation relating to our Kubernetes clusters.

For information about the physical DC infrastructure, see //dc.