Commit 793ca1b3b2 by Serge Bazanski (forked from hswaw/hscloud)
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage -> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, rounding up, 8GB.

I'm doing this because over the past few weeks OSDs in ceph-waw3 have been using a _ton_ of RAM. This will probably not prevent that (instead, they will OOM more often :/), but it will at least stop us from wasting resources (k0 started migrating pods to other nodes, and running full nodes like that without an underlying request makes for a terrible draining experience). We still need to get to the bottom of why this is happening in the first place, though. Did it start when we moved to containerd?

Followup: b.hswaw.net/29

Already deployed to production.

Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
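The sizing arithmetic above can be sketched as a small helper. This is a hypothetical illustration, not code from the repo; the power-of-two rounding is an assumption chosen to match the 6GB -> 8GB round-up described in this change:

```python
import math

def osd_memory_request_gib(drive_tib: float, gib_per_tib: float = 1.0) -> int:
    """Apply the ~1TB storage -> 1GB RAM rule of thumb for Ceph OSDs,
    rounding the result up to the next power of two (so 6 GiB -> 8 GiB).
    Hypothetical helper for illustration only."""
    raw = drive_tib * gib_per_tib
    return 2 ** math.ceil(math.log2(raw))

# A 6TB drive per OSD yields a request of:
print(osd_memory_request_gib(6))  # -> 8
```

Note that a Kubernetes memory *request* only reserves capacity for scheduling; preventing the OOMs themselves would additionally require a limit (and, for Ceph, tuning the OSD's own memory target), which this change does not attempt.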
Repository top-level contents:

- admitomatic/
- certs/
- clustercfg/
- doc/
- kube/
- nix/
- prodaccess/
- prodvider/
- secrets/
- tools/
- hackdoc.toml
- README.md