
nix/cluster-configuration: fix CNI plugins being deleted on kubelet restart

master
q3k 2019-06-20 12:51:39 +02:00
parent c807f86b6a
commit f970a7ef0f
2 changed files with 7 additions and 9 deletions


@@ -49,12 +49,3 @@ thus may need to be manually copied into application namespace. (see
`tools/rook-s3cmd-config` can be used to generate test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (i.e. `waw-hdd-redundant-1-object:default-placement`)
Known Issues
============
After running `nixos-rebuild switch` on the hosts, the shared host/container CNI plugin directory gets nuked, and pods will fail to schedule on that node (TODO(q3k): error message here). To fix this, restart the calico-node pods running on the affected nodes. The Calico node pod will reschedule automatically and repopulate the CNI plugin directory.
    kubectl -n kube-system get pods -o wide | grep calico-node
    kubectl -n kube-system delete pod calico-node-XXXX
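The grep-then-delete step above can be combined into one pipeline. A minimal sketch of the name-extraction part, run against sample `kubectl -n kube-system get pods -o wide` output (all pod and node names here are made up for illustration; in a live cluster each extracted name would be passed to `kubectl -n kube-system delete pod`):

```shell
# Sample output in the shape of `kubectl -n kube-system get pods -o wide`
# (pod names, nodes, and IPs are illustrative, not from a real cluster).
sample='calico-node-abc12   1/1   Running   0   3d   10.0.0.1   node-a
coredns-xyz99       1/1   Running   0   3d   10.0.0.2   node-a
calico-node-def34   1/1   Running   0   3d   10.0.0.3   node-b'

# Keep only the calico-node pods and extract the pod name column.
pods=$(printf '%s\n' "$sample" | grep calico-node | awk '{print $1}')
printf '%s\n' "$pods"
# → calico-node-abc12
# → calico-node-def34
```

In a real cluster this becomes something like `kubectl -n kube-system get pods -o wide | grep calico-node | awk '{print $1}' | xargs kubectl -n kube-system delete pod`, optionally grepping for the affected node name first.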


@@ -235,4 +235,11 @@ in rec {
systemd.services.kubelet-online = {
script = pkgs.lib.mkForce "sleep 1";
};
# By default this removes all CNI plugins and replaces them with nix-defined
# ones. Since we bring our own CNI plugins via containers with host mounts,
# this causes them to be removed on every kubelet restart.
# TODO(q3k): file issue
systemd.services.kubelet = {
preStart = pkgs.lib.mkForce "sleep 1";
};
}
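The failure mode the `preStart` override works around can be illustrated in isolation. This is a rough sketch of the behaviour described in the comments above, not the actual NixOS module code (the real `preStart` script differs in detail; `CNI_DIR` stands in for the shared host/container plugin directory):

```shell
# Stand-in for the shared CNI plugin directory (e.g. the host mount that
# calico-node populates); a temp dir so the sketch is self-contained.
CNI_DIR=$(mktemp -d)

# A plugin binary installed by the calico-node container via a host mount.
touch "$CNI_DIR/calico"

# Rough equivalent of the default kubelet preStart behaviour described
# above: wipe the directory and relink only the nix-defined plugins.
rm -f "$CNI_DIR"/*
ln -s /bin/true "$CNI_DIR/bridge"   # /bin/true stands in for a nix store path

ls "$CNI_DIR"
# → bridge   (the calico plugin is gone; pods on this node fail to schedule)
```

Forcing `preStart` to `"sleep 1"` skips this wipe-and-relink step entirely, so the container-provided plugins survive a kubelet restart.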