forked from hswaw/hscloud

nix/cluster-configuration: fix CNI plugins being deleted on kubelet restart

parent c807f86b6a
commit f970a7ef0f
@@ -49,12 +49,3 @@ thus may need to be manually copied into application namespace. (see
`tools/rook-s3cmd-config` can be used to generate a test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (i.e. `waw-hdd-redundant-1-object:default-placement`).
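The region-name rule above is plain string concatenation; a minimal shell sketch (region name taken from the example above, variable names mine):

```shell
# The region name as reported by the cluster (example from the text above).
region="waw-hdd-redundant-1-object"

# s3cmd expects the placement target appended after a colon:
bucket_location="${region}:default-placement"
echo "$bucket_location"
```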
Known Issues
============
After running `nixos-rebuild switch` on the hosts, the shared host/container CNI plugin directory gets wiped, and pods will fail to schedule on that node (TODO(q3k): error message here). To fix this, restart the calico-node pods running on the affected nodes. The Calico Node pod will reschedule automatically and repopulate the CNI plugin directory.
    kubectl -n kube-system get pods -o wide | grep calico-node
    kubectl -n kube-system delete pod calico-node-XXXX
@@ -235,4 +235,11 @@ in rec {
  systemd.services.kubelet-online = {
    script = pkgs.lib.mkForce "sleep 1";
  };

  # This by default removes all CNI plugins and replaces them with nix-defined ones.
  # Since we bring our own CNI plugins via containers with host mounts, this causes
  # them to be removed on kubelet restart.
  # TODO(q3k): file issue
  systemd.services.kubelet = {
    preStart = pkgs.lib.mkForce "sleep 1";
  };
}
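In context, the override from the hunk above would sit in the host's NixOS module roughly as follows; this is a sketch assuming standard module structure, not verbatim from the repository:

```nix
{ pkgs, ... }: {
  # Keep kubelet's default preStart from re-linking (and thereby wiping) the
  # shared host/container CNI plugin directory; the CNI plugins are instead
  # bind-mounted into place by the calico-node pods.
  systemd.services.kubelet = {
    preStart = pkgs.lib.mkForce "sleep 1";
  };
}
```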