We just got this email:
We've been working with Jetstack, the authors of cert-manager, on a
series of fixes to the client. Cert-manager sometimes falls into a
traffic pattern where it sends really excessive traffic to Let's
Encrypt's servers, continuously. To mitigate this, we plan to start
blocking all traffic from cert-manager versions less than 0.8.0 (the
current semver minor release), as of November 1, 2019. Please upgrade
all of your cert-manager instances before then.
We're sending this email because this is the contact address of your
cert-manager instance at:
185.236.240.37.
Version 0.8.0 is much better but we still observe excessive traffic in
some cases. We're working with Jetstack to improve these cases. As new
versions of cert-manager are released, we will add the non-current
versions to our block list after 3 months. We strongly encourage
cert-manager users to stay up-to-date with new versions.
Also, there is an opportunity to help both Jetstack and Let's Encrypt.
Once you've upgraded, please check the logs for your cert-manager
instances from time to time. Are they making excessive requests to Let's
Encrypt (more than, say, 10 per day over multiple days)? If so, please
share details at https://github.com/jetstack/cert-manager/issues/1948 .
Thanks,
Let's Encrypt Team
Change-Id: Ic7152150ac1c96941423878c6d4b6209e07429cf
We accidentally created crdb-waw2 in
https://gerrit.hackerspace.pl/c/hscloud/+/2.
We remove it now and also backport a manual change that makes the
crdb-waw1 service public via a LoadBalancer.
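For reference, a minimal sketch of the kind of Service this results in;
the name, namespace, selector and ports are illustrative assumptions,
not the actual hscloud definitions (26257 and 8080 are CockroachDB's
default SQL and admin UI ports):

    // Sketch only: exposes the crdb-waw1 pods through an external load
    // balancer instead of a cluster-internal address.
    {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: {
        name: 'crdb-waw1-public',  // assumed name
        namespace: 'crdb-waw1',    // assumed namespace
      },
      spec: {
        type: 'LoadBalancer',  // the backported manual change
        selector: { app: 'crdb-waw1' },
        ports: [
          { name: 'sql', port: 26257, targetPort: 26257 },
          { name: 'http', port: 8080, targetPort: 8080 },
        ],
      },
    }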
Change-Id: I3bbd6f01b82c6efa458cc44776f086ba36e9f20c
This makes the registry automatically part of the cluster
infrastructure.
Tested by running kubecfg diff; no diffs found (apart from out-of-date
ACLs).
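A hypothetical sketch of the wiring, with invented names: the registry
becomes one more field of the top-level cluster object, so a kubecfg
diff or update covers it together with everything else:

    // cluster.jsonnet-style wiring; names are made up, not the real
    // library.
    local registry = {
      cfg:: { domain: 'registry.example.com' },  // assumed parameter
      deployment: { /* Deployment manifest derived from cfg */ },
      service: { /* Service manifest derived from cfg */ },
    };

    {
      // ...other cluster infrastructure components...
      registry: registry,
    }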
Change-Id: Ic0635e789cf3fb851f410bcf2865326f1fa87545
IP addresses are not necessary in the topology definitions of a
cockroach cluster.
They were mis-committed leftovers from trying to run the cluster on
DaemonSets with hostNetworking: true.
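Illustratively, a topology entry now only needs to identify the node;
the field and node names below are assumptions, not the real library
schema:

    // Sketch of a topology definition without addresses.
    {
      topology: [
        { name: 'bc01n01', node: 'bc01n01.hswaw.net' },
        { name: 'bc01n02', node: 'bc01n02.hswaw.net' },
        // Each entry previously also carried an ip: field, a leftover
        // from the hostNetworking DaemonSet experiment.
      ],
    }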
Change-Id: I4ef1f6ed9a745efc6b05846bc13aba9d1f8dc7c8
This prevents a bug where kubecfg fails to update the client pod when
running a cluster/kube/cluster.jsonnet update. The pod update is
attempted because of a runtime/intent difference in the serviceAccounts
specification: kubecfg sees a diff, attempts an update, kube-apiserver
rejects it (because pods are immutable), and kubecfg fails.
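A sketch of the shape of such a fix, assuming the spurious diff comes
from kube-apiserver filling in a default that the intent leaves unset:
spell the value out explicitly so runtime and intent agree. The pod
name and image are illustrative:

    {
      apiVersion: 'v1',
      kind: 'Pod',
      metadata: { name: 'crdb-waw1-client' },  // illustrative name
      spec: {
        // Stated explicitly even though it matches the server-side
        // default; otherwise the live object and the intent differ
        // here, kubecfg diffs, and the update is rejected.
        serviceAccountName: 'default',
        containers: [
          { name: 'client', image: 'cockroachdb/cockroach:v19.1.0' },
        ],
      },
    }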
Change-Id: I20b0ecbb264213a2eb483d475c7683b4965c82be
We move away from the StatefulSet-based deployment to manually starting
a Deployment per intended node. This allows us to pin individual
instances of Cockroach to particular nodes, so that they stay
co-located with their data.
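A sketch of one such per-node Deployment; the names, labels and image
are assumptions. The nodeSelector on the well-known
kubernetes.io/hostname label pins the single replica to the node that
holds its data:

    {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'crdb-waw1-bc01n01' },  // one per node
      spec: {
        replicas: 1,
        selector: {
          matchLabels: { app: 'crdb-waw1', instance: 'bc01n01' },
        },
        template: {
          metadata: {
            labels: { app: 'crdb-waw1', instance: 'bc01n01' },
          },
          spec: {
            // Pin to the node holding this instance's local data.
            nodeSelector: { 'kubernetes.io/hostname': 'bc01n01.hswaw.net' },
            containers: [
              { name: 'cockroach', image: 'cockroachdb/cockroach:v19.1.0' },
            ],
          },
        },
      },
    }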
We refactor this library to:
- support multiple databases, but with a strong suggestion of having
one per k8s cluster
- drop the database creation logic
- redo naming, allowing for two options (see the sketch after this
list): multiple clusters per namespace, or an exclusive namespace for
the cluster
- un-hardcode DNS names
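Purely as an illustration of the two naming options, a stub standing in
for the refactored library (the real API almost certainly differs):

    local cockroachdb = {
      // Stub: the real library builds full manifests; this only shows
      // how naming could be derived.
      Cluster(name, cfg):: {
        namespace: if cfg.ownNamespace then name else cfg.namespace,
        // Resource names get prefixed only when the namespace is shared.
        prefix: if cfg.ownNamespace then '' else name + '-',
      },
    };

    {
      // Exclusive namespace: short names, namespace == cluster name.
      waw1: cockroachdb.Cluster('crdb-waw1', { ownNamespace: true }),
      // Shared namespace: clusters coexist, resource names are prefixed.
      waw2: cockroachdb.Cluster('crdb-waw2',
                                { ownNamespace: false, namespace: 'crdb' }),
    }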
This pretty large change does the following:
- moves nix from bootstrap.hswaw.net to nix/
- changes clustercfg to use cfssl and moves it to cluster/clustercfg
- changes clustercfg to source information about the target location
of certs from nix
- changes clustercfg to push nix config
- changes tls certs to have more than one CA (see the CSR sketch after
this list)
- recalculates all TLS certs
(it keeps the old serviceaccounts key, otherwise we end up with
invalid serviceaccounts - the cert doesn't match, but who cares,
it's not used anyway)
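For context, a sketch of the kind of per-CA CSR that a cfssl-based
clustercfg generates (rendered to JSON for cfssl; all values here are
assumptions):

    {
      CN: 'hscloud cluster CA',  // assumed CN; one such CSR per CA
      key: { algo: 'ecdsa', size: 256 },
      names: [
        { O: 'Warsaw Hackerspace' },  // assumed organization
      ],
    }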
This is so that Calico starts with the proper subnet. Feeding it just
an IP from the node status means it parses the address as a /32 and
uses IPIP tunnels for all connectivity.
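Concretely, the distinction shows up in the Calico Node object's BGP
address, which carries a prefix length. A sketch, with assumed node
name and addresses:

    {
      apiVersion: 'projectcalico.org/v3',
      kind: 'Node',
      metadata: { name: 'bc01n01.hswaw.net' },  // assumed node name
      spec: {
        bgp: {
          // With the subnet included, Calico can route same-subnet
          // peers directly; a bare IP is taken as a /32 and forces
          // IPIP for all traffic.
          ipv4Address: '10.0.10.11/24',
        },
      },
    }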