Commit Graph

225 Commits (5cc64bf60e40e8386b164c2e84cd7a641dc793e6)

Author SHA1 Message Date
q3k 6579e842b0 kartongips: paper over^W^Wfix CRD updates
Ceph CRD updates would fail with:

  ERROR Error updating customresourcedefinitions cephclusters.ceph.rook.io: expected kind, but got map

This wasn't just https://github.com/bitnami/kubecfg/issues/259. We pull
in the 'solution' from Pulumi
(https://github.com/pulumi/pulumi-kubernetes/pull/622), which simply
retries the failed update as a JSON update instead, and that seems to
have worked.

We also add some better error return wrapping, which I used to debug
this issue properly.

Oof.

Change-Id: I2007a7857e44128d74760174b61b59efa58e9cbc
2021-09-11 20:54:34 +00:00
q3k 05c4b5515b cluster/nix: symlink /sbin/lvm
This is needed by the new Rook OSD daemons.

Change-Id: I16eb24332db40a8209e7eb9747a81fa852e5cad9
2021-09-11 20:45:45 +00:00
q3k 9848e7e15f cluster: deploy NixOS-based ceph
First pass at a non-rook-managed Ceph cluster. We call it k0 instead of
ceph-waw4, as by now we are pretty much sure that we will always have a
one-kube-cluster-to-one-ceph-cluster correspondence, with different Ceph
pools for different media kinds (if at all).

For now this has one mon and spinning rust OSDs. This can be iterated on
to make it less terrible with time.

See b/6 for more details.

Change-Id: Ie502a232c700af93f33fcad9fa1c57058161aa11
2021-09-11 20:33:24 +00:00
q3k 1dbefed537 Merge "cluster/kube: remove ceph diff against k0 production" 2021-09-11 20:32:57 +00:00
q3k 9f639694ba Merge "kartongips: switch default diff behaviour to subset, nag users" 2021-09-11 20:18:34 +00:00
q3k 29f314b620 Merge "kartongips: implement proper diffing of aggregated ClusterRoles" 2021-09-11 20:18:28 +00:00
q3k 4f0468fa26 cluster/kube: remove ceph diff against k0 production
This now has a zero diff against prod.

location fields in CephCluster.storage.nodes seem to have been removed
from the CRD at some point. Not sure how the CRUSH tree now gets
populated, but whatever, it's been working like this for a while
already. Same for CephObjectStore.gateway.type.

The Rook Operator has been zero-scaled for a while now due to b/6.

Change-Id: I30a836f273f4c1529f60fa9297c96b7aac412f59
2021-09-11 12:43:53 +00:00
q3k 59c8149df4 kartongips: switch default diff behaviour to subset, nag users
Change-Id: I998cdf7e693f6d1ce86c7ea411f47320d72a5906
2021-09-11 12:43:50 +00:00
q3k 72d7574536 kartongips: implement proper diffing of aggregated ClusterRoles
For a while now we've had spurious diffs against Ceph on k0 because of
a ClusterRole with an aggregationRule.

The way these behave is that the config object has an empty rule list,
and instead populates an aggregationRule which combines other existing
ClusterRoles into that ClusterRole. The control plane then populates the
rule field when the object is read/acted on, which caused us to always
see a diff between the configured and live state of that ClusterRole.

This hacks together a hardcoded fix for this particular behaviour.
Porting kubecfg over to SSA would probably also fix this - but that's
too much work for now.

Change-Id: I357c1417d4023691e5809f1af23f58f364353388
2021-09-11 12:40:18 +00:00
q3k b3c6770f8d ops, cluster: consolidate NixOS provisioning
This moves the diff-and-activate logic from cluster/nix/provision.nix
into ops/{provision,machines}.nix that can be used for both cluster
machines and bgpwtf machines.

The provisioning scripts now live per-NixOS-config, and anything under
ops.machines.$fqdn now has a .passthru.hscloud.provision derivation
which is that script. When run, it will attempt to deploy onto the
target machine.

There's also a top-level tool at `ops.provision` which builds all
configurations / machines and can be invoked with the machine name/fqdn
to run the corresponding provisioner script.

clustercfg is changed to use the new provisioning logic.

Change-Id: I258abce9e8e3db42af35af102f32ab7963046353
2021-09-10 23:55:52 +00:00
q3k 432fa30ded cluster/certs: bump ca-kube-prodvider
Redeployed.

Change-Id: I01110433f89df5595de0f9587508104d6091a774
2021-08-29 17:20:59 +00:00
q3k 89a16f4de4 cluster/admitomatic: allow use-regex n-i-c annotation
This annotation is used to permit routes defined by regexes instead of
simple prefix matching. This is used by our synapse deployment for
routing incoming HTTP requests to different Synapse components.

I stumbled upon this while deploying a new Matrix/Synapse instance.
This hasn't been a problem yet because the existing ingresses for Matrix
deployments predate admitomatic.

Change-Id: I821e58b214450ccf0de22d2585c3b0d11fbe71c0
2021-06-06 12:58:11 +00:00
q3k 7251f2720e Merge changes Ib068109f,I9a00487f,I1861fe7c,I254983e5,I3e2bedca, ...
* changes:
  cluster/identd/ident: update README
  cluster/kube: deploy identd
  cluster/identd: implement
  cluster/identd/kubenat: implement
  cluster/identd/cri: import
  cluster/identd/ident: add TestE2E
  cluster/identd/ident: add Query function
  cluster/identd/ident: add IdentError
  cluster/identd/ident: add basic ident protocol server
  cluster/identd/ident: add basic ident protocol client
2021-05-28 23:08:10 +00:00
q3k 46c3137d36 cluster/identd/ident: update README
Change-Id: Ib068109ff37749207e7b2a18c07f51d3c4ed3fd6
2021-05-26 19:46:13 +00:00
q3k 2414afe3c0 cluster/kube: deploy identd
Change-Id: I9a00487fc4a972ecb0904055dbaaab08221062c1
2021-05-26 19:46:09 +00:00
q3k 044386d638 cluster/identd: implement
This implements the main identd service that will run on our production
hosts. It's comparatively small, as most of the functionality is
implemented in //cluster/identd/ident and //cluster/identd/kubenat.

Change-Id: I1861fe7c93d105faa19a2bafbe9c85fe36502f73
2021-05-26 19:46:06 +00:00
q3k 6b649f8234 cluster/identd/kubenat: implement
This is a library to find pod information for a given TCP 4-tuple.

Change-Id: I254983e579e3aaa04c0c5491851f4af94a3f4249
2021-05-26 19:46:02 +00:00
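The shape of such a 4-tuple lookup can be sketched as below. All names here are illustrative, not the actual //cluster/identd/kubenat API: a real implementation would walk CRI pod sandboxes and the host's NAT state, while this sketch is backed by a static table to show the interface only.

```go
package main

import "fmt"

// Tuple4 identifies a TCP connection as seen from the host.
type Tuple4 struct {
	SrcIP   string
	SrcPort uint16
	DstIP   string
	DstPort uint16
}

// PodInfo is the result of a lookup: enough to attribute the connection
// to a workload.
type PodInfo struct {
	Namespace string
	Name      string
}

// Resolver maps 4-tuples to pods; here backed by a static table.
type Resolver struct {
	table map[Tuple4]PodInfo
}

func (r *Resolver) Lookup(t Tuple4) (PodInfo, bool) {
	p, ok := r.table[t]
	return p, ok
}

func main() {
	// Example addresses are made up for illustration.
	conn := Tuple4{SrcIP: "10.10.12.3", SrcPort: 33412, DstIP: "192.0.2.1", DstPort: 6667}
	r := &Resolver{table: map[Tuple4]PodInfo{
		conn: {Namespace: "matrix", Name: "synapse-0"},
	}}
	if pod, ok := r.Lookup(conn); ok {
		fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
	}
}
```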
q3k ae052f0804 cluster/identd/cri: import
This imports the CRI protobuf/gRPC specs. These are pulled from:

    https://raw.githubusercontent.com/kubernetes/cri-api/master/pkg/apis/runtime/v1alpha2/api.proto

Our host containerd does not implement v1, so we go with v1alpha2.

Change-Id: I3e2bedca76edc85eea9b61a8634c92175f0d2a30
2021-05-26 19:45:58 +00:00
q3k 3638a3d76a cluster/identd/ident: add TestE2E
Change-Id: I8a95fadf19376de2806cb63897b77e370559392f
2021-05-23 16:27:22 +00:00
q3k 8e603e13e5 cluster/identd/ident: add Query function
This is a high-level wrapper for querying identd, and uses IdentError to
carry errors received from the server.

Change-Id: I6444a67117193b97146ffd1548151cdb234d47b5
2021-05-23 16:27:17 +00:00
q3k 1c2bc12ad0 cluster/identd/ident: add IdentError
This adds a Go error type that can be used to wrap any ErrorResponse.

Change-Id: I57fbd056ac774f4e2ae3bdf85941c1010ada0656
2021-05-23 16:26:59 +00:00
q3k ce2737f2e7 cluster/identd/ident: add basic ident protocol server
This adds an ident protocol server and tests for it.

Change-Id: I830f85faa7dce4220bd7001635b20e88b4a8b417
2021-05-23 16:26:54 +00:00
q3k d4438d67a2 cluster/identd/ident: add basic ident protocol client
This is the first pass at an ident protocol client. In the end, we want
to implement an ident protocol server for our in-cluster identd, but
starting out with a client helps me get familiar with the protocol,
and will allow the server implementation to be tested against the
client.

Change-Id: Ic37b84577321533bab2f2fbf7fb53409a5defb95
2021-05-23 16:26:50 +00:00
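The wire format the client speaks is small: per RFC 1413 it sends "<port-on-server>, <port-on-client>\r\n" to port 113 and gets back either a USERID or an ERROR reply. A simplified request builder and reply parser (not the actual hscloud code, and with no network I/O) looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// identRequest builds the RFC 1413 query line for a connection.
func identRequest(serverPort, clientPort int) string {
	return fmt.Sprintf("%d, %d\r\n", serverPort, clientPort)
}

// parseIdentReply extracts the user from a USERID reply
// ("<ports> : USERID : <os> : <user>") or the error token from an ERROR
// reply ("<ports> : ERROR : <code>").
func parseIdentReply(line string) (user string, identErr string, err error) {
	parts := strings.SplitN(line, ":", 4)
	if len(parts) < 3 {
		return "", "", fmt.Errorf("malformed reply %q", line)
	}
	switch strings.TrimSpace(parts[1]) {
	case "USERID":
		if len(parts) != 4 {
			return "", "", fmt.Errorf("malformed USERID reply %q", line)
		}
		return strings.TrimSpace(parts[3]), "", nil
	case "ERROR":
		return "", strings.TrimSpace(parts[2]), nil
	default:
		return "", "", fmt.Errorf("unknown reply type in %q", line)
	}
}

func main() {
	fmt.Printf("request: %q\n", identRequest(6193, 23))
	// Example reply taken from RFC 1413.
	user, _, _ := parseIdentReply("6193, 23 : USERID : UNIX : stjohns")
	fmt.Println("user:", user)
}
```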
q3k e17f7edde0 cluster/kube: nginx: add Hscloud-Nic-Source-* headers
These can be used by production jobs to get the source port of the
client connecting over HTTP. A followup CR implements just that.

Change-Id: Ic8e29eaf806bb196d8cfcfb604ff66ae4d0d166a
2021-05-22 19:16:39 +00:00
q3k ba2f4d8215 cluster/prodvider: deploy
Change-Id: I01d931a664e4b09c0d75fb01fb3f2528bc0f1a53
2021-05-19 22:13:26 +00:00
q3k 02e1598eb3 cluster/prodvider: emit crdb certs
This emits short-lived user credentials for a `dev-user` in crdb-waw1
any time someone prodaccesses.

Change-Id: I0266a05c1f02225d762cfd2ca61976af0658639d
2021-05-19 22:13:22 +00:00
q3k bade46d45f go/pki: fix error return
DeveloperCredentialsLocation used to glog.Exitf instead of returning an
error, and a consumer (prodaccess) didn't check the return code. Bad
refactor?

Change-Id: I6c2d05966ba6b3eb300c24a51584ccf5e324cd49
2021-05-19 22:12:08 +00:00
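The shape of the fix: instead of terminating the whole process on failure (glog.Exitf), the location lookup returns an error and the caller decides what to do. The function body below is an illustrative stand-in, not the real go/pki code:

```go
package main

import (
	"fmt"
	"os"
)

// developerCredentialsLocation returns where developer credentials live.
// After the fix it propagates failures as errors instead of exiting; the
// path used here is hypothetical.
func developerCredentialsLocation() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", fmt.Errorf("getting home directory: %w", err)
	}
	return home + "/.hscloud-credentials", nil
}

func main() {
	// The caller (cf. prodaccess) must now actually check the error.
	loc, err := developerCredentialsLocation()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println(loc)
}
```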
q3k 5ae5cbec81 Merge "cluster/kube: bump nginx-ingress-controller, backport openssl 1.1.1k" 2021-05-19 15:34:45 +00:00
q3k 99b91b11f1 cluster/k0/admitomatic: add .hswaw.net to hswaw-prod namespace
This was preventing certificate refresh in the hswaw-prod mirko ingress.

Change-Id: I14b18b642a3948a9864e2d9a90b2a2b2c145b9b1
2021-03-28 17:34:34 +00:00
q3k 7967ca177b cluster/certs: update k0 certs
This leaves us with the next set of expiring certs in September 2021.

Fixes b/36.

Change-Id: I536497626c0dd3807fccf28d4b61e5e531cf8d9c
2021-03-27 12:19:25 +00:00
q3k 41b882d053 cluster: remove bc01n03 certs/secrets
Decommissioned node, noticed while rolling over certs in b/36.

Change-Id: Ia386ff846998c52799662179c325b24e78f2eca8
2021-03-27 12:18:56 +00:00
q3k 2e8d24b84a cluster/kube: bump nginx-ingress-controller, backport openssl 1.1.1k
This fixes CVE-2021-3450 and CVE-2021-3449.

Deployed on prod:

$ kubectl -n nginx-system exec nginx-ingress-controller-5c69c5cb59-2f8v4 -- openssl version
OpenSSL 1.1.1k  25 Mar 2021

Change-Id: I7115fd2367cca7b687c555deb2134b22d19a291a
2021-03-25 18:16:13 +00:00
q3k bf266c6aaf cluster/k0: add dns crdb user
In preparation for running PowerDNS on k0.

Change-Id: I853c7465a6a32d02628fa6cfdeb445eb9937b3be
2021-03-17 21:49:00 +00:00
q3k 3b8935378a cluster/crdb: make init job 'idempotent'
This enables its redeployment with a newer crdb image.

Change-Id: If039992674f401af53738c80d22cc2ca2818fe00
2021-03-17 21:48:30 +00:00
q3k 64de7afe32 cluster/kube/k0: fix syntax errors
This happened in 793ca1b3 and slipped past review.

Change-Id: Ie31f0e1ec03d6e4545d6683b21f528550bf4ef9f
2021-03-17 21:47:51 +00:00
q3k 793ca1b3b2 cluster/kube: limit OSDs in ceph-waw3 to 8GB RAM
Each OSD is connected to a 6TB drive, and with the good ol' 1TB storage
-> 1GB RAM rule of thumb for OSDs, we end up with 6GB. Or, to round up,
8GB.

I'm doing this because over the past few weeks OSDs in ceph-waw3 have
been using a _ton_ of RAM. This will probably not prevent that (and
instead they will OOM more often :/), but it will at least prevent us
from wasting resources (k0 started migrating pods to other nodes, and
running full nodes like that without an underlying request makes for a
terrible draining experience).

We need to get to the bottom of why this is happening in the first
place, though. Did this happen as we moved to containerd?

Followup: b.hswaw.net/29

Already deployed to production.

Change-Id: I98df63763c35017eb77595db7b9f2cce71756ed1
2021-03-07 00:09:58 +00:00
q3k 3ba5c1b591 *: docs pass
Change-Id: I87ca80d3f7728ed407071468ac233e6ad4574929
2021-03-06 22:21:28 +00:00
q3k bc0d3cb227 hackdoc: link to cs instead of gitweb
Change-Id: Ifca7a63517bceffe7ccc0452474d9d16626486de
2021-03-06 22:16:54 +00:00
q3k 0d26fc9780 cluster: disable nginx/acme
These are unused.

Change-Id: I2a428dabd0a27c060c595f5e0843d7d8d8e26dcd
2021-02-15 22:14:41 +01:00
q3k 765e369255 cluster: replace docker with containerd
This removes Docker and docker-shim from our production kubernetes, and
moves over to containerd/CRI. Docker support within Kubernetes was
always slightly shitty, and with 1.20 the integration was dropped
entirely. CRI/Containerd/runc is pretty much the new standard.

Change-Id: I98c89d5433f221b5fe766fcbef261fd72db530fe
2021-02-15 22:14:15 +01:00
q3k 4b613303b1 RFC: *: move away from rules_nixpkgs
This is an attempt to see how well we do without rules_nixpkgs.

rules_nixpkgs has the following problems:

 - complicates our build system significantly (generated external
   repository indirection for picking local/nix python and go)
 - creates builds that cannot run on production (as they are tainted by
   /nix/store libraries)
 - is not a full solution to the bazel hermeticity problem anyway, and
   we'll have to tackle that some other way (eg. by introducing proper
   C++ cross-compilation toolchains and building everything from C,
   including Python and Go)

Instead of rules_nixpkgs, we ship a shell.nix file, so NixOS users can
just:

  jane@hacker:~/hscloud $ nix-shell
  hscloud-build-chrootenv:jane@hacker:~/hscloud$ prodaccess

This shell.nix is in a way nicer, as it gives you all the tools needed
to access production straight away.

Change-Id: Ieceb5ae0fb4d32e87301e5c99416379cedc900c5
2021-02-15 22:11:35 +01:00
q3k 4842705406 cluster/nix: integrate with readtree
This unifies nixpkgs with the one defined in //default.nix and makes it
possible to use readTree to build the provisioners:

   nix-build -A cluster.nix.provision

   result/bin/provision

Change-Id: I68dd70b9c8869c7c0b59f5007981eac03667b862
2021-02-14 14:46:07 +00:00
q3k 225a5c7ee9 nixpkgs: bump
Fixes b/3.

Change-Id: I2f734422cdad00f78956477815c4aea645c6c49e
2021-02-14 14:43:07 +00:00
q3k 78d6f11cb2 Merge "cluster/admitomatic: allow whitelist-source-range" 2021-02-08 17:21:59 +00:00
q3k 877cf0af26 🅱️
Fixes b/8

Change-Id: I5a5779c3688451d89c0601dc913143d75048c9f6
2021-02-08 15:10:11 +00:00
q3k 943ab5b1a6 cluster/admitomatic: allow whitelist-source-range
Without this, cert-manager gets stuck.

Deployed to prod.

Change-Id: I356cd44f455b6f4aecea9ae396f6a05e1a727859
2021-02-07 23:35:28 +00:00
q3k f40c9249ce cluster/kube: allow system:admin-namespaces to modify ingresses
This will give any binding to system:admin-namespaces (eg. personal-*
namespaces, per-namespace extra admin access like matrix-0x3c) the
ability to create and update ingresses.

Change-Id: I522896ebe290fe982d6fe46b7b1d604d22b4f72c
2021-02-07 19:24:43 +00:00
q3k 41bbf1436a cluster/kube: deploy admitomatic webhook
This has been (successfully) tested on prod and then rolled back.

Change-Id: I22657f66b4aeaa8a0ae452035ba18a79f4549b14
2021-02-07 19:19:23 +00:00
q3k 3c5d836c56 cluster/kube: deploy admitomatic
This doesn't yet enable a webhook, but deploys admitomatic itself.

Change-Id: Id177bc8841c873031f9c196b8ff3c12dd846ba8e
2021-02-07 19:19:02 +00:00
q3k 3ab5f07c64 cluster/admitomatic: build docker image
Change-Id: I086a8b17a4dc7257de1bae3a6f0c95400af7e115
2021-02-07 19:18:53 +00:00