This adds a mod proxy system, called, well, modproxy.
It sits between Factorio server instances and the Factorio mod portal,
allowing arbitrary mod downloads without the servers needing to know
Factorio credentials.
Change-Id: I7bc405a25b6f9559cae1f23295249f186761f212
ceph-waw2 currently has some production issues [1] which have started
to cause write failures in the registry. The registry is the only user
of ceph-waw2's affected pool, so we reduce the dumpster fire blast
radius by moving it over to ceph-waw3.
This has already been deployed and data has been migrated over (via
s3cmd sync), and the migration has been verified (by a push and pull,
and pull of an older image).
[1] - pgs stuck inactive in the object storage pool
Change-Id: I26789b52008bb7be953954ec3fd3dd727ac15347
In addition to k8s certificates, prodaccess now issues HSPKI
certificates, with DN=$username.sso.hswaw.net. These are installed into
XDG_CONFIG_HOME (or the OS equivalent).
//go/pki will now automatically attempt to load these certificates.
This means you can now run any pki-dependent tool without
-hspki_disable, and with automatic mTLS!
Change-Id: I5b28e193e7c968d621bab0d42aabd6f0510fed6d
instead of Python packages
As usual with Python sadness, the @pydeps wheels are built on the bazel
host, so stuffing them inside a container_image (or py_image) will cause
new and unexpected kinds of misery.
Change-Id: Id4e4d53741cf2da367f01aa15c21c133c5cf0dba
"Anyone can pull all images" rule did only match on anonymous users. Now
it should match all users, including authenticated ones.
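To illustrate the shape of the change (a jsonnet sketch assuming a
cesanta/docker_auth-style ACL; the rule contents here are illustrative,
not the production config):

  // Sketch only - not the exact production rule.
  {
    acl: [
      // Before: an empty account pattern only matches anonymous users.
      // { match: { account: '' }, actions: ['pull'] },
      // After: a catch-all regex matches authenticated users too.
      { match: { account: '/.*/' }, actions: ['pull'] },
    ],
  }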
Change-Id: I2205299093feca51f30526ba305eadbaa0a68ecb
We would like gitea to have its ssh server exposed on TCP port 22 on the
same address as its web interface. We would also still like to use all
the automation around ingresses already in place (like cert-manager
integration).
To solve this, we create an additional LoadBalancer service for
nginx-ingress-controller, and set up a special tcp-services forwarding
rule to pass port 22 traffic to the gitea-prod/gitea service, like we
already do for gerrit.
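For reference, the forwarding rule boils down to a tcp-services
ConfigMap entry along these lines (a jsonnet sketch; the ConfigMap
name/namespace are illustrative, while the namespace/service:port value
format is nginx-ingress-controller's own):

  {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: { name: 'tcp-services', namespace: 'nginx-system' },
    data: {
      // Forward TCP :22 on the controller's LoadBalancer to gitea SSH.
      '22': 'gitea-prod/gitea:22',
    },
  }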
Change-Id: I5bfc901ebe858464f8e9c2f3b2216b254ccd6c4d
This turns the existing script into a proper sh_binary, and injects its
dependencies (kubectl and jq) into it as deps.
This change also pulls in BUILD files for jq and a dep (oniguruma) into
//third_party, and adds buildable external repositories for them.
The jq/oniguruma BUILD files are lifted from
https://github.com/attilaolah/bazel-tools/.
Change-Id: If2e548bd60a8fd34e4f3be767ae59c6b2f2286d9
It was getting large and unwieldy (to the point where kubecfg was slow).
In this change, we:
- move the Cluster function to cluster.libsonnet
- move the Cluster instantiation into k0.libsonnet
- shuffle some fields around to make sure things are well split between
k0-specific and general cluster configs.
- add 'view' files that build on 'cluster.libsonnet' to allow rendering
  either the entire k0 state, or some subsets (for speed) - see the
  sketch below
- update the documentation, with drive-by small fixes and reindentation
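For example, a view file can be as small as selecting a subtree of the
instantiated cluster (a minimal sketch; file and field names here are
illustrative):

  // k0-registry.jsonnet - render just the registry subset of k0.
  local k0 = import 'k0.libsonnet';
  {
    registry: k0.registry,
  }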
Change-Id: I4b8d920b600df79100295267efe21b8c82699d5b
We're not using them for anything. Initially they were going to be used
for nixops, but nixops is not very good, so let's just drop them.
We still have a Nix dependency for clustercfg.py when provisioning
nodes, but rules_nix/nixpkgs in WORKSPACE were unrelated to that.
Change-Id: I28c249507d1be9c5dbbd1ee764deccd9ab038549
We handwavingly plan on implementing monitoring as a two-tier system:
- a 'global' component that is responsible for global aggregation,
long-term storage and alerting.
- multiple 'per-cluster' components, that collect metrics from
Kubernetes clusters and export them to the global component.
In addition, several lower tiers (collected by per-cluster components)
might also be implemented in the future - for instance, specific to some
subprojects.
Here we start sketching out some basic jsonnet structure (currently all
in a single file, with little parametrization) and a cluster-level
prometheus server that scrapes Kubernetes Node and cAdvisor metrics.
This review is mostly to get this committed as early as possible, and to
make sure that the little existing Prometheus scrape configuration is
sane.
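For a rough idea of that configuration's shape, a minimal jsonnet
sketch (only the Node job shown; the cAdvisor job is analogous, with
metrics_path '/metrics/cadvisor'; the token/CA paths are the standard
in-cluster serviceaccount mounts):

  {
    scrape_configs: [
      {
        job_name: 'node',
        scheme: 'https',
        kubernetes_sd_configs: [{ role: 'node' }],
        bearer_token_file: '/var/run/secrets/kubernetes.io/serviceaccount/token',
        tls_config: {
          ca_file: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
        },
      },
    ],
  }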
Change-Id: If37ac3b1243b8b6f464d65fee6d53080c36f992c
This kills two birds with one stone:
- it updates the secretstore tool to be slightly smarter about secrets,
  to the point where we can now just point it at a secret directory and
  ask it to 'sync' all secrets in there
- it runs the new fancy sync command on all keys to update them, which
  is a follow-up to gerrit/328.
Change-Id: I0eec4a3e8afcd9481b0b248154983aac25657c40
This was an attempt to make new calico nodes use a full FQDN. However,
this change seemingly also makes the calico control plane use the FQDN
for all existing nodes, thus breaking CNI for new pods.
We revert this change, thereby keeping all calico node names as
hostnames. We could fix this by editing /var/lib/calico/nodename on
hosts to FQDNs, but it might not be worth the effort.
See https://github.com/projectcalico/calico/issues/1093 for more
context.
Change-Id: I52bfb00f604053d57d3009aebd6c50db7dc74f58
We still use etcd as the data store (and as such didn't set up k8s CRDs
for Calico), but that's okay for now.
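Concretely, the datastore choice is just calico-node configuration -
DATASTORE_TYPE and ETCD_ENDPOINTS are Calico's own env vars, while the
endpoint value below is illustrative:

  {
    env: [
      { name: 'DATASTORE_TYPE', value: 'etcdv3' },
      { name: 'ETCD_ENDPOINTS', value: 'https://etcd.example.com:2379' },
      // DATASTORE_TYPE=kubernetes would instead store state in the
      // Calico CRDs that we deliberately haven't set up yet.
    ],
  }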
Change-Id: If6d66f505c6b40f2646ffae7d33d0d641d34a963
This previously allowed all namespace admins (ie. personal-$user
namespace users) to create any sort of object they wanted within that
namespace. This could've been exploited to create a RoleBinding that
would then allow binding a serviceaccount to the insecure
podsecuritypolicy, thereby allowing escalation to root on nodes.
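To illustrate the class of bug and fix (a hedged sketch only, not the
exact rules from this change):

  {
    rules: [
      // Before: a wildcard grant also covers rbac.authorization.k8s.io,
      // letting namespace admins create arbitrary RoleBindings:
      // { apiGroups: ['*'], resources: ['*'], verbs: ['*'] },
      // After: enumerate allowed groups, leaving RBAC objects out:
      { apiGroups: ['', 'apps', 'extensions'], resources: ['*'], verbs: ['*'] },
    ],
  }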
As far as I've checked, this hasn't been exploited, and access to the
k8s cluster has so far also been limited to trusted users.
This has been deployed to production.
Change-Id: Icf8747d765ccfa9fed843ec9e7b0b957ff27d96e
This bumps Rook/Ceph. The new resources (mostly RBAC) come from
following https://rook.io/docs/rook/v1.1/ceph-upgrade.html .
It's already deployed on production. The new CSI driver has not been
tested, but the old flexvolume-based provisioners still work. We'll
migrate when Rook offers a nice solution for this.
We've hit a kubecfg bug that does not allow controlling the CephCluster
CRD directly anymore (I had to apply it via kubecfg show / kubectl apply
-f instead). This might be due to our bazel/prod k8s version mismatch,
or it might be related to https://github.com/bitnami/kubecfg/issues/259.
Change-Id: Icd69974b294b823e60b8619a656d4834bd6520fd
This is hackdoc, a documentation rendering tool for monorepos.
This is the first code iteration, which can only serve from a local git
checkout.
The code is incomplete and WIP.
Change-Id: I68ef7a991191c1bb1b0fdd2a8d8353aba642e28f
This makes clustercfg ensure certificates are valid for at least 30
days, and renew them otherwise.
We use this to bump all the certs that were about to expire in a week.
They are now valid until 2021.
There's still some certs that expire in 2020. We need to figure out a
better story for this, especially as the next expiry is in 2020 -
today's prod rollout was somewhat disruptive (basically this was done
by a full cluster upgrade-like rollout flow, via clustercfg).
We also drive-by bump the number of mons in ceph-waw3 to 3, as it
should be (this gets rid of a nasty SPOF that would've bitten us during
this upgrade otherwise).
Change-Id: Iee050b1b9cba4222bc0f3c7bce9e4cf9b25c8bdc
In preparation for updating to 1.1.0, which will be much more involved.
Also fix a typo in registry.libsonnet, whoops.
Change-Id: I7668bf53c7580f99fdf56fe6227f04a468f8de50
This reflects current production. This needs to get bumped up to 3 at
some point, as otherwise we lose HA for this cluster.
Change-Id: Ie5937e6a216b635ecbc4c82ecd182a410167c3f8
We change the existing behaviour (copy files & run nixos-rebuild switch)
to something closer to nixops-style. This now means that provisioning
admin machines needs Nix installed locally, but that's probably an okay
choice to make.
The upside of this approach is that it's easier to debug and test
derivations, as all data is local to the repo and the workstation, and
deploying just means copying a configuration closure and switching the
system to it. At some point we should even be able to run the entire
cluster within a set of test VMs.
We also bump the kubernetes control plane to 1.14. Kubelets are still at
1.13 and their upgrade is coming up today too.
Change-Id: Ia9832c47f258ee223d93893d27946d1161cc4bbd
This needed an upstream change to allow only some pools to be backed up,
otherwise benji would crash when stumbling upon the first PVC from a
pool that wasn't backed by the ceph cluster it was acting upon.
Change-Id: I52bf163c16352cb59fdd3dbdd576145ce1dbac03
This time from a bare hscloud checkout to make sure _nothing_ is fucked
up.
This causes no change remotely, just makes the repo reflect reality.
Change-Id: Ie8db01300771268e0371c3cdaf1930c8d7cbfb1a