We want to start keeping codebases separated per 'team'/intent, to then
have simple OWNER files/trees to specify review rules.
This means dc/ stuff can all be OWNED by q3k, and review will only
involve a +1 for style/readability, instead of a +2 for approval.
Change-Id: I05afbc4e1018944b841ec0d88cd24cc95bec8bf1
Add sync script for camp IX.
This will likely be triggered externally from some sort of long-running
service.
Change-Id: I4ead566e4308d24fdb64e789a7ca0e3dbf0214fb
python_rules is completely broken when it comes to py2/py3 support.
Here, we replace it with native python rules from new Bazel versions [1] and rules_pip for PyPI dependencies [2].
rules_pip is relatively little-known and experimental, but it seems to work much better than what we had previously.
We also unpin rules_docker and fix .bazelrc to force Bazel into Python 2 mode - hopefully, this repo will now work
fine under operating systems where `python` is python2 (as the standard dictates).
[1] - https://docs.bazel.build/versions/master/be/python.html
[2] - https://github.com/apt-itude/rules_pip
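For illustration, the .bazelrc change to force a Python version looks roughly like the lines below. The exact flag names depend on the Bazel version in use; treat these as an assumption, not necessarily the lines committed here.

```
# Force Python 2 for build targets and host tooling (assumed flags).
build --python_version=PY2
build --host_force_python=PY2
```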
Change-Id: Ibd969a4266db564bf86e9c96275deffb9610dd44
The following services were never ported:
- cmc-proxy
- arista-proxy
- m6220-proxy
- topo
They now build.
Change-Id: I0688bfe43cdff946e6662e21969ef539382c0e86
We have quite a few of them at this point, and we're likely going to use
app/* and go/svc/* for 'core' services only anyway.
Change-Id: Ic315fbd2d672e525439992bfcd9ead730d1a1b71
Another change I lost somewhere in the process of remembering how to
gerrit.
I rewrote it (lost the original commit), and also added the (upcoming)
egressifier service.
Change-Id: I1647bc3b1e504a192150ab76f4c6d1709e608f0a
IP addresses are not necessary in the topology definitions of a
cockroach cluster.
They were mis-committed leftovers from trying to run the cluster on
DaemonSets with hostNetworking: true.
Change-Id: I4ef1f6ed9a745efc6b05846bc13aba9d1f8dc7c8
This prevents a bug where kubecfg fails to update the client pod when
running a cluster/kube/cluster.jsonnet update. The pod update is
attempted because of runtime/intent differences at serviceAccounts
specification, which causes kubecfg to see a diff, which causes it to
attempt an update, which causes kube-apiserver to reject the change
(because pods are immutable), which causes kubecfg to fail.
Change-Id: I20b0ecbb264213a2eb483d475c7683b4965c82be
This change implements the k8s machinery for Gerrit.
This might look somewhat complex at first, but the gist of it is:
- k8s mounts etc, git, cache, db, index as RW PVs
- k8s mounts a configmap containing gerrit.conf into an external
directory
- k8s mounts a secret containing secure.conf into an external directory
- on startup, gerrit's entrypoint will copy over {gerrit,secure}.conf
and start a small updater script that copies over gerrit.conf if
there's any change. This should, in theory, make gerrit reload its
config.
This is already running on production. You're probably looking at this
change through the instance deployed by itself :)
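The updater loop could be sketched roughly as below. This is a minimal
sketch only: the actual script shipped in the image, and the mount paths
shown in the comment, are assumptions.

```python
import filecmp
import shutil
import time


def sync_once(src: str, dst: str) -> bool:
    """Copy src over dst if their contents differ; return True if copied.

    Assumes dst already exists (the entrypoint copies the initial
    config before starting this loop).
    """
    if filecmp.cmp(src, dst, shallow=False):
        return False
    shutil.copyfile(src, dst)
    return True


def watch(src: str, dst: str, interval: float = 10.0) -> None:
    """Poll forever, re-copying src to dst whenever it changes."""
    while True:
        if sync_once(src, dst):
            print("gerrit.config changed, copied over")
        time.sleep(interval)


# Example usage (paths are illustrative, not the real mounts):
# watch("/var/gerrit-config/gerrit.config", "/var/gerrit/etc/gerrit.config")
```

Gerrit is then expected to pick up the refreshed gerrit.config on its own;
whether it actually reloads is, as the message says, a "should, in theory".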
Change-Id: Ida9dff721c17cf4da7fb6ccbb54d2c4024672572
We move away from the StatefulSet based deployment to manually starting
a deployment per intended node. This allows us to pin individual
instances of Cockroach to particular nodes, so that they stay
co-located with their data.
We refactor this library to:
- support multiple databases, but with a strong suggestion of having
one per k8s cluster
- drop the database creation logic
- redo naming (allowing for two options: multiple clusters per
namespace or an exclusive namespace for the cluster)
- unhardcode dns names