mirror of https://gerrit.hackerspace.pl/hscloud
HSCloud Clusters
================

Current cluster: `k0.hswaw.net`

Accessing via kubectl
---------------------

There isn't yet a service for getting short-term user certificates. Instead, you'll have to get admin certificates:

    bazel run //cluster/clustercfg:clustercfg admincreds $(whoami)-admin
    kubectl get nodes

Provisioning nodes
------------------

 - bring up a new node with NixOS, running the configuration.nix from bootstrap (to be documented)
 - `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`

That's it!

Ceph
====

We run Ceph via Rook. The Rook operator runs in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.

The following Ceph clusters are available:

ceph-waw1
---------

HDDs on bc01n0{1-3}. 3TB total capacity.

The following storage classes use this cluster:

 - `waw-hdd-redundant-1` - erasure coded 2.1
 - `waw-hdd-yolo-1` - unreplicated (you _will_ lose your data)
 - `waw-hdd-redundant-1-object` - erasure coded 2.1 object store

A dashboard is available at https://ceph-waw1.hswaw.net/. To get the admin password, run:

    kubectl -n ceph-waw1 get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode ; echo

Rados Gateway (S3) is available at https://object.ceph-waw1.hswaw.net/. To create an object store user, consult the rook.io manual (https://rook.io/docs/rook/v0.9/ceph-object-store-user-crd.html). The user authentication secret is generated in the Ceph cluster namespace (`ceph-waw1`), so it may need to be manually copied into the application namespace (see the comment in `app/registry/prod.jsonnet`).

`tools/rook-s3cmd-config` can be used to generate a test configuration file for s3cmd. Remember to append `:default-placement` to your region name (ie. `waw-hdd-redundant-1-object:default-placement`)
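The storage classes above are consumed like any other Kubernetes storage class. A minimal sketch of a PersistentVolumeClaim against `waw-hdd-redundant-1` (the claim name, namespace, and size are illustrative, not from this repo):

```yaml
# Hypothetical example: claim 10Gi of erasure-coded HDD storage from ceph-waw1.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
  namespace: example-app
spec:
  storageClassName: waw-hdd-redundant-1
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```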
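For reference, an object store user created against `waw-hdd-redundant-1-object` might look roughly like this, following the Rook v0.9 CRD docs linked above (the user name `registry` is just an example, not something defined in this repo):

```yaml
# Hypothetical example of a Rook v0.9 object store user.
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: registry
  namespace: ceph-waw1
spec:
  store: waw-hdd-redundant-1-object
  displayName: "registry user"
```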
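The file produced by `tools/rook-s3cmd-config` should roughly correspond to an s3cmd configuration along these lines. All values below are illustrative placeholders (the keys are standard s3cmd ones); the real keys come from the object store user secret:

```
[default]
access_key = <AccessKey from the user secret>
secret_key = <SecretKey from the user secret>
host_base = object.ceph-waw1.hswaw.net
host_bucket = object.ceph-waw1.hswaw.net
use_https = True
bucket_location = waw-hdd-redundant-1-object:default-placement
```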
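Copying the generated user secret into an application namespace can be sketched with kubectl and sed. This assumes a user named `registry` and Rook's `rook-ceph-object-user-<store>-<user>` secret naming convention; both the user and the target namespace here are hypothetical:

```shell
# Copy the object store user secret from the Ceph cluster namespace
# into a (hypothetical) application namespace "registry" by rewriting
# the namespace field and re-applying the object.
kubectl -n ceph-waw1 get secret \
    rook-ceph-object-user-waw-hdd-redundant-1-object-registry -o yaml \
    | sed 's/namespace: ceph-waw1/namespace: registry/' \
    | kubectl apply -f -
```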