diff --git a/README b/README
index 5a61a6f6..dc389bd4 100644
--- a/README
+++ b/README
@@ -1,9 +1,10 @@
 HSCloud
 =======
 
-This is a monorepo. You'll need bash and Bazel 0.20.0+ to use it.
+This is a monorepo. You'll need bash and Bazel 1.0.0+ to use it.
+
+If you have Nix installed you will also be able to manage bare metal nodes. If you don't want that, you can skip it.
 
-You'll also need Nix installed globally in your system until [rules_nixpkgs/75](https://github.com/tweag/rules_nixpkgs/issues/75) is resolved. Or run on NixOS.
 
 Getting started
 ---------------
diff --git a/cluster/README b/cluster/README
index b012b5c1..01208592 100644
--- a/cluster/README
+++ b/cluster/README
@@ -7,15 +7,21 @@ Accessing via kubectl
 ---------------------
 
     prodaccess # get a short-lived certificate for your use via SSO
+               # if your local username is not the same as your HSWAW SSO
+               # username, pass `-username foo`
     kubectl version
     kubectl top nodes
 
 Every user gets a `personal-$username` namespace. Feel free to use it for your own purposes, but watch out for resource usage!
 
-Persistent Storage
-------------------
+    kubectl run -n personal-$username --image=alpine:latest -it foo
 
-HDDs on bc01n0{1-3}. 3TB total capacity.
+To proceed further you should be somewhat familiar with Kubernetes, otherwise the rest of the terminology might not make sense. We recommend going through the official Kubernetes tutorials.
+
+Persistent Storage (waw2)
+-------------------------
+
+HDDs on bc01n0{1-3}. 3TB total capacity. Don't use this, as this pool should go away soon (the disks are slow, the network is slow, and the RAID controllers lie). Use ceph-waw3 instead.
 
 The following storage classes use this cluster:
 
@@ -26,7 +32,22 @@ The following storage classes use this cluster:
 
 Rados Gateway (S3) is available at https://object.ceph-waw2.hswaw.net/. To create a user, ask an admin.
 
-PersistentVolumes currently bound to PVCs get automatically backued up (hourly for the next 48 hours, then once every 4 weeks, then once every month for a year).
+PersistentVolumes currently bound to PersistentVolumeClaims get automatically backed up (hourly for the next 48 hours, then once every 4 weeks, then once every month for a year).
+
+Persistent Storage (waw3)
+-------------------------
+
+HDDs on dcr01s2{2,4}. 40TB total capacity for now. Use this.
+
+The following storage classes use this cluster:
+
+ - `waw-hdd-yolo-3` - 1 replica
+ - `waw-hdd-redundant-3` - 2 replicas
+ - `waw-hdd-redundant-3-object` - 2 replicas, object store
+
+Rados Gateway (S3) is available at https://object.ceph-waw3.hswaw.net/. To create a user, ask an admin.
+
+PersistentVolumes currently bound to PVCs get automatically backed up (hourly for the next 48 hours, then once every 4 weeks, then once every month for a year).
 
 Administration
 ==============
@@ -34,7 +55,8 @@ Administration
 
 Provisioning nodes
 ------------------
 
- - bring up a new node with nixos, running the configuration.nix from bootstrap (to be documented)
+ - bring up a new node with NixOS; the configuration doesn't matter and will be nuked anyway
+ - edit cluster/nix/defs-machines.nix
  - `bazel run //cluster/clustercfg nodestrap bc01nXX.hswaw.net`
 
 Ceph - Debugging
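
Note on the backup retention policy mentioned in the cluster/README changes above (hourly for the first 48 hours, then one snapshot per ~4 weeks, kept for a year): the schedule can be sketched as a small helper that maps a snapshot's age to its retention interval. This is a hypothetical illustration of the stated policy only, not part of the actual backup tooling:

```python
from datetime import timedelta

def backup_interval(age: timedelta):
    """Hypothetical sketch of the retention policy described in the
    README: how densely are snapshots of a given age retained?
    Returns the retention interval, or None once a snapshot expires."""
    if age <= timedelta(hours=48):
        return timedelta(hours=1)   # hourly for the first 48 hours
    if age <= timedelta(days=365):
        return timedelta(weeks=4)   # then roughly one per 4 weeks
    return None                     # dropped after a year
```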