mirror of https://gerrit.hackerspace.pl/hscloud synced 2025-01-24 16:03:54 +00:00

prod{access,vider}: implement

Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the kubernetes cluster.

Currently, all users get a personal-$username namespace in which they
have administrative rights. Otherwise, they get no access.

In addition, we define a static CRB to allow some admins access to
everything. In the future, this will be more granular.

We also update relevant documentation.

Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
q3k 2019-08-29 20:12:24 +02:00
parent d16454badc
commit b13b7ffcdb
28 changed files with 1506 additions and 50 deletions

README

@@ -13,24 +13,9 @@ Getting started
tools/install.sh # build tools
Then, to get Kubernets:
Then, to get Kubernetes access to k0.hswaw.net (current nearly-production cluster):
echo "185.236.240.36 k0.hswaw.net" >> /etc/hosts # temporary hack until we get loadbalancers working
bazel run //cluster/clustercfg:clustercfg admincreds $(whoami)-admin # get administrative creds (valid for 5 days)
prodaccess
kubectl version
Clusters
========
The following kubernetes clusters are available:
k0.hswaw.net
------------
3 nodes (bc01n{01,02,03}.hswaw.net), mixed worker/master.
No persistent storage (yet).
Temporary development cluster. Will become base production cluster once configuration is done, but will *likely be fully cleared*.
Feel free to use for tests, but your pods might disappear at any time.
You will automatically get a `personal-$USERNAME` namespace created in which you have full admin rights.


@@ -690,3 +690,15 @@ go_repository(
commit = "68ac5879751a7105834296859f8c1bf70b064675",
importpath = "github.com/sethvargo/go-password",
)
go_repository(
name = "in_gopkg_ldap_v3",
commit = "9f0d712775a0973b7824a1585a86a4ea1d5263d9",
importpath = "gopkg.in/ldap.v3",
)
go_repository(
name = "in_gopkg_asn1_ber_v1",
commit = "f715ec2f112d1e4195b827ad68cf44017a3ef2b1",
importpath = "gopkg.in/asn1-ber.v1",
)


@@ -6,33 +6,17 @@ Current cluster: `k0.hswaw.net`
Accessing via kubectl
---------------------
There isn't yet a service for getting short-term user certificates. Instead, you'll have to get admin certificates:
bazel run //cluster/clustercfg:clustercfg admincreds $(whoami)-admin
prodaccess # get a short-lived certificate for your use via SSO
kubectl get nodes
Provisioning nodes
Persistent Storage
------------------
- bring up a new node with nixos, running the configuration.nix from bootstrap (to be documented)
- `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`
That's it!
Ceph
====
We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.
The following Ceph clusters are available:
ceph-waw1
---------
HDDs on bc01n0{1-3}. 3TB total capacity.
The following storage classes use this cluster:
- `waw-hdd-paranoid-1` - 3 replicas
- `waw-hdd-redundant-1` - erasure coded 2.1
- `waw-hdd-yolo-1` - unreplicated (you _will_ lose your data)
- `waw-hdd-redundant-1-object` - erasure coded 2.1 object store
@@ -49,3 +33,22 @@ thus may need to be manually copied into application namespace. (see
`tools/rook-s3cmd-config` can be used to generate test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (ie. `waw-hdd-redundant-1-object:default-placement`)
Administration
==============
Provisioning nodes
------------------
- bring up a new node with nixos, running the configuration.nix from bootstrap (to be documented)
- `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`
That's it!
Ceph
====
We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.
The following Ceph clusters are available:

cluster/certs/BUILD.bazel (new file)

@@ -0,0 +1,18 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
load("@io_bazel_rules_go//extras:embed_data.bzl", "go_embed_data")
go_embed_data(
name = "certs_data",
srcs = glob(["*.crt"]),
package = "certs",
flatten = True,
)
go_library(
name = "go_default_library",
srcs = [
":certs_data", # keep
],
importpath = "code.hackerspace.pl/cluster/certs",
visibility = ["//visibility:public"],
)


@@ -0,0 +1,31 @@
-----BEGIN CERTIFICATE-----
MIIFQzCCBCugAwIBAgIUbcxmU7cMccTf/ERKgi0uDIKJRoEwDQYJKoZIhvcNAQEL
BQAwgYMxCzAJBgNVBAYTAlBMMRQwEgYDVQQIEwtNYXpvd2llY2tpZTEPMA0GA1UE
BxMGV2Fyc2F3MRswGQYDVQQKExJXYXJzYXcgSGFja2Vyc3BhY2UxEzARBgNVBAsT
CmNsdXN0ZXJjZmcxGzAZBgNVBAMTEmt1YmVybmV0ZXMgbWFpbiBDQTAeFw0xOTA4
MzAyMDI1MDBaFw0yMDA4MjkyMDI1MDBaMIGsMQswCQYDVQQGEwJQTDEUMBIGA1UE
CBMLTWF6b3dpZWNraWUxDzANBgNVBAcTBldhcnNhdzEbMBkGA1UEChMSV2Fyc2F3
IEhhY2tlcnNwYWNlMSowKAYDVQQLEyFrdWJlcm5ldGVzIHByb2R2aWRlciBpbnRl
cm1lZGlhdGUxLTArBgNVBAMTJGt1YmVybmV0ZXMgcHJvZHZpZGVyIGludGVybWVk
aWF0ZSBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAL/38OKQgrqI
9WZKRubACVF1QUmZS9IIzcmmxsAJEvNwCirAr6Rx45G+uBlUx0PmHK+783Pa0WEO
deTHpZZt5o6YrQGvEzkI9ckDraUjRcQEQewi3kygmAdPW6GMWZd7fjCjsEQ0Engc
qJ7BkEWNfJYLh8VpEwPz1ClqFrlbHU55hbuvNNg3Ro0enFmTu3PPZYUIcdX3jyJz
p/fsE7K/f2OhHG2ej0Ji2Ssz6Bo9bB6yHLMN1oYzGB5H8Xa5dQ6LqpU0wUBqtGC8
06ZUfNA1gtpTOj+ApDX/OYucoOE422r1lT6SfgeBhHGN3xalcYyiPumFsCBUSq+B
7oLRW3emWJcjlOdmhtx26yl5/XpONY8u/jPG56CnT3tNGPdYnpVQ/969NrKA7yd4
TRA4rU6Nyg5f3x8Xrw5QPci5Uuz2X2feFy53x25i2tRT2fm5VabzdjsO9mXCZbl8
BO8mLVJ4Ojw5ER/sIw/OME29+tcBL3j31OoBUAHo82ca4B0KJBCWDHrjDTlchFfT
fQfFWuRluZaa1kGU/9hEuHe8wXNsMlkCW+68xZ5SXLX29ruhx7SoDk3+SMk1GMNv
vZr6CjWer94OajPN+scW7Pol2mhqENWFsTDA0WFN0HwLjLna9vQJg6vZeobm3bWZ
DWl93HqdKeINlp9Q0HQ7nR+LUkeodWf7AgMBAAGjgYMwgYAwDgYDVR0PAQH/BAQD
AgGmMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTAD
AQH/MB0GA1UdDgQWBBRpjeqS08ZAgwwhQZnMEmrNN2PdszAfBgNVHSMEGDAWgBSY
Ml0OTzMe+wnpiSQTFkJqgNGZ0DANBgkqhkiG9w0BAQsFAAOCAQEAiVxVjz4vuN0w
9mw56taa8AxOF4Cl18LEuxVnw6ugxG5ahlhZOssnv/HdDwoHdlbLw5ER2RTK0hFT
whH76BkJOUwAZ+YggpnOFf5hUIf9e3Pfu5MtdSBJQ0LHPRY3QPP/gHEsQR0muXVd
AIyTQZPuJ2M98bWgaZX4yrJ31jLjcNPFM7RXiIi1ZgTr7LTRCALoFm1Tw/kM5TE7
2qYjcaeJO1X3Zon5UXJogYa/3JreKQlBhGZgHHNAQobmVNmJTEvOuPw/31ZWDKVR
Qrv04QYFUwCNGdI1Bin1rk9lbsrTiEP2x8W5cwGPaa1MR45xTrrEYBrplUJXiCBQ
kwCwP+xLBQ==
-----END CERTIFICATE-----


@@ -4,6 +4,7 @@ import logging
import os
from six import StringIO
import subprocess
import tempfile
logger = logging.getLogger(__name__)
@@ -32,6 +33,20 @@ _ca_config = {
"expiry": "168h"
},
"profiles": {
"intermediate": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"cert sign",
"crl sign",
"server auth",
"client auth",
],
"ca_constraint": {
"is_ca": True,
},
},
"server": {
"expiry": "8760h",
"usages": [
@@ -156,12 +171,17 @@ class CA(object):
return key, csr
def sign(self, csr, save=None):
def sign(self, csr, save=None, profile='client-server'):
logging.info("{}: Signing CSR".format(self))
ca = self._cert
cakey = self.ss.plaintext(self._secret_key)
config = tempfile.NamedTemporaryFile(mode='w')
json.dump(_ca_config, config)
config.flush()
out = self._cfssl_call(['sign', '-ca=' + ca, '-ca-key=' + cakey,
'-profile=client-server', '-'], stdin=csr)
'-profile='+profile, '-config='+config.name, '-'], stdin=csr)
cert = out['cert']
if save is not None:
name = os.path.join(self.cdir, save)
@@ -170,6 +190,7 @@ class CA(object):
f.write(cert)
f.close()
config.close()
return cert
def upload(self, c, remote_cert):
@@ -181,7 +202,7 @@ class CA(object):
class ManagedCertificate(object):
def __init__(self, ca, name, hosts, o=None, ou=None):
def __init__(self, ca, name, hosts, o=None, ou=None, profile='client-server'):
self.ca = ca
self.hosts = hosts
@@ -190,6 +211,7 @@ class ManagedCertificate(object):
self.cert = '{}.cert'.format(name)
self.o = o
self.ou = ou
self.profile = profile
self.ensure()
@@ -230,7 +252,7 @@ class ManagedCertificate(object):
logger.info("{}: Generating...".format(self))
key, csr = self.ca.gen_key(self.hosts, o=self.o, ou=self.ou, save=self.key)
self.ca.sign(csr, save=self.cert)
self.ca.sign(csr, save=self.cert, profile=self.profile)
def upload(self, c, remote_cert, remote_key, concat_ca=False):
logger.info("Uploading Cert {} to {} & {}".format(self, remote_cert, remote_key))


@@ -57,7 +57,7 @@ def _file_exists(c, filename):
def configure_k8s(username, ca, cert, key):
subprocess.check_call([
'kubectl', 'config',
'set-cluster', cluster,
'set-cluster', 'admin.' + cluster,
'--certificate-authority=' + ca,
'--embed-certs=true',
'--server=https://' + cluster + ':4001',
@@ -71,13 +71,13 @@ def configure_k8s(username, ca, cert, key):
])
subprocess.check_call([
'kubectl', 'config',
'set-context', cluster,
'--cluster=' + cluster,
'set-context', 'admin.' + cluster,
'--cluster=' + 'admin.' + cluster,
'--user=' + username,
])
subprocess.check_call([
'kubectl', 'config',
'use-context', cluster,
'use-context', 'admin.' + cluster,
])
@@ -86,6 +86,18 @@ def admincreds(args):
sys.stderr.write("Usage: admincreds q3k\n")
return 1
username = args[0]
print("")
print("WARNING WARNING WARNING WARNING WARNING WARNING")
print("===============================================")
print("")
print("You are requesting ADMIN credentials.")
print("")
print("You likely shouldn't be doing this, and")
print("instead should be using `prodaccess`.")
print("")
print("===============================================")
print("WARNING WARNING WARNING WARNING WARNING WARNING")
print("")
## Make kube certificates.
certs_root = os.path.join(local_root, 'cluster/certs')
@@ -169,6 +181,10 @@ def nodestrap(args, nocerts=False):
## Make kube certificates.
ca_kube = ca.CA(ss, certs_root, 'kube', 'kubernetes main CA')
# Make prodvider intermediate CA.
c = ca_kube.make_cert('ca-kube-prodvider', o='Warsaw Hackerspace', ou='kubernetes prodvider intermediate', hosts=['kubernetes prodvider intermediate CA'], profile='intermediate')
c.ensure()
# Make kubelet certificate (per node).
c = ca_kube.make_cert('kube-kubelet-'+fqdn, o='system:nodes', ou='Kubelet', hosts=['system:node:'+fqdn, fqdn])
c.upload_pki(r, pki_config('kube.kubelet'))


@@ -1,6 +1,7 @@
# Top level cluster configuration.
local kube = import "../../kube/kube.libsonnet";
local policies = import "../../kube/policies.libsonnet";
local calico = import "lib/calico.libsonnet";
local certmanager = import "lib/cert-manager.libsonnet";
@@ -9,6 +10,7 @@ local coredns = import "lib/coredns.libsonnet";
local metallb = import "lib/metallb.libsonnet";
local metrics = import "lib/metrics.libsonnet";
local nginx = import "lib/nginx.libsonnet";
local prodvider = import "lib/prodvider.libsonnet";
local registry = import "lib/registry.libsonnet";
local rook = import "lib/rook.libsonnet";
@@ -30,7 +32,7 @@ local Cluster(fqdn) = {
"rbac.authorization.kubernetes.io/autoupdate": "true",
},
labels+: {
"kubernets.io/bootstrapping": "rbac-defaults",
"kubernetes.io/bootstrapping": "rbac-defaults",
},
},
rules: [
@@ -57,6 +59,96 @@ local Cluster(fqdn) = {
],
},
// This ClusterRole is bound to all humans that log in via prodaccess/prodvider/SSO.
// It should allow viewing of non-sensitive data for debuggability and openness.
crViewer: kube.ClusterRole("system:viewer") {
rules: [
{
apiGroups: [""],
resources: [
"nodes",
"namespaces",
"pods",
"configmaps",
"services",
],
verbs: ["list"],
},
{
apiGroups: ["metrics.k8s.io"],
resources: [
"nodes",
"pods",
],
verbs: ["list"],
},
{
apiGroups: ["apps"],
resources: [
"statefulsets",
],
verbs: ["list"],
},
{
apiGroups: ["extensions"],
resources: [
"deployments",
"ingresses",
],
verbs: ["list"],
}
],
},
// This ClusterRole is applied (scoped to personal namespace) to all humans.
crFullInNamespace: kube.ClusterRole("system:admin-namespace") {
rules: [
{
apiGroups: ["*"],
resources: ["*"],
verbs: ["*"],
},
],
},
// This ClusterRoleBinding allows root access to cluster admins.
crbAdmins: kube.ClusterRoleBinding("system:admins") {
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "ClusterRole",
name: "cluster-admin",
},
subjects: [
{
apiGroup: "rbac.authorization.k8s.io",
kind: "User",
name: user + "@hackerspace.pl",
} for user in [
"q3k",
"implr",
"informatic",
]
],
},
podSecurityPolicies: policies.Cluster {},
allowInsecureNamespaces: [
policies.AllowNamespaceInsecure("kube-system"),
# TODO(q3k): fix this?
policies.AllowNamespaceInsecure("ceph-waw2"),
],
// Allow all service accounts (thus all controllers) to create secure pods.
crbAllowServiceAccountsSecure: kube.ClusterRoleBinding("policy:allow-all-secure") {
roleRef_: cluster.podSecurityPolicies.secureRole,
subjects: [
{
kind: "Group",
apiGroup: "rbac.authorization.k8s.io",
name: "system:serviceaccounts",
}
],
},
// Calico network fabric
calico: calico.Environment {},
// CoreDNS for this cluster.
@@ -106,6 +198,9 @@ local Cluster(fqdn) = {
objectStorageName: "waw-hdd-redundant-2-object",
},
},
// Prodvider
prodvider: prodvider.Environment {},
};


@@ -36,6 +36,7 @@
local kube = import "../../../kube/kube.libsonnet";
local cm = import "cert-manager.libsonnet";
local policies = import "../../../kube/policies.libsonnet";
{
Cluster(name): {
@@ -70,6 +71,8 @@ local cm = import "cert-manager.libsonnet";
[if cluster.cfg.ownNamespace then "ns"]: kube.Namespace(cluster.namespaceName),
},
insecurePolicy: policies.AllowNamespaceInsecure(cluster.namespaceName),
name(suffix):: if cluster.cfg.ownNamespace then suffix else name + "-" + suffix,
pki: {


@@ -1,6 +1,7 @@
# Deploy MetalLB
local kube = import "../../../kube/kube.libsonnet";
local policies = import "../../../kube/policies.libsonnet";
local bindServiceAccountClusterRole(sa, cr) = kube.ClusterRoleBinding(cr.metadata.name) {
roleRef: {
@@ -32,6 +33,8 @@ local bindServiceAccountClusterRole(sa, cr) = kube.ClusterRoleBinding(cr.metadat
ns: if cfg.namespaceCreate then kube.Namespace(cfg.namespace),
insecurePolicy: policies.AllowNamespaceInsecure(cfg.namespace),
saController: kube.ServiceAccount("controller") {
metadata+: {
namespace: cfg.namespace,


@@ -1,6 +1,7 @@
# Deploy a per-cluster Nginx Ingress Controller
local kube = import "../../../kube/kube.libsonnet";
local policies = import "../../../kube/policies.libsonnet";
{
Environment: {
@@ -21,6 +22,8 @@ local kube = import "../../../kube/kube.libsonnet";
namespace: kube.Namespace(cfg.namespace),
allowInsecure: policies.AllowNamespaceInsecure(cfg.namespace),
maps: {
make(name):: kube.ConfigMap(name) {
metadata+: env.metadata,


@@ -0,0 +1,85 @@
# Deploy prodvider (prodaccess server) in cluster.
local kube = import "../../../kube/kube.libsonnet";
{
Environment: {
local env = self,
local cfg = env.cfg,
cfg:: {
namespace: "prodvider",
image: "registry.k0.hswaw.net/cluster/prodvider:1567199084-2e1c08fa7a41faac2ef3f79a1bb82f8841a68016",
pki: {
intermediate: {
cert: importstr "../../certs/ca-kube-prodvider.cert",
key: importstr "../../secrets/plain/ca-kube-prodvider.key",
},
kube: {
cert: importstr "../../certs/ca-kube.crt",
},
}
},
namespace: kube.Namespace(cfg.namespace),
metadata(component):: {
namespace: cfg.namespace,
labels: {
"app.kubernetes.io/name": "prodvider",
"app.kubernetes.io/managed-by": "kubecfg",
"app.kubernetes.io/component": component,
},
},
secret: kube.Secret("ca") {
metadata+: env.metadata("prodvider"),
data_: {
"intermediate-ca.crt": cfg.pki.intermediate.cert,
"intermediate-ca.key": cfg.pki.intermediate.key,
"ca.crt": cfg.pki.kube.cert,
},
},
deployment: kube.Deployment("prodvider") {
metadata+: env.metadata("prodvider"),
spec+: {
replicas: 3,
template+: {
spec+: {
volumes_: {
ca: kube.SecretVolume(env.secret),
},
containers_: {
prodvider: kube.Container("prodvider") {
image: cfg.image,
args: [
"/cluster/prodvider/prodvider",
"-listen_address", "0.0.0.0:8080",
"-ca_key_path", "/opt/ca/intermediate-ca.key",
"-ca_certificate_path", "/opt/ca/intermediate-ca.crt",
"-kube_ca_certificate_path", "/opt/ca/ca.crt",
],
volumeMounts_: {
ca: { mountPath: "/opt/ca" },
}
},
},
},
},
},
},
svc: kube.Service("prodvider") {
metadata+: env.metadata("prodvider"),
target_pod:: env.deployment.spec.template,
spec+: {
type: "LoadBalancer",
ports: [
{ name: "public", port: 443, targetPort: 8080, protocol: "TCP" },
],
},
},
},
}


@@ -152,11 +152,12 @@ local cm = import "cert-manager.libsonnet";
},
local data = self,
pushers:: [
{ who: ["q3k", "inf"], what: "vms/*" },
{ who: ["q3k", "inf"], what: "app/*" },
{ who: ["q3k", "inf"], what: "go/svc/*" },
{ who: ["q3k", "informatic"], what: "vms/*" },
{ who: ["q3k", "informatic"], what: "app/*" },
{ who: ["q3k", "informatic"], what: "go/svc/*" },
{ who: ["q3k"], what: "bgpwtf/*" },
{ who: ["q3k"], what: "devtools/*" },
{ who: ["q3k", "informatic"], what: "cluster/*" },
],
acl: [
{


@@ -161,7 +161,7 @@ in rec {
serviceClusterIpRange = "10.10.12.0/24";
runtimeConfig = "api/all,authentication.k8s.io/v1beta1";
authorizationMode = ["Node" "RBAC"];
enableAdmissionPlugins = ["Initializers" "NamespaceLifecycle" "NodeRestriction" "LimitRanger" "ServiceAccount" "DefaultStorageClass" "ResourceQuota"];
enableAdmissionPlugins = ["Initializers" "NamespaceLifecycle" "NodeRestriction" "LimitRanger" "ServiceAccount" "DefaultStorageClass" "ResourceQuota" "PodSecurityPolicy"];
extraOpts = ''
--apiserver-count=3 \
--proxy-client-cert-file=${pki.kubeFront.apiserver.cert} \


@@ -0,0 +1,25 @@
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
go_library(
name = "go_default_library",
srcs = [
"kubernetes.go",
"prodaccess.go",
],
importpath = "code.hackerspace.pl/hscloud/cluster/prodaccess",
visibility = ["//visibility:private"],
deps = [
"//cluster/certs:go_default_library",
"//cluster/prodvider/proto:go_default_library",
"@com_github_golang_glog//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_x_crypto//ssh/terminal:go_default_library",
],
)
go_binary(
name = "prodaccess",
embed = [":go_default_library"],
visibility = ["//visibility:public"],
)


@@ -0,0 +1,26 @@
prodvider
=========
It provides access, yo.
Architecture
------------
Prodvider uses an intermediate CA (the prodvider CA, signed by the kube CA) to generate the following:
- a cert for prodvider to present itself over gRPC for prodaccess clients
- a cert for prodvider to authenticate itself to the kube apiserver
- client certificates for prodaccess consumers.
Any time someone runs `prodaccess`, they get a certificate from the intermediate CA, and the intermediate CA is included as part of the chain that they receive. They can then use this chain to authenticate against Kubernetes.
Naming
------
Prodvider customers get certificates with CN=`username@hackerspace.pl` and O=`sso:username`. This means that they appear to Kubernetes as a `User` named `username@hackerspace.pl` and a `Group` named `sso:username`. In the future, other groups might be given to users; do not rely on this relationship.
Kubernetes Structure
--------------------
After generating a user certificate, prodvider will also call kubernetes to set up a personal user namespace (`personal-username`), a RoleBinding to `system:admin-namespace` for their `User` in that namespace (thus giving them full rights in it) and a ClusterRoleBinding to `system:viewer` for their `User` (thus giving them some read access to all resources, but not to sensitive data like secrets).
`system:admin-namespace` and `system:viewer` are defined in `//cluster/kube`.


@@ -0,0 +1,110 @@
package main
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path"
"path/filepath"
"time"
"github.com/golang/glog"
pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
)
func kubernetesPaths() (string, string, string) {
localRoot := os.Getenv("hscloud_root")
if localRoot == "" {
glog.Exitf("Please source env.sh")
}
localKey := path.Join(localRoot, ".kubectl", fmt.Sprintf("%s.key", flagUsername))
localCert := path.Join(localRoot, ".kubectl", fmt.Sprintf("%s.crt", flagUsername))
localCA := path.Join(localRoot, ".kubectl", fmt.Sprintf("ca.crt"))
return localKey, localCert, localCA
}
func needKubernetesCreds() bool {
localKey, localCert, _ := kubernetesPaths()
// Check for existence of cert/key.
if _, err := os.Stat(localKey); os.IsNotExist(err) {
return true
}
if _, err := os.Stat(localCert); os.IsNotExist(err) {
return true
}
// Cert/key exist, try to load and parse.
creds, err := tls.LoadX509KeyPair(localCert, localKey)
if err != nil {
return true
}
if len(creds.Certificate) != 1 {
return true
}
cert, err := x509.ParseCertificate(creds.Certificate[0])
if err != nil {
return true
}
creds.Leaf = cert
// Check if certificate will still be valid in 2 hours.
target := time.Now().Add(2 * time.Hour)
if creds.Leaf.NotAfter.Before(target) {
return true
}
return false
}
func useKubernetesKeys(keys *pb.KubernetesKeys) {
localKey, localCert, localCA := kubernetesPaths()
parent := filepath.Dir(localKey)
if _, err := os.Stat(parent); os.IsNotExist(err) {
os.MkdirAll(parent, 0700)
}
if err := ioutil.WriteFile(localKey, keys.Key, 0600); err != nil {
glog.Exitf("WriteFile(%q): %v", localKey, err)
}
if err := ioutil.WriteFile(localCert, keys.Cert, 0600); err != nil {
glog.Exitf("WriteFile(%q): %v", localCert, err)
}
if err := ioutil.WriteFile(localCA, keys.Ca, 0600); err != nil {
glog.Exitf("WriteFile(%q): %v", localCA, err)
}
kubectl := func(args ...string) {
cmd := exec.Command("kubectl", args...)
out, err := cmd.CombinedOutput()
if err != nil {
glog.Exitf("kubectl %v: %v: %v", args, err, string(out))
}
}
kubectl("config",
"set-cluster", keys.Cluster,
"--certificate-authority="+localCA,
"--embed-certs=true",
"--server=https://"+keys.Cluster+":4001")
kubectl("config",
"set-credentials", flagUsername,
"--client-certificate="+localCert,
"--client-key="+localKey,
"--embed-certs=true")
kubectl("config",
"set-context", keys.Cluster,
"--cluster="+keys.Cluster,
"--user="+flagUsername)
kubectl("config", "use-context", keys.Cluster)
}


@@ -0,0 +1,114 @@
package main
import (
"context"
"crypto/x509"
"flag"
"fmt"
"os"
"os/user"
"syscall"
"github.com/golang/glog"
"golang.org/x/crypto/ssh/terminal"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"code.hackerspace.pl/cluster/certs"
pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
)
var (
flagProdvider string
flagUsername string
flagForce bool
)
func init() {
flag.Set("logtostderr", "true")
}
func main() {
user, err := user.Current()
if err == nil {
flagUsername = user.Username
}
flag.StringVar(&flagProdvider, "prodvider", "prodvider.hswaw.net:443", "Prodvider endpoint")
flag.StringVar(&flagUsername, "username", flagUsername, "Username to authenticate with")
flag.BoolVar(&flagForce, "force", false, "Force retrieving certificates even if they already exist")
flag.Parse()
if flagUsername == "" {
glog.Exitf("Username could not be detected, please provide with -username flag")
}
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certs.Data["ca-kube.crt"]); !ok {
glog.Exitf("Could not load k8s CA")
}
creds := credentials.NewClientTLSFromCert(cp, "")
conn, err := grpc.Dial(flagProdvider, grpc.WithTransportCredentials(creds))
if err != nil {
glog.Exitf("Could not dial prodvider: %v", err)
}
prodvider := pb.NewProdviderClient(conn)
ctx := context.Background()
if !needKubernetesCreds() && !flagForce {
fmt.Printf("Kubernetes credentials exist. Use `prodaccess -force` to force update.\n")
os.Exit(0)
}
attempts := 0
for {
ok := authenticate(ctx, prodvider)
attempts += 1
if !ok {
if attempts >= 3 {
os.Exit(1)
}
} else {
fmt.Printf("Good evening professor. I see you have driven here in your Ferrari.\n")
os.Exit(0)
}
}
}
func authenticate(ctx context.Context, prodvider pb.ProdviderClient) bool {
req := &pb.AuthenticateRequest{
Username: flagUsername,
Password: password(),
}
res, err := prodvider.Authenticate(ctx, req)
if err != nil {
glog.Exitf("Prodvider error: %v", err)
}
switch res.Result {
case pb.AuthenticateResponse_RESULT_AUTHENTICATED:
break
case pb.AuthenticateResponse_RESULT_INVALID_CREDENTIALS:
fmt.Printf("Invalid username or password.\n")
return false
default:
glog.Exitf("Unknown authentication result: %v", res.Result)
}
useKubernetesKeys(res.KubernetesKeys)
return true
}
func password() string {
fmt.Printf("Enter SSO/LDAP password for %s@hackerspace.pl: ", flagUsername)
bytePassword, err := terminal.ReadPassword(int(syscall.Stdin))
if err != nil {
return ""
}
fmt.Printf("\n")
return string(bytePassword)
}


@@ -0,0 +1,64 @@
load("@io_bazel_rules_docker//container:container.bzl", "container_image", "container_layer", "container_push")
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
go_library(
name = "go_default_library",
srcs = [
"certs.go",
"kubernetes.go",
"main.go",
"service.go",
],
importpath = "code.hackerspace.pl/hscloud/cluster/prodvider",
visibility = ["//visibility:private"],
deps = [
"//cluster/prodvider/proto:go_default_library",
"@com_github_cloudflare_cfssl//config:go_default_library",
"@com_github_cloudflare_cfssl//csr:go_default_library",
"@com_github_cloudflare_cfssl//signer:go_default_library",
"@com_github_cloudflare_cfssl//signer/local:go_default_library",
"@com_github_golang_glog//:go_default_library",
"@in_gopkg_ldap_v3//:go_default_library",
"@io_k8s_api//core/v1:go_default_library",
"@io_k8s_api//rbac/v1:go_default_library",
"@io_k8s_apimachinery//pkg/api/errors:go_default_library",
"@io_k8s_apimachinery//pkg/apis/meta/v1:go_default_library",
"@io_k8s_client_go//kubernetes:go_default_library",
"@io_k8s_client_go//rest:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
],
)
go_binary(
name = "prodvider",
embed = [":go_default_library"],
visibility = ["//visibility:public"],
)
container_layer(
name = "layer_bin",
files = [
":prodvider",
],
directory = "/cluster/prodvider/",
)
container_image(
name = "runtime",
base = "@prodimage-bionic//image",
layers = [
":layer_bin",
],
)
container_push(
name = "push",
image = ":runtime",
format = "Docker",
registry = "registry.k0.hswaw.net",
repository = "cluster/prodvider",
tag = "{BUILD_TIMESTAMP}-{STABLE_GIT_COMMIT}",
)

cluster/prodvider/certs.go (new file)

@@ -0,0 +1,112 @@
package main
import (
"crypto/tls"
"fmt"
"time"
"github.com/cloudflare/cfssl/csr"
"github.com/cloudflare/cfssl/signer"
"github.com/golang/glog"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
)
func (p *prodvider) selfCreds() grpc.ServerOption {
glog.Infof("Bootstrapping certificate for self (%q)...", flagProdviderCN)
// Create a key and CSR.
csrPEM, keyPEM, err := p.makeSelfCSR()
if err != nil {
glog.Exitf("Could not generate key and CSR for self: %v", err)
}
// Create a cert
certPEM, err := p.makeSelfCertificate(csrPEM)
if err != nil {
glog.Exitf("Could not sign certificate for self: %v", err)
}
serverCert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
glog.Exitf("Could not use gRPC certificate: %v", err)
}
signerCert, _ := p.sign.Certificate("", "")
serverCert.Certificate = append(serverCert.Certificate, signerCert.Raw)
return grpc.Creds(credentials.NewTLS(&tls.Config{
Certificates: []tls.Certificate{serverCert},
}))
}
func (p *prodvider) makeSelfCSR() ([]byte, []byte, error) {
signerCert, _ := p.sign.Certificate("", "")
req := &csr.CertificateRequest{
CN: flagProdviderCN,
KeyRequest: &csr.BasicKeyRequest{
A: "rsa",
S: 4096,
},
Names: []csr.Name{
{
C: signerCert.Subject.Country[0],
ST: signerCert.Subject.Province[0],
L: signerCert.Subject.Locality[0],
O: signerCert.Subject.Organization[0],
OU: signerCert.Subject.OrganizationalUnit[0],
},
},
}
g := &csr.Generator{
Validator: func(req *csr.CertificateRequest) error { return nil },
}
return g.ProcessRequest(req)
}
func (p *prodvider) makeSelfCertificate(csr []byte) ([]byte, error) {
req := signer.SignRequest{
Hosts: []string{},
Request: string(csr),
Profile: "server",
}
return p.sign.Sign(req)
}
func (p *prodvider) makeKubernetesCSR(username, o string) ([]byte, []byte, error) {
signerCert, _ := p.sign.Certificate("", "")
req := &csr.CertificateRequest{
CN: username,
KeyRequest: &csr.BasicKeyRequest{
A: "rsa",
S: 4096,
},
Names: []csr.Name{
{
C: signerCert.Subject.Country[0],
ST: signerCert.Subject.Province[0],
L: signerCert.Subject.Locality[0],
O: o,
OU: fmt.Sprintf("Prodvider Kubernetes Cert for %s/%s", username, o),
},
},
}
g := &csr.Generator{
Validator: func(req *csr.CertificateRequest) error { return nil },
}
return g.ProcessRequest(req)
}
func (p *prodvider) makeKubernetesCertificate(csr []byte, notAfter time.Time) ([]byte, error) {
req := signer.SignRequest{
Hosts: []string{},
Request: string(csr),
Profile: "client",
NotAfter: notAfter,
}
return p.sign.Sign(req)
}


@@ -0,0 +1,205 @@
package main
import (
"encoding/pem"
"fmt"
"time"
"github.com/golang/glog"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
)
func (p *prodvider) kubernetesCreds(username string) (*pb.KubernetesKeys, error) {
o := fmt.Sprintf("sso:%s", username)
csrPEM, keyPEM, err := p.makeKubernetesCSR(username+"@hackerspace.pl", o)
if err != nil {
return nil, err
}
certPEM, err := p.makeKubernetesCertificate(csrPEM, time.Now().Add(13*time.Hour))
if err != nil {
return nil, err
}
caCert, _ := p.sign.Certificate("", "")
caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caCert.Raw})
// Build certificate chain from new cert and intermediate CA.
chainPEM := append(certPEM, caPEM...)
glog.Infof("Generated k8s certificate for %q", username)
return &pb.KubernetesKeys{
Cluster: "k0.hswaw.net",
// APIServerCA
Ca: p.kubeCAPEM,
// Chain of new cert + intermediate CA
Cert: chainPEM,
Key: keyPEM,
}, nil
}
func (p *prodvider) kubernetesConnect() error {
csrPEM, keyPEM, err := p.makeKubernetesCSR("prodvider", "system:masters")
if err != nil {
return err
}
certPEM, err := p.makeKubernetesCertificate(csrPEM, time.Now().Add(30*24*time.Hour))
if err != nil {
return err
}
caCert, _ := p.sign.Certificate("", "")
caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caCert.Raw})
glog.Infof("Generated k8s certificate for self (system:masters)")
// Build certificate chain from our cert and intermediate CA.
chainPEM := append(certPEM, caPEM...)
config := &rest.Config{
Host: flagKubernetesHost,
TLSClientConfig: rest.TLSClientConfig{
// Chain to authenticate ourselves (us + intermediate CA).
CertData: chainPEM,
KeyData: keyPEM,
// APIServer CA for verification.
CAData: p.kubeCAPEM,
},
}
cs, err := kubernetes.NewForConfig(config)
if err != nil {
return err
}
p.k8s = cs
return nil
}
// kubernetesSetupUser ensures that for a given SSO username we:
// - have a personal-<username> namespace
// - have a sso:<username>:personal rolebinding that binds
// system:admin-namespace to the user within their personal namespace
// - have a sso:<username>:global clusterrolebinding that binds
// system:viewer to the user at cluster level
func (p *prodvider) kubernetesSetupUser(username string) error {
namespace := "personal-" + username
if err := p.ensureNamespace(namespace); err != nil {
return err
}
if err := p.ensureRoleBindingPersonal(namespace, username); err != nil {
return err
}
if err := p.ensureClusterRoleBindingGlobal(username); err != nil {
return err
}
return nil
}
func (p *prodvider) ensureNamespace(name string) error {
_, err := p.k8s.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
switch {
case err == nil:
// Already exists, nothing to do
return nil
case errors.IsNotFound(err):
break
default:
// Something went wrong.
return err
}
ns := &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
}
_, err = p.k8s.CoreV1().Namespaces().Create(ns)
return err
}
func (p *prodvider) ensureRoleBindingPersonal(namespace, username string) error {
name := "sso:" + username + ":personal"
rb := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Subjects: []rbacv1.Subject{
{
APIGroup: "rbac.authorization.k8s.io",
Kind: "User",
Name: username + "@hackerspace.pl",
},
},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "ClusterRole",
Name: "system:admin-namespace",
},
}
rbs := p.k8s.RbacV1().RoleBindings(namespace)
_, err := rbs.Get(name, metav1.GetOptions{})
switch {
case err == nil:
// Already exists, update.
_, err = rbs.Update(rb)
return err
case errors.IsNotFound(err):
// Create.
_, err = rbs.Create(rb)
return err
default:
// Something went wrong.
return err
}
}
func (p *prodvider) ensureClusterRoleBindingGlobal(username string) error {
name := "sso:" + username + ":global"
rb := &rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
Subjects: []rbacv1.Subject{
{
APIGroup: "rbac.authorization.k8s.io",
Kind: "User",
Name: username + "@hackerspace.pl",
},
},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "ClusterRole",
Name: "system:viewer",
},
}
crbs := p.k8s.RbacV1().ClusterRoleBindings()
_, err := crbs.Get(name, metav1.GetOptions{})
switch {
case err == nil:
// Already exists, update.
_, err = crbs.Update(rb)
return err
case errors.IsNotFound(err):
// Create.
_, err = crbs.Create(rb)
return err
default:
// Something went wrong.
return err
}
}
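The `ensureRoleBindingPersonal` and `ensureClusterRoleBindingGlobal` helpers share one idempotent get-then-update-or-create shape. A dependency-free sketch of that pattern, where `errNotFound` stands in for apimachinery's `errors.IsNotFound` and a map stands in for the Kubernetes API:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the Kubernetes NotFound error class.
var errNotFound = errors.New("not found")

// store stands in for a Kubernetes API resource collection.
type store map[string]string

func (s store) get(name string) (string, error) {
	if v, ok := s[name]; ok {
		return v, nil
	}
	return "", errNotFound
}

// ensure applies the same three-way switch used by the helpers above.
func ensure(s store, name, want string) error {
	_, err := s.get(name)
	switch {
	case err == nil:
		// Already exists, update.
		s[name] = want
		return nil
	case err == errNotFound:
		// Create.
		s[name] = want
		return nil
	default:
		// Something went wrong.
		return err
	}
}

func main() {
	s := store{}
	fmt.Println(ensure(s, "sso:q3k:personal", "v1"), s["sso:q3k:personal"]) // → <nil> v1
	fmt.Println(ensure(s, "sso:q3k:personal", "v2"), s["sso:q3k:personal"]) // → <nil> v2
}
```

Running the same call twice converges on the desired state, which is what lets prodvider re-run user setup on every credential request.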

cluster/prodvider/main.go (new file, 149 lines)
package main
import (
"flag"
"io/ioutil"
"math/rand"
"net"
"os"
"time"
"github.com/cloudflare/cfssl/config"
"github.com/cloudflare/cfssl/signer/local"
"github.com/golang/glog"
"google.golang.org/grpc"
"k8s.io/client-go/kubernetes"
pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
)
var (
flagLDAPServer string
flagLDAPBindDN string
flagLDAPGroupSearchBase string
flagListenAddress string
flagKubernetesHost string
flagCACertificatePath string
flagCAKeyPath string
flagKubeCACertificatePath string
flagProdviderCN string
)
func init() {
flag.Set("logtostderr", "true")
}
type prodvider struct {
sign *local.Signer
k8s *kubernetes.Clientset
srv *grpc.Server
kubeCAPEM []byte
}
func newProdvider() *prodvider {
policy := &config.Signing{
Profiles: map[string]*config.SigningProfile{
"server": &config.SigningProfile{
Usage: []string{"signing", "key encipherment", "server auth"},
ExpiryString: "30d",
},
"client": &config.SigningProfile{
Usage: []string{"signing", "key encipherment", "client auth"},
ExpiryString: "30d",
},
"client-server": &config.SigningProfile{
Usage: []string{"signing", "key encipherment", "server auth", "client auth"},
ExpiryString: "30d",
},
},
Default: config.DefaultConfig(),
}
sign, err := local.NewSignerFromFile(flagCACertificatePath, flagCAKeyPath, policy)
if err != nil {
glog.Exitf("Could not create signer: %v", err)
}
kubeCAPEM, err := ioutil.ReadFile(flagKubeCACertificatePath)
if err != nil {
glog.Exitf("Could not read kube CA certificate: %v", err)
}
return &prodvider{
sign: sign,
kubeCAPEM: kubeCAPEM,
}
}
// timebomb restarts prodvider after a randomized deadline: a fixed 3 days
// plus up to 8 days of jitter, i.e. between 3 and 11 days (7 days +/- 4 on
// average). This ensures we serve with up-to-date certificates and that the
// service can still come up after a restart.
func timebomb(srv *grpc.Server) {
deadline := time.Now()
deadline = deadline.Add(3 * 24 * time.Hour)
rand.Seed(time.Now().UnixNano())
jitter := rand.Intn(8 * 24 * 60 * 60) // up to 8 days, in seconds
deadline = deadline.Add(time.Duration(jitter) * time.Second)
glog.Infof("Timebomb deadline set to %v", deadline)
t := time.NewTicker(time.Minute)
for {
<-t.C
if time.Now().After(deadline) {
break
}
}
// Start killing connections, and wait one minute...
go srv.GracefulStop()
<-t.C
glog.Infof("Timebomb deadline exceeded, restarting.")
os.Exit(0)
}
func main() {
flag.StringVar(&flagLDAPServer, "ldap_server", "ldap.hackerspace.pl:636", "Address of LDAP server")
flag.StringVar(&flagLDAPBindDN, "ldap_bind_dn", "uid=%s,ou=People,dc=hackerspace,dc=pl", "LDAP Bind DN")
flag.StringVar(&flagLDAPGroupSearchBase, "ldap_group_search_base_dn", "ou=Group,dc=hackerspace,dc=pl", "LDAP Group Search Base DN")
flag.StringVar(&flagListenAddress, "listen_address", "127.0.0.1:8080", "gRPC listen address")
flag.StringVar(&flagKubernetesHost, "kubernetes_host", "k0.hswaw.net:4001", "Kubernetes API host")
flag.StringVar(&flagCACertificatePath, "ca_certificate_path", "", "CA certificate path (for signer)")
flag.StringVar(&flagCAKeyPath, "ca_key_path", "", "CA key path (for signer)")
flag.StringVar(&flagKubeCACertificatePath, "kube_ca_certificate_path", "", "CA certificate path (for checking kube apiserver)")
flag.StringVar(&flagProdviderCN, "prodvider_cn", "prodvider.hswaw.net", "CN of certificate that prodvider will use")
flag.Parse()
if flagCACertificatePath == "" || flagCAKeyPath == "" || flagKubeCACertificatePath == "" {
glog.Exitf("CA certificate, CA key and kube CA certificate must be provided")
}
p := newProdvider()
err := p.kubernetesConnect()
if err != nil {
glog.Exitf("Could not connect to kubernetes: %v", err)
}
creds := p.selfCreds()
// Start serving gRPC
grpcLis, err := net.Listen("tcp", flagListenAddress)
if err != nil {
glog.Exitf("Could not listen for gRPC on %q: %v", flagListenAddress, err)
}
glog.Infof("Starting gRPC on %q...", flagListenAddress)
grpcSrv := grpc.NewServer(creds)
pb.RegisterProdviderServer(grpcSrv, p)
go timebomb(grpcSrv)
err = grpcSrv.Serve(grpcLis)
if err != nil {
glog.Exitf("Could not serve gRPC: %v", err)
}
}

(new file, 23 lines)
load("@io_bazel_rules_go//go:def.bzl", "go_library")
load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")
proto_library(
name = "proto_proto",
srcs = ["prodvider.proto"],
visibility = ["//visibility:public"],
)
go_proto_library(
name = "proto_go_proto",
compilers = ["@io_bazel_rules_go//proto:go_grpc"],
importpath = "code.hackerspace.pl/hscloud/cluster/prodvider/proto",
proto = ":proto_proto",
visibility = ["//visibility:public"],
)
go_library(
name = "go_default_library",
embed = [":proto_go_proto"],
importpath = "code.hackerspace.pl/hscloud/cluster/prodvider/proto",
visibility = ["//visibility:public"],
)