Merge "Get in the Cluster, Benji!"

changes/61/61/1
q3k 2019-09-18 20:40:12 +00:00 committed by Gerrit Code Review
commit db2a2a029f
6 changed files with 395 additions and 68 deletions

README
View File

@ -19,3 +19,5 @@ Then, to get Kubernetes access to k0.hswaw.net (current nearly-production cluste
kubectl version
You will automatically get a `personal-$USERNAME` namespace created in which you have full admin rights.
For more information about the cluster, see [cluster/README].

View File

@ -7,7 +7,10 @@ Accessing via kubectl
---------------------
prodaccess # get a short-lived certificate for your use via SSO
kubectl get nodes
kubectl version
kubectl top nodes
Every user gets a `personal-$username` namespace. Feel free to use it for your own purposes, but watch out for resource usage!
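For example, to try something out in your namespace (a purely illustrative `nginx` deployment; names are placeholders, and remember to clean up after yourself):

kubectl -n personal-$username create deployment test-nginx --image=nginx
kubectl -n personal-$username get pods
kubectl -n personal-$username delete deployment test-nginx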
Persistent Storage
------------------
@ -21,18 +24,9 @@ The following storage classes use this cluster:
- `waw-hdd-yolo-1` - unreplicated (you _will_ lose your data)
- `waw-hdd-redundant-1-object` - erasure coded 2.1 object store
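For example, a minimal PVC sketch using one of the storage classes above (`test-pvc` and the namespace are placeholders; `waw-hdd-yolo-1` is shown purely as an illustration, pick a class that matches your durability needs):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: personal-$username
spec:
  storageClassName: waw-hdd-yolo-1  # unreplicated; illustration only
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF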
A dashboard is available at https://ceph-waw1.hswaw.net/; to get the admin password, run:
Rados Gateway (S3) is available at https://object.ceph-waw2.hswaw.net/. To create a user, ask an admin.
kubectl -n ceph-waw1 get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode ; echo
Rados Gateway (S3) is available at https://object.ceph-waw1.hswaw.net/. To create
an object store user, consult the rook.io manual (https://rook.io/docs/rook/v0.9/ceph-object-store-user-crd.html).
The user authentication secret is generated in the Ceph cluster namespace (`ceph-waw1`)
and may need to be manually copied into the application namespace (see the
comment in `app/registry/prod.jsonnet`).
`tools/rook-s3cmd-config` can be used to generate a test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (i.e. `waw-hdd-redundant-1-object:default-placement`)
PersistentVolumes currently bound to PVCs get automatically backed up (hourly for the next 48 hours, then daily for 7 days, then monthly for a year).
Administration
==============
@ -43,12 +37,33 @@ Provisioning nodes
- bring up a new node with NixOS, running the configuration.nix from bootstrap (to be documented)
- `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`
That's it!
Ceph
====
Ceph - Debugging
-----------------
We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.
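For example, a reasonable first step (assuming the operator Deployment keeps the `rook-ceph-operator` name used in the jsonnet below):

kubectl -n ceph-rook-system logs deploy/rook-ceph-operator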
The following Ceph clusters are available:
A dashboard is available at https://ceph-waw2.hswaw.net/, to get the admin password run:
kubectl -n ceph-waw2 get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode ; echo
Ceph - Backups
--------------
Kubernetes PVs backed by Ceph RBDs get backed up using Benji. An hourly cronjob runs in every Ceph cluster. You can also manually trigger a run by doing:
kubectl -n ceph-waw2 create job --from=cronjob/ceph-waw2-benji ceph-waw2-benji-manual-$(date +%s)
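To follow such a manually triggered run (the job name is whatever the command above created; `<timestamp>` is a placeholder):

kubectl -n ceph-waw2 get jobs
kubectl -n ceph-waw2 logs job/ceph-waw2-benji-manual-<timestamp> -f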
Ceph ObjectStorage pools (RADOSGW) are _not_ backed up yet!
Ceph - Object Storage
---------------------
To create an object store user, consult the rook.io manual (https://rook.io/docs/rook/v0.9/ceph-object-store-user-crd.html).
The user authentication secret is generated in the Ceph cluster namespace (`ceph-waw2`)
and may need to be manually copied into the application namespace (see the
comment in `app/registry/prod.jsonnet`).
`tools/rook-s3cmd-config` can be used to generate a test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (i.e. `waw-hdd-redundant-1-object:default-placement`)
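For example, a sketch of creating and using a bucket with the generated configuration (the config path and bucket name are placeholders):

s3cmd -c /path/to/s3cfg --bucket-location=waw-hdd-redundant-1-object:default-placement mb s3://my-test-bucket
s3cmd -c /path/to/s3cfg put somefile s3://my-test-bucket/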

View File

@ -262,6 +262,22 @@ local Cluster(fqdn) = {
},
],
},
benji:: {
metadataStorageClass: "waw-hdd-paranoid-2",
encryptionPassword: std.split((importstr "../secrets/plain/k0-benji-encryption-password"), '\n')[0],
pools: [
"waw-hdd-redundant-2",
"waw-hdd-redundant-2-metadata",
"waw-hdd-paranoid-2",
"waw-hdd-yolo-2",
],
s3Configuration: {
awsAccessKeyId: "RPYZIROFXNLQVU2WJ4R3",
awsSecretAccessKey: std.split((importstr "../secrets/plain/k0-benji-secret-access-key"), '\n')[0],
bucketName: "benji-k0-backups",
endpointUrl: "https://s3.eu-central-1.wasabisys.com/",
},
}
},
},
// redundant block storage

View File

@ -216,18 +216,8 @@ local policies = import "../../../kube/policies.libsonnet";
crb: kube.ClusterRoleBinding("ceph-rook-global") {
metadata+: env.metadata { namespace:: null },
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "ClusterRole",
name: env.crs.global.metadata.name,
},
subjects: [
{
kind: "ServiceAccount",
name: env.sa.metadata.name,
namespace: env.sa.metadata.namespace,
},
],
roleRef_: env.crs.global,
subjects_: [env.sa],
},
role: kube.Role("ceph-rook-system") {
@ -248,18 +238,8 @@ local policies = import "../../../kube/policies.libsonnet";
rb: kube.RoleBinding("ceph-rook-system") {
metadata+: env.metadata,
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: "Role",
name: env.role.metadata.name,
},
subjects: [
{
kind: "ServiceAccount",
name: env.sa.metadata.name,
namespace: env.sa.metadata.namespace,
},
],
roleRef_: env.role,
subjects_: [env.sa],
},
operator: kube.Deployment("rook-ceph-operator") {
@ -372,23 +352,13 @@ local policies = import "../../../kube/policies.libsonnet";
rbs: [
kube.RoleBinding(cluster.name(el.name)) {
metadata+: cluster.metadata,
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: el.role.kind,
name: el.role.metadata.name,
},
subjects: [
{
kind: el.sa.kind,
name: el.sa.metadata.name,
namespace: el.sa.metadata.namespace,
},
],
roleRef_: el.role,
subjects_: [el.sa],
},
for el in [
// Allow Operator SA to perform Cluster Mgmt in this namespace.
{ name: "cluster-mgmt", role: operator.crs.clusterMgmt, sa: operator.sa },
{ name: "osd", role: cluster.roles.osd, sa: cluster.sa.osd },
{ name: "osd", role: cluster.roles.osd, sa: cluster.sa.osd },
{ name: "mgr", role: cluster.roles.mgr, sa: cluster.sa.mgr },
{ name: "mgr-cluster", role: operator.crs.mgrCluster, sa: cluster.sa.mgr },
]
@ -398,18 +368,8 @@ local policies = import "../../../kube/policies.libsonnet";
metadata+: {
namespace: operator.cfg.namespace,
},
roleRef: {
apiGroup: "rbac.authorization.k8s.io",
kind: cluster.roles.mgrSystem.kind,
name: cluster.roles.mgrSystem.metadata.name,
},
subjects: [
{
kind: cluster.sa.mgr.kind,
name: cluster.sa.mgr.metadata.name,
namespace: cluster.sa.mgr.metadata.namespace,
},
],
roleRef_: cluster.roles.mgrSystem,
subjects_: [cluster.sa.mgr],
},
cluster: kube._Object("ceph.rook.io/v1", "CephCluster", name) {
@ -434,7 +394,7 @@ local policies = import "../../../kube/policies.libsonnet";
metadata+: cluster.metadata,
spec: {
ports: [
{ name: "dashboard", port: 80, targetPort: 8080, protocol: "TCP" },
{ name: "dashboard", port: 80, targetPort: 8080, protocol: "TCP" },
],
selector: {
app: "rook-ceph-mgr",
@ -469,7 +429,259 @@ local policies = import "../../../kube/policies.libsonnet";
}
],
},
}
},
# Benji is a backup tool, external to rook, that we use for backing up
# RBDs.
benji: {
sa: kube.ServiceAccount(cluster.name("benji")) {
metadata+: cluster.metadata,
},
cr: kube.ClusterRole(cluster.name("benji")) {
rules: [
{
apiGroups: [""],
resources: [
"persistentvolumes",
"persistentvolumeclaims"
],
verbs: ["list", "get"],
},
{
apiGroups: [""],
resources: [
"events",
],
verbs: ["create", "update"],
},
],
},
crb: kube.ClusterRoleBinding(cluster.name("benji")) {
roleRef_: cluster.benji.cr,
subjects_: [cluster.benji.sa],
},
config: kube.Secret(cluster.name("benji-config")) {
metadata+: cluster.metadata,
data_: {
"benji.yaml": std.manifestJson({
configurationVersion: '1',
databaseEngine: 'sqlite:////data/benji.sqlite',
defaultStorage: 'wasabi',
storages: [
{
name: "wasabi",
storageId: 1,
module: "s3",
configuration: cluster.spec.benji.s3Configuration {
activeTransforms: ["encrypt"],
},
},
],
transforms: [
{
name: "encrypt",
module: "aes_256_gcm",
configuration: {
# not secret.
kdfSalt: "T2huZzZpcGhhaWM3QWVwaDhybzRhaDNhbzFpc2VpOWFobDNSZWVQaGVvTWV1bmVaYWVsNHRoYWg5QWVENHNoYWg0ZGFoN3Rlb3NvcHVuZzNpZXZpMm9vTG9vbmc1YWlmb0RlZXAwYmFobDlab294b2hjaG9odjRhbzFsYWkwYWk=",
kdfIterations: 2137,
password: cluster.spec.benji.encryptionPassword,
},
},
],
ios: [
{ name: pool, module: "rbd" }
for pool in cluster.spec.benji.pools
],
}),
},
},
# Yes, Benji keeps data (backup metadata) on the ceph cluster that
# it backs up. However:
# - we add a command to benji-k8s to also copy the sqlite
# database over to s3
# - benji can, in a pinch, restore without a database if a version
# is known: https://benji-backup.me/restore.html#restoring-without-a-database
data: kube.PersistentVolumeClaim(cluster.name("benji-data")) {
metadata+: cluster.metadata,
spec+: {
storageClassName: cluster.spec.benji.metadataStorageClass,
accessModes: [ "ReadWriteOnce" ],
resources: {
requests: {
storage: "1Gi",
},
},
},
},
# Extra scripts.
extrabins: kube.ConfigMap(cluster.name("benji-extrabins")) {
metadata+: cluster.metadata,
data: {
"metabackup.sh" : |||
# Make backups of the sqlite3 metadata used by Benji.
# The metabackups live in the same bucket as the data backups and are
# named `metabackup-0..10`, where 0 is the newest. Any time this script
# is called, existing metabackups get shifted by one (9 to 10, 8 to 9,
# etc.), keeping up to 11 historical copies of the metadata.
set -e
which s3cmd || pip install --upgrade s3cmd
AWS_ACCESS_KEY_ID=$(jq -r .storages[0].configuration.awsAccessKeyId < /etc/benji/benji.yaml)
AWS_SECRET_ACCESS_KEY=$(jq -r .storages[0].configuration.awsSecretAccessKey < /etc/benji/benji.yaml)
BUCKET=$(jq -r .storages[0].configuration.bucketName < /etc/benji/benji.yaml)
s3() {
s3cmd --host=s3.wasabisys.com \
"--host-bucket=%(bucket)s.s3.wasabisys.com" \
--region=eu-central-1 \
--access_key=$AWS_ACCESS_KEY_ID \
--secret_key=$AWS_SECRET_ACCESS_KEY \
"$@"
}
# Copy over old backups, if they exist.
for i in `seq 9 -1 0`; do
from="s3://$BUCKET/metabackup-$i.sqlite3"
to="s3://$BUCKET/metabackup-$((i+1)).sqlite3"
if [[ $(s3 ls $from | wc -l) -eq 0 ]]; then
echo "$from does not exist, skipping shift."
continue
fi
echo "Moving $from to $to..."
s3 mv $from $to
done
# Make new metabackup.
s3 put /data/benji.sqlite s3://$BUCKET/metabackup-0.sqlite3
|||,
"get-rook-creds.sh": |||
# Based on the Rook Toolbox /usr/local/bin/toolbox.sh script.
# Copyright 2016 The Rook Authors. All rights reserved.
CEPH_CONFIG="/etc/ceph/ceph.conf"
MON_CONFIG="/etc/rook/mon-endpoints"
KEYRING_FILE="/etc/ceph/keyring"
# create a ceph config file in its default location so ceph/rados tools can be used
# without specifying any arguments
write_endpoints() {
endpoints=$(cat ${MON_CONFIG})
# filter out the mon names
mon_endpoints=$(echo ${endpoints} | sed 's/[a-z]\+=//g')
# filter out the legacy mon names
mon_endpoints=$(echo ${mon_endpoints} | sed 's/rook-ceph-mon[0-9]\+=//g')
DATE=$(date)
echo "$DATE writing mon endpoints to ${CEPH_CONFIG}: ${endpoints}"
cat <<EOF > ${CEPH_CONFIG}
[global]
mon_host = ${mon_endpoints}
[client.admin]
keyring = ${KEYRING_FILE}
EOF
}
# watch the endpoints config file and update if the mon endpoints ever change
watch_endpoints() {
# get the timestamp for the target of the soft link
real_path=$(realpath ${MON_CONFIG})
initial_time=$(stat -c %Z ${real_path})
while true; do
real_path=$(realpath ${MON_CONFIG})
latest_time=$(stat -c %Z ${real_path})
if [[ "${latest_time}" != "${initial_time}" ]]; then
write_endpoints
initial_time=${latest_time}
fi
sleep 10
done
}
# create the keyring file
cat <<EOF > ${KEYRING_FILE}
[client.admin]
key = ${ROOK_ADMIN_SECRET}
EOF
# write the initial config file
write_endpoints
# continuously update the mon endpoints if they fail over
watch_endpoints &
|||
},
},
cronjob: kube.CronJob(cluster.name("benji")) {
metadata+: cluster.metadata,
spec+: { # CronJob Spec
schedule: "42 * * * *", # Hourly, at 42 minutes past the hour.
jobTemplate+: {
spec+: { # Job Spec
selector:: null,
template+: {
spec+: { # PodSpec
serviceAccountName: cluster.benji.sa.metadata.name,
containers_: {
benji: kube.Container(cluster.name("benji")) {
# TODO(q3k): switch back to upstream after pull/52 goes in.
# Currently this is being built from github.com/q3k/benji.
# https://github.com/elemental-lf/benji/pull/52
image: "registry.k0.hswaw.net/q3k/benji-k8s:20190831-1351",
volumeMounts_: {
extrabins: { mountPath: "/usr/local/extrabins" },
monendpoints: { mountPath: "/etc/rook" },
benjiconfig: { mountPath: "/etc/benji" },
data: { mountPath: "/data" },
},
env_: {
ROOK_ADMIN_SECRET: { secretKeyRef: { name: "rook-ceph-mon", key: "admin-secret" }},
},
command: [
"bash", "-c", |||
bash /usr/local/extrabins/get-rook-creds.sh
benji-backup-pvc
benji-command enforce latest3,hours48,days7,months12
benji-command cleanup
bash /usr/local/extrabins/metabackup.sh
|||,
],
},
},
volumes_: {
data: kube.PersistentVolumeClaimVolume(cluster.benji.data),
benjiconfig: kube.SecretVolume(cluster.benji.config),
extrabins: kube.ConfigMapVolume(cluster.benji.extrabins),
monendpoints: {
configMap: {
name: "rook-ceph-mon-endpoints",
items: [
{ key: "data", path: "mon-endpoints" },
],
},
},
},
},
},
},
},
},
},
},
},
ReplicatedBlockPool(cluster, name):: {

View File

@ -0,0 +1,42 @@
-----BEGIN PGP MESSAGE-----
hQEMAzhuiT4RC8VbAQf/atxrl5tx7rXn00mSgm4l9icjC5uRgLzBMhWOuCCBX2N2
4w2m9rmlc2Qj3agweiWMENl0AijTjuVxpcRNprTYAk8GX6bQ1pS4j9LkMUxPse83
wEh62BqrUMSqtaOfUcffsPzS9Ffiza35/xOS1LCDf7irj1wmaWwBGdseAYQaUgfV
+k5LIPSNBgNp/U8lyi24WWj7wUChTTBaYuLox4NDSsBY1Vw16pu6rdUHkIxsT9UX
yb90R3rM68y7Do2WOP+/9u+1rjK6sk4ptZM9Z64BkJBLo/boTjqJbtzdfaNk7ot8
uIwyXJTOrdnbzYFMD6iJ3EiNTRBMRnbqFuyQj56rk4UBCwNcG2tp6fXqvgEH91zw
h/oAY7i4lLXeK0Avf2bMleJxCXrfaRP5neIToHimpgHA7/xG4H8lOgu7xM9+EfKb
iTp/gkPamD1thD2IuULz/zEHhpeixcKdhJDcHdjavvnMnuOZS9bAYlMVJhx5hQsW
isI6dwrbWCS2AGbcG26iV8RaMkO8oRkXbrBpDM98hhkUjR5d0g+W5MAHGguAwrAj
VYpeNXLWsXJDAyN1vRs4ElheVOEqMczfCu2irfMz1gmQjS5DT/up/dJV0/fkCyUT
M/ozpIvmEfoygGyHCibKNXMM5YRGbBWGpoZg22TKvmW8xLkuTegP4S73nYBa9Lhd
YQSvmOLtaza6dO2UXYUCDAOh2hPxWpGXhQEP+gPTx24wy9RNizWSFJCh+/VPcnqP
yLw16uSGLp/QGsPPMxePzRIC1PUMCsJ2QGGQERd6RK6sM3xJKcsNYBfndm30kmUP
zkKE3Ng7/lQ5CPq26mZTKKdiA58hJkTdcG8TRFIcFAJ9rc3DDb9NMrs9NezMCgwo
Zp41IImfbkTGMmLuXrmPJTKLXZCnT83ZVCg56rMyJLpi16RyIh/j1zb/RJIFxOIX
sYMX5eFId38aurEQUoJ9DrAdqs0QL98m6peVLzbVkknZ4HWfgkN32LM+SJkqBW5L
/Tl7fVVRX34oCNszRmr0vCw8VzmdkdB9E3jf7Ku49jgQcq1Qd5i4bCxn7AV/6CwA
FR5vkd/LwZVEHlbMyrettseEVWWWNpk/ZmyzkFSqOy8z5YXo1V5xHv5ZP4S7XUYq
2daFzX1pQEShg4Ik7F6VQYTk/TZ/qz4FgDxYLZpXAIzqht+jN3ZY1XmESTj9pBll
pysy3F94iO4GOX77PAK1OcHmGiOunSDAK6SyvEIjHlC51pKtjLLuhp4dZPcuRCdA
0XzpIMyJjR129TOMphN7aYZHdGTp4vAxgeCk9nMVHYGwxpCqI51/Gm9QtNc4+pfM
dOe2Bi/cg1sv4XbtGScU8UCJwJXrMSzoLLnZSoHxdPh0qBNsOswLB4VIKmtMyowN
ZVxDdVWxzqEimcZNhQIMA+IDyU5c67PvAQ/8CA1hprksva3WrTQ6Po/maFssW3cy
tGQIQh/hv0qHi1QEPIvixaVvC0hXXzdS0Mu8/H2FoxovUINhPhhJNEgjSgoNzQzw
ODxCHwSHa8fR09hB1ivwbxkr9MhF+Dvg+xFheZM7hXzxa5J/GzIJeYT74TIi31a4
D5Z6lip1Fw1aF6SIIfNw/UeDb82C9DlbhsWNqoDtgdNn3EYBXIOTqILNvJUjYpGt
iKJJUeMEIRbDPQ0j3BHlMGCs5vTpTlm3CasyJLfR2xphMNySFs0GHASdkbeKxEBS
W04JzXWsWfFVLf90JbJhxJHA6y39SUro0HywSa7Re2bKJ2Jy4b/Q24N5tWOMeuhv
GPuDzzgXaksvpdkJocKHgJwjcPiDoqk+cqjos+eSC813A+cSf8S66qhxKzZvGhVC
OmqYAGtIqO06j3F17do0pOFeevZXFmpE5UJZex9hqGhn3HmksU43OSIvi9ceODMO
6xZjIfXyzjItNI35hcHvBy4qtNRXaRQFSMw+OrYBiJnlz8vOXt+xm07CqrWZUhX5
FvpzHGjEU2f2uI35oANCJb303mCAglTXvUp6ALlt96eJ+j9Fa5plJEG4PwiALIz4
SaH6RQIkNZWfMbVVquSccR9VLYSQNSwD4v/WozaMDqWywWSYFSgd5dv6XLh76spX
J9xgdggYmbejgZ/SvwHlUSfBlmD+oOrhYKmyvmJiczDrAcpJdspQrZYn8DMpVbsV
yDabepNGqVmSEQPIfw0sLJ8uHYibYc+duFVWh6xZGStPleBQvJKZBovqhpcj/luP
K57AIeYP0YKqKLEcOBof2ZyCNh1sjJKLPSZHDiVKZutLyFUs8giwjPYLvl5K1BX7
CqEMLJi36VuMY2wvWs1IjCTEMHLvscmQQGvpUMNOOFYgy5/o5pwH//Vkv1xzMA5m
F4asCNxWg4rFkBbe
=DFc7
-----END PGP MESSAGE-----

View File

@ -0,0 +1,40 @@
-----BEGIN PGP MESSAGE-----
hQEMAzhuiT4RC8VbAQf/d7bQO8quPzhKFBYYGfW4K84eFiUvb0azJQpGoUS+w5qB
B8Jyr0zwSSmW7XGgSeI53x+K5ZlztoTDiqEuFEV4oOMhC4TQ5EMbwsGWS2ugJkAQ
JzLbu+yU9GO1aZhyhvm7vAn0dVkD5+9jHwUgLpCerxHIWKW78w5wY50zo8f7jbFb
Z2yl2ktI9/37FdSe9iWBl3oapcuyD7HKYP7XXmc+q61/nW5L03D21WHJfDHjj04m
c6KtdOzdrejpCSkR3d8S8R7QONzNPu0D1QX305POQkZslHNXfjG/WNq9ry7Eec7u
AynOMneX31qZjrs6vV6XyZxWDIqzRKdGbyLVwp7v8oUBDANcG2tp6fXqvgEIAMQh
HZeCRgs899oJu+ZwxXyic38UZKE1sUbaqip4P/qVlFkP97kgbsdwdUx3iCIvqhYv
uM7nRSAugUF9t9N/QDxTTghiVB3PDEJTfqWNfZZGEIfvmv95ZxPP5N9Luv2hWNVe
fMPnzU80atHFJd14deAuDCxETo4dCuARMlkUCi2Oqnw6L15s93DgnLqgID5s/E+X
f9gW+ZOtC8Bxp1KzpgXTmYOF4jpoEQx//5wdZW2kR2QZ+o0PzJpBUrNOI+4rS56p
jODkM4KYyVjKUIi3P+yB7YfViN9N7RMxuLQWysw94ei9xwMUECjSQxHWyfv0GKCD
1duJQcLhCJ5BLwyL95+FAgwDodoT8VqRl4UBD/4hUL28fWb6E9FwEyetesf6VPeY
xD4kvgu+cbsPrvcwujsds7Xb28rHN42d/gY/3rcUrFKd7439M6YAEVbfmWE04ggY
1FmKDJdWNHw/V+o9CpPy5wze9y06jCbdE1q4z0M7I387UpP2P3Mg+jlEseVy4+Qs
ihRGqfCjOSbwSj+NpXpuHQ8chjoeAZgmGa+IdwIiHJlDcUt5RVS0pjqZVUpoXNe9
mWQDLSaHsUNBtSLEYokS0ABog1vUSW0Lofj2Z6OcI5bRUDXNxdgaANhcRGEEBGvY
27ZVBHs4btpxE2qztWWlKwvH1LIcGKNTk/Ttt9H3N6kbuU6lONf0NksgNsBsbrgD
9SNGgpzhSN5sbWdjKSf7DB683xSf41wB/nPgPTKja6nfc9tL2MUVcJsxFMVXLsbd
GDKr+PS+VvcWgZT03LRPm5Ghi5wi3WEKWJ+Z3/DKOPG2VB3XOJGMuYbgMDNx0Jop
9dXRjaeBSs2x3DsKeTd89BCC5EvMHHTe37Uv0sMtwIoqjXJFxFvljb/sWYN2pWla
88BxovZ71v5dWASKcl/BJHtZGuzJ8L8euEI3b12chaeoEQAVAg5WHJrBt8doAgHQ
TIDd7sEwM3rHpHXc5XT0ENbljGRQr9m/tRO2BzkmdFQUG4Gv+eElnEdUSFmfaqTA
s1qmLMQSvQF92+2P2YUCDAPiA8lOXOuz7wEP/1ROcjoWbc1h4fGsYo4pcsgoLm8i
Hv44mxdxgM5mwJtscV8YjEjC1ILftlecviw/ZDrgfJVDULS/Yl6QdFmhqvxRj9PL
AOc+k7xXnEWx/GrS6h3B1NxqX9GsQdW7vsHJ6oeD446CK5a6VuVR5zuaNG4OHzUB
9p1mXoapb2BwjHh9ScWNnNlCrAgb6NQRz+GxCJ5PYA7kRZkBxWqAIRPISzsowC3u
daYRWvQUu5GSxOcBRB2mCsxPGj/0KS8FGzyR2pFgKnhZQmSa337l6HWGocLhDeCX
Zp4cj4PFxl9lT+7R6L7+yJr5sa/vifQlYyC0melqbx8TPaXq58STmlnDRK4+Tv+w
hL74DAmYxHAzlxP8rhPsUEEmQb589Jh2FSL3O6muuhPayoeMg4traCZgbsyw0W6C
UfrQpymw1xFIBAPDKbfHEXYwkDnv6Ku4WNz2P2YAVyBQaQsq7m0MqqijoOcmB+2c
zmgQ62VascydV9XMESJp7eNd4XeF+MFXT9nTQstysKPrvYNOoj+Ujh17fI29VykU
5pmOsvOohvlqWr8vvPkxqK8HKp/2Zu8SNu0haM2o3+TbBiRzWJ6y/oObY9kChqRp
CTjrGZW3GLZrXZ4S1MCF9SXAfNhqodyRA2kNk27e8+1M9/CaQ60sqjvLw4283yud
eHswvtt+frHhOAm90n4BO24gPMVYCwhscUC7IzjtOeKyh0BPIH7qbXrs9+L6mT5t
Xhr/LXTv2dT+0Fjju0YS8TOYBoPQHOJlJozv97jgvIyr4L7MF4zGZSl4TdpJsE4J
RGidU4efwU/8h6VF9axmGpeSV+9VcpKybjoT/RqqkJFkGqovUp+1vW9eths=
=dBs2
-----END PGP MESSAGE-----