let the pain begin

pull/1/head
Kelsey Hightower 2016-07-07 07:15:59 -07:00
commit 9d7ace8b18
12 changed files with 750 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,2 @@
# Kubernetes The Hard Way

authorization-policy.jsonl Normal file

@@ -0,0 +1,8 @@
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "pods", "readonly": true }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "bindings" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "pods", "readonly": true }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "services", "readonly": true }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "endpoints", "readonly": true }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "events" }}

docs/ca.md Normal file

@@ -0,0 +1,114 @@
# Certificate Authority
In this lab you will set up the PKI infrastructure required to secure the Kubernetes API for remote communication. This lab leverages CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), to bootstrap a Certificate Authority.
## Initialize a CA
### Create the CA configuration file
```
echo '{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}' > ca-config.json
```
### Generate the CA certificate and private key
Create the CA CSR:
```
echo '{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}' > ca-csr.json
```
Generate the CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
Results:
```
ca-key.pem
ca.csr
ca.pem
```
Verify the CA certificate:
```
openssl x509 -in ca.pem -text -noout
```
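To confirm the certificate is actually marked as a CA, the basic constraints extension can be inspected directly (assuming a standard `openssl` build):

```
openssl x509 -in ca.pem -noout -text | grep -A 1 'Basic Constraints'
```

The output should include `CA:TRUE`.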
## Generate Server and Client Certs
### Generate the kube-apiserver server cert
The hosts list in the CSR below covers every etcd, controller, and worker IP, the cluster's public address (146.148.34.151), and 127.0.0.1, so the resulting certificate can be shared by the etcd members and the API servers.
```
echo '{
"CN": "kubernetes",
"hosts": [
"10.240.0.10",
"10.240.0.11",
"10.240.0.12",
"10.240.0.20",
"10.240.0.21",
"10.240.0.22",
"10.240.0.30",
"10.240.0.31",
"10.240.0.32",
"146.148.34.151",
"127.0.0.1"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Cluster",
"ST": "Oregon"
}
]
}' > kubernetes-csr.json
```
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
```
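As with the CA, `cfssljson -bare kubernetes` produces the key, CSR, and certificate:

```
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
```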
Verify the kubernetes certificate:
```
openssl x509 -in kubernetes.pem -text -noout
```
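An optional extra check is to verify that the new certificate chains back to the CA:

```
openssl verify -CAfile ca.pem kubernetes.pem
```

Expected output: `kubernetes.pem: OK`.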

docs/docker.md Normal file

docs/downloads.md Normal file

@@ -0,0 +1,31 @@
# Downloads
## Kubernetes 1.3.0
```
wget https://github.com/kubernetes/kubernetes/releases/download/v1.3.0/kubernetes.tar.gz
```
## etcd 3.0.1
```
wget https://github.com/coreos/etcd/releases/download/v3.0.1/etcd-v3.0.1-linux-amd64.tar.gz
```
## Docker 1.11.2
```
wget https://get.docker.com/builds/Linux/x86_64/docker-1.11.2.tgz
```
Extract the Kubernetes release tarball and the server binaries bundled inside it:
```
tar -xvf kubernetes.tar.gz
```
```
tar -xvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
```
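To confirm the extraction succeeded, list the server binaries:

```
ls kubernetes/server/bin/
```

The listing should include `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `kubelet`, and `kubectl`.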

docs/etcd.md Normal file

@@ -0,0 +1,269 @@
# etcd
Set up a three-node etcd cluster, secured with the TLS certificates generated earlier.
### Copy TLS Certs
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem etcd0:~/
```
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem etcd1:~/
```
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem etcd2:~/
```
## etcd0
```
gcloud compute ssh etcd0
```
```
sudo mkdir -p /etc/etcd/
```
```
sudo mv ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
```
wget https://github.com/coreos/etcd/releases/download/v3.0.1/etcd-v3.0.1-linux-amd64.tar.gz
```
```
tar -xvf etcd-v3.0.1-linux-amd64.tar.gz
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcdctl /usr/bin/
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcd /usr/bin/
```
```
sudo mkdir -p /var/lib/etcd
```
```
sudo sh -c 'echo "[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/bin/etcd --name etcd0 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--initial-advertise-peer-urls https://10.240.0.10:2380 \
--listen-peer-urls https://10.240.0.10:2380 \
--listen-client-urls https://10.240.0.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://10.240.0.10:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster etcd0=https://10.240.0.10:2380,etcd1=https://10.240.0.11:2380,etcd2=https://10.240.0.12:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/etcd.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
```
sudo systemctl status etcd
```
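If the unit fails to start, inspect the systemd journal:

```
sudo journalctl -u etcd --no-pager
```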
The health check will fail at this point, because etcd0 cannot reach a quorum until etcd1 and etcd2 join the cluster:
```
etcdctl --ca-file=/etc/etcd/ca.pem cluster-health
```
```
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured
error #0: client: endpoint http://127.0.0.1:2379 exceeded header timeout
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
```
## etcd1
```
gcloud compute ssh etcd1
```
```
sudo mkdir -p /etc/etcd/
```
```
sudo mv ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
```
wget https://github.com/coreos/etcd/releases/download/v3.0.1/etcd-v3.0.1-linux-amd64.tar.gz
```
```
tar -xvf etcd-v3.0.1-linux-amd64.tar.gz
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcdctl /usr/bin/
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcd /usr/bin/
```
```
sudo mkdir -p /var/lib/etcd
```
```
sudo sh -c 'echo "[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/bin/etcd --name etcd1 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--initial-advertise-peer-urls https://10.240.0.11:2380 \
--listen-peer-urls https://10.240.0.11:2380 \
--listen-client-urls https://10.240.0.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://10.240.0.11:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster etcd0=https://10.240.0.10:2380,etcd1=https://10.240.0.11:2380,etcd2=https://10.240.0.12:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/etcd.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
```
sudo systemctl status etcd
```
Check the cluster health again; with two members online the cluster reaches quorum, and etcd2 will show as unreachable until it is configured:
```
etcdctl --ca-file=/etc/etcd/ca.pem cluster-health
```
```
member 3a57933972cb5131 is unreachable: no available published client urls
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
```
## etcd2
```
gcloud compute ssh etcd2
```
```
sudo mkdir -p /etc/etcd/
```
```
sudo mv ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
```
wget https://github.com/coreos/etcd/releases/download/v3.0.1/etcd-v3.0.1-linux-amd64.tar.gz
```
```
tar -xvf etcd-v3.0.1-linux-amd64.tar.gz
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcdctl /usr/bin/
```
```
sudo cp etcd-v3.0.1-linux-amd64/etcd /usr/bin/
```
```
sudo mkdir -p /var/lib/etcd
```
```
sudo sh -c 'echo "[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/bin/etcd --name etcd2 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--initial-advertise-peer-urls https://10.240.0.12:2380 \
--listen-peer-urls https://10.240.0.12:2380 \
--listen-client-urls https://10.240.0.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://10.240.0.12:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster etcd0=https://10.240.0.10:2380,etcd1=https://10.240.0.11:2380,etcd2=https://10.240.0.12:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/etcd.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
```
sudo systemctl status etcd
```
```
etcdctl --ca-file=/etc/etcd/ca.pem cluster-health
```
```
member 3a57933972cb5131 is healthy: got healthy result from https://10.240.0.12:2379
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
```
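Optionally, list the members to confirm all three nodes registered with their peer and client URLs:

```
etcdctl --ca-file=/etc/etcd/ca.pem member list
```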

docs/infrastructure.md Normal file

@@ -0,0 +1,124 @@
# Infrastructure
Create a static public IP address for the Kubernetes API servers:
```
gcloud compute addresses create kubernetes
```
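The allocated address can be viewed at any time:

```
gcloud compute addresses list
```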
In this lab the allocated address, which also appears in the certificate hosts list, is:
```
146.148.34.151
```
## etcd
```
gcloud compute instances create etcd0 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.10
```
```
gcloud compute instances create etcd1 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.11
```
```
gcloud compute instances create etcd2 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.12
```
## Kubernetes Control Plane
```
gcloud compute instances create controller0 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.20
```
```
gcloud compute instances create controller1 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.21
```
```
gcloud compute instances create controller2 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.22
```
## Kubernetes Workers
```
gcloud compute instances create worker0 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.30
```
```
gcloud compute instances create worker1 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.31
```
```
gcloud compute instances create worker2 \
--boot-disk-size 200GB \
--can-ip-forward \
--image-project ubuntu-os-cloud \
--image ubuntu-1604-xenial-v20160627 \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.32
```
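The nine `gcloud` invocations above differ only in the instance name and the final IP octet, so, purely as an equivalent sketch, a shell loop could create them all:

```
for spec in etcd0:10 etcd1:11 etcd2:12 \
            controller0:20 controller1:21 controller2:22 \
            worker0:30 worker1:31 worker2:32; do
  name="${spec%%:*}"    # instance name before the colon
  octet="${spec##*:}"   # last IP octet after the colon
  gcloud compute instances create "${name}" \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-project ubuntu-os-cloud \
    --image ubuntu-1604-xenial-v20160627 \
    --machine-type n1-standard-1 \
    --private-network-ip "10.240.0.${octet}"
done
```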
### Verify
```
gcloud compute instances list
```
```
NAME ZONE MACHINE_TYPE INTERNAL_IP STATUS
controller0 us-central1-f n1-standard-1 10.240.0.20 RUNNING
controller1 us-central1-f n1-standard-1 10.240.0.21 RUNNING
controller2 us-central1-f n1-standard-1 10.240.0.22 RUNNING
etcd0 us-central1-f n1-standard-1 10.240.0.10 RUNNING
etcd1 us-central1-f n1-standard-1 10.240.0.11 RUNNING
etcd2 us-central1-f n1-standard-1 10.240.0.12 RUNNING
worker0 us-central1-f n1-standard-1 10.240.0.30 RUNNING
worker1 us-central1-f n1-standard-1 10.240.0.31 RUNNING
worker2 us-central1-f n1-standard-1 10.240.0.32 RUNNING
```

docs/kubernetes-controller.md Normal file

@@ -0,0 +1,199 @@
# Kubernetes Controller
### Copy TLS Certs
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem controller0:~/
```
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem controller1:~/
```
```
gcloud compute copy-files ca.pem kubernetes-key.pem kubernetes.pem controller2:~/
```
### controller0
```
gcloud compute ssh controller0
```
```
wget https://github.com/kubernetes/kubernetes/releases/download/v1.3.0/kubernetes.tar.gz
```
```
tar -xvf kubernetes.tar.gz
```
```
tar -xvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
```
```
sudo cp kubernetes/server/bin/kube-apiserver /usr/bin/
sudo cp kubernetes/server/bin/kube-controller-manager /usr/bin/
sudo cp kubernetes/server/bin/kube-scheduler /usr/bin/
sudo cp kubernetes/server/bin/kubectl /usr/bin/
```
```
sudo mkdir -p /var/run/kubernetes
```
```
sudo mv ca.pem kubernetes-key.pem kubernetes.pem /var/run/kubernetes/
```
### Kubernetes API Server
```
wget https://storage.googleapis.com/hightowerlabs/authorization-policy.jsonl
```
```
cat authorization-policy.jsonl
```
```
sudo mv authorization-policy.jsonl /var/run/kubernetes/
```
```
wget https://storage.googleapis.com/hightowerlabs/token.csv
```
```
cat token.csv
```
```
sudo mv token.csv /var/run/kubernetes/
```
```
sudo sh -c 'echo "[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--advertise-address=10.240.0.20 \
--allow-privileged=true \
--apiserver-count=3 \
--authorization-mode=ABAC \
--authorization-policy-file=/var/run/kubernetes/authorization-policy.jsonl \
--bind-address=0.0.0.0 \
--enable-swagger-ui=true \
--etcd-cafile=/var/run/kubernetes/ca.pem \
--insecure-bind-address=127.0.0.1 \
--kubelet-certificate-authority=/var/run/kubernetes/ca.pem \
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
--service-account-key-file=/var/run/kubernetes/kubernetes-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/run/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/run/kubernetes/kubernetes-key.pem \
--token-auth-file=/var/run/kubernetes/token.csv \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/kube-apiserver.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
```
```
sudo systemctl status kube-apiserver
```
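As a quick local sanity check, the API server's health endpoint can be queried on the insecure port (8080 by default, bound to 127.0.0.1 as configured above):

```
curl http://127.0.0.1:8080/healthz
```

Expected output: `ok`.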
### Kubernetes Controller Manager
```
sudo sh -c 'echo "[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--leader-elect=true \
--master=http://127.0.0.1:8080 \
--root-ca-file=/var/run/kubernetes/ca.pem \
--service-account-private-key-file=/var/run/kubernetes/kubernetes-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/kube-controller-manager.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
```
```
sudo systemctl status kube-controller-manager
```
### Kubernetes Scheduler
```
sudo sh -c 'echo "[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-scheduler \
--leader-elect=true \
--master=http://127.0.0.1:8080 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/kube-scheduler.service'
```
```
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
```
```
sudo systemctl status kube-scheduler
```
### Verify
```
kubectl get componentstatuses
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
```

Repeat the steps above on `controller1` and `controller2`, changing `--advertise-address` to `10.240.0.21` and `10.240.0.22` respectively; the TLS certs have already been copied to all three controllers.

docs/kubernetes-dns.md Normal file

docs/kubernetes-worker.md Normal file

docs/network.md Normal file

token.csv Normal file

@@ -0,0 +1,3 @@
chAng3m3,admin,admin
chAng3m3,scheduler,scheduler
chAng3m3,kubelet,kubelet