dry up the docs

pull/13/head
Kelsey Hightower 2016-07-08 10:26:32 -07:00
parent f93b2d609e
commit b2b6360e04
8 changed files with 251 additions and 665 deletions

View File

@@ -91,10 +91,13 @@ openssl x509 -in ca.pem -text -noout

In this section we will generate a TLS certificate that will be valid for all Kubernetes components. This is being done for ease of use. In production you should strongly consider generating individual TLS certificates for each component.

```
export KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes --format 'value(address)')
```

Create the `kubernetes-csr.json` file:

```
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.240.0.10",
@@ -106,7 +109,7 @@ echo '{
    "10.240.0.30",
    "10.240.0.31",
    "10.240.0.32",
    "${KUBERNETES_PUBLIC_IP_ADDRESS}",
    "127.0.0.1"
  ],
  "key": {
@@ -122,9 +125,16 @@ echo '{
      "ST": "Oregon"
    }
  ]
}
EOF
```

```
gcloud compute addresses list kubernetes
```

Generate the Kubernetes certificate and private key:

```
cfssl gencert \
  -ca=ca.pem \
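Note the heredoc above uses an unquoted `EOF`, so `${KUBERNETES_PUBLIC_IP_ADDRESS}` is expanded by the shell when the file is written. After the (truncated) `cfssl gencert` command completes, it should produce `kubernetes.pem` and `kubernetes-key.pem`; a quick, optional check that the reserved public IP actually landed in the certificate's subject alternative names:

```
openssl x509 -in kubernetes.pem -text -noout | grep -A 1 "Subject Alternative Name"
```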

View File

@@ -17,7 +17,7 @@ gcloud compute forwarding-rules delete kubernetes-rule
```

```
gcloud compute target-pools delete kubernetes-pool
```

@@ -25,21 +25,22 @@ gcloud compute http-health-checks delete kube-apiserver-check

```
gcloud compute addresses delete kubernetes
```

```
gcloud compute firewall-rules delete kubernetes-allow-api-server kubernetes-allow-healthz
```

```
gcloud compute routes delete kubernetes-route-10-200-0-0-24
```

```
gcloud compute routes delete kubernetes-route-10-200-1-0-24
```

```
gcloud compute routes delete kubernetes-route-10-200-2-0-24
```

View File

@@ -2,11 +2,15 @@

In this lab you will bootstrap a 3 node etcd cluster. The following virtual machines will be used:

```
gcloud compute instances list
```

```
NAME   ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
etcd0  us-central1-f  n1-standard-1               10.240.0.10  XXX.XXX.XXX.XXX  RUNNING
etcd1  us-central1-f  n1-standard-1               10.240.0.11  XXX.XXX.XXX.XXX  RUNNING
etcd2  us-central1-f  n1-standard-1               10.240.0.12  XXX.XXX.XXX.XXX  RUNNING
```

## Why

@@ -21,12 +25,9 @@ following reasons:

## Provision the etcd Cluster

Run the following commands on `etcd0`, `etcd1`, `etcd2`:

> SSH into each machine using the `gcloud compute ssh` command

Move the TLS certificates in place:
@@ -62,23 +63,25 @@ sudo mkdir -p /var/lib/etcd

Create the etcd systemd unit file:

```
cat > etcd.service <<"EOF"
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/bin/etcd --name ETCD_NAME \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --initial-advertise-peer-urls https://INTERNAL_IP:2380 \
  --listen-peer-urls https://INTERNAL_IP:2380 \
  --listen-client-urls https://INTERNAL_IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://INTERNAL_IP:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd0=https://10.240.0.10:2380,etcd1=https://10.240.0.11:2380,etcd2=https://10.240.0.12:2380 \
  --initial-cluster-state new \
@@ -87,7 +90,29 @@ Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
```
export INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
```
export ETCD_NAME=$(hostname -s)
```
```
sed -i s/INTERNAL_IP/$INTERNAL_IP/g etcd.service
```
```
sed -i s/ETCD_NAME/$ETCD_NAME/g etcd.service
```
```
sudo mv etcd.service /etc/systemd/system/
```
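Before starting the service it may be worth confirming the `sed` substitutions took; this check should print nothing once both placeholders have been replaced:

```
grep -E 'ETCD_NAME|INTERNAL_IP' /etc/systemd/system/etcd.service
```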
Start etcd:
@@ -101,193 +126,15 @@ sudo systemctl start etcd

```
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```

### Verification

```
sudo systemctl status etcd --no-pager
```

## Verification

Once all 3 etcd nodes have been bootstrapped verify the etcd cluster is healthy:

```
gcloud compute ssh etcd0
```

```
etcdctl --ca-file=/etc/etcd/ca.pem cluster-health
```
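Assuming all three nodes came up cleanly, the output should resemble the following (member IDs will differ):

```
member 3a57933972cb5131 is healthy: got healthy result from https://10.240.0.12:2379
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
```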

View File

@@ -12,21 +12,103 @@ gcloud compute instances list

```
NAME         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
controller0  us-central1-f  n1-standard-1               10.240.0.20  XXX.XXX.XXX.XXX  RUNNING
controller1  us-central1-f  n1-standard-1               10.240.0.21  XXX.XXX.XXX.XXX  RUNNING
controller2  us-central1-f  n1-standard-1               10.240.0.22  XXX.XXX.XXX.XXX  RUNNING
etcd0        us-central1-f  n1-standard-1               10.240.0.10  XXX.XXX.XXX.XXX  RUNNING
etcd1        us-central1-f  n1-standard-1               10.240.0.11  XXX.XXX.XXX.XXX  RUNNING
etcd2        us-central1-f  n1-standard-1               10.240.0.12  XXX.XXX.XXX.XXX  RUNNING
worker0      us-central1-f  n1-standard-1               10.240.0.30  XXX.XXX.XXX.XXX  RUNNING
worker1      us-central1-f  n1-standard-1               10.240.0.31  XXX.XXX.XXX.XXX  RUNNING
worker2      us-central1-f  n1-standard-1               10.240.0.32  XXX.XXX.XXX.XXX  RUNNING
```

> All machines will be provisioned with fixed private IP addresses to simplify the bootstrap process.

To make our Kubernetes control plane remotely accessible, a public IP address will be provisioned and assigned to a Load Balancer that will sit in front of the 3 Kubernetes controllers.
## Create a Custom Network
```
gcloud compute networks create kubernetes --mode custom
```
```
NAME        MODE    IPV4_RANGE  GATEWAY_IPV4
kubernetes  custom
```
```
gcloud compute networks subnets create kubernetes \
--network kubernetes \
--region us-central1 \
--range 10.240.0.0/24
```
```
NAME        REGION       NETWORK     RANGE
kubernetes  us-central1  kubernetes  10.240.0.0/24
```
### Firewall Rules
```
gcloud compute firewall-rules create kubernetes-allow-icmp \
--network kubernetes \
--source-ranges 0.0.0.0/0 \
--allow icmp
```
```
gcloud compute firewall-rules create kubernetes-allow-internal \
--network kubernetes \
--source-ranges 10.240.0.0/24 \
--allow tcp:0-65535,udp:0-65535,icmp
```
```
gcloud compute firewall-rules create kubernetes-allow-rdp \
--network kubernetes \
--source-ranges 0.0.0.0/0 \
--allow tcp:3389
```
```
gcloud compute firewall-rules create kubernetes-allow-ssh \
--network kubernetes \
--source-ranges 0.0.0.0/0 \
--allow tcp:22
```
```
gcloud compute firewall-rules create kubernetes-allow-healthz \
--network kubernetes \
--allow tcp:8080 \
--source-ranges 130.211.0.0/22
```
```
gcloud compute firewall-rules create kubernetes-allow-api-server \
--network kubernetes \
--source-ranges 0.0.0.0/0 \
--allow tcp:6443
```
```
gcloud compute firewall-rules list --filter "network=kubernetes"
```
```
NAME                         NETWORK     SRC_RANGES      RULES                         SRC_TAGS  TARGET_TAGS
kubernetes-allow-api-server  kubernetes  0.0.0.0/0       tcp:6443
kubernetes-allow-healthz     kubernetes  130.211.0.0/22  tcp:8080
kubernetes-allow-icmp        kubernetes  0.0.0.0/0       icmp
kubernetes-allow-internal    kubernetes  10.240.0.0/24   tcp:0-65535,udp:0-65535,icmp
kubernetes-allow-rdp         kubernetes  0.0.0.0/0       tcp:3389
kubernetes-allow-ssh         kubernetes  0.0.0.0/0       tcp:22
```
## Create the Kubernetes Public IP Address

Create a public IP address that will be used by remote clients to connect to the Kubernetes control plane:

@@ -36,11 +118,11 @@ gcloud compute addresses create kubernetes
```

```
gcloud compute addresses list kubernetes
```

```
NAME        REGION       ADDRESS          STATUS
kubernetes  us-central1  XXX.XXX.XXX.XXX  RESERVED
```

## Provision Virtual Machines
@@ -57,6 +139,7 @@ gcloud compute instances create etcd0 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.10
```

@@ -67,6 +150,7 @@ gcloud compute instances create etcd1 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.11
```

@@ -77,6 +161,7 @@ gcloud compute instances create etcd2 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.12
```

@@ -89,6 +174,7 @@ gcloud compute instances create controller0 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.20
```

@@ -99,6 +185,7 @@ gcloud compute instances create controller1 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.21
```

@@ -109,6 +196,7 @@ gcloud compute instances create controller2 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.22
```

@@ -121,6 +209,7 @@ gcloud compute instances create worker0 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.30
```

@@ -131,6 +220,7 @@ gcloud compute instances create worker1 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.31
```

@@ -141,5 +231,6 @@ gcloud compute instances create worker2 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1604-xenial-v20160627 \
  --machine-type n1-standard-1 \
  --subnet kubernetes \
  --private-network-ip 10.240.0.32
```
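The nine commands above differ only in the instance name and private IP; a loop sketch, assuming bash and carrying the same visible flags (any flags elided from the diff above would need to be added too):

```
for vm in etcd0:10.240.0.10 etcd1:10.240.0.11 etcd2:10.240.0.12 \
          controller0:10.240.0.20 controller1:10.240.0.21 controller2:10.240.0.22 \
          worker0:10.240.0.30 worker1:10.240.0.31 worker2:10.240.0.32; do
  name="${vm%%:*}"    # instance name before the colon
  ip="${vm##*:}"      # private IP after the colon
  gcloud compute instances create "${name}" \
    --image-project ubuntu-os-cloud \
    --image ubuntu-1604-xenial-v20160627 \
    --machine-type n1-standard-1 \
    --subnet kubernetes \
    --private-network-ip "${ip}"
done
```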

View File

@@ -22,14 +22,8 @@ sudo cp kubectl /usr/local/bin

In this section you will configure the kubectl client to point to the [Kubernetes API Server Frontend Load Balancer](docs/kubernetes-controller.md#setup-kubernetes-api-server-frontend-load-balancer).

Recall the Public IP address we allocated for the frontend load balancer:

```
export KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes --format 'value(address)')
```

Recall the token we setup for the admin user:

@@ -49,7 +43,7 @@ The following commands will build up the default kubeconfig file used by kubectl

```
kubectl config set-cluster kubernetes-the-hard-way \
  --embed-certs=true \
  --certificate-authority=ca.pem \
  --server=https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443
```
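The diff truncates the remaining kubeconfig steps; they continue along these lines — a sketch, with `<ADMIN_TOKEN>` standing in for the token from `token.csv` and the context name chosen for illustration:

```
kubectl config set-credentials admin --token <ADMIN_TOKEN>
```

```
kubectl config set-context default-context \
  --cluster=kubernetes-the-hard-way \
  --user=admin
```

```
kubectl config use-context default-context
```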

View File

@@ -28,11 +28,10 @@ Each component is being run on the same machines for the following reasons:

## Provision the Kubernetes Controller Cluster

Run the following commands on `controller0`, `controller1`, `controller2`:

> SSH into each machine using the `gcloud compute ssh` command
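For example, to get a shell on the first controller:

```
gcloud compute ssh controller0
```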
Move the TLS certificates in place:
@@ -87,15 +86,25 @@ cat token.csv

```
sudo mv token.csv /var/run/kubernetes/
```

Capture the internal IP address:

```
export INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
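As a quick sanity check, the captured value should be the controller's fixed private IP (10.240.0.20, 10.240.0.21, or 10.240.0.22):

```
echo $INTERNAL_IP
```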
Create the systemd unit file:

```
cat > kube-apiserver.service <<"EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  --advertise-address=INTERNAL_IP \
  --allow-privileged=true \
  --apiserver-count=3 \
  --authorization-mode=ABAC \
@@ -117,9 +126,19 @@ Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
```
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-apiserver.service
```
```
sudo mv kube-apiserver.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
@@ -127,13 +146,14 @@ sudo systemctl start kube-apiserver
```

```
sudo systemctl status kube-apiserver --no-pager
```

#### Kubernetes Controller Manager

```
cat > kube-controller-manager.service <<"EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

@@ -143,7 +163,7 @@ ExecStart=/usr/bin/kube-controller-manager \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --leader-elect=true \
  --master=http://INTERNAL_IP:8080 \
  --root-ca-file=/var/run/kubernetes/ca.pem \
  --service-account-private-key-file=/var/run/kubernetes/kubernetes-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
@@ -152,9 +172,19 @@ Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
```
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-controller-manager.service
```
```
sudo mv kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
@@ -162,26 +192,36 @@ sudo systemctl start kube-controller-manager
```

```
sudo systemctl status kube-controller-manager --no-pager
```

#### Kubernetes Scheduler

```
cat > kube-scheduler.service <<"EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-scheduler \
  --leader-elect=true \
  --master=http://INTERNAL_IP:8080 \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
```
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-scheduler.service
```
```
sudo mv kube-scheduler.service /etc/systemd/system/
```

```
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
@@ -191,368 +231,7 @@ sudo systemctl start kube-scheduler
```

```
sudo systemctl status kube-scheduler --no-pager
```
#### Verification
```
kubectl get componentstatuses
```
```
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```
@@ -589,28 +268,14 @@ gcloud compute target-pools create kubernetes-pool \
```

```
gcloud compute target-pools add-instances kubernetes-pool \
  --instances controller0,controller1,controller2
```

```
export KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes --format 'value(address)')
```

```
gcloud compute forwarding-rules create kubernetes-rule \
  --region us-central1 \
  --ports 6443 \
  --address $KUBERNETES_PUBLIC_IP_ADDRESS \
  --target-pool kubernetes-pool
```
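To confirm the forwarding rule was created against the reserved address, a check along these lines:

```
gcloud compute forwarding-rules describe kubernetes-rule --region us-central1
```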

View File

@@ -21,13 +21,9 @@ Some people would like to run workers and cluster services anywhere in the cluster

## Provision the Kubernetes Worker Nodes

Run the following commands on `worker0`, `worker1`, `worker2`:

> SSH into each machine using the `gcloud compute ssh` command

#### Move the TLS certificates in place
@@ -52,11 +48,7 @@ tar -xvf docker-1.11.2.tgz
```

```
sudo cp docker/docker* /usr/bin/
```
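The glob replaces the five individual copies (docker, docker-containerd, docker-containerd-ctr, docker-containerd-shim, docker-runc); to confirm they all landed:

```
ls /usr/bin/docker*
```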
Create the Docker systemd unit file:
@@ -91,23 +83,6 @@ sudo systemctl start docker
sudo docker version
```
#### kubelet

@@ -124,7 +99,7 @@ wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-c864f
```

```
sudo tar -xvf cni-c864f0e1ea73719b8f4582402b0847064f9883b0.tar.gz -C /opt/cni
```
@@ -209,7 +184,7 @@ sudo systemctl start kubelet
```

```
sudo systemctl status kubelet --no-pager
```
@@ -242,5 +217,5 @@ sudo systemctl start kube-proxy
```

```
sudo systemctl status kube-proxy --no-pager
```
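Once kubelet and kube-proxy are running on all three machines the workers should register with the API server; from the client configured earlier, a check along these lines should eventually list worker0 through worker2:

```
kubectl get nodes
```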

View File

@@ -5,13 +5,13 @@ Now that each worker node is online we need to add routes to make sure that Pods

After completing this lab you will have the following router entries:

```
$ gcloud compute routes list --filter "network=kubernetes"
```

```
NAME                            NETWORK     DEST_RANGE     NEXT_HOP     PRIORITY
kubernetes-route-10-200-0-0-24  kubernetes  10.200.0.0/24  10.240.0.30  1000
kubernetes-route-10-200-1-0-24  kubernetes  10.200.1.0/24  10.240.0.31  1000
kubernetes-route-10-200-2-0-24  kubernetes  10.200.2.0/24  10.240.0.32  1000
```

## Get the Routing Table

@@ -36,19 +36,22 @@ Output:

Use `gcloud` to add the routes to GCP:

```
gcloud compute routes create kubernetes-route-10-200-0-0-24 \
  --network kubernetes \
  --next-hop-address 10.240.0.30 \
  --destination-range 10.200.0.0/24
```

```
gcloud compute routes create kubernetes-route-10-200-1-0-24 \
  --network kubernetes \
  --next-hop-address 10.240.0.31 \
  --destination-range 10.200.1.0/24
```

```
gcloud compute routes create kubernetes-route-10-200-2-0-24 \
  --network kubernetes \
  --next-hop-address 10.240.0.32 \
  --destination-range 10.200.2.0/24
```
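The three route commands follow one pattern — subnet `10.200.N.0/24` routes to worker node `10.240.0.3N` — so they could equally be driven from a loop; a sketch, assuming bash:

```
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes \
    --next-hop-address 10.240.0.3${i} \
    --destination-range 10.200.${i}.0/24
done
```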