update docs

pull/118/merge 1.7.4
Kelsey Hightower 2017-08-28 14:19:25 -07:00
parent f9486b081f
commit 4ca7c45046
29 changed files with 2141 additions and 1945 deletions


@@ -1,47 +1,39 @@
# Kubernetes The Hard Way
This tutorial will walk you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Container Engine](https://cloud.google.com/container-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Container Engine](https://cloud.google.com/container-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
This tutorial is optimized for learning, which means taking the long route to help people understand each task required to bootstrap a Kubernetes cluster. This tutorial requires access to [Google Compute Engine](https://cloud.google.com/compute).
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that prevent you from learning!
> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
## Target Audience
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together. After completing this tutorial I encourage you to automate away the manual steps presented in this guide.
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
## Cluster Details
* Kubernetes 1.7.0
* Docker 1.12.6
* etcd 3.1.4
* [CNI Based Networking](https://github.com/containernetworking/cni)
* Secure communication between all components (etcd, control plane, workers)
* Default Service Account and Secrets
* [RBAC authorization enabled](https://kubernetes.io/docs/admin/authorization)
* [TLS client certificate bootstrapping for kubelets](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping)
* DNS add-on
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
### What's Missing
The resulting cluster will be missing the following features:
* Cloud Provider Integration
* [Logging](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
* [Cluster add-ons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.7.4
* [CRI-O Container Runtime](https://github.com/kubernetes-incubator/cri-o) v1.0.0-beta.0
* [CNI Container Networking](https://github.com/containernetworking/cni) v0.6.0
* [etcd](https://github.com/coreos/etcd) 3.2.6
## Labs
This tutorial assumes you have access to [Google Cloud Platform](https://cloud.google.com) and the [Google Cloud SDK](https://cloud.google.com/sdk/) (148.0.0+). While GCP is used for basic infrastructure needs, the things learned in this tutorial can be applied to every platform.
This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms.
* [Cloud Infrastructure Provisioning](docs/01-infrastructure-gcp.md)
* [Setting up a CA and TLS Cert Generation](docs/02-certificate-authority.md)
* [Setting up TLS Client Bootstrap and RBAC Authentication](docs/03-auth-configs.md)
* [Bootstrapping a H/A etcd cluster](docs/04-etcd.md)
* [Bootstrapping a H/A Kubernetes Control Plane](docs/05-kubernetes-controller.md)
* [Bootstrapping Kubernetes Workers](docs/06-kubernetes-worker.md)
* [Configuring the Kubernetes Client - Remote Access](docs/07-kubectl.md)
* [Managing the Container Network Routes](docs/08-network.md)
* [Deploying the Cluster DNS Add-on](docs/09-dns-addon.md)
* [Smoke Test](docs/10-smoke-test.md)
* [Cleaning Up](docs/11-cleanup.md)
* [Prerequisites](docs/01-prerequisites.md)
* [Installing the Client Tools](docs/02-client-tools.md)
* [Provisioning Compute Resources](docs/03-compute-resources.md)
* [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
* [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
* [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
* [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
* [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
* [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
* [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
* [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
* [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
* [Smoke Test](docs/13-smoke-test.md)
* [Cleaning Up](docs/14-cleanup.md)

deployments/kube-dns.yaml (new file)

@@ -0,0 +1,192 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: kube-dns
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        ports:
        - name: dns-local
          containerPort: 10053
          protocol: UDP
        - name: dns-tcp-local
          containerPort: 10053
          protocol: TCP
        - name: metrics
          containerPort: 10055
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        ports:
        - name: dns
          containerPort: 53
          protocol: UDP
        - name: dns-tcp
          containerPort: 53
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        ports:
        - name: metrics
          containerPort: 10054
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
      restartPolicy: Always
      serviceAccount: kube-dns
      serviceAccountName: kube-dns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      volumes:
      - name: kube-dns-config
        configMap:
          defaultMode: 420
          name: kube-dns
          optional: true
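A rough sketch of how this manifest is typically applied and checked once a cluster is up (the [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md) lab covers the official steps; the local file path below is an assumption):
```
# Create the kube-dns add-on from the manifest in this repository
kubectl create -f deployments/kube-dns.yaml
# Wait for the kube-dns pods to report Running
kubectl get pods -l k8s-app=kube-dns --namespace=kube-system
```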


@@ -1,165 +0,0 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.


@@ -1,195 +0,0 @@
# Cloud Infrastructure Provisioning - Google Cloud Platform
This lab will walk you through provisioning the compute instances required for running a H/A Kubernetes cluster. A total of 6 virtual machines will be created.
After completing this guide you should have the following compute instances:
```
gcloud compute instances list
```
```
NAME         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
controller0  us-central1-f  n1-standard-1               10.240.0.10  XXX.XXX.XXX.XXX  RUNNING
controller1  us-central1-f  n1-standard-1               10.240.0.11  XXX.XXX.XXX.XXX  RUNNING
controller2  us-central1-f  n1-standard-1               10.240.0.12  XXX.XXX.XXX.XXX  RUNNING
worker0      us-central1-f  n1-standard-1               10.240.0.20  XXX.XXX.XXX.XXX  RUNNING
worker1      us-central1-f  n1-standard-1               10.240.0.21  XXX.XXX.XXX.XXX  RUNNING
worker2      us-central1-f  n1-standard-1               10.240.0.22  XXX.XXX.XXX.XXX  RUNNING
```
> All machines will be provisioned with fixed private IP addresses to simplify the bootstrap process.
To make our Kubernetes control plane remotely accessible, a public IP address will be provisioned and assigned to a Load Balancer that will sit in front of the 3 Kubernetes controllers.
## Prerequisites
Set the compute region and zone to us-central1:
```
gcloud config set compute/region us-central1
```
```
gcloud config set compute/zone us-central1-f
```
## Setup Networking
Create a custom network:
```
gcloud compute networks create kubernetes-the-hard-way --mode custom
```
Create a subnet for the Kubernetes cluster:
```
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24 \
--region us-central1
```
### Create Firewall Rules
```
gcloud compute firewall-rules create allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
```
gcloud compute firewall-rules create allow-external \
--allow tcp:22,tcp:3389,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
```
```
gcloud compute firewall-rules create allow-healthz \
--allow tcp:8080 \
--network kubernetes-the-hard-way \
--source-ranges 130.211.0.0/22,35.191.0.0/16
```
```
gcloud compute firewall-rules list --filter "network=kubernetes-the-hard-way"
```
```
NAME            NETWORK                  SRC_RANGES                    RULES                          SRC_TAGS  TARGET_TAGS
allow-external  kubernetes-the-hard-way  0.0.0.0/0                     tcp:22,tcp:3389,tcp:6443,icmp
allow-healthz   kubernetes-the-hard-way  130.211.0.0/22,35.191.0.0/16  tcp:8080
allow-internal  kubernetes-the-hard-way  10.240.0.0/24,10.200.0.0/16   tcp,udp,icmp
```
### Create the Kubernetes Public Address
Create a public IP address that will be used by remote clients to connect to the Kubernetes control plane:
```
gcloud compute addresses create kubernetes-the-hard-way --region=us-central1
```
```
gcloud compute addresses list kubernetes-the-hard-way
```
```
NAME                     REGION       ADDRESS          STATUS
kubernetes-the-hard-way  us-central1  XXX.XXX.XXX.XXX  RESERVED
```
## Provision Virtual Machines
All the VMs in this lab will be provisioned using Ubuntu 16.04 mainly because it runs a newish Linux kernel with good support for Docker.
### Virtual Machines
#### Kubernetes Controllers
```
gcloud compute instances create controller0 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.10 \
--subnet kubernetes
```
```
gcloud compute instances create controller1 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.11 \
--subnet kubernetes
```
```
gcloud compute instances create controller2 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.12 \
--subnet kubernetes
```
#### Kubernetes Workers
Include the socat dependency on worker VMs to enable the kubelet's port forwarding functionality.
```
gcloud compute instances create worker0 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.20 \
--subnet kubernetes \
--metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y socat
EOF'
```
```
gcloud compute instances create worker1 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.21 \
--subnet kubernetes \
--metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y socat
EOF'
```
```
gcloud compute instances create worker2 \
--boot-disk-size 200GB \
--can-ip-forward \
--image ubuntu-1604-xenial-v20170307 \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.22 \
--subnet kubernetes \
--metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y socat
EOF'
```

docs/01-prerequisites.md (new file)

@@ -0,0 +1,41 @@
# Prerequisites
## Google Cloud Platform
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
## Google Cloud Platform SDK
### Install the Google Cloud SDK
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 169.0.0 or higher:
```
gcloud version
```
### Set a Default Compute Region and Zone
This tutorial assumes a default compute region and zone have been configured.
Set a default compute region:
```
gcloud config set compute/region us-west1
```
Set a default compute zone:
```
gcloud config set compute/zone us-west1-c
```
> Use the `gcloud compute zones list` command to view additional regions and zones.
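To confirm the defaults took effect, you can read them back (an optional sanity check, not part of the original lab):
```
gcloud config get-value compute/region
gcloud config get-value compute/zone
```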
Next: [Installing the Client Tools](02-client-tools.md)


@@ -1,282 +0,0 @@
# Setting up a Certificate Authority and Creating TLS Certificates
In this lab you will setup the necessary PKI infrastructure to secure the Kubernetes components. This lab will leverage CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), to bootstrap a Certificate Authority and generate TLS certificates to secure the following Kubernetes components:
* etcd
* kube-apiserver
* kubelet
* kube-proxy
After completing this lab you should have the following TLS keys and certificates:
```
admin.pem
admin-key.pem
ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem
kube-proxy.pem
kube-proxy-key.pem
```
## Install CFSSL
This lab requires the `cfssl` and `cfssljson` binaries. Download them from the [cfssl repository](https://pkg.cfssl.org).
### OS X
```
wget https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
chmod +x cfssl_darwin-amd64
sudo mv cfssl_darwin-amd64 /usr/local/bin/cfssl
```
```
wget https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
chmod +x cfssljson_darwin-amd64
sudo mv cfssljson_darwin-amd64 /usr/local/bin/cfssljson
```
### Linux
```
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```
```
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```
## Set up a Certificate Authority
Create a CA configuration file:
```
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
```
Create a CA certificate signing request:
```
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
```
Generate a CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
Results:
```
ca-key.pem
ca.pem
```
## Generate client and server TLS certificates
In this section we will generate TLS certificates for each Kubernetes component and a client certificate for the admin user.
### Create the Admin client certificate
Create the admin client certificate signing request:
```
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Cluster",
"ST": "Oregon"
}
]
}
EOF
```
Generate the admin client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
```
Results:
```
admin-key.pem
admin.pem
```
### Create the kube-proxy client certificate
Create the kube-proxy client certificate signing request:
```
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Cluster",
"ST": "Oregon"
}
]
}
EOF
```
Generate the kube-proxy client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Results:
```
kube-proxy-key.pem
kube-proxy.pem
```
### Create the kubernetes server certificate
The Kubernetes public IP address will be included in the list of subject alternative names for the Kubernetes server certificate. This will ensure the TLS certificate is valid for remote client access.
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(address)')
```
Create the Kubernetes server certificate signing request:
```
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.32.0.1",
"10.240.0.10",
"10.240.0.11",
"10.240.0.12",
"${KUBERNETES_PUBLIC_ADDRESS}",
"127.0.0.1",
"kubernetes.default"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Cluster",
"ST": "Oregon"
}
]
}
EOF
```
Generate the Kubernetes certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
```
Results:
```
kubernetes-key.pem
kubernetes.pem
```
## Distribute the TLS certificates
The following commands will copy the TLS certificates and keys to each Kubernetes host using the `gcloud compute scp` command.
```
for host in worker0 worker1 worker2; do
gcloud compute scp ca.pem kube-proxy.pem kube-proxy-key.pem ${host}:~/
done
```
```
for host in controller0 controller1 controller2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${host}:~/
done
```

docs/02-client-tools.md (new file)

@@ -0,0 +1,116 @@
# Installing the Client Tools
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):
### OS X
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
```
```
chmod +x cfssl_darwin-amd64 cfssljson_darwin-amd64
```
```
sudo mv cfssl_darwin-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_darwin-amd64 /usr/local/bin/cfssljson
```
### Linux
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
```
```
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
```
```
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```
### Verification
Verify `cfssl` version 1.2.0 or higher is installed:
```
cfssl version
```
> output
```
Version: 1.2.0
Revision: dev
Runtime: go1.6
```
> The cfssljson command line utility does not provide a way to print its version.
## Install kubectl
The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/darwin/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Verification
Verify `kubectl` version 1.7.4 or higher is installed:
```
kubectl version --client
```
> output
```
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)


@@ -1,140 +0,0 @@
# Setting up Authentication
In this lab you will setup the necessary authentication configs to enable Kubernetes clients to bootstrap and authenticate using RBAC (Role-Based Access Control).
## Download and Install kubectl
The kubectl client will be used to generate kubeconfig files which will be consumed by the kubelet and kube-proxy services.
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
## Authentication
The following components will leverage Kubernetes RBAC:
* kubelet (client)
* kube-proxy (client)
* kubectl (client)
The other components, mainly the `scheduler` and `controller manager`, access the Kubernetes API server locally over the insecure API port which does not require authentication. The insecure port is only enabled for local access.
### Create the TLS Bootstrap Token
This section will walk you through the creation of a TLS bootstrap token that will be used to [bootstrap TLS client certificates for kubelets](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/).
Generate a token:
```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
```
Generate a token file:
```
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
Distribute the bootstrap token file to each controller node:
```
for host in controller0 controller1 controller2; do
gcloud compute scp token.csv ${host}:~/
done
```
## Client Authentication Configs
This section will walk you through creating kubeconfig files that will be used to bootstrap kubelets, which will then generate their own kubeconfigs based on dynamically generated certificates, and a kubeconfig for authenticating kube-proxy clients.
Each kubeconfig requires a Kubernetes master to connect to. To support H/A the IP address assigned to the load balancer sitting in front of the Kubernetes API servers will be used.
### Set the Kubernetes Public Address
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(address)')
```
## Create client kubeconfig files
### Create the bootstrap kubeconfig file
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```
### Create the kube-proxy kubeconfig
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
## Distribute the client kubeconfig files
```
for host in worker0 worker1 worker2; do
gcloud compute scp bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
done
```


@@ -0,0 +1,172 @@
# Provisioning Compute Resources
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
## Networking
The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
> Setting up network policies is out of scope for this tutorial.
### Virtual Private Cloud Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network:
```
gcloud compute networks create kubernetes-the-hard-way --mode custom
```
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
```
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
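As a rough sanity check on that number (assuming the conventional network and broadcast address reservations), the shell arithmetic works out as expected:
```
# 2^(32-24) addresses in a /24, minus the network and broadcast addresses
echo $(( 2**(32-24) - 2 ))
```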
### Firewall Rules
Create a firewall rule that allows internal communication across all protocols:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
```
Create a firewall rule that allows health check probes from the GCP [network load balancer IP ranges](https://cloud.google.com/compute/docs/load-balancing/network/#firewall_rules_and_network_load_balancing):
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-checks \
--allow tcp:8080 \
--network kubernetes-the-hard-way \
--source-ranges 209.85.204.0/22,209.85.152.0/22,35.191.0.0/16
```
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter "network kubernetes-the-hard-way"
```
> output
```
NAME                                         NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY
kubernetes-the-hard-way-allow-external       kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-health-checks  kubernetes-the-hard-way  INGRESS    1000      tcp:8080
kubernetes-the-hard-way-allow-internal       kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp
```
### Kubernetes Public IP Address
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
```
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```
> output
```
NAME                     REGION    ADDRESS        STATUS
kubernetes-the-hard-way  us-west1  XX.XXX.XXX.XX  RESERVED
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [CRI-O container runtime](https://github.com/kubernetes-incubator/cri-o). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane:
```
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
done
```
### Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
Create three compute instances which will host the Kubernetes worker nodes:
```
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
```
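The `pod-cidr` metadata attached above can later be read back from inside a worker instance via the GCE metadata service; a minimal sketch of that lookup (the worker bootstrapping lab performs the equivalent step):
```
# Run on a worker instance: retrieve the pod subnet assigned via --metadata pod-cidr=...
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
echo ${POD_CIDR}
```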
### Verification
List the compute instances in your default compute zone:
```
gcloud compute instances list
```
> output
```
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  us-west1-c  n1-standard-1               10.240.0.10  XX.XXX.XXX.XXX  RUNNING
controller-1  us-west1-c  n1-standard-1               10.240.0.11  XX.XXX.X.XX     RUNNING
controller-2  us-west1-c  n1-standard-1               10.240.0.12  XX.XXX.XXX.XX   RUNNING
worker-0      us-west1-c  n1-standard-1               10.240.0.20  XXX.XXX.XXX.XX  RUNNING
worker-1      us-west1-c  n1-standard-1               10.240.0.21  XX.XXX.XX.XXX   RUNNING
worker-2      us-west1-c  n1-standard-1               10.240.0.22  XXX.XXX.XX.XX   RUNNING
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)


@@ -0,0 +1,283 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Create the CA configuration file:
```
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
```
Create the CA certificate signing request:
```
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
```
Generate the CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
Results:
```
ca-key.pem
ca.pem
```
## Client and Server Certificates
In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.
### The Admin Client Certificate
Create the `admin` client certificate signing request:
```
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```
Generate the `admin` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
```
Results:
```
admin-key.pem
admin.pem
```
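Optionally, confirm the subject that was just issued (assumes OpenSSL is available on your workstation):
```
# The subject should include CN=admin and O=system:masters
openssl x509 -in admin.pem -noout -subject
```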
### The Kubelet Client Certificates
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
```
Results:
```
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
```
### The kube-proxy Client Certificate
Create the `kube-proxy` client certificate signing request:
```
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```
Generate the `kube-proxy` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Results:
```
kube-proxy-key.pem
kube-proxy.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Create the Kubernetes API Server certificate signing request:
```
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```
Generate the Kubernetes API Server certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
```
Results:
```
kubernetes-key.pem
kubernetes.pem
```
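Since remote clients validate the API server against these subject alternative names, it can be worth confirming they were embedded in the certificate (optional, assumes OpenSSL):
```
# The SAN list should include the static IP, the controller addresses, and kubernetes.default
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
```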
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```
Copy the appropriate certificates and private keys to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
done
```
> The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)


@@ -1,157 +0,0 @@
# Bootstrapping a H/A etcd cluster
In this lab you will bootstrap a 3 node etcd cluster. The following virtual machines will be used:
* controller0
* controller1
* controller2
## Why
All Kubernetes components are stateless, which greatly simplifies managing a Kubernetes cluster. All state is stored
in etcd, which is a database and must be treated specially. To limit the number of compute resources required to complete this lab, etcd is being installed on the Kubernetes controller nodes, although some people will prefer to run etcd on a dedicated set of machines for the following reasons:
* The etcd lifecycle is not tied to Kubernetes. We should be able to upgrade etcd independently of Kubernetes.
* Scaling out etcd is different than scaling out the Kubernetes Control Plane.
* Prevent other applications from taking up resources (CPU, Memory, I/O) required by etcd.
However, all the e2e tested configurations currently run etcd on the master nodes.
## Provision the etcd Cluster
Run the following commands on `controller0`, `controller1`, `controller2`:
### TLS Certificates
The TLS certificates created in the [Setting up a CA and TLS Cert Generation](02-certificate-authority.md) lab will be used to secure communication between the Kubernetes API server and the etcd cluster. The TLS certificates will also be used to limit access to the etcd cluster using TLS client authentication. Only clients with a TLS certificate signed by a trusted CA will be able to access the etcd cluster.
Copy the TLS certificates to the etcd configuration directory:
```
sudo mkdir -p /etc/etcd/
```
```
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
### Download and Install the etcd binaries
Download the official etcd release binaries from `coreos/etcd` GitHub project:
```
wget https://github.com/coreos/etcd/releases/download/v3.1.4/etcd-v3.1.4-linux-amd64.tar.gz
```
Extract and install the `etcd` server binary and the `etcdctl` command line client:
```
tar -xvf etcd-v3.1.4-linux-amd64.tar.gz
```
```
sudo mv etcd-v3.1.4-linux-amd64/etcd* /usr/bin/
```
All etcd data is stored under the etcd data directory. In a production cluster the data directory should be backed by a persistent disk. Create the etcd data directory:
```
sudo mkdir -p /var/lib/etcd
```
### Set The Internal IP Address
The internal IP address will be used by etcd to serve client requests and communicate with other etcd peers.
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name:
```
ETCD_NAME=controller$(echo $INTERNAL_IP | cut -c 11)
```
The etcd server will be started and managed by systemd. Create the etcd systemd unit file:
```
cat > etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller0=https://10.240.0.10:2380,controller1=https://10.240.0.11:2380,controller2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Once the etcd systemd unit file is ready, move it to the systemd system directory:
```
sudo mv etcd.service /etc/systemd/system/
```
Start the etcd server:
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable etcd
```
```
sudo systemctl start etcd
```
```
sudo systemctl status etcd --no-pager
```
> Remember to run these steps on `controller0`, `controller1`, and `controller2`
## Verification
Once all 3 etcd nodes have been bootstrapped verify the etcd cluster is healthy:
* On one of the controller nodes run the following command:
```
sudo etcdctl \
--ca-file=/etc/etcd/ca.pem \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
cluster-health
```
```
member 3a57933972cb5131 is healthy: got healthy result from https://10.240.0.12:2379
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
```


@@ -0,0 +1,101 @@
# Generating Kubernetes Configuration Files for Authentication
In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
## Client Authentication Configs
In this section you will generate kubeconfig files for the `kubelet` and `kube-proxy` clients.
> The `scheduler` and `controller manager` access the Kubernetes API Server locally over an insecure API port which does not require authentication. The Kubernetes API Server's insecure port is only enabled for local access.
### Kubernetes Public IP Address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
Generate a kubeconfig file for each worker node:
```
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
```
Results:
```
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
```
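Optionally, confirm each file binds the expected user and cluster:
```
# Each worker kubeconfig should show the system:node:<instance> user and the
# kubernetes-the-hard-way cluster
kubectl config get-contexts --kubeconfig=worker-0.kubeconfig
```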
### The kube-proxy Kubernetes Configuration File
Generate a kubeconfig file for the `kube-proxy` service:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
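To inspect what was generated without dumping the embedded credentials, `kubectl config view` can be used (optional; certificate data should be shown as redacted):
```
kubectl config view --kubeconfig=kube-proxy.kubeconfig
```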
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)


@@ -1,310 +0,0 @@
# Bootstrapping an H/A Kubernetes Control Plane
In this lab you will bootstrap a 3 node Kubernetes controller cluster. The following virtual machines will be used:
* controller0
* controller1
* controller2
In this lab you will also create a frontend load balancer with a public IP address to provide remote access to the API servers and H/A.
## Why
The Kubernetes components that make up the control plane include the following components:
* API Server
* Scheduler
* Controller Manager
Each component is being run on the same machine for the following reasons:
* The Scheduler and Controller Manager are tightly coupled with the API Server
* Only one Scheduler and Controller Manager can be active at a given time, but it's ok to run multiple at the same time. Each component will elect a leader via the API Server.
* Running multiple copies of each component is required for H/A
* Running each component next to the API Server eases configuration.
## Provision the Kubernetes Controller Cluster
Run the following commands on `controller0`, `controller1`, `controller2`:
> Login to each machine using the gcloud compute ssh command
---
Copy the bootstrap token into place:
```
sudo mkdir -p /var/lib/kubernetes/
```
```
sudo mv token.csv /var/lib/kubernetes/
```
### TLS Certificates
The TLS certificates created in the [Setting up a CA and TLS Cert Generation](02-certificate-authority.md) lab will be used to secure communication between the Kubernetes API server and Kubernetes clients such as `kubectl` and the `kubelet` agent. The TLS certificates will also be used to authenticate the Kubernetes API server to etcd via TLS client auth.
Copy the TLS certificates to the Kubernetes configuration directory:
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
```
### Download and install the Kubernetes controller binaries
Download the official Kubernetes release binaries:
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-apiserver
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-controller-manager
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-scheduler
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
```
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
```
```
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
```
### Kubernetes API Server
Capture the internal IP address of the machine:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Create the systemd unit file:
```
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-apiserver \\
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/lib/audit.log \\
--authorization-mode=RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-bootstrap-token-auth \\
--insecure-bind-address=0.0.0.0 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--token-auth-file=/var/lib/kubernetes/token.csv \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Start the `kube-apiserver` service:
```
sudo mv kube-apiserver.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver
```
```
sudo systemctl start kube-apiserver
```
```
sudo systemctl status kube-apiserver --no-pager
```
### Kubernetes Controller Manager
```
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/16 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Start the `kube-controller-manager` service:
```
sudo mv kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-controller-manager
```
```
sudo systemctl start kube-controller-manager
```
```
sudo systemctl status kube-controller-manager --no-pager
```
### Kubernetes Scheduler
```
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Start the `kube-scheduler` service:
```
sudo mv kube-scheduler.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-scheduler
```
```
sudo systemctl start kube-scheduler
```
```
sudo systemctl status kube-scheduler --no-pager
```
### Verification
```
kubectl get componentstatuses
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
```
> Remember to run these steps on `controller0`, `controller1`, and `controller2`
## Setup Kubernetes API Server Frontend Load Balancer
The virtual machines created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the virtual machines for this tutorial.
```
gcloud compute http-health-checks create kube-apiserver-health-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
```
```
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check=kube-apiserver-health-check \
--region us-central1
```
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller0,controller1,controller2
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(name)')
```
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--target-pool kubernetes-target-pool \
--region us-central1
```

# Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
## The Encryption Key
Generate an encryption key:
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
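The `aescbc` provider expects a 32-byte key. As an optional sanity check (assuming GNU coreutils `base64`), confirm the generated key decodes to 32 bytes:
```
echo -n "${ENCRYPTION_KEY}" | base64 --decode | wc -c
```
> The command should print `32`.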
## The Encryption Config File
Create the `encryption-config.yaml` encryption config file:
```
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
```
Copy the `encryption-config.yaml` encryption config file to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
```
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

# Bootstrapping Kubernetes Workers
In this lab you will bootstrap 3 Kubernetes worker nodes. The following virtual machines will be used:
* worker0
* worker1
* worker2
## Why
Kubernetes worker nodes are responsible for running your containers. All Kubernetes clusters need one or more worker nodes. We are running the worker nodes on dedicated machines for the following reasons:
* Ease of deployment and configuration
* Avoid mixing arbitrary workloads with critical cluster components. The worker machines are provisioned with just enough resources, so we don't have to worry about wasted capacity.
Some people prefer to run workers and cluster services anywhere in the cluster. That is entirely possible; you'll have to decide what's best for your environment.
## Prerequisites
Each worker node will provision a unique TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate.
```
gcloud compute ssh controller0
```
Enable TLS bootstrapping by binding the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
```
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
```
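Optionally confirm the cluster role binding was created:
```
kubectl get clusterrolebinding kubelet-bootstrap
```
> The output should list a `kubelet-bootstrap` binding.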
## Provision the Kubernetes Worker Nodes
Run the following commands on `worker0`, `worker1`, `worker2`:
```
sudo mkdir -p /var/lib/{kubelet,kube-proxy,kubernetes}
```
```
sudo mkdir -p /var/run/kubernetes
```
```
sudo mv bootstrap.kubeconfig /var/lib/kubelet
```
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy
```
Move the TLS certificates into place:
```
sudo mv ca.pem /var/lib/kubernetes/
```
### Install Docker
```
wget https://get.docker.com/builds/Linux/x86_64/docker-1.12.6.tgz
```
```
tar -xvf docker-1.12.6.tgz
```
```
sudo cp docker/docker* /usr/bin/
```
Create the Docker systemd unit file:
```
cat > docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
[Service]
ExecStart=/usr/bin/docker daemon \\
--iptables=false \\
--ip-masq=false \\
--host=unix:///var/run/docker.sock \\
--log-level=error \\
--storage-driver=overlay
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
Start the docker service:
```
sudo mv docker.service /etc/systemd/system/docker.service
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable docker
```
```
sudo systemctl start docker
```
```
sudo docker version
```
### Install the kubelet
The Kubelet can now use [CNI - the Container Network Interface](https://github.com/containernetworking/cni) to manage machine-level networking requirements.
Download and install the CNI plugins:
```
sudo mkdir -p /opt/cni
```
```
wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
```
```
sudo tar -xvf cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz -C /opt/cni
```
Download and install the Kubernetes worker binaries:
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kubectl
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kube-proxy
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kubelet
```
```
chmod +x kubectl kube-proxy kubelet
```
```
sudo mv kubectl kube-proxy kubelet /usr/bin/
```
Create the kubelet systemd unit file:
```
API_SERVERS=$(sudo cat /var/lib/kubelet/bootstrap.kubeconfig | \
grep server | cut -d ':' -f2,3,4 | tr -d '[:space:]')
```
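Optionally verify the extracted value before generating the unit file (the exact address depends on how `bootstrap.kubeconfig` was generated; it should be a single `https://<api-server-address>:6443` URL):
```
echo ${API_SERVERS}
```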
```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
--api-servers=${API_SERVERS} \\
--allow-privileged=true \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--container-runtime=docker \\
--experimental-bootstrap-kubeconfig=/var/lib/kubelet/bootstrap.kubeconfig \\
--network-plugin=kubenet \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--serialize-image-pulls=false \\
--register-node=true \\
--tls-cert-file=/var/lib/kubelet/kubelet-client.crt \\
--tls-private-key-file=/var/lib/kubelet/kubelet-client.key \\
--cert-dir=/var/lib/kubelet \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```
sudo mv kubelet.service /etc/systemd/system/kubelet.service
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kubelet
```
```
sudo systemctl start kubelet
```
```
sudo systemctl status kubelet --no-pager
```
#### kube-proxy
```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--masquerade-all=true \\
--kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig \\
--proxy-mode=iptables \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```
sudo mv kube-proxy.service /etc/systemd/system/kube-proxy.service
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-proxy
```
```
sudo systemctl start kube-proxy
```
```
sudo systemctl status kube-proxy --no-pager
```
> Remember to run these steps on `worker0`, `worker1`, and `worker2`
## Approve the TLS certificate requests
Each worker node will submit a certificate signing request which must be approved before the node is allowed to join the cluster.
Log into one of the controller nodes:
```
gcloud compute ssh controller0
```
List the pending certificate requests:
```
kubectl get csr
```
```
NAME AGE REQUESTOR CONDITION
csr-XXXXX 1m kubelet-bootstrap Pending
```
> Use the `kubectl describe csr` command to view the details of a specific signing request.
Approve each certificate signing request using the `kubectl certificate approve` command:
```
kubectl certificate approve csr-XXXXX
```
```
certificatesigningrequest "csr-XXXXX" approved
```
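If several requests are pending, a loop like the following approves them all (a convenience sketch, assuming standard `awk` and `xargs` are available):
```
kubectl get csr --no-headers | awk '{print $1}' | xargs -n 1 kubectl certificate approve
```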
Once all certificate signing requests have been approved, all nodes should be registered with the cluster:
```
kubectl get nodes
```
```
NAME STATUS AGE VERSION
worker0 Ready 7m v1.6.1
worker1 Ready 5m v1.6.1
worker2 Ready 2m v1.6.1
```

# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
```
gcloud compute ssh controller-0
```
## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
tar -xvf etcd-v3.2.6-linux-amd64.tar.gz
```
```
sudo mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/
```
### Configure the etcd Server
```
sudo mkdir -p /etc/etcd /var/lib/etcd
```
```
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
```
ETCD_NAME=$(hostname -s)
```
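Optionally confirm both values before generating the unit file:
```
echo "${ETCD_NAME}: ${INTERNAL_IP}"
```
> On `controller-0` this should print something like `controller-0: 10.240.0.10`.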
Create the `etcd.service` systemd unit file:
```
cat > etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the etcd Server
```
sudo mv etcd.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable etcd
```
```
sudo systemctl start etcd
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## Verification
List the etcd cluster members:
```
ETCDCTL_API=3 etcdctl member list
```
> output
```
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
```
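As an additional check, each member should report healthy (a suggested verification using the same `etcdctl` binary against the local endpoint):
```
ETCDCTL_API=3 etcdctl endpoint health
```
> Each controller should report `127.0.0.1:2379 is healthy`.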
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

# Configuring the Kubernetes Client - Remote Access
Run the following commands from the machine that will be your Kubernetes client.
## Download and Install kubectl
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
## Configure Kubectl
In this section you will configure the kubectl client to point to the [Kubernetes API Server Frontend Load Balancer](04-kubernetes-controller.md#setup-kubernetes-api-server-frontend-load-balancer).
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(address)')
```
Also be sure to locate the CA certificate [created earlier](02-certificate-authority.md). Since we are using self-signed TLS certificates, we need to trust the CA certificate so we can verify the remote API Servers.
### Build up the kubeconfig entry
The following commands will build up the default kubeconfig file used by kubectl.
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
```
kubectl config use-context kubernetes-the-hard-way
```
At this point you should be able to connect securely to the remote API server:
```
kubectl get componentstatuses
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
```
kubectl get nodes
```
```
NAME STATUS AGE VERSION
worker0 Ready 7m v1.6.1
worker1 Ready 5m v1.6.1
worker2 Ready 2m v1.6.1
```

# Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
```
gcloud compute ssh controller-0
```
## Provision the Kubernetes Control Plane
### Download and Install the Kubernetes Controller Binaries
Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
```
```
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```
### Configure the Kubernetes API Server
```
sudo mkdir -p /var/lib/kubernetes/
```
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Create the `kube-apiserver.service` systemd unit file:
```
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--insecure-bind-address=0.0.0.0 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Controller Manager
Create the `kube-controller-manager.service` systemd unit file:
```
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/16 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Scheduler
Create the `kube-scheduler.service` systemd unit file:
```
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the Controller Services
```
sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
```
```
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
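Optionally confirm all three services are active (a suggested check, mirroring the `systemctl status` commands used elsewhere in this guide):
```
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler --no-pager
```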
### Verification
```
kubectl get componentstatuses
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
Create the external load balancer network resources:
```
gcloud compute http-health-checks create kube-apiserver-health-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
```
```
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check=kube-apiserver-health-check
```
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(name)')
```
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
```
### Verification
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Make an HTTP request for the Kubernetes version info:
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443/version
```
> output
```
{
"major": "1",
"minor": "7",
"gitVersion": "v1.7.4",
"gitCommit": "793658f2d7ca7f064d2bdf606519f9fe1229c381",
"gitTreeState": "clean",
"buildDate": "2017-08-17T08:30:51Z",
"goVersion": "go1.8.3",
"compiler": "gc",
"platform": "linux/amd64"
}
```
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)

# Managing the Container Network Routes
Now that each worker node is online we need to add routes to make sure that Pods running on different machines can talk to each other. In this lab we are not going to provision any overlay networks and instead rely on Layer 3 networking. That means we need to add routes to our router. In GCP each network has a router that can be configured. If this were an on-prem datacenter, you would add the routes to your local router instead.
## Container Subnets
The IP addresses for each pod will be allocated from the `podCIDR` range assigned to each Kubernetes worker through the node registration process. The `podCIDR` will be allocated from the cluster CIDR range as configured on the Kubernetes Controller Manager with the following flag:
```
--cluster-cidr=10.200.0.0/16
```
Based on the above configuration each node will receive a `/24` subnet. For example:
```
10.200.0.0/24
10.200.1.0/24
10.200.2.0/24
...
```
## Get the Routing Table
The first thing we need to do is gather the information required to populate the routing table. We need the internal IP address and Pod subnet for each of the worker nodes.
Use `kubectl` to print the `InternalIP` and `podCIDR` for each worker node:
```
kubectl get nodes \
--output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}'
```
Output:
```
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
```
## Create Routes
```
gcloud compute routes create kubernetes-route-10-200-0-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.20 \
--destination-range 10.200.0.0/24
```
```
gcloud compute routes create kubernetes-route-10-200-1-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.21 \
--destination-range 10.200.1.0/24
```
```
gcloud compute routes create kubernetes-route-10-200-2-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.22 \
--destination-range 10.200.2.0/24
```

# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-o](https://github.com/kubernetes-incubator/cri-o), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:
```
gcloud compute ssh worker-0
```
## Provisioning a Kubernetes Worker Node
### Install the cri-o OS Dependencies
Add the `alexlarsson/flatpak` [PPA](https://launchpad.net/ubuntu/+ppas) which hosts the `libostree` package:
```
sudo add-apt-repository -y ppa:alexlarsson/flatpak
```
```
sudo apt-get update
```
Install the OS dependencies required by the cri-o container runtime:
```
sudo apt-get install -y socat libgpgme11 libostree-1-1
```
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \
https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
```
Create the installation directories:
```
sudo mkdir -p \
/etc/containers \
/etc/cni/net.d \
/etc/crio \
/opt/cni/bin \
/usr/local/libexec/crio \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
```
Install the worker binaries:
```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```
```
tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz
```
```
chmod +x kubectl kube-proxy kubelet runc.amd64
```
```
sudo mv runc.amd64 /usr/local/bin/runc
```
```
sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/
```
```
sudo mv conmon pause /usr/local/libexec/crio/
```
### Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
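Optionally verify the retrieved range. The value comes from the `pod-cidr` instance metadata set when the workers were provisioned; on `worker-0` it is expected to be `10.200.0.0/24`:
```
echo ${POD_CIDR}
```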
Create the `bridge` network configuration file:
```
cat > 10-bridge.conf <<EOF
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
```
Create the `loopback` network configuration file:
```
cat > 99-loopback.conf <<EOF
{
"cniVersion": "0.3.1",
"type": "loopback"
}
EOF
```
Move the network configuration files to the CNI configuration directory:
```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```
### Configure the CRI-O Container Runtime
```
sudo mv crio.conf seccomp.json /etc/crio/
```
```
sudo mv policy.json /etc/containers/
```
```
cat > crio.service <<EOF
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o
[Service]
ExecStart=/usr/local/bin/crio
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubelet
```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
```
```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```
```
sudo mv ca.pem /var/lib/kubernetes/
```
Create the `kubelet.service` systemd unit file:
```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=crio.service
Requires=crio.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--allow-privileged=true \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/crio.sock \\
--enable-custom-metrics \\
--image-pull-progress-deadline=2m \\
--image-service-endpoint=unix:///var/run/crio.sock \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--pod-cidr=${POD_CIDR} \\
--register-node=true \\
--require-kubeconfig \\
--runtime-request-timeout=10m \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Proxy
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy.service` systemd unit file:
```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--kubeconfig=/var/lib/kube-proxy/kubeconfig \\
--proxy-mode=iptables \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the Worker Services
```
sudo mv crio.service kubelet.service kube-proxy.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable crio kubelet kube-proxy
```
```
sudo systemctl start crio kubelet kube-proxy
```
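Optionally confirm the worker services are active (a suggested check, mirroring the `systemctl status` commands used elsewhere in this guide):
```
sudo systemctl status crio kubelet kube-proxy --no-pager
```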
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
## Verification
Log in to one of the controller nodes:
```
gcloud compute ssh controller-0
```
List the registered Kubernetes nodes:
```
kubectl get nodes
```
> output
```
NAME STATUS AGE VERSION
worker-0 Ready 5m v1.7.4
worker-1 Ready 3m v1.7.4
worker-2 Ready 7s v1.7.4
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

# Deploying the Cluster DNS Add-on
In this lab you will deploy the DNS add-on which is required for every Kubernetes cluster. Without the DNS add-on the following things will not work:
* DNS based service discovery
* DNS lookups from containers running in pods
## Cluster DNS Add-on
```
kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
```
Create the `kubedns` service:
```
kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/services/kubedns.yaml
```
```
kubectl --namespace=kube-system get svc
```
```
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.32.0.10 <none> 53/UDP,53/TCP 5s
```
Create the `kubedns` deployment:
```
kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml
```
```
kubectl --namespace=kube-system get pods
```
```
NAME READY STATUS RESTARTS AGE
kube-dns-321336704-6749s 4/4 Running 0 10s
```

# Configuring kubectl for Remote Access
In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
> Run the commands in this lab from the same directory used to generate the admin client certificates.
## The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
```
kubectl config use-context kubernetes-the-hard-way
```
## Verification
Check the health of the remote Kubernetes cluster:
```
kubectl get componentstatuses
```
> output
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
List the nodes in the remote Kubernetes cluster:
```
kubectl get nodes
```
> output
```
NAME STATUS AGE VERSION
worker-0 Ready 7m v1.7.4
worker-1 Ready 4m v1.7.4
worker-2 Ready 1m v1.7.4
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

# Smoke Test
This lab walks you through a quick smoke test to make sure things are working.
## Test
```
kubectl run nginx --image=nginx --port=80 --replicas=3
```
```
kubectl get pods -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
nginx-158599303-7k8p9 1/1 Running 0 13s 10.200.2.3 worker2
nginx-158599303-h0zcs 1/1 Running 0 13s 10.200.1.2 worker1
nginx-158599303-rfhm3 1/1 Running 0 13s 10.200.0.2 worker0
```
```
kubectl expose deployment nginx --type NodePort
```
> Note that `--type=LoadBalancer` will not work because we did not configure a cloud provider when bootstrapping this cluster.
Grab the `NodePort` that was set up for the nginx service:
```
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
### Create the Node Port Firewall Rule
```
gcloud compute firewall-rules create kubernetes-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```
Grab the `EXTERNAL_IP` for one of the worker nodes:
```
NODE_PUBLIC_IP=$(gcloud compute instances describe worker0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
Test the nginx service using cURL:
```
curl http://${NODE_PUBLIC_IP}:${NODE_PORT}
```
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

# Cleaning Up
## Virtual Machines
```
gcloud -q compute instances delete \
controller0 controller1 controller2 \
worker0 worker1 worker2
```
## Networking
```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule --region us-central1
```
```
gcloud -q compute target-pools delete kubernetes-target-pool
```
```
gcloud -q compute http-health-checks delete kube-apiserver-health-check
```
```
gcloud -q compute addresses delete kubernetes-the-hard-way
```
```
gcloud -q compute firewall-rules delete \
kubernetes-nginx-service \
allow-internal \
allow-external \
allow-healthz
```
```
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
```
```
gcloud -q compute networks subnets delete kubernetes
```
```
gcloud -q compute networks delete kubernetes-the-hard-way
```

# Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods cannot communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
## The Routing Table
In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.
Print the internal IP address and Pod CIDR range for each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
```
> output
```
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
```
## Routes
Create network routes for each worker instance:
```
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
```
List the routes in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute routes list --filter "network kubernetes-the-hard-way"
```
> output
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-77bcc6bee33b5535 kubernetes-the-hard-way 10.240.0.0/24 1000
default-route-b11fc914b626974d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
```
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)

# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `kube-dns` cluster add-on:
```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
```
> output
```
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
```
List the pods created by the `kube-dns` deployment:
```
kubectl get pods -l k8s-app=kube-dns -n kube-system
```
> output
```
NAME READY STATUS RESTARTS AGE
kube-dns-3097350089-gq015 3/3 Running 0 20s
kube-dns-3097350089-q64qc 3/3 Running 0 20s
```
## Verification
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
```
kubectl get pods -l run=busybox
```
> output
```
NAME READY STATUS RESTARTS AGE
busybox-2125412808-mt2vb 1/1 Running 0 15s
```
Retrieve the full name of the `busybox` pod:
```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
```
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```
> output
```
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
```
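Fully qualified service names should resolve as well (an additional check, assuming the default `cluster.local` cluster domain configured on the kubelets):
```
kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
```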
Next: [Smoke Test](13-smoke-test.md)

# Smoke Test
In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
## Data Encryption
In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
Create a generic secret:
```
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
> output
```
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 70 88 d8 52 83 b7 96 |:v1:key1:p..R...|
00000050 04 a3 bd 7e 42 9e 8a 77 2f 97 24 a7 68 3f c5 ec |...~B..w/.$.h?..|
00000060 9e f7 66 e8 a3 81 fc c8 3c df 63 71 33 0a 87 8f |..f.....<.cq3...|
00000070 0e c7 0a 0a f2 04 46 85 33 92 9a 4b 61 b2 10 c0 |......F.3..Ka...|
00000080 0b 00 05 dd c3 c2 d0 6b ff ff f2 32 3b e0 ec a0 |.......k...2;...|
00000090 63 d3 8b 1c 29 84 88 71 a7 88 e2 26 4b 65 95 14 |c...)..q...&Ke..|
000000a0 dc 8d 59 63 11 e5 f3 4e b4 94 cc 3d 75 52 c7 07 |..Yc...N...=uR..|
000000b0 73 f5 b4 b0 63 aa f9 9d 29 f8 d6 88 aa 33 c4 24 |s...c...)....3.$|
000000c0 ac c6 71 2b 45 98 9e 5f c6 a4 9d a2 26 3c 24 41 |..q+E.._....&<$A|
000000d0 95 5b d3 2c 4b 1e 4a 47 c8 47 c8 f3 ac d6 e8 cb |.[.,K.JG.G......|
000000e0 5f a9 09 93 91 d7 5d c9 c2 68 f8 cf 3c 7e 3b a3 |_.....]..h..<~;.|
000000f0 db d8 d5 9e 0c bf 2a 2f 58 0a |......*/X.|
000000fa
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
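A scripted check for the same prefix (a convenience sketch; `grep --text` forces the binary value to be treated as text):
```
gcloud compute ssh controller-0 \
  --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | grep --text --count 'k8s:enc:aescbc:v1:key1'"
```
> A count of `1` confirms the stored secret carries the expected prefix.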
## Deployments
In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
kubectl run nginx --image=nginx
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l run=nginx
```
> output
```
NAME READY STATUS RESTARTS AGE
nginx-4217019353-b5gzn 1/1 Running 0 15s
```
### Port Forwarding
In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
kubectl port-forward $POD_NAME 8080:80
```
> output
```
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```
In a new terminal make an HTTP request using the forwarding address:
```
curl --head http://127.0.0.1:8080
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.13.3
Date: Thu, 31 Aug 2017 01:58:15 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
Accept-Ranges: bytes
```
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
```
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C
```
### Logs
In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
Print the `nginx` pod logs:
```
kubectl logs $POD_NAME
```
> output
```
127.0.0.1 - - [31/Aug/2017:01:58:15 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
```
### Exec
In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
kubectl exec -ti $POD_NAME -- nginx -v
```
> output
```
nginx version: nginx/1.13.3
```
## Services
In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
```
kubectl expose deployment nginx --port 80 --type NodePort
```
> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
Retrieve the node port assigned to the `nginx` service:
```
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
Create a firewall rule that allows remote access to the `nginx` node port:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```
Retrieve the external IP address of a worker instance:
```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
Make an HTTP request using the external IP address and the `nginx` node port:
```
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.13.3
Date: Thu, 31 Aug 2017 02:00:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
Accept-Ranges: bytes
```
Next: [Cleaning Up](14-cleanup.md)

# Cleaning Up
In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
```
## Networking
Delete the external load balancer network resources:
```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
```
gcloud -q compute target-pools delete kubernetes-target-pool
```
```
gcloud -q compute http-health-checks delete kube-apiserver-health-check
```
Delete the `kubernetes-the-hard-way` static IP address:
```
gcloud -q compute addresses delete kubernetes-the-hard-way
```
Delete the `kubernetes-the-hard-way` firewall rules:
```
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-checks
```
Delete the Pod network routes:
```
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
```
Delete the `kubernetes` subnet:
```
gcloud -q compute networks subnets delete kubernetes
```
Delete the `kubernetes-the-hard-way` network VPC:
```
gcloud -q compute networks delete kubernetes-the-hard-way
```

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.32.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP