commit b974042d95 (parent 4f5cecb5ed)
```
admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.kubeconfig
kube-controller-manager.pem
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.kubeconfig
kube-scheduler.pem
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
...
worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json
```
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.10.2
* [containerd Container Runtime](https://github.com/containerd/containerd) 1.1.0
* [gVisor](https://github.com/google/gvisor) 08879266fef3a67fac1a77f1ea133c3ac75759dd
* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
* [etcd](https://github.com/coreos/etcd) 3.3.5

## Labs
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.

Verify the Google Cloud SDK version is 200.0.0 or higher:

```
gcloud version
```

> Use the `gcloud compute zones list` command to view additional regions and zones.

## Running Commands in Parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances; in those cases consider using tmux and splitting a window into multiple panes with `synchronize-panes` enabled to speed up the provisioning process.

> The use of tmux is optional and not required to complete this tutorial.

![tmux screenshot](images/tmux-screenshot.png)

> Enable `synchronize-panes`: press `ctrl+b` followed by `shift+:`, then type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
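The tmux workflow described above can be captured in a small launcher script. This is a sketch under the assumption that the instances are named `controller-0` through `controller-2`; the script name `tmux-k8s.sh` and the pane layout are illustrative, not part of the lab:

```
# Hypothetical launcher: one tmux session with a pane per controller,
# keystrokes mirrored to every pane via synchronize-panes.
cat > tmux-k8s.sh <<'EOF'
#!/bin/sh
tmux new-session -d -s kthw 'gcloud compute ssh controller-0'
tmux split-window -h -t kthw 'gcloud compute ssh controller-1'
tmux split-window -v -t kthw 'gcloud compute ssh controller-2'
tmux select-layout -t kthw tiled
tmux set-window-option -t kthw synchronize-panes on
tmux attach -t kthw
EOF
chmod +x tmux-k8s.sh
```

Running `./tmux-k8s.sh` would open three synchronized SSH panes; toggle synchronization off before running a command on a single instance.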
Next: [Installing the Client Tools](02-client-tools.md)
```
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```

Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:

```
brew install cfssl
```

### Linux

...

### OS X

```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/darwin/amd64/kubectl
```

...

### Linux

```
wget https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl
```

...

### Verification

Verify `kubectl` version 1.10.2 or higher is installed:

```
kubectl version --client
```

> output

```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
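For a scripted version check, the `GitVersion` field of the output above can be extracted with `sed`. A sketch; the sample string below stands in for the live `kubectl version --client` output:

```
# Extract "1.10.2" from a version.Info line; in practice the input
# would come from `kubectl version --client`.
sample='Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335"}'
version=$(printf '%s\n' "$sample" | sed -n 's/.*GitVersion:"v\([0-9.]*\)".*/\1/p')
echo "$version"   # prints 1.10.2
```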
Next: [Provisioning Compute Resources](03-compute-resources.md)
## Compute Instances

The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.

### Kubernetes Controllers

```
for i in 0 1 2; do
  ...
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
  ...
```

...

```
for i in 0 1 2; do
  ...
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
  ...
```

...

```
worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
```

## Configuring SSH Access

SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.

Test SSH access to the `controller-0` compute instance:

```
gcloud compute ssh controller-0
```

If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:

```
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```

At this point the generated SSH keys will be uploaded and stored in your project:

```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
| .               |
|o.          oS   |
|=... .o .o o     |
|+.+ =+=.+.X o    |
|.+ ==O*B.B = .   |
| .+.=EB++ o      |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```

After the SSH keys have been updated you'll be logged into the `controller-0` instance:

```
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64)

...

Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
```

Type `exit` at the prompt to exit the `controller-0` compute instance:

```
$USER@controller-0:~$ exit
```

> output

```
logout
Connection to XX.XXX.XXX.XXX closed
```
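The fixed addressing scheme used by the provisioning loops above can be summarized with a short loop. A sketch: the controller private IPs come from the `--private-network-ip 10.240.0.1${i}` flag, the worker IPs from the instance listing, and the pod subnets from the `--metadata pod-cidr` flag:

```
# Print the address plan: controllers at 10.240.0.10-12,
# workers at 10.240.0.20-22 with one /24 pod subnet each.
for i in 0 1 2; do
  echo "controller-${i} 10.240.0.1${i}"
  echo "worker-${i} 10.240.0.2${i} pod-cidr=10.200.${i}.0/24"
done
```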
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
# Provisioning a CA and Generating TLS Certificates

In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.

## Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.

Generate the CA configuration file, certificate, and private key:

```
{

cat > ca-config.json <<EOF
{
  "signing": {
    ...
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  ...
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}
```

Results:

...

### The Admin Client Certificate

Generate the `admin` client certificate and private key:

```
{

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  ...
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

}
```

Results:

...

```
...
worker-2.pem
```

### The Controller Manager Client Certificate

Generate the `kube-controller-manager` client certificate and private key:

```
{

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}
```

Results:

```
kube-controller-manager-key.pem
kube-controller-manager.pem
```

### The Kube Proxy Client Certificate

Generate the `kube-proxy` client certificate and private key:

```
{

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  ...
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

}
```

Results:

```
...
kube-proxy-key.pem
kube-proxy.pem
```

### The Scheduler Client Certificate

Generate the `kube-scheduler` client certificate and private key:

```
{

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}
```

Results:

```
kube-scheduler-key.pem
kube-scheduler.pem
```

### The Kubernetes API Server Certificate

The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.

Generate the Kubernetes API Server certificate and private key:

```
{

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  ...
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  ...
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}
```

Results:

```
...
kubernetes-key.pem
kubernetes.pem
```

## The Service Account Key Pair

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.

Generate the `service-account` certificate and private key:

```
{

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

}
```

Results:

```
service-account-key.pem
service-account.pem
```

## Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:

...

Copy the appropriate certificates and private keys to each controller instance:

```
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
```

> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
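The controller manager and scheduler CSRs differ only in the component name, so the JSON files can be produced in a loop. A sketch of the repeated pattern only (the lab writes each file out explicitly, and `cfssl gencert` must still be run per component):

```
# Emit a <component>-csr.json for each control-plane client whose
# identity follows the system:<component> convention.
for component in kube-controller-manager kube-scheduler; do
  cat > "${component}-csr.json" <<EOF
{
  "CN": "system:${component}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:${component}",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
done
```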
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
## Client Authentication Configs

In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.

### Kubernetes Public IP Address

...

Generate a kubeconfig file for the `kube-proxy` service:

```
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```

Results:

```
kube-proxy.kubeconfig
```

### The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the `kube-controller-manager` service:

```
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```

Results:

```
kube-controller-manager.kubeconfig
```

### The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the `kube-scheduler` service:

```
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```

Results:

```
kube-scheduler.kubeconfig
```

### The admin Kubernetes Configuration File

Generate a kubeconfig file for the `admin` user:

```
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```

Results:

```
admin.kubeconfig
```

## Distribute the Kubernetes Configuration Files

Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:

```
for instance in worker-0 worker-1 worker-2; do
  ...
done
```

Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:

```
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```
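The four kubeconfig recipes above are identical apart from the user name, certificate pair, and API server address, so the pattern can be wrapped in a helper function. A sketch only: the function name `gen_kubeconfig` is hypothetical, and running it requires `kubectl` plus the certificate files from the previous lab:

```
# Hypothetical helper wrapping the repeated four-step recipe above.
# Arguments: credential user name, certificate basename, API server URL.
gen_kubeconfig() {
  user=$1; cert=$2; server=$3; out="${2}.kubeconfig"
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server="${server}" \
    --kubeconfig="${out}"
  kubectl config set-credentials "${user}" \
    --client-certificate="${cert}.pem" \
    --client-key="${cert}-key.pem" \
    --embed-certs=true \
    --kubeconfig="${out}"
  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user="${user}" \
    --kubeconfig="${out}"
  kubectl config use-context default --kubeconfig="${out}"
}

# e.g. gen_kubeconfig system:kube-scheduler kube-scheduler https://127.0.0.1:6443
```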
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
```
gcloud compute ssh controller-0
```

### Running commands in parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.

## Bootstrapping an etcd Cluster Member

### Download and Install the etcd Binaries

```
wget -q --show-progress --https-only --timestamping \
  "https://github.com/coreos/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz"
```

Extract and install the `etcd` server and the `etcdctl` command line utility:

```
{
  tar -xvf etcd-v3.3.5-linux-amd64.tar.gz
  sudo mv etcd-v3.3.5-linux-amd64/etcd* /usr/local/bin/
}
```

### Configure the etcd Server

```
{
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:

...

Create the `etcd.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
...
ExecStart=/usr/local/bin/etcd \\
  ...
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  ...
```
|
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
|
||||||
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
|
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
|
||||||
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
|
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
|
||||||
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
|
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
|
||||||
--initial-cluster-token etcd-cluster-0 \\
|
--initial-cluster-token etcd-cluster-0 \\
|
||||||
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
|
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
|
||||||
|
@ -92,19 +94,11 @@ EOF
|
||||||
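The `--initial-cluster` flag above is simply a comma-joined list of `name=peer-URL` pairs, one per member. A minimal sketch of deriving it from the controller names and IPs used throughout this tutorial:

```
# Build the --initial-cluster value for the three controllers (10.240.0.10-12).
INITIAL_CLUSTER=""
i=0
for name in controller-0 controller-1 controller-2; do
  INITIAL_CLUSTER="${INITIAL_CLUSTER:+${INITIAL_CLUSTER},}${name}=https://10.240.0.1${i}:2380"
  i=$((i+1))
done
echo "${INITIAL_CLUSTER}"
```

Every member must be started with the same `--initial-cluster` value, and the name set via `ETCD_NAME` must match one of its entries.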
### Start the etcd Server
|
### Start the etcd Server
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo mv etcd.service /etc/systemd/system/
|
{
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl enable etcd
|
sudo systemctl enable etcd
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl start etcd
|
sudo systemctl start etcd
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
||||||
|
@ -114,7 +108,11 @@ sudo systemctl start etcd
|
||||||
List the etcd cluster members:
|
List the etcd cluster members:
|
||||||
|
|
||||||
```
|
```
|
||||||
ETCDCTL_API=3 etcdctl member list
|
sudo ETCDCTL_API=3 etcdctl member list \
|
||||||
|
--endpoints=https://127.0.0.1:2379 \
|
||||||
|
--cacert=/etc/etcd/ca.pem \
|
||||||
|
--cert=/etc/etcd/kubernetes.pem \
|
||||||
|
--key=/etc/etcd/kubernetes-key.pem
|
||||||
```
|
```
|
||||||
|
|
||||||
> output
|
> output
|
||||||
|
|
|
@ -10,38 +10,49 @@ The commands in this lab must be run on each controller instance: `controller-0`
|
||||||
gcloud compute ssh controller-0
|
gcloud compute ssh controller-0
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Running commands in parallel with tmux
|
||||||
|
|
||||||
|
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
|
||||||
|
|
||||||
## Provision the Kubernetes Control Plane
|
## Provision the Kubernetes Control Plane
|
||||||
|
|
||||||
|
Create the Kubernetes configuration directory:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo mkdir -p /etc/kubernetes/config
|
||||||
|
```
|
||||||
|
|
||||||
### Download and Install the Kubernetes Controller Binaries
|
### Download and Install the Kubernetes Controller Binaries
|
||||||
|
|
||||||
Download the official Kubernetes release binaries:
|
Download the official Kubernetes release binaries:
|
||||||
|
|
||||||
```
|
```
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-apiserver" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-apiserver" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-controller-manager" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-scheduler" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-scheduler" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl"
|
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl"
|
||||||
```
|
```
|
||||||
|
|
||||||
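The four download URLs differ only in the binary name. As a sketch (the `K8S_VERSION` variable is my addition, not part of the tutorial), they can be derived from a single version string:

```
# Derive the controller binary URLs from one version variable.
K8S_VERSION="v1.10.2"
urls=$(for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
  echo "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/${bin}"
done)
echo "$urls"
```

Bumping the release later then means editing a single line.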
Install the Kubernetes binaries:
|
Install the Kubernetes binaries:
|
||||||
|
|
||||||
```
|
```
|
||||||
|
{
|
||||||
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
|
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
|
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Configure the Kubernetes API Server
|
### Configure the Kubernetes API Server
|
||||||
|
|
||||||
```
|
```
|
||||||
|
{
|
||||||
sudo mkdir -p /var/lib/kubernetes/
|
sudo mkdir -p /var/lib/kubernetes/
|
||||||
```
|
|
||||||
|
|
||||||
```
|
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
|
||||||
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
|
service-account-key.pem service-account.pem \
|
||||||
|
encryption-config.yaml /var/lib/kubernetes/
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
|
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
|
||||||
|
@ -54,14 +65,13 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
|
||||||
Create the `kube-apiserver.service` systemd unit file:
|
Create the `kube-apiserver.service` systemd unit file:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat > kube-apiserver.service <<EOF
|
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes API Server
|
Description=Kubernetes API Server
|
||||||
Documentation=https://github.com/kubernetes/kubernetes
|
Documentation=https://github.com/kubernetes/kubernetes
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
ExecStart=/usr/local/bin/kube-apiserver \\
|
ExecStart=/usr/local/bin/kube-apiserver \\
|
||||||
--admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
|
|
||||||
--advertise-address=${INTERNAL_IP} \\
|
--advertise-address=${INTERNAL_IP} \\
|
||||||
--allow-privileged=true \\
|
--allow-privileged=true \\
|
||||||
--apiserver-count=3 \\
|
--apiserver-count=3 \\
|
||||||
|
@ -72,6 +82,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
|
||||||
--authorization-mode=Node,RBAC \\
|
--authorization-mode=Node,RBAC \\
|
||||||
--bind-address=0.0.0.0 \\
|
--bind-address=0.0.0.0 \\
|
||||||
--client-ca-file=/var/lib/kubernetes/ca.pem \\
|
--client-ca-file=/var/lib/kubernetes/ca.pem \\
|
||||||
|
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
|
||||||
--enable-swagger-ui=true \\
|
--enable-swagger-ui=true \\
|
||||||
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
|
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
|
||||||
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
|
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
|
||||||
|
@ -79,16 +90,14 @@ ExecStart=/usr/local/bin/kube-apiserver \\
|
||||||
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
|
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
|
||||||
--event-ttl=1h \\
|
--event-ttl=1h \\
|
||||||
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
|
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
|
||||||
--insecure-bind-address=127.0.0.1 \\
|
|
||||||
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
|
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
|
||||||
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
|
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
|
||||||
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
|
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
|
||||||
--kubelet-https=true \\
|
--kubelet-https=true \\
|
||||||
--runtime-config=api/all \\
|
--runtime-config=api/all \\
|
||||||
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
|
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
|
||||||
--service-cluster-ip-range=10.32.0.0/24 \\
|
--service-cluster-ip-range=10.32.0.0/24 \\
|
||||||
--service-node-port-range=30000-32767 \\
|
--service-node-port-range=30000-32767 \\
|
||||||
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
|
|
||||||
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
|
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
|
||||||
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
|
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
|
||||||
--v=2
|
--v=2
|
||||||
|
@ -102,10 +111,16 @@ EOF
|
||||||
|
|
||||||
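Note the split introduced by this change: the API server verifies service-account tokens against `service-account.pem`, while the controller manager signs them with `service-account-key.pem`, so the two files must form a matching pair. As a hedged sketch (it assumes `openssl` is installed and uses a throwaway pair; substitute the tutorial's `service-account*.pem` files in practice), the pairing can be checked by comparing public keys:

```
# Generate a throwaway RSA key and self-signed cert to demonstrate the check.
openssl genrsa -out sa-demo-key.pem 2048 2>/dev/null
openssl req -new -x509 -key sa-demo-key.pem -out sa-demo.pem -days 1 \
  -subj "/CN=service-accounts" 2>/dev/null
# A key and certificate match when their public keys are identical.
key_pub=$(openssl rsa -in sa-demo-key.pem -pubout 2>/dev/null)
crt_pub=$(openssl x509 -in sa-demo.pem -pubkey -noout)
[ "$key_pub" = "$crt_pub" ] && echo "key matches certificate"
```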
### Configure the Kubernetes Controller Manager
|
### Configure the Kubernetes Controller Manager
|
||||||
|
|
||||||
|
Move the `kube-controller-manager` kubeconfig into place:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
||||||
|
```
|
||||||
|
|
||||||
Create the `kube-controller-manager.service` systemd unit file:
|
Create the `kube-controller-manager.service` systemd unit file:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat > kube-controller-manager.service <<EOF
|
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Controller Manager
|
Description=Kubernetes Controller Manager
|
||||||
Documentation=https://github.com/kubernetes/kubernetes
|
Documentation=https://github.com/kubernetes/kubernetes
|
||||||
|
@ -117,11 +132,12 @@ ExecStart=/usr/local/bin/kube-controller-manager \\
|
||||||
--cluster-name=kubernetes \\
|
--cluster-name=kubernetes \\
|
||||||
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
|
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
|
||||||
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
|
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
|
||||||
|
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
|
||||||
--leader-elect=true \\
|
--leader-elect=true \\
|
||||||
--master=http://127.0.0.1:8080 \\
|
|
||||||
--root-ca-file=/var/lib/kubernetes/ca.pem \\
|
--root-ca-file=/var/lib/kubernetes/ca.pem \\
|
||||||
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
|
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
|
||||||
--service-cluster-ip-range=10.32.0.0/24 \\
|
--service-cluster-ip-range=10.32.0.0/24 \\
|
||||||
|
--use-service-account-credentials=true \\
|
||||||
--v=2
|
--v=2
|
||||||
Restart=on-failure
|
Restart=on-failure
|
||||||
RestartSec=5
|
RestartSec=5
|
||||||
|
@ -133,18 +149,36 @@ EOF
|
||||||
|
|
||||||
### Configure the Kubernetes Scheduler
|
### Configure the Kubernetes Scheduler
|
||||||
|
|
||||||
|
Move the `kube-scheduler` kubeconfig into place:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
||||||
|
```
|
||||||
|
|
||||||
|
Create the `kube-scheduler.yaml` configuration file:
|
||||||
|
|
||||||
|
```
|
||||||
|
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
|
||||||
|
apiVersion: componentconfig/v1alpha1
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
clientConnection:
|
||||||
|
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
|
||||||
|
leaderElection:
|
||||||
|
leaderElect: true
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
Create the `kube-scheduler.service` systemd unit file:
|
Create the `kube-scheduler.service` systemd unit file:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat > kube-scheduler.service <<EOF
|
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Scheduler
|
Description=Kubernetes Scheduler
|
||||||
Documentation=https://github.com/kubernetes/kubernetes
|
Documentation=https://github.com/kubernetes/kubernetes
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
ExecStart=/usr/local/bin/kube-scheduler \\
|
ExecStart=/usr/local/bin/kube-scheduler \\
|
||||||
--leader-elect=true \\
|
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
|
||||||
--master=http://127.0.0.1:8080 \\
|
|
||||||
--v=2
|
--v=2
|
||||||
Restart=on-failure
|
Restart=on-failure
|
||||||
RestartSec=5
|
RestartSec=5
|
||||||
|
@ -157,27 +191,62 @@ EOF
|
||||||
### Start the Controller Services
|
### Start the Controller Services
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
|
{
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
|
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
|
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
|
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
|
||||||
|
|
||||||
|
### Enable HTTP Health Checks
|
||||||
|
|
||||||
|
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer supports only HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used directly. As a workaround, the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
|
||||||
|
|
||||||
|
> The `/healthz` API server endpoint does not require authentication by default.
|
||||||
|
|
||||||
|
Install a basic web server to handle HTTP health checks:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt-get install -y nginx
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
cat > kubernetes.default.svc.cluster.local <<EOF
|
||||||
|
server {
|
||||||
|
listen 80;
|
||||||
|
server_name kubernetes.default.svc.cluster.local;
|
||||||
|
|
||||||
|
location /healthz {
|
||||||
|
proxy_pass https://127.0.0.1:6443/healthz;
|
||||||
|
proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
{
|
||||||
|
sudo mv kubernetes.default.svc.cluster.local \
|
||||||
|
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
|
||||||
|
|
||||||
|
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo systemctl restart nginx
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo systemctl enable nginx
|
||||||
|
```
|
||||||
|
|
||||||
### Verification
|
### Verification
|
||||||
|
|
||||||
```
|
```
|
||||||
kubectl get componentstatuses
|
kubectl get componentstatuses --kubeconfig admin.kubeconfig
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
```
|
||||||
|
@ -189,6 +258,23 @@ etcd-0 Healthy {"health": "true"}
|
||||||
etcd-1 Healthy {"health": "true"}
|
etcd-1 Healthy {"health": "true"}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Test the nginx HTTP health check proxy:
|
||||||
|
|
||||||
|
```
|
||||||
|
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
HTTP/1.1 200 OK
|
||||||
|
Server: nginx/1.14.0 (Ubuntu)
|
||||||
|
Date: Mon, 14 May 2018 13:45:39 GMT
|
||||||
|
Content-Type: text/plain; charset=utf-8
|
||||||
|
Content-Length: 2
|
||||||
|
Connection: keep-alive
|
||||||
|
|
||||||
|
ok
|
||||||
|
```
|
||||||
|
|
||||||
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
||||||
|
|
||||||
## RBAC for Kubelet Authorization
|
## RBAC for Kubelet Authorization
|
||||||
|
@ -204,7 +290,7 @@ gcloud compute ssh controller-0
|
||||||
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
|
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat <<EOF | kubectl apply -f -
|
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
||||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||||
kind: ClusterRole
|
kind: ClusterRole
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -232,7 +318,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
|
||||||
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
|
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat <<EOF | kubectl apply -f -
|
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
||||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||||
kind: ClusterRoleBinding
|
kind: ClusterRoleBinding
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -255,29 +341,39 @@ In this section you will provision an external load balancer to front the Kubern
|
||||||
|
|
||||||
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
|
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
|
||||||
|
|
||||||
|
|
||||||
|
### Provision a Network Load Balancer
|
||||||
|
|
||||||
Create the external load balancer network resources:
|
Create the external load balancer network resources:
|
||||||
|
|
||||||
```
|
```
|
||||||
gcloud compute target-pools create kubernetes-target-pool
|
{
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute target-pools add-instances kubernetes-target-pool \
|
|
||||||
--instances controller-0,controller-1,controller-2
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
||||||
--region $(gcloud config get-value compute/region) \
|
--region $(gcloud config get-value compute/region) \
|
||||||
--format 'value(name)')
|
--format 'value(address)')
|
||||||
```
|
|
||||||
|
gcloud compute http-health-checks create kubernetes \
|
||||||
|
--description "Kubernetes Health Check" \
|
||||||
|
--host "kubernetes.default.svc.cluster.local" \
|
||||||
|
--request-path "/healthz"
|
||||||
|
|
||||||
|
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
|
||||||
|
--network kubernetes-the-hard-way \
|
||||||
|
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
|
||||||
|
--allow tcp
|
||||||
|
|
||||||
|
gcloud compute target-pools create kubernetes-target-pool \
|
||||||
|
--http-health-check kubernetes
|
||||||
|
|
||||||
|
gcloud compute target-pools add-instances kubernetes-target-pool \
|
||||||
|
--instances controller-0,controller-1,controller-2
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
|
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
|
||||||
--address ${KUBERNETES_PUBLIC_ADDRESS} \
|
--address ${KUBERNETES_PUBLIC_ADDRESS} \
|
||||||
--ports 6443 \
|
--ports 6443 \
|
||||||
--region $(gcloud config get-value compute/region) \
|
--region $(gcloud config get-value compute/region) \
|
||||||
--target-pool kubernetes-target-pool
|
--target-pool kubernetes-target-pool
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
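The `--source-ranges` above admit Google's documented health-check origins. As an illustrative aside (the helper below is my own sketch, not a gcloud feature), CIDR membership can be checked in plain shell:

```
# ip_in_cidr IP CIDR: succeed when the IPv4 address falls inside the CIDR.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
ip_in_cidr() {
  ip=$(ip_to_int "$1"); net=$(ip_to_int "${2%/*}"); bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
ip_in_cidr 35.191.10.7 35.191.0.0/16 && echo "health checker in range"
```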
### Verification
|
### Verification
|
||||||
|
@ -301,12 +397,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
|
||||||
```
|
```
|
||||||
{
|
{
|
||||||
"major": "1",
|
"major": "1",
|
||||||
"minor": "9",
|
"minor": "10",
|
||||||
"gitVersion": "v1.9.0",
|
"gitVersion": "v1.10.2",
|
||||||
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
|
"gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
|
||||||
"gitTreeState": "clean",
|
"gitTreeState": "clean",
|
||||||
"buildDate": "2017-12-15T20:55:30Z",
|
"buildDate": "2018-04-27T09:10:24Z",
|
||||||
"goVersion": "go1.9.2",
|
"goVersion": "go1.9.3",
|
||||||
"compiler": "gc",
|
"compiler": "gc",
|
||||||
"platform": "linux/amd64"
|
"platform": "linux/amd64"
|
||||||
}
|
}
|
||||||
|
|
|
@ -1,6 +1,6 @@
|
||||||
# Bootstrapping the Kubernetes Worker Nodes
|
# Bootstrapping the Kubernetes Worker Nodes
|
||||||
|
|
||||||
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/containerd/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
|
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
|
||||||
|
|
||||||
## Prerequisites
|
## Prerequisites
|
||||||
|
|
||||||
|
@ -10,12 +10,19 @@ The commands in this lab must be run on each worker instance: `worker-0`, `worke
|
||||||
gcloud compute ssh worker-0
|
gcloud compute ssh worker-0
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Running commands in parallel with tmux
|
||||||
|
|
||||||
|
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
|
||||||
|
|
||||||
## Provisioning a Kubernetes Worker Node
|
## Provisioning a Kubernetes Worker Node
|
||||||
|
|
||||||
Install the OS dependencies:
|
Install the OS dependencies:
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo apt-get -y install socat
|
{
|
||||||
|
sudo apt-get update
|
||||||
|
sudo apt-get -y install socat conntrack ipset
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
> The socat binary enables support for the `kubectl port-forward` command.
|
> The socat binary enables support for the `kubectl port-forward` command.
|
||||||
|
@ -24,11 +31,14 @@ sudo apt-get -y install socat
|
||||||
|
|
||||||
```
|
```
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
|
https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
|
||||||
|
https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
|
||||||
|
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
|
||||||
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
|
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
|
||||||
https://github.com/containerd/cri-containerd/releases/download/v1.0.0-beta.1/cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz \
|
https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl \
|
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy \
|
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet
|
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet
|
||||||
```
|
```
|
||||||
|
|
||||||
Create the installation directories:
|
Create the installation directories:
|
||||||
|
@ -46,19 +56,14 @@ sudo mkdir -p \
|
||||||
Install the worker binaries:
|
Install the worker binaries:
|
||||||
|
|
||||||
```
|
```
|
||||||
|
{
|
||||||
|
chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
|
||||||
|
sudo mv runc.amd64 runc
|
||||||
|
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
|
||||||
|
sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
|
||||||
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
|
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
|
||||||
```
|
sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /
|
||||||
|
}
|
||||||
```
|
|
||||||
sudo tar -xvf cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz -C /
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
chmod +x kubectl kube-proxy kubelet
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Configure CNI Networking
|
### Configure CNI Networking
|
||||||
|
@ -73,7 +78,7 @@ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
|
||||||
Create the `bridge` network configuration file:
|
Create the `bridge` network configuration file:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat > 10-bridge.conf <<EOF
|
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
|
||||||
{
|
{
|
||||||
"cniVersion": "0.3.1",
|
"cniVersion": "0.3.1",
|
||||||
"name": "bridge",
|
"name": "bridge",
|
||||||
|
@ -95,7 +100,7 @@ EOF
|
||||||
Create the `loopback` network configuration file:
|
Create the `loopback` network configuration file:
|
||||||
|
|
||||||
```
|
```
|
||||||
cat > 99-loopback.conf <<EOF
|
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
|
||||||
{
|
{
|
||||||
"cniVersion": "0.3.1",
|
"cniVersion": "0.3.1",
|
||||||
"type": "loopback"
|
"type": "loopback"
|
||||||
|
@ -103,55 +108,112 @@ cat > 99-loopback.conf <<EOF
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
Move the network configuration files to the CNI configuration directory:
|
### Configure containerd
|
||||||
|
|
||||||
|
Create the `containerd` configuration file:
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
|
sudo mkdir -p /etc/containerd/
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
cat << EOF | sudo tee /etc/containerd/config.toml
|
||||||
|
[plugins]
|
||||||
|
[plugins.cri.containerd]
|
||||||
|
snapshotter = "overlayfs"
|
||||||
|
[plugins.cri.containerd.default_runtime]
|
||||||
|
runtime_type = "io.containerd.runtime.v1.linux"
|
||||||
|
runtime_engine = "/usr/local/bin/runc"
|
||||||
|
runtime_root = ""
|
||||||
|
[plugins.cri.containerd.untrusted_workload_runtime]
|
||||||
|
runtime_type = "io.containerd.runtime.v1.linux"
|
||||||
|
runtime_engine = "/usr/local/bin/runsc"
|
||||||
|
runtime_root = "/run/containerd/runsc"
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
> Untrusted workloads will be run using the gVisor (runsc) runtime.
|
||||||
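For context (this is how containerd 1.1's CRI plugin selects the untrusted runtime, stated here as background rather than a step in this lab): a pod opts into `runsc` via the `io.kubernetes.cri.untrusted-workload` annotation. A sketch of such a manifest, with illustrative names:

```
apiVersion: v1
kind: Pod
metadata:
  name: untrusted          # illustrative name
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver      # illustrative container
      image: nginx
```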
|
|
||||||
|
Create the `containerd.service` systemd unit file:
|
||||||
|
|
||||||
|
```
|
||||||
|
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
|
||||||
|
[Unit]
|
||||||
|
Description=containerd container runtime
|
||||||
|
Documentation=https://containerd.io
|
||||||
|
After=network.target
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
ExecStartPre=/sbin/modprobe overlay
|
||||||
|
ExecStart=/bin/containerd
|
||||||
|
Restart=always
|
||||||
|
RestartSec=5
|
||||||
|
Delegate=yes
|
||||||
|
KillMode=process
|
||||||
|
OOMScoreAdjust=-999
|
||||||
|
LimitNOFILE=1048576
|
||||||
|
LimitNPROC=infinity
|
||||||
|
LimitCORE=infinity
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
### Configure the Kubelet

```
{
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
}
```

Create the `kubelet-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
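Because the heredoc delimiter above is unquoted, `${POD_CIDR}` and `${HOSTNAME}` are expanded at write time, so each worker ends up with a file containing its own concrete values rather than the placeholders. A local sketch with illustrative values (the variable names with a `_DEMO` suffix are assumptions for this demo, not part of the tutorial):

```shell
# Unquoted heredoc delimiters expand shell variables, so the YAML written
# to disk holds each worker's concrete values, not the ${...} placeholders.
HOSTNAME_DEMO="worker-0"        # assumed value for illustration
POD_CIDR_DEMO="10.200.0.0/24"   # assumed value for illustration
kubelet_yaml=$(mktemp)

cat <<EOF > "${kubelet_yaml}"
podCIDR: "${POD_CIDR_DEMO}"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME_DEMO}.pem"
EOF

cat "${kubelet_yaml}"
```

To write a file that keeps literal `${...}` strings instead, quote the delimiter (`cat <<'EOF'`).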
Create the `kubelet.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
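The doubled backslashes in the `ExecStart` lines are heredoc escapes: inside an unquoted heredoc the shell treats `\` as an escape character, so the literal `\\` in the tutorial text becomes a single `\` (the line-continuation systemd expects) in the unit file written to disk. A quick local check:

```shell
# Inside an unquoted heredoc, "\\" collapses to a single "\" in the output,
# leaving a real line-continuation backslash in the written unit file.
unit=$(mktemp)

cat <<EOF > "${unit}"
ExecStart=/usr/local/bin/kubelet \\
  --v=2
EOF

cat "${unit}"
```

If the `\\` were written as a single `\`, the shell would instead join the two heredoc lines into one before writing the file.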
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
Create the `kube-proxy.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
### Start the Worker Services

```
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
}
```

> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
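The same bootstrap commands have to run on every worker; one way to avoid repeating yourself is a loop over the instance names. A dry-run sketch that prints what would run on each node (swap `echo` for `gcloud compute ssh "${instance}" --command` to execute for real against your project):

```shell
# Dry-run: print the command that would be executed on each worker.
# Replace `echo` with a real `gcloud compute ssh ... --command` invocation
# when running against an actual project.
for instance in worker-0 worker-1 worker-2; do
  echo "${instance}: sudo systemctl enable containerd kubelet kube-proxy"
done
```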
## Verification

> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.

List the registered Kubernetes nodes:

```
gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```

> output

```
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    20s       v1.10.2
worker-1   Ready     <none>    20s       v1.10.2
worker-2   Ready     <none>    20s       v1.10.2
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Generate a kubeconfig file suitable for authenticating as the `admin` user:

```
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
```
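The four `kubectl config` calls above assemble one file (by default `~/.kube/config`) with three top-level sections: a cluster entry, a user entry, and a context tying the two together. As a rough sketch of the structure those commands produce, written directly as YAML to a scratch file (the IP address is a placeholder, and `kubectl` would additionally embed the base64-encoded CA data rather than a file path):

```shell
# Hand-written equivalent of what the kubectl config commands assemble.
# The address below is an illustrative placeholder, not a tutorial value.
KUBERNETES_PUBLIC_ADDRESS_DEMO="203.0.113.10"
cfg=$(mktemp)

cat <<EOF > "${cfg}"
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    server: https://${KUBERNETES_PUBLIC_ADDRESS_DEMO}:6443
users:
- name: admin
  user:
    client-certificate: admin.pem
    client-key: admin-key.pem
contexts:
- name: kubernetes-the-hard-way
  context:
    cluster: kubernetes-the-hard-way
    user: admin
current-context: kubernetes-the-hard-way
EOF

grep 'current-context' "${cfg}"
```

`kubectl config use-context` only flips the `current-context` field; the cluster and user entries stay in place for other contexts to reuse.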
## Verification
```
kubectl get componentstatuses
```

> output

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```

List the nodes in the remote Kubernetes cluster:
```
kubectl get nodes
```

> output

```
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    1m        v1.10.2
worker-1   Ready     <none>    1m        v1.10.2
worker-2   Ready     <none>    1m        v1.10.2
```

Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
> output

```
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created
```

List the pods created by the `kube-dns` deployment:
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```

> output
```
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 7b 8e 59 78 0f 59 09 |:v1:key1:{.Yx.Y.|
00000050 e2 6a ce cd f4 b6 4e ec bc 91 aa 87 06 29 39 8d |.j....N......)9.|
00000060 70 e8 5d c4 b1 66 69 49 60 8f c0 cc 55 d3 69 2b |p.]..fiI`...U.i+|
00000070 49 bb 0e 7b 90 10 b0 85 5b b1 e2 c6 33 b6 b7 31 |I..{....[...3..1|
00000080 25 99 a1 60 8f 40 a9 e5 55 8c 0f 26 ae 76 dc 5b |%..`.@..U..&.v.[|
00000090 78 35 f5 3e c1 1e bc 21 bb 30 e2 0c e3 80 1e 33 |x5.>...!.0.....3|
000000a0 90 79 46 6d 23 d8 f9 a2 d7 5d ed 4d 82 2e 9a 5e |.yFm#....].M...^|
000000b0 5d b6 3c 34 37 51 4b 83 de 99 1a ea 0f 2f 7c 9b |].<47QK....../|.|
000000c0 46 15 93 aa ba 72 ba b9 bd e1 a3 c0 45 90 b1 de |F....r......E...|
000000d0 c4 2e c8 d0 94 ec 25 69 7b af 08 34 93 12 3d 1c |......%i{..4..=.|
000000e0 fd 23 9b ba e8 d1 25 56 f4 0a |.#....%V..|
000000ea
```
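Note that the stored value begins with the readable prefix `k8s:enc:aescbc:v1:key1:`, which identifies the encryption provider (`aescbc`) and key name (`key1`) before the ciphertext; only the payload after the prefix is encrypted. You can reproduce the readable part of the dump locally:

```shell
# The provider/key prefix is stored in the clear ahead of the ciphertext;
# piping it through hexdump shows the same bytes seen in the etcd dump.
printf 'k8s:enc:aescbc:v1:key1:' | hexdump -C
```

If the secret were stored unencrypted, the dump would instead show the secret's plaintext keys and values directly after the registry path.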
```
kubectl get pods -l run=nginx
```

> output

```
NAME                     READY     STATUS    RESTARTS   AGE
nginx-65899c769f-xkfcn   1/1       Running   0          15s
```

### Port Forwarding
```
curl --head http://127.0.0.1:8080
```

> output

```
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 May 2018 13:59:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
```
```
kubectl logs $POD_NAME
```

> output

```
127.0.0.1 - - [14/May/2018:13:59:21 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
```

### Exec
```
kubectl exec -ti $POD_NAME -- nginx -v
```

> output

```
nginx version: nginx/1.13.12
```

## Services
```
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```

> output

```
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 May 2018 14:01:30 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
```
## Untrusted Workloads

This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).

Create the `untrusted` pod:

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
```
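The only thing that routes this pod to gVisor is the `io.kubernetes.cri.untrusted-workload` annotation, which the containerd CRI plugin matches against the `untrusted_workload_runtime` block configured earlier. A sketch that renders the manifest to a scratch file and checks for the annotation before applying it (no cluster required; pipe the file to `kubectl apply -f -` when running for real):

```shell
# Render the manifest locally and verify the runtime-selection annotation
# is present before handing it to kubectl.
manifest=$(mktemp)

cat <<EOF > "${manifest}"
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF

grep 'untrusted-workload' "${manifest}"
```

Without the annotation, the same pod would be started under the default runtime (`runc`) instead.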
### Verification

In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.

Verify the `untrusted` pod is running:

```
kubectl get pods -o wide
```

```
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
busybox-68654f944b-djjjb   1/1       Running   0          5m        10.200.0.2   worker-0
nginx-65899c769f-xkfcn     1/1       Running   0          4m        10.200.1.2   worker-1
untrusted                  1/1       Running   0          10s       10.200.0.3   worker-0
```
Get the node name where the `untrusted` pod is running:

```
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```

SSH into the worker node:

```
gcloud compute ssh ${INSTANCE_NAME}
```

List the containers running under gVisor:

```
sudo runsc --root /run/containerd/runsc/k8s.io list
```
```
I0514 14:03:56.108368 14988 x:0] ***************************
I0514 14:03:56.108548 14988 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0514 14:03:56.108730 14988 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 14:03:56.108787 14988 x:0] PID: 14988
I0514 14:03:56.108838 14988 x:0] UID: 0, GID: 0
I0514 14:03:56.108877 14988 x:0] Configuration:
I0514 14:03:56.108912 14988 x:0] RootDir: /run/containerd/runsc/k8s.io
I0514 14:03:56.109000 14988 x:0] Platform: ptrace
I0514 14:03:56.109080 14988 x:0] FileAccess: proxy, overlay: false
I0514 14:03:56.109159 14988 x:0] Network: sandbox, logging: false
I0514 14:03:56.109238 14988 x:0] Strace: false, max size: 1024, syscalls: []
I0514 14:03:56.109315 14988 x:0] ***************************
ID                                                                 PID     STATUS    BUNDLE                                                                                                                    CREATED                          OWNER
3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb   14766   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb   2018-05-14T14:02:34.302378996Z
7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981   14716   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981   2018-05-14T14:02:32.159552044Z
I0514 14:03:56.111287 14988 x:0] Exiting with status: 0
```
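The interesting rows in that output are the table lines between the log banners. A small `awk` filter can pull out just the sandbox IDs; here it runs against a saved capture of the table shown above, so it works without `runsc` installed:

```shell
# Extract container IDs from a captured `runsc list` table: keep rows whose
# third (STATUS) column reads "running" and print the first (ID) field.
sample=$(mktemp)
cat <<'EOF' > "${sample}"
ID                                                                 PID     STATUS    BUNDLE
3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb   14766   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb
7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981   14716   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981
EOF

awk '$3 == "running" { print $1 }' "${sample}"
```

On a live node the same filter could be applied directly: `sudo runsc --root /run/containerd/runsc/k8s.io list | awk '$3 == "running" { print $1 }'`.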
Get the ID of the `untrusted` pod:

```
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)
```

Get the ID of the `webserver` container running in the `untrusted` pod:

```
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)
```

Use the gVisor `runsc` command to display the processes running inside the `webserver` container:

```
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```

> output
```
I0514 14:05:16.499237 15096 x:0] ***************************
I0514 14:05:16.499542 15096 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb]
I0514 14:05:16.499597 15096 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 14:05:16.499644 15096 x:0] PID: 15096
I0514 14:05:16.499695 15096 x:0] UID: 0, GID: 0
I0514 14:05:16.499734 15096 x:0] Configuration:
I0514 14:05:16.499769 15096 x:0] RootDir: /run/containerd/runsc/k8s.io
I0514 14:05:16.499880 15096 x:0] Platform: ptrace
I0514 14:05:16.499962 15096 x:0] FileAccess: proxy, overlay: false
I0514 14:05:16.500042 15096 x:0] Network: sandbox, logging: false
I0514 14:05:16.500120 15096 x:0] Strace: false, max size: 1024, syscalls: []
I0514 14:05:16.500197 15096 x:0] ***************************
UID       PID       PPID      C         STIME     TIME      CMD
0         1         0         0         14:02     40ms      app
I0514 14:05:16.501354 15096 x:0] Exiting with status: 0
```
Next: [Cleaning Up](14-cleanup.md)
Delete the external load balancer network resources:

```
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)

  gcloud -q compute target-pools delete kubernetes-target-pool

  gcloud -q compute http-health-checks delete kubernetes

  gcloud -q compute addresses delete kubernetes-the-hard-way
}
```

Delete the `kubernetes-the-hard-way` firewall rules:
```
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check
```

Delete the `kubernetes-the-hard-way` network VPC:

```
{
  gcloud -q compute routes delete \
    kubernetes-route-10-200-0-0-24 \
    kubernetes-route-10-200-1-0-24 \
    kubernetes-route-10-200-2-0-24

  gcloud -q compute networks subnets delete kubernetes

  gcloud -q compute networks delete kubernetes-the-hard-way
}
```
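The deletions are ordered by dependency: forwarding rules must go before the target pool they reference, and routes and subnets must go before the VPC that contains them, or `gcloud` refuses the delete. A dry-run sketch of the final teardown ordering (swap `echo` for `gcloud -q compute` to execute):

```shell
# Dependency-ordered teardown, dry-run: delete children before the parent
# network so each gcloud call can succeed.
for resource in \
  "routes delete kubernetes-route-10-200-0-0-24" \
  "networks subnets delete kubernetes" \
  "networks delete kubernetes-the-hard-way"; do
  echo "gcloud -q compute ${resource}"
done
```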