Update to Kubernetes v1.14.4 with necessary fixes
parent bf2850974e
commit 50163592a9
@@ -14,12 +14,15 @@ The target audience for this tutorial is someone planning to support a productio

Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.12.0
-* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.0-rc.0
-* [gVisor](https://github.com/google/gvisor) 50c283b9f56bb7200938d9e207355f05f79f0d17
-* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
-* [etcd](https://github.com/coreos/etcd) v3.3.9
-* [CoreDNS](https://github.com/coredns/coredns) v1.2.2
+* [Kubernetes](https://github.com/kubernetes/kubernetes) v1.14.4
+* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.7
+* [gVisor](https://github.com/google/gvisor) release-20190529.1
+* [CNI Container Networking](https://github.com/containernetworking/cni) 0.7.1
+* [etcd](https://github.com/coreos/etcd) v3.3.13
+* [CoreDNS](https://github.com/coredns/coredns) v1.5.2
+* [cri-tools](https://github.com/kubernetes-sigs/cri-tools) v1.15.0
+* [runc](https://github.com/opencontainers/runc) v1.0.0-rc8
+* [CNI plugins](https://github.com/containernetworking/plugins) v0.8.1

## Labs
@@ -16,29 +16,38 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in

Verify the Google Cloud SDK version is 218.0.0 or higher:

-```
+```sh
gcloud version
```

> output

```
Google Cloud SDK 241.0.0
bq 2.0.43
core 2019.04.02
gsutil 4.38
```

### Set a Default Compute Region and Zone

This tutorial assumes a default compute region and zone have been configured.

If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this:

-```
+```sh
gcloud init
```

Otherwise set a default compute region:

-```
+```sh
gcloud config set compute/region us-west1
```

Set a default compute zone:

-```
+```sh
gcloud config set compute/zone us-west1-c
```
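
As an optional sanity check (an editor's addition, not part of the upstream steps), confirm the values that were just configured; `compute/region` and `compute/zone` are standard `gcloud` config properties:

```sh
# Print the configured defaults; both should match the values set above
gcloud config get-value compute/region
gcloud config get-value compute/zone
```
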
@@ -11,42 +11,42 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/

### OS X

-```
+```sh
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
```

-```
+```sh
chmod +x cfssl cfssljson
```

-```
+```sh
sudo mv cfssl cfssljson /usr/local/bin/
```

Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:

-```
+```sh
brew install cfssl
```

### Linux

-```
+```sh
wget -q --show-progress --https-only --timestamping \
  https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
```

-```
+```sh
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
```

-```
+```sh
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```

-```
+```sh
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```
@@ -54,64 +54,65 @@ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Verify `cfssl` version 1.2.0 or higher is installed:

-```
+```sh
cfssl version
```

> output

```
-Version: 1.2.0
+Version: 1.3.4
Revision: dev
-Runtime: go1.6
+Runtime: go1.12.7
```

> The cfssljson command line utility does not provide a way to print its version.

## Install kubectl

-The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
+The `kubectl` command line utility is used to interact with the Kubernetes API Server.
+Download and install the latest stable `kubectl` from the official release binaries:

### OS X

-```
-curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
+```sh
+curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
```

-```
+```sh
chmod +x kubectl
```

-```
+```sh
sudo mv kubectl /usr/local/bin/
```

### Linux

-```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
+```sh
+curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
```

-```
+```sh
chmod +x kubectl
```

-```
+```sh
sudo mv kubectl /usr/local/bin/
```

### Verification

-Verify `kubectl` version 1.12.0 or higher is installed:
+Verify `kubectl` version 1.14.0 or higher is installed:

-```
+```sh
kubectl version --client
```

> output

```
-Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
+Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
```

Next: [Provisioning Compute Resources](03-compute-resources.md)
@@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com

Create the `kubernetes-the-hard-way` custom VPC network:

-```
+```sh
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
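
To confirm the network exists before moving on (an optional check, not in the upstream text), list it with the standard `gcloud` command:

```sh
# One row named kubernetes-the-hard-way is expected
gcloud compute networks list --filter="name=kubernetes-the-hard-way"
```
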
@@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets)

Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:

-```
+```sh
gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

@@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \

Create a firewall rule that allows internal communication across all protocols:

-```
+```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \

@@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \

Create a firewall rule that allows external SSH, ICMP, and HTTPS:

-```
+```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \

@@ -56,7 +56,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \

List the firewall rules in the `kubernetes-the-hard-way` VPC network:

-```
+```sh
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
@@ -72,14 +72,14 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000

Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:

-```
+```sh
gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)
```

Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:

-```
+```sh
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```

@@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http

Create three compute instances which will host the Kubernetes control plane:

-```
+```sh
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \

@@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste

Create three compute instances which will host the Kubernetes worker nodes:

-```
+```sh
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \

@@ -143,7 +143,7 @@ done

List the compute instances in your default compute zone:

-```
+```sh
gcloud compute instances list
```

@@ -165,7 +165,7 @@ SSH will be used to configure the controller and worker instances. When connecti

Test SSH access to the `controller-0` compute instance:

-```
+```sh
gcloud compute ssh controller-0
```

@@ -217,12 +217,12 @@ Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX

Type `exit` at the prompt to exit the `controller-0` compute instance:

-```
+```sh
$USER@controller-0:~$ exit
```

> output

-```
+```sh
logout
Connection to XX.XXX.XXX.XXX closed
```
@@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g

Generate the CA configuration file, certificate, and private key:

-```
+```sh
{

cat > ca-config.json <<EOF

@@ -54,7 +54,11 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Results:

```
$ ls -p | grep -v /
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
```
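
If you want to double-check what was just generated (an optional step added by the editor, assuming `openssl` is available on the workstation), inspect the CA certificate; the subject and validity window should line up with what you put in `ca-csr.json`:

```sh
# Show the CA certificate subject and validity period
openssl x509 -in ca.pem -noout -subject -dates
```
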
@@ -66,7 +70,7 @@ In this section you will generate client and server certificates for each Kubern

Generate the `admin` client certificate and private key:

-```
+```sh
{

cat > admin-csr.json <<EOF

@@ -101,17 +105,19 @@ cfssl gencert \

Results:

```
admin-csr.json
admin-key.pem
admin.csr
admin.pem
```

### The Kubelet Client Certificates

-Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
+Kubernetes uses a **special-purpose authorization mode** called [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/) that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.

Generate a certificate and private key for each Kubernetes worker node:

-```
+```sh
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{

@@ -151,11 +157,17 @@ done

Results:

```
worker-0-csr.json
worker-0-key.pem
worker-0.csr
worker-0.pem
worker-1-csr.json
worker-1-key.pem
worker-1.csr
worker-1.pem
worker-2-csr.json
worker-2-key.pem
worker-2.csr
worker-2.pem
```

@@ -163,7 +175,7 @@ worker-2.pem

Generate the `kube-controller-manager` client certificate and private key:

-```
+```sh
{

cat > kube-controller-manager-csr.json <<EOF

@@ -198,7 +210,9 @@ cfssl gencert \

Results:

```
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.pem
```

@@ -207,9 +221,8 @@ kube-controller-manager.pem

Generate the `kube-proxy` client certificate and private key:

-```
+```sh
{

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",

@@ -242,7 +255,9 @@ cfssl gencert \

Results:

```
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
kube-proxy.pem
```

@@ -250,7 +265,7 @@ kube-proxy.pem

Generate the `kube-scheduler` client certificate and private key:

-```
+```sh
{

cat > kube-scheduler-csr.json <<EOF

@@ -285,7 +300,9 @@ cfssl gencert \

Results:

```
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.pem
```

@@ -296,7 +313,7 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of

Generate the Kubernetes API Server certificate and private key:

-```
+```sh
{

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \

@@ -336,7 +353,9 @@ cfssl gencert \

Results:

```
kubernetes-csr.json
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
```
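
As an optional sanity check (an editor's addition, assuming `openssl` is installed locally), confirm the API Server certificate picked up the expected hostnames and the static IP address in its subject alternative names:

```sh
# Print the SAN list of the API server certificate; it should include the
# kubernetes-the-hard-way static IP and the internal controller addresses
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
```
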
@@ -346,7 +365,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv

Generate the `service-account` certificate and private key:

-```
+```sh
{

cat > service-account-csr.json <<EOF

@@ -381,24 +400,30 @@ cfssl gencert \

Results:

```
service-account-csr.json
service-account-key.pem
service-account.csr
service-account.pem
```

## Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:

- ca.pem
- worker-X.pem & worker-X-key.pem

-```
+```sh
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```
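
To make sure the copies landed where expected (an optional check, not part of the upstream lab), list the files on one of the workers:

```sh
# The worker should now hold the CA certificate plus its own certificate and key
gcloud compute ssh worker-0 --command "ls -l ca.pem worker-0-key.pem worker-0.pem"
```
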
Copy the appropriate certificates and private keys to each controller instance:

- ca.pem & ca-key.pem
- kubernetes-key.pem & kubernetes.pem
- service-account-key.pem & service-account.pem

-```
+```sh
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/

@@ -1,6 +1,6 @@

# Generating Kubernetes Configuration Files for Authentication

-In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
+In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as `kubeconfig`s, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.

## Client Authentication Configs

@@ -12,7 +12,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high

Retrieve the `kubernetes-the-hard-way` static IP address:

-```
+```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

@@ -24,7 +24,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th

Generate a kubeconfig file for each worker node:

-```
+```sh
for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \

@@ -59,8 +59,9 @@ worker-2.kubeconfig

Generate a kubeconfig file for the `kube-proxy` service:

-```
+```sh
{

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \

@@ -79,6 +80,7 @@ Generate a kubeconfig file for the `kube-proxy` service:

  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

}
```
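
A quick way to see what ended up in the file (optional, editor's addition; `kubectl config view` is a standard subcommand):

```sh
# Show the kube-proxy kubeconfig; embedded certificate data is omitted by default
kubectl config view --kubeconfig=kube-proxy.kubeconfig
```
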
@@ -92,8 +94,9 @@ kube-proxy.kubeconfig

Generate a kubeconfig file for the `kube-controller-manager` service:

-```
+```sh
{

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \

@@ -112,6 +115,7 @@ Generate a kubeconfig file for the `kube-controller-manager` service:

  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

}
```

@@ -126,8 +130,9 @@ kube-controller-manager.kubeconfig

Generate a kubeconfig file for the `kube-scheduler` service:

-```
+```sh
{

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \

@@ -146,6 +151,7 @@ Generate a kubeconfig file for the `kube-scheduler` service:

  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

}
```

@@ -159,8 +165,9 @@ kube-scheduler.kubeconfig

Generate a kubeconfig file for the `admin` user:

-```
+```sh
{

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \

@@ -179,6 +186,7 @@ Generate a kubeconfig file for the `admin` user:

  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

}
```

@@ -188,14 +196,11 @@ Results:

admin.kubeconfig
```

-##

## Distribute the Kubernetes Configuration Files

Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:

-```
+```sh
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

@@ -203,7 +208,7 @@ done

Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:

-```
+```sh
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

@@ -16,7 +16,7 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

Create the `encryption-config.yaml` encryption config file:

-```
+```sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1

@@ -34,7 +34,7 @@ EOF

Copy the `encryption-config.yaml` encryption config file to each controller instance:

-```
+```sh
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done

@@ -6,7 +6,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi

The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:

-```
+```sh
gcloud compute ssh controller-0
```

@@ -20,45 +20,41 @@ gcloud compute ssh controller-0

Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:

-```
+```sh
wget -q --show-progress --https-only --timestamping \
-  "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
+  "https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz"
```

Extract and install the `etcd` server and the `etcdctl` command line utility:

-```
-{
-tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
-sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
-}
+```sh
+tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
+sudo mv etcd-v3.3.13-linux-amd64/etcd* /usr/local/bin/
```
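
Optionally (an editor's addition), confirm both binaries are on the `PATH` and report the expected release:

```sh
# Both tools come from the same release tarball, so the versions should match
etcd --version
ETCDCTL_API=3 etcdctl version
```
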
### Configure the etcd Server

-```
-{
+```sh
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
-}
```

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:

-```
+```sh
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

-```
+```sh
ETCD_NAME=$(hostname -s)
```
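
A quick echo (optional, editor's addition) makes it easy to spot an empty variable before it gets baked into the unit file below:

```sh
# Both values are interpolated into etcd.service, so neither should be empty
echo "${ETCD_NAME} ${INTERNAL_IP}"
```
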
Create the `etcd.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd

@@ -93,7 +89,7 @@ EOF

### Start the etcd Server

-```
+```sh
{
  sudo systemctl daemon-reload
  sudo systemctl enable etcd

@@ -107,7 +103,7 @@ EOF

List the etcd cluster members:

-```
+```sh
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \

@@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute

The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:

-```
+```sh
gcloud compute ssh controller-0
```

@@ -18,7 +18,7 @@ gcloud compute ssh controller-0

Create the Kubernetes configuration directory:

-```
+```sh
sudo mkdir -p /etc/kubernetes/config
```

@@ -26,17 +26,17 @@ sudo mkdir -p /etc/kubernetes/config

Download the official Kubernetes release binaries:

-```
+```sh
wget -q --show-progress --https-only --timestamping \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
+  "https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-apiserver" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-controller-manager" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-scheduler" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl"
```

Install the Kubernetes binaries:

-```
+```sh
{
  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

@@ -45,7 +45,7 @@ Install the Kubernetes binaries:

### Configure the Kubernetes API Server

-```
+```sh
{
  sudo mkdir -p /var/lib/kubernetes/
@@ -57,14 +57,14 @@ Install the Kubernetes binaries:

The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:

-```
+```sh
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```

Create the `kube-apiserver.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

@@ -82,7 +82,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\

  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
-  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
+  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,PersistentVolumeClaimResize,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\

@@ -113,13 +113,13 @@ EOF

Move the `kube-controller-manager` kubeconfig into place:

-```
+```sh
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager

@@ -127,12 +127,17 @@ Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=127.0.0.1 \\
  --allocate-node-cidrs=true \\
  --node-cidr-mask-size=24 \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --address=0.0.0.0 \\
  --master=127.0.0.1:8080 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\

@@ -151,15 +156,15 @@ EOF

Move the `kube-scheduler` kubeconfig into place:

-```
+```sh
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.yaml` configuration file:

-```
+```sh
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
-apiVersion: componentconfig/v1alpha1
+apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"

@@ -170,7 +175,7 @@ EOF

Create the `kube-scheduler.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

@@ -190,7 +195,7 @@ EOF

### Start the Controller Services

-```
+```sh
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler

@@ -208,11 +213,11 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala

Install a basic web server to handle HTTP health checks:

-```
+```sh
sudo apt-get install -y nginx
```

-```
+```sh
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen 80;

@@ -226,7 +231,7 @@ server {

EOF
```

-```
+```sh
{
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

@@ -235,17 +240,14 @@ EOF

}
```

-```
+```sh
sudo systemctl restart nginx
```

```
sudo systemctl enable nginx
```

### Verification

-```
+```sh
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```

@@ -260,7 +262,7 @@ etcd-1 Healthy {"health": "true"}

Test the nginx HTTP health check proxy:

-```
+```sh
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```

@@ -283,13 +285,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.

-```
+```sh
gcloud compute ssh controller-0
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

-```
+```sh
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole

@@ -317,7 +319,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user

Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:

-```
+```sh
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding

@@ -346,7 +348,7 @@ In this section you will provision an external load balancer to front the Kubern

Create the external load balancer network resources:

-```
+```sh
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \

@@ -380,7 +382,7 @@ Create the external load balancer network resources:

Retrieve the `kubernetes-the-hard-way` static IP address:

-```
+```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

@@ -388,8 +390,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har

Make an HTTP request for the Kubernetes version info:

-```
-curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
+```sh
+curl --cacert ca.pem "https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version"
```

> output

@@ -397,12 +399,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version

```
{
  "major": "1",
-  "minor": "12",
-  "gitVersion": "v1.12.0",
-  "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
+  "minor": "14",
+  "gitVersion": "v1.14.4",
+  "gitCommit": "a87e9a978f65a8303aa9467537aa59c18122cbf9",
  "gitTreeState": "clean",
-  "buildDate": "2018-09-27T16:55:41Z",
-  "goVersion": "go1.10.4",
+  "buildDate": "2019-07-08T08:43:10Z",
+  "goVersion": "go1.12.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}

@@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp

The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:

-```
+```sh
gcloud compute ssh worker-0
```

@@ -18,7 +18,7 @@ gcloud compute ssh worker-0

Install the OS dependencies:

-```
+```sh
{
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset

@@ -29,21 +29,21 @@ Install the OS dependencies:

### Download and Install Worker Binaries

-```
+```sh
wget -q --show-progress --https-only --timestamping \
-  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
+  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
-  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
-  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
-  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
+  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
+  https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz \
+  https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz \
+  https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
+  https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-proxy \
+  https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubelet
```

Create the installation directories:

-```
+```sh
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \

@@ -55,15 +55,15 @@ sudo mkdir -p \

Install the worker binaries:

-```
+```sh
{
  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
-  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
-  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
-  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
+  sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/
+  sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/
+  sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C /
}
```
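
Optionally (an editor's addition), spot-check a few of the installed binaries before wiring up the services:

```sh
# Print client versions to confirm the binaries landed in /usr/local/bin
kubelet --version
crictl --version
runc --version
```
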
@@ -71,14 +71,15 @@ Install the worker binaries:

Retrieve the Pod CIDR range for the current compute instance:

-```
+```sh
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
echo $POD_CIDR
```

Create the `bridge` network configuration file:

-```
+```sh
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",

@@ -100,7 +101,7 @@ EOF

Create the `loopback` network configuration file:

-```
+```sh
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",

@@ -113,11 +114,11 @@ EOF

Create the `containerd` configuration file:

-```
+```sh
sudo mkdir -p /etc/containerd/
```

-```
+```sh
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]

@@ -141,7 +142,7 @@ EOF

Create the `containerd.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime

@@ -167,7 +168,7 @@ EOF

### Configure the Kubelet

-```
+```sh
{
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig

@@ -177,7 +178,7 @@ EOF

Create the `kubelet-config.yaml` configuration file:

-```
+```sh
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1

@@ -205,7 +206,7 @@ EOF

Create the `kubelet.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet

@@ -233,13 +234,13 @@ EOF

### Configure the Kubernetes Proxy

-```
+```sh
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

-```
+```sh
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1

@@ -252,7 +253,7 @@ EOF

Create the `kube-proxy.service` systemd unit file:

-```
+```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy

@@ -271,7 +272,7 @@ EOF

### Start the Worker Services

-```
+```sh
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy

@@ -287,7 +288,7 @@ EOF

List the registered Kubernetes nodes:

-```
+```sh
gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```

@@ -296,9 +297,9 @@ gcloud compute ssh controller-0 \

```
NAME STATUS ROLES AGE VERSION
-worker-0 Ready <none> 35s v1.12.0
-worker-1 Ready <none> 36s v1.12.0
-worker-2 Ready <none> 36s v1.12.0
+worker-0 Ready <none> 94s v1.14.4
+worker-1 Ready <none> 93s v1.14.4
+worker-2 Ready <none> 92s v1.14.4
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

@@ -10,7 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high

Generate a kubeconfig file suitable for authenticating as the `admin` user:

-```
+```sh
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \

@@ -37,7 +37,7 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:

Check the health of the remote Kubernetes cluster:

-```
+```sh
kubectl get componentstatuses
```

@@ -54,7 +54,7 @@ etcd-0 Healthy {"health":"true"}

List the nodes in the remote Kubernetes cluster:

-```
+```sh
kubectl get nodes
```

@@ -62,9 +62,9 @@ kubectl get nodes

```
NAME STATUS ROLES AGE VERSION
-worker-0 Ready <none> 117s v1.12.0
-worker-1 Ready <none> 118s v1.12.0
-worker-2 Ready <none> 118s v1.12.0
+worker-0 Ready <none> 3m59s v1.14.4
+worker-1 Ready <none> 3m58s v1.14.4
+worker-2 Ready <none> 3m57s v1.14.4
```

Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

@@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the

Print the internal IP address and Pod CIDR range for each worker instance:

-```
+```sh
for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'

@@ -31,7 +31,7 @@ done

Create network routes for each worker instance:

-```
+```sh
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \

@@ -42,7 +42,7 @@ done

List the routes in the `kubernetes-the-hard-way` VPC network:

-```
+```sh
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```

@@ -6,7 +6,7 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts

Deploy the `coredns` cluster add-on:

-```
+```sh
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
```

@@ -23,7 +23,7 @@ service/kube-dns created

List the pods created by the `kube-dns` deployment:

-```
+```sh
kubectl get pods -l k8s-app=kube-dns -n kube-system
```

@@ -39,13 +39,13 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s

Create a `busybox` deployment:

-```
+```sh
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
```

List the pod created by the `busybox` deployment:

-```
+```sh
kubectl get pods -l run=busybox
```

@@ -64,7 +64,7 @@ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name

Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:

-```
+```sh
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```
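
If you want one more data point (an editor's addition), the same lookup can be repeated with the fully qualified service name, which exercises the cluster DNS without relying on the pod's search domains:

```sh
# Resolves through CoreDNS just like the short name above
kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
```
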
@@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt

Create a generic secret:

-```
+```sh
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
```

Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

-```
+```sh
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \

@@ -54,13 +54,13 @@ In this section you will verify the ability to create and manage [Deployments](h

Create a deployment for the [nginx](https://nginx.org/en/) web server:

-```
+```sh
kubectl run nginx --image=nginx
```

List the pod created by the `nginx` deployment:

-```
+```sh
kubectl get pods -l run=nginx
```

@@ -77,13 +77,13 @@ In this section you will verify the ability to access applications remotely usin

Retrieve the full name of the `nginx` pod:

-```
+```sh
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
```

Forward port `8080` on your local machine to port `80` of the `nginx` pod:

-```
+```sh
kubectl port-forward $POD_NAME 8080:80
```

@@ -104,13 +104,13 @@ curl --head http://127.0.0.1:8080

```
HTTP/1.1 200 OK
-Server: nginx/1.15.4
-Date: Sun, 30 Sep 2018 19:23:10 GMT
+Server: nginx/1.17.2
+Date: Sat, 03 Aug 2019 03:35:08 GMT
Content-Type: text/html
Content-Length: 612
-Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
+Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
-ETag: "5baa4e63-264"
+ETag: "5d36f361-264"
Accept-Ranges: bytes
```

@@ -129,14 +129,14 @@ In this section you will verify the ability to [retrieve container logs](https:/

Print the `nginx` pod logs:

-```
+```sh
kubectl logs $POD_NAME
```

> output

```
-127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
+127.0.0.1 - - [03/Aug/2019:03:35:08 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
```

### Exec

@@ -152,7 +152,7 @@ kubectl exec -ti $POD_NAME -- nginx -v

> output

```
-nginx version: nginx/1.15.4
+nginx version: nginx/1.17.2
```

## Services

@@ -161,7 +161,7 @@ In this section you will verify the ability to expose applications using a [Serv

Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:

-```
+```sh
kubectl expose deployment nginx --port 80 --type NodePort
```

@@ -169,14 +169,21 @@ kubectl expose deployment nginx --port 80 --type NodePort

Retrieve the node port assigned to the `nginx` service:

-```
+```sh
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
echo $NODE_PORT
```

> output

```
30313
```

Create a firewall rule that allows remote access to the `nginx` node port:

-```
+```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way

@@ -184,28 +191,28 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service

Retrieve the external IP address of a worker instance:

-```
+```sh
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```

Make an HTTP request using the external IP address and the `nginx` node port:

-```
-curl -I http://${EXTERNAL_IP}:${NODE_PORT}
+```sh
+curl -I "http://${EXTERNAL_IP}:${NODE_PORT}"
```

> output

```
HTTP/1.1 200 OK
-Server: nginx/1.15.4
-Date: Sun, 30 Sep 2018 19:25:40 GMT
+Server: nginx/1.17.2
+Date: Sat, 03 Aug 2019 03:43:19 GMT
Content-Type: text/html
Content-Length: 612
-Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
+Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
-ETag: "5baa4e63-264"
+ETag: "5d36f361-264"
Accept-Ranges: bytes
```

@@ -215,7 +222,7 @@ This section will verify the ability to run untrusted workloads using [gVisor](h

Create the `untrusted` pod:

-```
+```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod

@@ -236,34 +243,43 @@ In this section you will verify the `untrusted` pod is running under gVisor (run

Verify the `untrusted` pod is running:

```
kubectl get pods -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
```sh
kubectl get pods,svc -o wide
```

> output

```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/busybox-68f7d47fc6-fnzlp 1/1 Running 0 18m 10.200.1.2 worker-1 <none> <none>
pod/nginx 1/1 Running 0 11m 10.200.1.3 worker-1 <none> <none>
pod/untrusted 1/1 Running 0 90s 10.200.0.3 worker-0 <none> <none>

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 7h20m <none>
service/nginx NodePort 10.32.0.147 <none> 80:31209/TCP 7m30s run=nginx
```

Get the node name where the `untrusted` pod is running:

-```
+```sh
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```

SSH into the worker node:

-```
+```sh
gcloud compute ssh ${INSTANCE_NAME}
```

List the containers running under gVisor:

-```
+```sh
sudo runsc --root /run/containerd/runsc/k8s.io list
```

> output

```
I0930 19:27:13.255142 20832 x:0] ***************************
I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]

@@ -285,21 +301,21 @@ I0930 19:27:13.259733 20832 x:0] Exiting with status: 0

Get the ID of the `untrusted` pod:

-```
+```sh
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)
```

Get the ID of the `webserver` container running in the `untrusted` pod:

-```
+```sh
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)
```

Use the gVisor `runsc` command to display the processes running inside the `webserver` container:

-```
+```sh
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```

@@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial.

Delete the controller and worker compute instances:

-```
+```sh
gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2

@@ -16,7 +16,7 @@ gcloud -q compute instances delete \

Delete the external load balancer network resources:

-```
+```sh
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)

@@ -31,7 +31,7 @@ Delete the external load balancer network resources:

Delete the `kubernetes-the-hard-way` firewall rules:

-```
+```sh
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \