Update to Kubernetes 1.15.3

parent cbbdcffd4a
commit cf06712e78

@@ -47,3 +47,4 @@ service-account-key.pem
 service-account.csr
 service-account.pem
 service-account-csr.json
+*.swp
@@ -0,0 +1,3 @@
# Copyright

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>
README.md
@@ -1,11 +1,16 @@
 # Kubernetes The Hard Way

-This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
+This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).

 Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.

 > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!

+## Copyright
+
+<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
+
 ## Target Audience

 The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
@@ -14,12 +19,11 @@ The target audience for this tutorial is someone planning to support a productio

 Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.12.0
-* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.0-rc.0
-* [gVisor](https://github.com/google/gvisor) 50c283b9f56bb7200938d9e207355f05f79f0d17
-* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
-* [etcd](https://github.com/coreos/etcd) v3.3.9
-* [CoreDNS](https://github.com/coredns/coredns) v1.2.2
+* [kubernetes](https://github.com/kubernetes/kubernetes) 1.15.3
+* [containerd](https://github.com/containerd/containerd) 1.2.9
+* [coredns](https://github.com/coredns/coredns) v1.6.3
+* [cni](https://github.com/containernetworking/cni) v0.7.1
+* [etcd](https://github.com/coreos/etcd) v3.4.0

 ## Labs

@@ -0,0 +1,180 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.6.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
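The lab applies a hosted copy of this manifest, but it can also be tried directly; a minimal usage sketch, assuming the file above is saved locally as `coredns.yaml`:

```
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns

# spot-check the access the ClusterRole above grants to the coredns ServiceAccount
kubectl auth can-i list endpoints --as=system:serviceaccount:kube-system:coredns
```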
@@ -4,7 +4,7 @@

 This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.

-[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
+[Estimated cost](https://cloud.google.com/products/calculator/#id=55663256-c384-449c-9306-e39893e23afb) to run this tutorial: $0.23 per hour ($5.46 per day).

 > The compute resources required for this tutorial exceed the Google Cloud Platform free tier.

@@ -14,7 +14,7 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t

 Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.

-Verify the Google Cloud SDK version is 218.0.0 or higher:
+Verify the Google Cloud SDK version is 262.0.0 or higher:

 ```
 gcloud version
@@ -30,7 +30,13 @@ If you are using the `gcloud` command-line tool for the first time `init` is the
 gcloud init
 ```

-Otherwise set a default compute region:
+Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
+
+```
+gcloud auth login
+```
+
+Next set a default compute region and compute zone:

 ```
 gcloud config set compute/region us-west1
@@ -46,12 +52,12 @@ gcloud config set compute/zone us-west1-c

 ## Running Commands in Parallel with tmux

-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with `synchronize-panes` enabled to speed up the provisioning process.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.

 > The use of tmux is optional and not required to complete this tutorial.

 ![tmux screenshot](images/tmux-screenshot.png)

-> Enable `synchronize-panes`: `ctrl+b` then `shift :`. Then type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
+> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.

 Next: [Installing the Client Tools](02-client-tools.md)
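For readers who prefer to script the tmux setup described above, a minimal sketch (the session name is illustrative, not part of the lab):

```
tmux new-session -d -s kthw                          # start a detached session
tmux split-window -h -t kthw                         # add a second pane
tmux set-window-option -t kthw synchronize-panes on  # mirror keystrokes to every pane
tmux attach -t kthw
```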
@@ -7,13 +7,13 @@ In this lab you will install the command line utilities required to complete thi

 The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.

-Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):
+Download and install `cfssl` and `cfssljson`:

 ### OS X

 ```
-curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
-curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
+curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl
+curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson
 ```

 ```
@@ -34,25 +34,21 @@ brew install cfssl

 ```
 wget -q --show-progress --https-only --timestamping \
-  https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
-  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
+  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
 ```

 ```
-chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
+chmod +x cfssl cfssljson
 ```

 ```
-sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
-```
-
-```
-sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
+sudo mv cfssl cfssljson /usr/local/bin/
 ```

 ### Verification

-Verify `cfssl` version 1.2.0 or higher is installed:
+Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed:

 ```
 cfssl version
@@ -61,12 +57,19 @@ cfssl version
 > output

 ```
-Version: 1.2.0
+Version: 1.3.4
 Revision: dev
-Runtime: go1.6
+Runtime: go1.13
 ```

-> The cfssljson command line utility does not provide a way to print its version.
+```
+cfssljson --version
+```
+```
+Version: 1.3.4
+Revision: dev
+Runtime: go1.13
+```

 ## Install kubectl

@@ -75,7 +78,7 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
 ### OS X

 ```
-curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
+curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl
 ```

 ```
@@ -89,7 +92,7 @@ sudo mv kubectl /usr/local/bin/
 ### Linux

 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
 ```

 ```
@@ -102,7 +105,7 @@ sudo mv kubectl /usr/local/bin/

 ### Verification

-Verify `kubectl` version 1.12.0 or higher is installed:
+Verify `kubectl` version 1.15.3 or higher is installed:

 ```
 kubectl version --client
@@ -111,7 +114,7 @@ kubectl version --client
 > output

 ```
-Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
+Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
 ```

 Next: [Provisioning Compute Resources](03-compute-resources.md)
@@ -161,7 +161,7 @@ worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX

 ## Configuring SSH Access

-SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as describe in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
+SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.

 Test SSH access to the `controller-0` compute instances:

@@ -208,11 +208,10 @@ Waiting for SSH key to propagate.
 After the SSH keys have been updated you'll be logged into the `controller-0` instance:

 ```
-Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64)
+Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64)

 ...

-Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
+Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX
 ```

 Type `exit` at the prompt to exit the `controller-0` compute instance:
@@ -303,6 +303,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')

+KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
+
 cat > kubernetes-csr.json <<EOF
 {
   "CN": "kubernetes",
@@ -326,13 +328,15 @@ cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -config=ca-config.json \
-  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
+  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
   -profile=kubernetes \
   kubernetes-csr.json | cfssljson -bare kubernetes

 }
 ```

+> The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
+
 Results:

 ```
@@ -342,7 +346,7 @@ kubernetes.pem

 ## The Service Account Key Pair

-The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as describe in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
+The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.

 Generate the `service-account` certificate and private key:

@@ -22,6 +22,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har

 When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).

+> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
+
 Generate a kubeconfig file for each worker node:

 ```
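For orientation, the per-worker generation referred to above follows the usual `kubectl config` pattern; an abbreviated sketch using the variables and file names from the surrounding labs (only the first two subcommands are shown):

```
for instance in worker-0 worker-1 worker-2; do
  # each worker authenticates with its own node certificate
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig
done
```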
@@ -1,6 +1,6 @@
 # Bootstrapping the etcd Cluster

-Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
+Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.

 ## Prerequisites

@@ -18,19 +18,19 @@ gcloud compute ssh controller-0

 ### Download and Install the etcd Binaries

-Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
+Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:

 ```
 wget -q --show-progress --https-only --timestamping \
-  "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
+  "https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
 ```

 Extract and install the `etcd` server and the `etcdctl` command line utility:

 ```
 {
-  tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
-  sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
+  tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
+  sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
 }
 ```

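An optional spot-check that the 3.4.0 binaries are now on the path (not part of the lab steps):

```
etcd --version
etcdctl version
```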
@@ -65,6 +65,7 @@ Description=etcd
 Documentation=https://github.com/coreos

 [Service]
+Type=notify
 ExecStart=/usr/local/bin/etcd \\
   --name ${ETCD_NAME} \\
   --cert-file=/etc/etcd/kubernetes.pem \\
@@ -28,10 +28,10 @@ Download the official Kubernetes release binaries:

 ```
 wget -q --show-progress --https-only --timestamping \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
-  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
+  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-scheduler" \
+  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl"
 ```

 Install the Kubernetes binaries:
@@ -82,14 +82,13 @@ ExecStart=/usr/local/bin/kube-apiserver \\
   --authorization-mode=Node,RBAC \\
   --bind-address=0.0.0.0 \\
   --client-ca-file=/var/lib/kubernetes/ca.pem \\
-  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
-  --enable-swagger-ui=true \\
+  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
   --etcd-cafile=/var/lib/kubernetes/ca.pem \\
   --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
   --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
   --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
   --event-ttl=1h \\
-  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
+  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
   --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
   --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
   --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
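When upgrading an existing controller rather than rebuilding from scratch, one quick way to confirm the renamed flag made it into the unit file written earlier in this lab:

```
grep -e '--encryption-provider-config' /etc/systemd/system/kube-apiserver.service
```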
@@ -159,7 +158,7 @@ Create the `kube-scheduler.yaml` configuration file:

 ```
 cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
-apiVersion: componentconfig/v1alpha1
+apiVersion: kubescheduler.config.k8s.io/v1alpha1
 kind: KubeSchedulerConfiguration
 clientConnection:
   kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
@@ -209,6 +208,7 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
 Install a basic web server to handle HTTP health checks:

 ```
+sudo apt-get update
 sudo apt-get install -y nginx
 ```

@@ -267,10 +267,11 @@ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
 ```
 HTTP/1.1 200 OK
 Server: nginx/1.14.0 (Ubuntu)
-Date: Sun, 30 Sep 2018 17:44:24 GMT
+Date: Sat, 14 Sep 2019 18:34:11 GMT
 Content-Type: text/plain; charset=utf-8
 Content-Length: 2
 Connection: keep-alive
+X-Content-Type-Options: nosniff

 ok
 ```
@@ -283,6 +284,8 @@ In this section you will configure RBAC permissions to allow the Kubernetes API

 > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.

+The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
+
 ```
 gcloud compute ssh controller-0
 ```
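Once the RBAC objects in this section exist, the authorization decision they enable can be previewed; the example below assumes, as this lab does, that the ClusterRole is bound to the `kubernetes` user and that `admin.kubeconfig` is present in the working directory:

```
# should answer "yes" after the ClusterRoleBinding is created
kubectl auth can-i get nodes/proxy --as kubernetes --kubeconfig admin.kubeconfig
```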
@@ -339,7 +342,7 @@ EOF

 In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.

-> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
+> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.

 ### Provision a Network Load Balancer

@@ -378,6 +381,8 @@ Create the external load balancer network resources:

 ### Verification

+> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
+
 Retrieve the `kubernetes-the-hard-way` static IP address:

 ```
@@ -397,12 +402,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
 ```
 {
   "major": "1",
-  "minor": "12",
-  "gitVersion": "v1.12.0",
-  "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
+  "minor": "15",
+  "gitVersion": "v1.15.3",
+  "gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2",
   "gitTreeState": "clean",
-  "buildDate": "2018-09-27T16:55:41Z",
-  "goVersion": "go1.10.4",
+  "buildDate": "2019-08-19T11:05:50Z",
+  "goVersion": "go1.12.9",
   "compiler": "gc",
   "platform": "linux/amd64"
 }
@@ -1,6 +1,6 @@
 # Bootstrapping the Kubernetes Worker Nodes

-In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
+In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

 ## Prerequisites

@@ -27,18 +27,35 @@ Install the OS dependencies:

 > The socat binary enables support for the `kubectl port-forward` command.

+### Disable Swap
+
+By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
+
+Verify if swap is enabled:
+
+```
+sudo swapon --show
+```
+
+If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
+
+```
+sudo swapoff -a
+```
+
+> To ensure swap remains off after reboot consult your Linux distro documentation.
+
 ### Download and Install Worker Binaries

 ```
 wget -q --show-progress --https-only --timestamping \
-  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
-  https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
-  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
-  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
-  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
-  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
+  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
+  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
+  https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz \
+  https://github.com/containerd/containerd/releases/download/v1.2.9/containerd-1.2.9.linux-amd64.tar.gz \
+  https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl \
+  https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-proxy \
+  https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubelet
 ```

 Create the installation directories:
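The note above leaves persistence to the distro documentation; on the Ubuntu images this tutorial uses, one common approach (shown only as an illustration) is to comment out any swap entry in `/etc/fstab`:

```
sudo cp /etc/fstab /etc/fstab.bak           # keep a backup before editing
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab  # comment out swap mounts so they stay off after reboot
```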
@@ -57,13 +74,14 @@ Install the worker binaries:

 ```
 {
-  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
+  mkdir containerd
+  tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
+  tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
+  sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
   sudo mv runc.amd64 runc
-  chmod +x kubectl kube-proxy kubelet runc runsc
-  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
-  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
-  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
-  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
+  chmod +x crictl kubectl kube-proxy kubelet runc
+  sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
+  sudo mv containerd/bin/* /bin/
 }
 ```

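An optional spot-check that the expected versions ended up on the path (version commands provided by the upstream binaries themselves):

```
runc --version
crictl --version
containerd --version
kubelet --version
```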
@@ -104,6 +122,7 @@ Create the `loopback` network configuration file:
 cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
 {
     "cniVersion": "0.3.1",
+    "name": "lo",
     "type": "loopback"
 }
 EOF
@@ -126,19 +145,9 @@ cat << EOF | sudo tee /etc/containerd/config.toml
       runtime_type = "io.containerd.runtime.v1.linux"
       runtime_engine = "/usr/local/bin/runc"
       runtime_root = ""
-    [plugins.cri.containerd.untrusted_workload_runtime]
-      runtime_type = "io.containerd.runtime.v1.linux"
-      runtime_engine = "/usr/local/bin/runsc"
-      runtime_root = "/run/containerd/runsc"
-    [plugins.cri.containerd.gvisor]
-      runtime_type = "io.containerd.runtime.v1.linux"
-      runtime_engine = "/usr/local/bin/runsc"
-      runtime_root = "/run/containerd/runsc"
 EOF
 ```

-> Untrusted workloads will be run using the gVisor (runsc) runtime.
-
 Create the `containerd.service` systemd unit file:

 ```
@@ -296,9 +305,9 @@ gcloud compute ssh controller-0 \

 ```
 NAME       STATUS   ROLES    AGE   VERSION
-worker-0   Ready    <none>   35s   v1.12.0
-worker-1   Ready    <none>   36s   v1.12.0
-worker-2   Ready    <none>   36s   v1.12.0
+worker-0   Ready    <none>   15s   v1.15.3
+worker-1   Ready    <none>   15s   v1.15.3
+worker-2   Ready    <none>   15s   v1.15.3
 ```

 Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
@@ -62,9 +62,9 @@ kubectl get nodes

 ```
 NAME       STATUS   ROLES    AGE    VERSION
-worker-0   Ready    <none>   117s   v1.12.0
-worker-1   Ready    <none>   118s   v1.12.0
-worker-2   Ready    <none>   118s   v1.12.0
+worker-0   Ready    <none>   2m9s   v1.15.3
+worker-1   Ready    <none>   2m9s   v1.15.3
+worker-2   Ready    <none>   2m9s   v1.15.3
 ```

 Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
@@ -40,7 +40,7 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
 Create a `busybox` deployment:

 ```
-kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
+kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
 ```

 List the pod created by the `busybox` deployment:
@@ -53,7 +53,7 @@ kubectl get pods -l run=busybox

 ```
 NAME                      READY   STATUS    RESTARTS   AGE
-busybox-bd8fb7cbd-vflm9   1/1     Running   0          10s
+busybox                   1/1     Running   0          3s
 ```

 Retrieve the full name of the `busybox` pod:
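For context, the lookup this pod feeds into in the next step of the lab looks roughly like the following; `kubectl run` still applies the `run=busybox` label used here:

```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```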
@@ -32,18 +32,17 @@ gcloud compute ssh controller-0 \
 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
-00000040 3a 76 31 3a 6b 65 79 31 3a dd 3f 36 6c ce 65 9d |:v1:key1:.?6l.e.|
-00000050 b3 b1 46 1a ba ae a2 1f e4 fa 13 0c 4b 6e 2c 3c |..F.........Kn,<|
-00000060 15 fa 88 56 84 b7 aa c0 7a ca 66 f3 de db 2b a3 |...V....z.f...+.|
-00000070 88 dc b1 b1 d8 2f 16 3e 6b 4a cb ac 88 5d 23 2d |...../.>kJ...]#-|
-00000080 99 62 be 72 9f a5 01 38 15 c4 43 ac 38 5f ef 88 |.b.r...8..C.8_..|
-00000090 3b 88 c1 e6 b6 06 4f ae a8 6b c8 40 70 ac 0a d3 |;.....O..k.@p...|
-000000a0 3e dc 2b b6 0f 01 b6 8b e2 21 29 4d 32 d6 67 a6 |>.+......!)M2.g.|
-000000b0 4e 6d bb 61 0d 85 22 ea f4 d6 2d 0a af 3c 71 85 |Nm.a.."...-..<q.|
-000000c0 96 27 c9 ec 90 e3 56 8c 94 a7 1c 9a 0e 00 28 11 |.'....V.......(.|
-000000d0 18 28 f4 33 42 d9 57 d9 e3 e9 1c 38 e3 bc 1e c3 |.(.3B.W....8....|
-000000e0 d2 47 f3 20 60 be b8 57 a7 0a |.G. `..W..|
-000000ea
+00000040 3a 76 31 3a 6b 65 79 31 3a 44 ac 6e ac 11 2f 28 |:v1:key1:D.n../(|
+00000050 02 46 3d ad 9d cd 68 be e4 cc 63 ae 13 e4 99 e8 |.F=...h...c.....|
+00000060 6e 55 a0 fd 9d 33 7a b1 17 6b 20 19 23 dc 3e 67 |nU...3z..k .#.>g|
+00000070 c9 6c 47 fa 78 8b 4d 28 cd d1 71 25 e9 29 ec 88 |.lG.x.M(..q%.)..|
+00000080 7f c9 76 b6 31 63 6e ea ac c5 e4 2f 32 d7 a6 94 |..v.1cn..../2...|
+00000090 3c 3d 97 29 40 5a ee e1 ef d6 b2 17 01 75 a4 a3 |<=.)@Z.......u..|
+000000a0 e2 c2 70 5b 77 1a 0b ec 71 c3 87 7a 1f 68 73 03 |..p[w...q..z.hs.|
+000000b0 67 70 5e ba 5e 65 ff 6f 0c 40 5a f9 2a bd d6 0e |gp^.^e.o.@Z.*...|
+000000c0 44 8d 62 21 1a 30 4f 43 b8 03 69 52 c0 b7 2e 16 |D.b!.0OC..iR....|
+000000d0 14 a5 91 21 29 fa 6e 03 47 e2 06 25 45 7c 4f 8f |...!).n.G..%E|O.|
+000000e0 6e bb 9d 3b e9 e5 2d 9e 3e 0a |n..;..-.>.|
 ```

 The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
@@ -55,20 +54,20 @@ In this section you will verify the ability to create and manage [Deployments](h
 Create a deployment for the [nginx](https://nginx.org/en/) web server:

 ```
-kubectl run nginx --image=nginx
+kubectl create deployment nginx --image=nginx
 ```

 List the pod created by the `nginx` deployment:

 ```
-kubectl get pods -l run=nginx
+kubectl get pods -l app=nginx
 ```

 > output

 ```
 NAME                     READY   STATUS    RESTARTS   AGE
-nginx-dbddb74b8-6lxg2    1/1     Running   0          10s
+nginx-554b9c67f9-vt5rn   1/1     Running   0          10s
 ```

 ### Port Forwarding
@@ -78,7 +77,7 @@ In this section you will verify the ability to access applications remotely usin
 Retrieve the full name of the `nginx` pod:

 ```
-POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
+POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
 ```

 Forward port `8080` on your local machine to port `80` of the `nginx` pod:
@@ -104,13 +103,13 @@ curl --head http://127.0.0.1:8080

 ```
 HTTP/1.1 200 OK
-Server: nginx/1.15.4
-Date: Sun, 30 Sep 2018 19:23:10 GMT
+Server: nginx/1.17.3
+Date: Sat, 14 Sep 2019 21:10:11 GMT
 Content-Type: text/html
 Content-Length: 612
-Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
+Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
 Connection: keep-alive
-ETag: "5baa4e63-264"
+ETag: "5d5279b8-264"
 Accept-Ranges: bytes
 ```

@@ -136,7 +135,7 @@ kubectl logs $POD_NAME
 > output

 ```
-127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
+127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
 ```

 ### Exec
@@ -152,7 +151,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
 > output

 ```
-nginx version: nginx/1.15.4
+nginx version: nginx/1.17.3
 ```

 ## Services
@@ -199,128 +198,14 @@ curl -I http://${EXTERNAL_IP}:${NODE_PORT}

 ```
 HTTP/1.1 200 OK
-Server: nginx/1.15.4
-Date: Sun, 30 Sep 2018 19:25:40 GMT
+Server: nginx/1.17.3
+Date: Sat, 14 Sep 2019 21:12:35 GMT
 Content-Type: text/html
 Content-Length: 612
-Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
+Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
 Connection: keep-alive
-ETag: "5baa4e63-264"
+ETag: "5d5279b8-264"
 Accept-Ranges: bytes
 ```

-## Untrusted Workloads
-
-This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).
-
-Create the `untrusted` pod:
-
-```
-cat <<EOF | kubectl apply -f -
-apiVersion: v1
-kind: Pod
-metadata:
-  name: untrusted
-  annotations:
-    io.kubernetes.cri.untrusted-workload: "true"
-spec:
-  containers:
-    - name: webserver
-      image: gcr.io/hightowerlabs/helloworld:2.0.0
-EOF
-```
-
-### Verification
-
-In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.
-
-Verify the `untrusted` pod is running:
-
-```
-kubectl get pods -o wide
-```
-```
-NAME READY STATUS RESTARTS AGE IP NODE
-busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
-nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
-untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
-```
-
-
-Get the node name where the `untrusted` pod is running:
-
-```
-INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
-```
-
-SSH into the worker node:
-
-```
-gcloud compute ssh ${INSTANCE_NAME}
-```
-
-List the containers running under gVisor:
-
-```
-sudo runsc --root /run/containerd/runsc/k8s.io list
-```
-```
-I0930 19:27:13.255142 20832 x:0] ***************************
-I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
-I0930 19:27:13.255386 20832 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
-I0930 19:27:13.255429 20832 x:0] PID: 20832
-I0930 19:27:13.255472 20832 x:0] UID: 0, GID: 0
-I0930 19:27:13.255591 20832 x:0] Configuration:
-I0930 19:27:13.255654 20832 x:0] RootDir: /run/containerd/runsc/k8s.io
-I0930 19:27:13.255781 20832 x:0] Platform: ptrace
-I0930 19:27:13.255893 20832 x:0] FileAccess: exclusive, overlay: false
-I0930 19:27:13.256004 20832 x:0] Network: sandbox, logging: false
-I0930 19:27:13.256128 20832 x:0] Strace: false, max size: 1024, syscalls: []
-I0930 19:27:13.256238 20832 x:0] ***************************
-ID PID STATUS BUNDLE CREATED OWNER
-79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 20449 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 0001-01-01T00:00:00Z
-af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 20510 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 0001-01-01T00:00:00Z
-I0930 19:27:13.259733 20832 x:0] Exiting with status: 0
-```
-
-Get the ID of the `untrusted` pod:
-
-```
-POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
-  pods --name untrusted -q)
-```
-
-Get the ID of the `webserver` container running in the `untrusted` pod:
-
-```
-CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
-  ps -p ${POD_ID} -q)
-```
-
-Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
-
-```
-sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
-```
-
-> output
-
-```
-I0930 19:31:31.419765 21217 x:0] ***************************
-I0930 19:31:31.419907 21217 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5]
-I0930 19:31:31.419959 21217 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
-I0930 19:31:31.420000 21217 x:0] PID: 21217
-I0930 19:31:31.420041 21217 x:0] UID: 0, GID: 0
-I0930 19:31:31.420081 21217 x:0] Configuration:
-I0930 19:31:31.420115 21217 x:0] RootDir: /run/containerd/runsc/k8s.io
-I0930 19:31:31.420188 21217 x:0] Platform: ptrace
-I0930 19:31:31.420266 21217 x:0] FileAccess: exclusive, overlay: false
-I0930 19:31:31.420424 21217 x:0] Network: sandbox, logging: false
-I0930 19:31:31.420515 21217 x:0] Strace: false, max size: 1024, syscalls: []
-I0930 19:31:31.420676 21217 x:0] ***************************
-UID PID PPID C STIME TIME CMD
-0 1 0 0 19:26 10ms app
-I0930 19:31:31.422022 21217 x:0] Exiting with status: 0
-```
-
 Next: [Cleaning Up](14-cleanup.md)
@@ -9,7 +9,8 @@ Delete the controller and worker compute instances:
 ```
 gcloud -q compute instances delete \
   controller-0 controller-1 controller-2 \
-  worker-0 worker-1 worker-2
+  worker-0 worker-1 worker-2 \
+  --zone $(gcloud config get-value compute/zone)
 ```

 ## Networking
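As an optional follow-up, you can confirm the instances are gone before cleaning up the networking resources; the filter expression below is just one way to scope the listing:

```
gcloud compute instances list --filter="name~^controller OR name~^worker"
```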