The chapter 'Smoke Test' is updated.

pull/443/head
Pick1a1username 2019-02-17 01:55:44 +09:00
parent bfd7a1e69c
commit 2669322613
1 changed file with 32 additions and 35 deletions


## Data Encryption

In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
Create a generic secret:
```
$ kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
```
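As an optional check (not part of the original lab), you can read the secret back through the API and decode the stored value:
```
$ kubectl get secret kubernetes-the-hard-way \
  --output=jsonpath='{.data.mykey}' | base64 --decode
```
This should print `mydata`; the next step confirms that the same value is not stored in plaintext in etcd.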
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
$ ssh -i ~/.ssh/id_rsa-k8s.pub 10.240.0.21 "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
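For context, the `aescbc` provider and `key1` key name come from the encryption config the API server was started with earlier in the tutorial. The sketch below only illustrates the shape of that file; the config you actually generated (its file name, location, and key material) is the authoritative one:
```
$ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)   # 32 random bytes, base64-encoded
$ cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```
Because `aescbc` is listed first, new secrets are written encrypted with `key1`, while the trailing `identity` provider keeps any previously unencrypted data readable.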
## Deployments
In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
$ kubectl run nginx --image=nginx
```
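With the kubectl release used in this tutorial, `kubectl run` creates a Deployment named `nginx` (newer kubectl versions create a bare Pod instead). If you want to confirm the Deployment object exists before looking at its pod:
```
$ kubectl get deployments nginx
```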
List the pod created by the `nginx` deployment:
```
$ kubectl get pods -l run=nginx
```
A single `nginx` pod should be listed; it may take a few seconds to reach the `Running` state.

### Port Forwarding

In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
Retrieve the full name of the `nginx` pod:
```
$ POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
$ kubectl port-forward $POD_NAME 8080:80
```
Leave this command running; it reports the active forwards, e.g. `Forwarding from 127.0.0.1:8080 -> 80`.
In a new terminal make an HTTP request using the forwarding address:
```
$ curl --head http://127.0.0.1:8080
```
You should receive the `nginx` response headers, beginning with `HTTP/1.1 200 OK`.

Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
```
Handling connection for 8080
^C
```
### Logs
In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
Print the `nginx` pod logs:
```
$ kubectl logs $POD_NAME
```
> output
```
127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
```
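If you want to watch new requests arrive instead of printing the log once, `kubectl logs` can also stream it; `--tail` limits how much history is replayed first, and `Ctrl+C` stops the stream:
```
$ kubectl logs -f --tail=10 $POD_NAME
```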
### Exec
In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
$ kubectl exec -ti $POD_NAME -- nginx -v
```
> output
```
nginx version: nginx/1.15.4
```
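Beyond one-off commands, `kubectl exec` can also open an interactive shell inside the container, which is handy for ad-hoc debugging (the official `nginx` image ships `/bin/sh`); type `exit` to leave:
```
$ kubectl exec -ti $POD_NAME -- /bin/sh
```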
## Services
In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
```
$ kubectl expose deployment nginx --port 80 --type NodePort
```
> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
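Before capturing it in a variable, you can simply list the service to see which node port was assigned:
```
$ kubectl get svc nginx
```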
Retrieve the node port assigned to the `nginx` service:
```
$ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
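An equivalent, slightly simpler jsonpath expression indexes the port entry directly instead of using `range`; either form should give the same value:
```
$ NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{.spec.ports[0].nodePort}')
```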
Retrieve the IP address of the worker instance running the `nginx` pod:
```
$ WORKER_IP=$(kubectl get nodes $(kubectl get pods -o wide | grep nginx | awk '{ print $7 }') -o wide | tail -1 | awk '{ print $6 }')
```
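If you prefer to avoid the `grep`/`awk` pipeline, the pod's status already records the IP address of the node it runs on, so the same value can be read with a single jsonpath query (this assumes `POD_NAME` is still set from the Port Forwarding section):
```
$ WORKER_IP=$(kubectl get pod $POD_NAME \
  --output=jsonpath='{.status.hostIP}')
```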
Make an HTTP request using the IP address and the `nginx` node port:
```
$ curl -I http://${WORKER_IP}:${NODE_PORT}
```
You should again receive the `nginx` response headers.

## Untrusted Workloads

This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).
Create the `untrusted` pod:
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
```

In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.
Verify the `untrusted` pod is running:
```
$ kubectl get pods -o wide
```
```
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
busybox-68654f944b-djjjb   1/1       Running   0          5m        10.200.0.2   worker-0
untrusted                  1/1       Running   0          10s       10.200.0.3
```
Get the name and IP address of the node where the `untrusted` pod is running:
```
$ INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
$ INSTANCE_IP_ADDRESS=$(kubectl get nodes ${INSTANCE_NAME} -o wide | tail -1 | awk '{ print $6 }')
```
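The node's address can also be pulled straight from the node object with jsonpath, which avoids depending on the column layout of `kubectl get nodes -o wide`:
```
$ INSTANCE_IP_ADDRESS=$(kubectl get node ${INSTANCE_NAME} \
  --output=jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
```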
SSH into the worker node:
```
$ ssh -i ~/.ssh/id_rsa-k8s.pub ${INSTANCE_IP_ADDRESS}
```
List the containers running under gVisor:
```
$ sudo runsc --root /run/containerd/runsc/k8s.io list
```
```
I0930 19:27:13.255142 20832 x:0] ***************************
I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0930 19:27:13.259733 20832 x:0] Exiting with status: 0
```
Get the ID of the `untrusted` pod:
```
$ POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
pods --name untrusted -q)
```
Get the ID of the `webserver` container running in the `untrusted` pod:
```
$ CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
ps -p ${POD_ID} -q)
```
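As an optional cross-check from the containerd side, `crictl` can dump the container's full status and configuration as JSON through the same runtime endpoint:
```
$ sudo crictl -r unix:///var/run/containerd/containerd.sock \
  inspect ${CONTAINER_ID}
```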
Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
```
$ sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```
> output