diff --git a/.gitignore b/.gitignore index 624c834..809e1eb 100644 --- a/.gitignore +++ b/.gitignore @@ -2,6 +2,7 @@ admin-csr.json admin-key.pem admin.csr admin.pem +admin.kubeconfig ca-config.json ca-csr.json ca-key.pem diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 6f25c5c..f8842d9 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -6,9 +6,11 @@ In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/w In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates. -Create the CA configuration file: +Generate the CA configuration file, certificate, and private key: ``` +{ + cat > ca-config.json < ca-config.json < ca-csr.json < ca-csr.json < admin-csr.json < admin-csr.json < kube-controller-manager-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < kubernetes-csr.json < service-account-csr.json < service-account-csr.json < etcd.service < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. 
diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index 833543f..a0d247b 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -37,23 +37,22 @@ wget -q --show-progress --https-only --timestamping \ Install the Kubernetes binaries: ``` -chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl -``` - -``` -sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ +{ + chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl + sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ +} ``` ### Configure the Kubernetes API Server ``` -sudo mkdir -p /var/lib/kubernetes/ -``` +{ + sudo mkdir -p /var/lib/kubernetes/ -``` -sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ - service-account-key.pem service-account.pem \ - encryption-config.yaml /var/lib/kubernetes/ + sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ + service-account-key.pem service-account.pem \ + encryption-config.yaml /var/lib/kubernetes/ +} ``` The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: @@ -66,7 +65,7 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ Create the `kube-apiserver.service` systemd unit file: ``` -cat > kube-apiserver.service < kube-controller-manager.service < kube-scheduler.yaml < kube-scheduler.service < Allow up to 10 seconds for the Kubernetes API Server to fully initialize. +### Enable HTTP Health Checks + +A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. 
The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`. + +> The `/healthz` API server endpoint does not require authentication by default. + +Install a basic web server to handle HTTP health checks: + +``` +sudo apt-get install -y nginx +``` + +``` +cat > kubernetes.default.svc.cluster.local < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. ## RBAC for Kubelet Authorization @@ -244,7 +290,7 @@ gcloud compute ssh controller-0 Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: ``` -cat < The `/healthz` API server endpoint does not require authentication by default. - -The following commands must be run on each controller instance. Example: - -``` -gcloud compute ssh controller-0 -``` - -Install a basic web server to handle HTTP health checks: - -``` -sudo apt-get install -y nginx -``` - -``` -cat > kubernetes.default.svc.cluster.local < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. +> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. ### Provision a Network Load Balancer -> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
- Create the external load balancer network resources: ``` -gcloud compute http-health-checks create kubernetes \ - --description "Kubernetes Health Check" \ - --host "kubernetes.default.svc.cluster.local" \ - --request-path "/healthz" -``` +{ + KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ + --region $(gcloud config get-value compute/region) \ + --format 'value(address)') -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \ - --network kubernetes-the-hard-way \ - --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \ - --allow tcp -``` + gcloud compute http-health-checks create kubernetes \ + --description "Kubernetes Health Check" \ + --host "kubernetes.default.svc.cluster.local" \ + --request-path "/healthz" -``` -gcloud compute target-pools create kubernetes-target-pool \ - --http-health-check kubernetes -``` + gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \ + --network kubernetes-the-hard-way \ + --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \ + --allow tcp -``` -gcloud compute target-pools add-instances kubernetes-target-pool \ - --instances controller-0,controller-1,controller-2 -``` + gcloud compute target-pools create kubernetes-target-pool \ + --http-health-check kubernetes -``` -gcloud compute forwarding-rules create kubernetes-forwarding-rule \ - --address ${KUBERNETES_PUBLIC_ADDRESS} \ - --ports 6443 \ - --region $(gcloud config get-value compute/region) \ - --target-pool kubernetes-target-pool + gcloud compute target-pools add-instances kubernetes-target-pool \ + --instances controller-0,controller-1,controller-2 + + gcloud compute forwarding-rules create kubernetes-forwarding-rule \ + --address ${KUBERNETES_PUBLIC_ADDRESS} \ + --ports 6443 \ + --region $(gcloud config get-value compute/region) \ + --target-pool kubernetes-target-pool +} ``` ### Verification diff --git a/docs/09-bootstrapping-kubernetes-workers.md 
b/docs/09-bootstrapping-kubernetes-workers.md index fc0abfb..a3a50da 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -19,11 +19,10 @@ gcloud compute ssh worker-0 Install the OS dependencies: ``` -sudo apt-get update -``` - -``` -sudo apt-get -y install socat conntrack ipset +{ + sudo apt-get update + sudo apt-get -y install socat conntrack ipset +} ``` > The socat binary enables support for the `kubectl port-forward` command. @@ -57,27 +56,14 @@ sudo mkdir -p \ Install the worker binaries: ``` -chmod +x kubectl kube-proxy kubelet runc.amd64 runsc -``` - -``` -sudo mv runc.amd64 runc -``` - -``` -sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/ -``` - -``` -tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/ -``` - -``` -sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ -``` - -``` -sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C / +{ + chmod +x kubectl kube-proxy kubelet runc.amd64 runsc + sudo mv runc.amd64 runc + sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/ + sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/ + sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ + sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C / +} ``` ### Configure CNI Networking @@ -92,7 +78,7 @@ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ Create the `bridge` network configuration file: ``` -cat > 10-bridge.conf < 99-loopback.conf < 99-loopback.conf < Untrusted workloads will be run using the gVisor runtime. +> Untrusted workloads will be run using the gVisor (runsc) runtime. Create the `containerd.service` systemd unit file: ``` -cat > containerd.service < kubelet-config.yaml < kubelet.service < kube-proxy-config.yaml < kube-proxy.service < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`. 
@@ -317,18 +277,11 @@ sudo systemctl start containerd kubelet kube-proxy > The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. -Print the Kubernetes nodes: - -``` -gcloud compute ssh controller-0 \ - --command="kubectl get nodes \ - --kubeconfig /var/lib/kubernetes/kube-controller-manager.kubeconfig" -``` - List the registered Kubernetes nodes: ``` -kubectl get nodes +gcloud compute ssh controller-0 \ + --command "kubectl get nodes --kubeconfig admin.kubeconfig" ``` > output diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index ee78b30..e524c46 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -8,37 +8,29 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. 
-Retrieve the `kubernetes-the-hard-way` static IP address: - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') -``` - Generate a kubeconfig file suitable for authenticating as the `admin` user: ``` -kubectl config set-cluster kubernetes-the-hard-way \ - --certificate-authority=ca.pem \ - --embed-certs=true \ - --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 -``` +{ + KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ + --region $(gcloud config get-value compute/region) \ + --format 'value(address)') -``` -kubectl config set-credentials admin \ - --client-certificate=admin.pem \ - --client-key=admin-key.pem -``` + kubectl config set-cluster kubernetes-the-hard-way \ + --certificate-authority=ca.pem \ + --embed-certs=true \ + --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 -``` -kubectl config set-context kubernetes-the-hard-way \ - --cluster=kubernetes-the-hard-way \ - --user=admin -``` + kubectl config set-credentials admin \ + --client-certificate=admin.pem \ + --client-key=admin-key.pem -``` -kubectl config use-context kubernetes-the-hard-way + kubectl config set-context kubernetes-the-hard-way \ + --cluster=kubernetes-the-hard-way \ + --user=admin + + kubectl config use-context kubernetes-the-hard-way +} ``` ## Verification diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index c57bc6c..cf839a8 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -209,4 +209,118 @@ ETag: "5acb8e45-264" Accept-Ranges: bytes ``` +## Untrusted Workloads + +This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor). 
+ +Create the `untrusted` pod: + +``` +cat < output + +``` +I0514 06:48:48.154040 18401 x:0] *************************** +I0514 06:48:48.154263 18401 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 5a25ef793aaa302edc5407c34723287de36609e0fc189a6c0621c65bb10eea58] +I0514 06:48:48.154332 18401 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd +I0514 06:48:48.154380 18401 x:0] PID: 18401 +I0514 06:48:48.154431 18401 x:0] UID: 0, GID: 0 +I0514 06:48:48.154474 18401 x:0] Configuration: +I0514 06:48:48.154508 18401 x:0] RootDir: /run/containerd/runc/k8s.io +I0514 06:48:48.154585 18401 x:0] Platform: ptrace +I0514 06:48:48.154681 18401 x:0] FileAccess: proxy, overlay: false +I0514 06:48:48.154764 18401 x:0] Network: sandbox, logging: false +I0514 06:48:48.154844 18401 x:0] Strace: false, max size: 1024, syscalls: [] +I0514 06:48:48.155015 18401 x:0] *************************** +UID PID PPID C STIME TIME CMD +0 1 0 0 06:34 10ms app +I0514 06:48:48.156130 18401 x:0] Exiting with status: 0 +``` + Next: [Cleaning Up](14-cleanup.md) diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index b5bb9af..dc97a3a 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -17,22 +17,16 @@ gcloud -q compute instances delete \ Delete the external load balancer network resources: ``` -gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ - --region $(gcloud config get-value compute/region) -``` +{ + gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ + --region $(gcloud config get-value compute/region) -``` -gcloud -q compute target-pools delete kubernetes-target-pool -``` + gcloud -q compute target-pools delete kubernetes-target-pool -``` -gcloud -q compute http-health-checks delete kubernetes -``` + gcloud -q compute http-health-checks delete kubernetes -Delete the `kubernetes-the-hard-way` static IP address: - -``` -gcloud -q compute addresses delete kubernetes-the-hard-way + gcloud -q compute addresses delete 
kubernetes-the-hard-way +} ``` Delete the `kubernetes-the-hard-way` firewall rules: @@ -45,23 +39,17 @@ gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-health-check ``` -Delete the Pod network routes: - -``` -gcloud -q compute routes delete \ - kubernetes-route-10-200-0-0-24 \ - kubernetes-route-10-200-1-0-24 \ - kubernetes-route-10-200-2-0-24 -``` - -Delete the `kubernetes` subnet: - -``` -gcloud -q compute networks subnets delete kubernetes -``` - Delete the `kubernetes-the-hard-way` network VPC: ``` -gcloud -q compute networks delete kubernetes-the-hard-way +{ + gcloud -q compute routes delete \ + kubernetes-route-10-200-0-0-24 \ + kubernetes-route-10-200-1-0-24 \ + kubernetes-route-10-200-2-0-24 + + gcloud -q compute networks subnets delete kubernetes + + gcloud -q compute networks delete kubernetes-the-hard-way +} ```