Apply Markdown best practices (code block language specification, spacing between lines, spacing between characters, ...

pull/582/head
Nemo 2020-06-20 09:24:03 +02:00
parent 0d07e90828
commit acbb8958e6
15 changed files with 193 additions and 199 deletions

View File

@@ -1,3 +1,5 @@
# Contributing

This project is made possible by contributors like YOU! While all contributions are welcome, please be sure to follow these suggestions to help your PR get merged.

## License

@@ -15,4 +17,3 @@ Here are some examples of the review and justification process:
## Notes on minutiae

If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.

View File

@@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in
Verify the Google Cloud SDK version is 262.0.0 or higher:

```bash
gcloud version
```

@@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured.
If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this:

```bash
gcloud init
```

Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:

```bash
gcloud auth login
```
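
Not part of the lab itself, but a quick way to confirm the login took effect is to list the credentialed accounts:

```bash
gcloud auth list
```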
Next set a default compute region:

```bash
gcloud config set compute/region us-west1
```

Set a default compute zone:

```bash
gcloud config set compute/zone us-west1-c
```
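
To double-check that both defaults are in place, you can read them back (an optional sanity check, not required by the tutorial):

```bash
gcloud config get-value compute/region
gcloud config get-value compute/zone
```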

View File

@@ -2,7 +2,6 @@
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).

## Install CFSSL

The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.

@@ -11,38 +10,38 @@ Download and install `cfssl` and `cfssljson`:
### OS X

```bash
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson
```

```bash
chmod +x cfssl cfssljson
```

```bash
sudo mv cfssl cfssljson /usr/local/bin/
```

Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:

```bash
brew install cfssl
```
### Linux

```bash
wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
```

```bash
chmod +x cfssl cfssljson
```

```bash
sudo mv cfssl cfssljson /usr/local/bin/
```

@@ -50,22 +49,23 @@ sudo mv cfssl cfssljson /usr/local/bin/
Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed:

```bash
cfssl version
```

> output

```bash
Version: 1.3.4
Revision: dev
Runtime: go1.13
```

```bash
cfssljson --version
```

```bash
Version: 1.3.4
Revision: dev
Runtime: go1.13
```
@@ -75,45 +75,45 @@ Runtime: go1.13
The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:

### Install on OS X

```bash
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl
```

```bash
chmod +x kubectl
```

```bash
sudo mv kubectl /usr/local/bin/
```

### Install on Linux

```bash
wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
```

```bash
chmod +x kubectl
```

```bash
sudo mv kubectl /usr/local/bin/
```
### Installation Verification
Verify `kubectl` version 1.15.3 or higher is installed:

```bash
kubectl version --client
```

> output

```bash
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
```
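
For a more compact check, `kubectl` of this era also supports a short form; this is a convenience rather than part of the lab:

```bash
kubectl version --client --short
```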

View File

@@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com
Create the `kubernetes-the-hard-way` custom VPC network:

```bash
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
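
If you want to confirm the network exists before moving on (an optional check, not one of the original steps):

```bash
gcloud compute networks describe kubernetes-the-hard-way
```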
@@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets)
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:

```bash
gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24
@@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \
Create a firewall rule that allows internal communication across all protocols:

```bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
@@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
Create a firewall rule that allows external SSH, ICMP, and HTTPS:

```bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
@@ -56,13 +56,13 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
List the firewall rules in the `kubernetes-the-hard-way` VPC network:

```bash
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```

> output

```bash
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp
@@ -72,20 +72,20 @@ kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS  1000
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:

```bash
gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)
```

Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:

```bash
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```

> output

```bash
NAME                     REGION    ADDRESS        STATUS
kubernetes-the-hard-way  us-west1  XX.XXX.XXX.XX  RESERVED
```
@@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http
Create three compute instances which will host the Kubernetes control plane:

```bash
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
@@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three compute instances which will host the Kubernetes worker nodes:

```bash
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
@@ -143,13 +143,13 @@ done
List the compute instances in your default compute zone:

```bash
gcloud compute instances list
```

> output

```bash
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  us-west1-c  n1-standard-1               10.240.0.10  XX.XXX.XXX.XXX  RUNNING
controller-1  us-west1-c  n1-standard-1               10.240.0.11  XX.XXX.X.XX     RUNNING
@@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti
Test SSH access to the `controller-0` compute instance:

```bash
gcloud compute ssh controller-0
```

If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:

```bash
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
@@ -183,7 +183,7 @@ Enter same passphrase again:
At this point the generated SSH keys will be uploaded and stored in your project:

```bash
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
@@ -207,7 +207,7 @@ Waiting for SSH key to propagate.
After the SSH keys have been updated you'll be logged into the `controller-0` instance:

```bash
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64)
...
@@ -216,12 +216,13 @@ Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX
Type `exit` at the prompt to exit the `controller-0` compute instance:

```bash
$USER@controller-0:~$ exit
```

> output

```bash
logout
Connection to XX.XXX.XXX.XXX closed
```

View File

@@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g
Generate the CA configuration file, certificate, and private key:

```bash
{
cat > ca-config.json <<EOF
@@ -53,7 +53,7 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Results:

```bash
ca-key.pem
ca.pem
```
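
If you'd like to inspect the freshly generated CA certificate (an optional check, assuming `openssl` is available on your machine):

```bash
openssl x509 -in ca.pem -text -noout
```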
@@ -66,7 +66,7 @@ In this section you will generate client and server certificates for each Kubern
Generate the `admin` client certificate and private key:

```bash
{
cat > admin-csr.json <<EOF
@@ -100,7 +100,7 @@ cfssl gencert \
Results:

```bash
admin-key.pem
admin.pem
```

@@ -111,7 +111,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
Generate a certificate and private key for each Kubernetes worker node:

```bash
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
@@ -150,7 +150,7 @@ done
Results:

```bash
worker-0-key.pem
worker-0.pem
worker-1-key.pem
@@ -163,7 +163,7 @@ worker-2.pem
Generate the `kube-controller-manager` client certificate and private key:

```bash
{
cat > kube-controller-manager-csr.json <<EOF
@@ -197,17 +197,16 @@ cfssl gencert \
Results:

```bash
kube-controller-manager-key.pem
kube-controller-manager.pem
```

### The Kube Proxy Client Certificate

Generate the `kube-proxy` client certificate and private key:

```bash
{
cat > kube-proxy-csr.json <<EOF
@@ -241,7 +240,7 @@ cfssl gencert \
Results:

```bash
kube-proxy-key.pem
kube-proxy.pem
```

@@ -250,7 +249,7 @@ kube-proxy.pem
Generate the `kube-scheduler` client certificate and private key:

```bash
{
cat > kube-scheduler-csr.json <<EOF
@@ -284,19 +283,18 @@ cfssl gencert \
Results:

```bash
kube-scheduler-key.pem
kube-scheduler.pem
```

### The Kubernetes API Server Certificate

The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.

Generate the Kubernetes API Server certificate and private key:

```bash
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
@@ -339,7 +337,7 @@ cfssl gencert \
Results:

```bash
kubernetes-key.pem
kubernetes.pem
```
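
To confirm the static IP actually landed in the certificate's subject alternative names (an optional spot check, again assuming `openssl` is installed):

```bash
openssl x509 -in kubernetes.pem -text -noout | grep -A 1 "Subject Alternative Name"
```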
@@ -350,7 +348,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv
Generate the `service-account` certificate and private key:

```bash
{
cat > service-account-csr.json <<EOF
@@ -384,17 +382,16 @@ cfssl gencert \
Results:

```bash
service-account-key.pem
service-account.pem
```

## Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:

```bash
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```

@@ -402,7 +399,7 @@ done
Copy the appropriate certificates and private keys to each controller instance:

```bash
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
```

View File

@@ -12,7 +12,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Retrieve the `kubernetes-the-hard-way` static IP address:

```bash
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
```

@@ -26,7 +26,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
Generate a kubeconfig file for each worker node:

```bash
for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
@@ -51,7 +51,7 @@ done
Results:

```bash
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
@@ -61,7 +61,7 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
@@ -86,7 +86,7 @@ Generate a kubeconfig file for the `kube-proxy` service:
Results:

```bash
kube-proxy.kubeconfig
```
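
Although the lab doesn't ask for it, you can sanity-check any generated kubeconfig before distributing it, for example:

```bash
kubectl config view --kubeconfig kube-proxy.kubeconfig
```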
@@ -94,7 +94,7 @@ kube-proxy.kubeconfig
Generate a kubeconfig file for the `kube-controller-manager` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
@@ -119,16 +119,15 @@ Generate a kubeconfig file for the `kube-controller-manager` service:
Results:

```bash
kube-controller-manager.kubeconfig
```

### The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the `kube-scheduler` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
@@ -153,7 +152,7 @@ Generate a kubeconfig file for the `kube-scheduler` service:
Results:

```bash
kube-scheduler.kubeconfig
```

@@ -161,7 +160,7 @@ kube-scheduler.kubeconfig
Generate a kubeconfig file for the `admin` user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
@@ -186,18 +185,15 @@ Generate a kubeconfig file for the `admin` user:
Results:

```bash
admin.kubeconfig
```

## Distribute the Kubernetes Configuration Files

Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:

```bash
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
@@ -205,7 +201,7 @@ done
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:

```bash
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```

View File

@@ -8,7 +8,7 @@ In this lab you will generate an encryption key and an [encryption config](https
Generate an encryption key:

```bash
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
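
A 32-byte key base64-encodes to 44 characters; if you want to be sure the key decodes back to exactly 32 bytes, a quick check (not part of the lab, GNU coreutils syntax) is:

```bash
echo "${ENCRYPTION_KEY}" | base64 --decode | wc -c
```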
@@ -16,7 +16,7 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
Create the `encryption-config.yaml` encryption config file:

```bash
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
@@ -34,7 +34,7 @@ EOF
Copy the `encryption-config.yaml` encryption config file to each controller instance:

```bash
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done
```

View File

@@ -6,7 +6,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:

```bash
gcloud compute ssh controller-0
```
@@ -20,14 +20,14 @@ gcloud compute ssh controller-0
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:

```bash
wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
```

Extract and install the `etcd` server and the `etcdctl` command line utility:

```bash
{
  tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
  sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
@@ -36,7 +36,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
### Configure the etcd Server

```bash
{
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
@@ -45,20 +45,20 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:

```bash
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

```bash
ETCD_NAME=$(hostname -s)
```
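
Before templating the unit file, it can help to confirm both values resolved; a quick optional check:

```bash
echo "${ETCD_NAME}: ${INTERNAL_IP}"
```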
Create the `etcd.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
@@ -94,7 +94,7 @@ EOF
### Start the etcd Server

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable etcd
@@ -108,7 +108,7 @@ EOF
List the etcd cluster members:

```bash
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
@@ -118,7 +118,7 @@ sudo ETCDCTL_API=3 etcdctl member list \
> output

```bash
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
```

View File

@@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:

```bash
gcloud compute ssh controller-0
```
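
If you'd rather run the following steps on all three controllers at once, one convenient (entirely optional) approach is tmux with synchronized panes, one pane per controller:

```bash
# inside a tmux session with one pane SSH'd into each controller:
tmux set-window-option synchronize-panes on
```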
@@ -18,7 +18,7 @@ gcloud compute ssh controller-0
Create the Kubernetes configuration directory:

```bash
sudo mkdir -p /etc/kubernetes/config
```

@@ -26,7 +26,7 @@ sudo mkdir -p /etc/kubernetes/config
Download the official Kubernetes release binaries:

```bash
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
@@ -36,7 +36,7 @@ wget -q --show-progress --https-only --timestamping \
Install the Kubernetes binaries:

```bash
{
  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
@@ -45,7 +45,7 @@ Install the Kubernetes binaries:
### Configure the Kubernetes API Server

```bash
{
  sudo mkdir -p /var/lib/kubernetes/
@@ -57,14 +57,14 @@ Install the Kubernetes binaries:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:

```bash
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```

Create the `kube-apiserver.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
@@ -112,13 +112,13 @@ EOF
Move the `kube-controller-manager` kubeconfig into place:

```bash
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
@@ -150,13 +150,13 @@ EOF
Move the `kube-scheduler` kubeconfig into place:

```bash
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.yaml` configuration file:

```bash
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
@@ -169,7 +169,7 @@ EOF
Create the `kube-scheduler.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
@@ -189,7 +189,7 @@ EOF
### Start the Controller Services

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
@@ -207,12 +207,12 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
Install a basic web server to handle HTTP health checks:

```bash
sudo apt-get update
sudo apt-get install -y nginx
```

```bash
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen 80;
@@ -226,7 +226,7 @@ server {
EOF
```

```bash
{
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
@@ -235,21 +235,21 @@ EOF
}
```

```bash
sudo systemctl restart nginx
```

```bash
sudo systemctl enable nginx
```

### Verification

```bash
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```

```bash
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
@@ -260,11 +260,11 @@ etcd-1               Healthy   {"health": "true"}
Test the nginx HTTP health check proxy:

```bash
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```

```bash
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sat, 14 Sep 2019 18:34:11 GMT
@@ -286,13 +286,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.

```bash
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

```bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
@@ -320,7 +320,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:

```bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@@ -344,12 +344,11 @@ In this section you will provision an external load balancer to front the Kubern
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.

### Provision a Network Load Balancer

Create the external load balancer network resources:

```bash
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
@@ -379,13 +378,13 @@ Create the external load balancer network resources:
}
```
### LB Verification

> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.

Retrieve the `kubernetes-the-hard-way` static IP address:

```bash
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
@@ -393,13 +392,13 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
Make an HTTP request for the Kubernetes version info:

```bash
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```

> output

```bash
{
  "major": "1",
  "minor": "15",

View File

@@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:

```bash
gcloud compute ssh worker-0
```
@@ -18,7 +18,7 @@ gcloud compute ssh worker-0
Install the OS dependencies:

```bash
{
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
@@ -33,13 +33,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm
Verify if swap is enabled:

```bash
sudo swapon --show
```
If the output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:

```bash
sudo swapoff -a
```
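
To keep swap disabled across reboots, one common approach (distro-dependent, not part of the original lab) is to comment out any swap entries in `/etc/fstab`:

```bash
# comment out every fstab line that mounts a swap device (GNU sed)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```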
@@ -47,7 +47,7 @@ sudo swapoff -a
### Download and Install Worker Binaries

```bash
wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
@@ -60,7 +60,7 @@ wget -q --show-progress --https-only --timestamping \
Create the installation directories:

```bash
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
@@ -72,7 +72,7 @@ sudo mkdir -p \
Install the worker binaries:

```bash
{
  mkdir containerd
  tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
@@ -89,14 +89,14 @@ Install the worker binaries:
Retrieve the Pod CIDR range for the current compute instance:

```bash
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```

Create the `bridge` network configuration file:

```bash
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
@@ -118,7 +118,7 @@ EOF
Create the `loopback` network configuration file:

```bash
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
@@ -132,11 +132,11 @@ EOF
Create the `containerd` configuration file:

```bash
sudo mkdir -p /etc/containerd/
```

```bash
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
@@ -150,7 +150,7 @@ EOF
Create the `containerd.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
@@ -176,7 +176,7 @@ EOF
### Configure the Kubelet

```bash
{
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
@@ -186,7 +186,7 @@ EOF
Create the `kubelet-config.yaml` configuration file:

```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
@@ -214,7 +214,7 @@ EOF
Create the `kubelet.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
@@ -242,13 +242,13 @@ EOF
### Configure the Kubernetes Proxy

```bash
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

```bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
@@ -261,7 +261,7 @@ EOF
Create the `kube-proxy.service` systemd unit file:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
@@ -280,7 +280,7 @@ EOF
### Start the Worker Services

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
@@ -296,14 +296,14 @@ EOF
List the registered Kubernetes nodes:

```bash
gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```

> output

```bash
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   15s   v1.15.3
worker-1   Ready    <none>   15s   v1.15.3

View File

@@ -10,7 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Generate a kubeconfig file suitable for authenticating as the `admin` user:

```bash
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
@@ -37,13 +37,13 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
Check the health of the remote Kubernetes cluster:

```bash
kubectl get componentstatuses
```

> output

```bash
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
@@ -54,13 +54,13 @@ etcd-0               Healthy   {"health":"true"}
List the nodes in the remote Kubernetes cluster:

```bash
kubectl get nodes
```

> output

```bash
NAME       STATUS   ROLES    AGE    VERSION
worker-0   Ready    <none>   2m9s   v1.15.3
worker-1   Ready    <none>   2m9s   v1.15.3

View File

@@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance:

```bash
for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
@@ -21,7 +21,7 @@ done
> output

```bash
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
@@ -31,7 +31,7 @@ done
Create network routes for each worker instance:

```bash
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
@@ -42,13 +42,13 @@ done
List the routes in the `kubernetes-the-hard-way` VPC network:

```bash
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```

> output

```bash
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-081879136902de56  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
default-route-55199a5aa126d7aa  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000

View File

@@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
Deploy the `coredns` cluster add-on:

```bash
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
```

> output

```bash
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
@@ -23,13 +23,13 @@ service/kube-dns created
List the pods created by the `kube-dns` deployment:

```bash
kubectl get pods -l k8s-app=kube-dns -n kube-system
```

> output

```bash
NAME                       READY   STATUS    RESTARTS   AGE
coredns-699f8ddd77-94qv9   1/1     Running   0          20s
coredns-699f8ddd77-gtcgb   1/1     Running   0          20s
@@ -39,38 +39,38 @@ coredns-699f8ddd77-gtcgb   1/1     Running   0          20s
Create a `busybox` deployment:

```bash
kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
```

List the pod created by the `busybox` deployment:

```bash
kubectl get pods -l run=busybox
```

> output

```bash
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3s
```

Retrieve the full name of the `busybox` pod:

```bash
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```

Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:

```bash
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```

> output

```bash
Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

View File

@@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
Create a generic secret:

```bash
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
```
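
Optionally, confirm the secret object exists before poking at etcd (not one of the original steps):

```bash
kubectl get secret kubernetes-the-hard-way
```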
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```bash
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
@@ -27,7 +27,7 @@ gcloud compute ssh controller-0 \
> output

```bash
00000000  2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
@@ -53,19 +53,19 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:

```bash
kubectl create deployment nginx --image=nginx
```

List the pod created by the `nginx` deployment:

```bash
kubectl get pods -l app=nginx
```

> output

```bash
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-vt5rn   1/1     Running   0          10s
```

@@ -76,32 +76,32 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:

```bash
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```

Forward port `8080` on your local machine to port `80` of the `nginx` pod:

```bash
kubectl port-forward $POD_NAME 8080:80
```

> output

```bash
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```

In a new terminal make an HTTP request using the forwarding address:

```bash
curl --head http://127.0.0.1:8080
```

> output

```bash
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:10:11 GMT
@@ -115,7 +115,7 @@ Accept-Ranges: bytes
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:

```bash
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
@@ -128,13 +128,13 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs:

```bash
kubectl logs $POD_NAME
```

> output

```bash
127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
```

@@ -144,13 +144,13 @@ In this section you will verify the ability to [execute commands in a container]
Print the nginx version by executing the `nginx -v` command in the `nginx` container:

```bash
kubectl exec -ti $POD_NAME -- nginx -v
```

> output

```bash
nginx version: nginx/1.17.3
```

@@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:

```bash
kubectl expose deployment nginx --port 80 --type NodePort
```

@@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the `nginx` service:

```bash
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```

Create a firewall rule that allows remote access to the `nginx` node port:

```bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way
@@ -183,20 +183,20 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service
Retrieve the external IP address of a worker instance:

```bash
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```

Make an HTTP request using the external IP address and the `nginx` node port:

```bash
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```

> output

```bash
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:12:35 GMT

View File

@@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances:

```bash
gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2 \
@@ -17,7 +17,7 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:

```bash
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)
@@ -32,7 +32,7 @@ Delete the external load balancer network resources:
Delete the `kubernetes-the-hard-way` firewall rules:

```bash
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
@@ -42,7 +42,7 @@ gcloud -q compute firewall-rules delete \
Delete the `kubernetes-the-hard-way` network VPC:

```bash
{
  gcloud -q compute routes delete \
    kubernetes-route-10-200-0-0-24 \