Specify commands ink

parent bf2850974e
commit 8e949cf8ee
@@ -16,9 +16,9 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in
 
 Verify the Google Cloud SDK version is 218.0.0 or higher:
 
-```
+~~~sh
 gcloud version
-```
+~~~
 
 ### Set a Default Compute Region and Zone
 
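A note on the version check in the hunk above: the "218.0.0 or higher" requirement can be scripted. This helper is not part of the tutorial; it is a sketch that relies on GNU `sort -V` for version ordering.

```sh
# Hypothetical helper (not part of the tutorial): true when version $1 >= $2,
# compared in version order via GNU sort -V.
meets_minimum() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

meets_minimum "218.0.0" "218.0.0" && echo "ok"        # equal versions pass
meets_minimum "217.9.0" "218.0.0" || echo "too old"   # older versions fail
```

In practice the first argument would come from parsing `gcloud version` output.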
@@ -26,21 +26,21 @@ This tutorial assumes a default compute region and zone have been configured.
 
 If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this:
 
-```
+~~~sh
 gcloud init
-```
+~~~
 
 Otherwise set a default compute region:
 
-```
+~~~sh
 gcloud config set compute/region us-west1
-```
+~~~
 
 Set a default compute zone:
 
-```
+~~~sh
 gcloud config set compute/zone us-west1-c
-```
+~~~
 
 > Use the `gcloud compute zones list` command to view additional regions and zones.
 
@@ -11,60 +11,60 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/
 
 ### OS X
 
-```
+~~~sh
 curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
 curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
-```
+~~~
 
-```
+~~~sh
 chmod +x cfssl cfssljson
-```
+~~~
 
-```
+~~~sh
 sudo mv cfssl cfssljson /usr/local/bin/
-```
+~~~
 
 Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:
 
-```
+~~~sh
 brew install cfssl
-```
+~~~
 
 ### Linux
 
-```
+~~~sh
 wget -q --show-progress --https-only --timestamping \
   https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
   https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
-```
+~~~
 
-```
+~~~sh
 chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
-```
+~~~
 
-```
+~~~sh
 sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
-```
+~~~
 
-```
+~~~sh
 sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
-```
+~~~
 
 ### Verification
 
 Verify `cfssl` version 1.2.0 or higher is installed:
 
-```
+~~~sh
 cfssl version
-```
+~~~
 
 > output
 
-```
+~~~
 Version: 1.2.0
 Revision: dev
 Runtime: go1.6
-```
+~~~
 
 > The cfssljson command line utility does not provide a way to print its version.
 
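As an aside to the verification step above: the version number can be extracted from `cfssl version` output for scripted checks. This is a sketch, not part of the tutorial, exercised here against the sample output shown in the hunk.

```sh
# Sketch (not part of the tutorial): pull the version number out of the
# sample `cfssl version` output shown above.
sample='Version: 1.2.0
Revision: dev
Runtime: go1.6'

# Only the "Version:" line matches; awk prints its second field.
version=$(printf '%s\n' "$sample" | awk '/^Version:/ {print $2}')
echo "$version"   # 1.2.0
```

In a real check, `cfssl version` would replace the `$sample` variable.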
@@ -74,44 +74,44 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
 
 ### OS X
 
-```
+~~~sh
 curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
-```
+~~~
 
-```
+~~~sh
 chmod +x kubectl
-```
+~~~
 
-```
+~~~sh
 sudo mv kubectl /usr/local/bin/
-```
+~~~
 
 ### Linux
 
-```
+~~~sh
 wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
-```
+~~~
 
-```
+~~~sh
 chmod +x kubectl
-```
+~~~
 
-```
+~~~sh
 sudo mv kubectl /usr/local/bin/
-```
+~~~
 
 ### Verification
 
 Verify `kubectl` version 1.12.0 or higher is installed:
 
-```
+~~~sh
 kubectl version --client
-```
+~~~
 
 > output
 
-```
+~~~
 Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
-```
+~~~
 
 Next: [Provisioning Compute Resources](03-compute-resources.md)
 
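A note on the `kubectl version --client` output above: the `GitVersion` field can be extracted with `sed` when a script needs the bare version string. This sketch is not part of the tutorial; it runs against the sample line shown in the hunk.

```sh
# Sketch (not part of the tutorial): extract GitVersion from the sample
# Client Version line shown above.
line='Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}'

# Capture whatever sits between GitVersion:" and the closing quote.
git_version=$(printf '%s\n' "$line" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p')
echo "$git_version"   # v1.12.0
```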
@@ -16,19 +16,19 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com
 
 Create the `kubernetes-the-hard-way` custom VPC network:
 
-```
+~~~sh
 gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
-```
+~~~
 
 A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
 
 Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
 
-```
+~~~sh
 gcloud compute networks subnets create kubernetes \
   --network kubernetes-the-hard-way \
   --range 10.240.0.0/24
-```
+~~~
 
 > The `10.240.0.0/24` IP address range can host up to 254 compute instances.
 
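The arithmetic behind the note above: a /24 leaves 2^(32-24) addresses, minus the network and broadcast addresses. A quick shell check:

```sh
# Why a /24 can host up to 254 instances: 2^(32-prefix) addresses,
# minus the reserved network and broadcast addresses.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"   # 254
```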
@@ -36,59 +36,59 @@ gcloud compute networks subnets create kubernetes \
 
 Create a firewall rule that allows internal communication across all protocols:
 
-```
+~~~sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
   --allow tcp,udp,icmp \
   --network kubernetes-the-hard-way \
   --source-ranges 10.240.0.0/24,10.200.0.0/16
-```
+~~~
 
 Create a firewall rule that allows external SSH, ICMP, and HTTPS:
 
-```
+~~~sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
   --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
   --source-ranges 0.0.0.0/0
-```
+~~~
 
 > An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
 
 List the firewall rules in the `kubernetes-the-hard-way` VPC network:
 
-```
+~~~sh
 gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY
 kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp
 kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp
-```
+~~~
 
 ### Kubernetes Public IP Address
 
 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
 
-```
+~~~sh
 gcloud compute addresses create kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region)
-```
+~~~
 
 Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
 
-```
+~~~sh
 gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                     REGION    ADDRESS        STATUS
 kubernetes-the-hard-way  us-west1  XX.XXX.XXX.XX  RESERVED
-```
+~~~
 
 ## Compute Instances
 
@@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http
 
 Create three compute instances which will host the Kubernetes control plane:
 
-```
+~~~sh
 for i in 0 1 2; do
   gcloud compute instances create controller-${i} \
     --async \
@@ -112,7 +112,7 @@ for i in 0 1 2; do
     --subnet kubernetes \
     --tags kubernetes-the-hard-way,controller
 done
-```
+~~~
 
 ### Kubernetes Workers
 
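For the worker section that follows: the hunk context below mentions that each worker needs a pod subnet allocation from the cluster CIDR. An assumption here, consistent with the `10.200.0.0/16` range in the internal firewall rule earlier, is that the cluster CIDR is carved into one /24 per worker, indexed by instance number:

```sh
# Assumption (not stated verbatim in this diff): worker i gets the pod
# subnet 10.200.i.0/24 out of the 10.200.0.0/16 cluster CIDR.
pod_cidr() { echo "10.200.${1}.0/24"; }

for i in 0 1 2; do
  pod_cidr "$i"
done
```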
@@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
 
 Create three compute instances which will host the Kubernetes worker nodes:
 
-```
+~~~sh
 for i in 0 1 2; do
   gcloud compute instances create worker-${i} \
     --async \
@@ -137,19 +137,19 @@ for i in 0 1 2; do
     --subnet kubernetes \
     --tags kubernetes-the-hard-way,worker
 done
-```
+~~~
 
 ### Verification
 
 List the compute instances in your default compute zone:
 
-```
+~~~sh
 gcloud compute instances list
-```
+~~~
 
 > output
 
-```
+~~~
 NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
 controller-0  us-west1-c  n1-standard-1               10.240.0.10  XX.XXX.XXX.XXX  RUNNING
 controller-1  us-west1-c  n1-standard-1               10.240.0.11  XX.XXX.X.XX     RUNNING
@@ -157,7 +157,7 @@ controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX
 worker-0      us-west1-c  n1-standard-1               10.240.0.20  XXX.XXX.XXX.XX  RUNNING
 worker-1      us-west1-c  n1-standard-1               10.240.0.21  XX.XXX.XX.XXX   RUNNING
 worker-2      us-west1-c  n1-standard-1               10.240.0.22  XXX.XXX.XX.XX   RUNNING
-```
+~~~
 
 ## Configuring SSH Access
 
@@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti
 
 Test SSH access to the `controller-0` compute instance:
 
-```
+~~~sh
 gcloud compute ssh controller-0
-```
+~~~
 
 If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
 
-```
+~~~
 WARNING: The public SSH key file for gcloud does not exist.
 WARNING: The private SSH key file for gcloud does not exist.
 WARNING: You do not have an SSH key for gcloud.
@@ -179,11 +179,11 @@ WARNING: SSH keygen will be executed to generate a key.
 Generating public/private rsa key pair.
 Enter passphrase (empty for no passphrase):
 Enter same passphrase again:
-```
+~~~
 
 At this point the generated SSH keys will be uploaded and stored in your project:
 
-```
+~~~
 Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
 Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
 The key fingerprint is:
@@ -203,28 +203,29 @@ The key's randomart image is:
 Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
 Updating project ssh metadata...done.
 Waiting for SSH key to propagate.
-```
+~~~
 
 After the SSH keys have been updated, you'll be logged into the `controller-0` instance:
 
-```
+~~~
 Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64)
 
 ...
 
 Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
-```
+~~~
 
 Type `exit` at the prompt to exit the `controller-0` compute instance:
 
-```
+~~~sh
 $USER@controller-0:~$ exit
-```
+~~~
 
 > output
 
-```
+~~~
 logout
 Connection to XX.XXX.XXX.XXX closed
-```
+~~~
 
 Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
 
@@ -8,9 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g
 
 Generate the CA configuration file, certificate, and private key:
 
-```
-{
-
+~~~sh
 cat > ca-config.json <<EOF
 {
   "signing": {
@@ -47,16 +45,14 @@ cat > ca-csr.json <<EOF
 EOF
 
 cfssl gencert -initca ca-csr.json | cfssljson -bare ca
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 ca-key.pem
 ca.pem
-```
+~~~
 
 ## Client and Server Certificates
 
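A note on the Results block above: `cfssljson -bare NAME` derives its output file names from its argument, writing `NAME.pem` and `NAME-key.pem` (it also writes a `NAME.csr` signing request, which the tutorial's Results lists omit). A tiny sketch of that naming convention:

```sh
# Sketch of the cfssljson -bare naming convention behind the Results above.
# (cfssljson additionally writes NAME.csr, not shown in the Results lists.)
bare_outputs() { echo "${1}-key.pem ${1}.pem"; }

bare_outputs ca   # ca-key.pem ca.pem
```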
@@ -66,9 +62,7 @@ In this section you will generate client and server certificates for each Kubern
 
 Generate the `admin` client certificate and private key:
 
-```
-{
-
+~~~sh
 cat > admin-csr.json <<EOF
 {
   "CN": "admin",
@@ -94,16 +88,14 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare admin
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 admin-key.pem
 admin.pem
-```
+~~~
 
 ### The Kubelet Client Certificates
 
@@ -111,7 +103,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
 
 Generate a certificate and private key for each Kubernetes worker node:
 
-```
+~~~sh
 for instance in worker-0 worker-1 worker-2; do
 cat > ${instance}-csr.json <<EOF
 {
@@ -146,26 +138,24 @@ cfssl gencert \
   -profile=kubernetes \
   ${instance}-csr.json | cfssljson -bare ${instance}
 done
-```
+~~~
 
 Results:
 
-```
+~~~
 worker-0-key.pem
 worker-0.pem
 worker-1-key.pem
 worker-1.pem
 worker-2-key.pem
 worker-2.pem
-```
+~~~
 
 ### The Controller Manager Client Certificate
 
 Generate the `kube-controller-manager` client certificate and private key:
 
-```
-{
-
+~~~sh
 cat > kube-controller-manager-csr.json <<EOF
 {
   "CN": "system:kube-controller-manager",
@@ -191,25 +181,21 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 kube-controller-manager-key.pem
 kube-controller-manager.pem
-```
+~~~
 
 
 ### The Kube Proxy Client Certificate
 
 Generate the `kube-proxy` client certificate and private key:
 
-```
-{
-
+~~~sh
 cat > kube-proxy-csr.json <<EOF
 {
   "CN": "system:kube-proxy",
@@ -235,24 +221,20 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare kube-proxy
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 kube-proxy-key.pem
 kube-proxy.pem
-```
+~~~
 
 ### The Scheduler Client Certificate
 
 Generate the `kube-scheduler` client certificate and private key:
 
-```
-{
-
+~~~sh
 cat > kube-scheduler-csr.json <<EOF
 {
   "CN": "system:kube-scheduler",
@@ -278,16 +260,14 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-scheduler-csr.json | cfssljson -bare kube-scheduler
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 kube-scheduler-key.pem
 kube-scheduler.pem
-```
+~~~
 
 
 ### The Kubernetes API Server Certificate
@@ -296,9 +276,7 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of
 
 Generate the Kubernetes API Server certificate and private key:
 
-```
-{
-
+~~~sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -329,16 +307,14 @@ cfssl gencert \
   -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
   -profile=kubernetes \
   kubernetes-csr.json | cfssljson -bare kubernetes
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 kubernetes-key.pem
 kubernetes.pem
-```
+~~~
 
 ## The Service Account Key Pair
 
@@ -346,9 +322,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv
 
 Generate the `service-account` certificate and private key:
 
-```
-{
-
+~~~sh
 cat > service-account-csr.json <<EOF
 {
   "CN": "service-accounts",
@@ -374,36 +348,33 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   service-account-csr.json | cfssljson -bare service-account
-
-}
-```
+~~~
 
 Results:
 
-```
+~~~
 service-account-key.pem
 service-account.pem
-```
+~~~
 
 
 ## Distribute the Client and Server Certificates
 
 Copy the appropriate certificates and private keys to each worker instance:
 
-```
+~~~sh
 for instance in worker-0 worker-1 worker-2; do
   gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
 done
-```
+~~~
 
 Copy the appropriate certificates and private keys to each controller instance:
 
-```
+~~~sh
 for instance in controller-0 controller-1 controller-2; do
   gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
     service-account-key.pem service-account.pem ${instance}:~/
 done
-```
+~~~
 
 > The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
 
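A summary of the two `scp` loops in the hunk above, as a sketch: workers receive the CA certificate plus their own key pair, while controllers receive the CA key pair, the API server key pair, and the service-account key pair.

```sh
# Sketch mirroring the two scp loops above: which files each node type receives.
files_for() {
  case "$1" in
    worker-*)     echo "ca.pem ${1}-key.pem ${1}.pem" ;;
    controller-*) echo "ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem" ;;
  esac
}

files_for worker-0   # ca.pem worker-0-key.pem worker-0.pem
```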
@@ -12,11 +12,11 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 
 Retrieve the `kubernetes-the-hard-way` static IP address:
 
-```
+~~~sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
-```
+~~~
 
 ### The kubelet Kubernetes Configuration File
 
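A note on the retrieval above: the captured address is later composed into the API server URL embedded in each kubeconfig, on port 6443 to match the external firewall rule from the compute-resources lab. A sketch with a placeholder address:

```sh
# Sketch: how the retrieved address is composed into the API server URL
# used by the kubeconfigs. The IP below is a documentation placeholder,
# not a value from the tutorial.
KUBERNETES_PUBLIC_ADDRESS="203.0.113.10"
server="https://${KUBERNETES_PUBLIC_ADDRESS}:6443"
echo "$server"
```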
@@ -24,7 +24,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
 
 Generate a kubeconfig file for each worker node:
 
-```
+~~~sh
 for instance in worker-0 worker-1 worker-2; do
   kubectl config set-cluster kubernetes-the-hard-way \
     --certificate-authority=ca.pem \
@@ -45,22 +45,21 @@ for instance in worker-0 worker-1 worker-2; do
 
   kubectl config use-context default --kubeconfig=${instance}.kubeconfig
 done
-```
+~~~
 
 Results:
 
-```
+~~~
 worker-0.kubeconfig
 worker-1.kubeconfig
 worker-2.kubeconfig
-```
+~~~
 
 ### The kube-proxy Kubernetes Configuration File
 
 Generate a kubeconfig file for the `kube-proxy` service:
 
-```
-{
+~~~sh
   kubectl config set-cluster kubernetes-the-hard-way \
     --certificate-authority=ca.pem \
     --embed-certs=true \
|
@ -79,21 +78,19 @@ Generate a kubeconfig file for the `kube-proxy` service:
|
||||||
--kubeconfig=kube-proxy.kubeconfig
|
--kubeconfig=kube-proxy.kubeconfig
|
||||||
|
|
||||||
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
|
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
Results:
|
Results:
|
||||||
|
|
||||||
```
|
~~~
|
||||||
kube-proxy.kubeconfig
|
kube-proxy.kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### The kube-controller-manager Kubernetes Configuration File
|
### The kube-controller-manager Kubernetes Configuration File
|
||||||
|
|
||||||
Generate a kubeconfig file for the `kube-controller-manager` service:
|
Generate a kubeconfig file for the `kube-controller-manager` service:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
--certificate-authority=ca.pem \
|
--certificate-authority=ca.pem \
|
||||||
--embed-certs=true \
|
--embed-certs=true \
|
||||||
|
@@ -112,22 +109,21 @@ Generate a kubeconfig file for the `kube-controller-manager` service:
|
||||||
--kubeconfig=kube-controller-manager.kubeconfig
|
--kubeconfig=kube-controller-manager.kubeconfig
|
||||||
|
|
||||||
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
|
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
Results:
|
Results:
|
||||||
|
|
||||||
```
|
~~~
|
||||||
kube-controller-manager.kubeconfig
|
kube-controller-manager.kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
|
|
||||||
### The kube-scheduler Kubernetes Configuration File
|
### The kube-scheduler Kubernetes Configuration File
|
||||||
|
|
||||||
Generate a kubeconfig file for the `kube-scheduler` service:
|
Generate a kubeconfig file for the `kube-scheduler` service:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
--certificate-authority=ca.pem \
|
--certificate-authority=ca.pem \
|
||||||
--embed-certs=true \
|
--embed-certs=true \
|
||||||
|
@@ -146,21 +142,19 @@ Generate a kubeconfig file for the `kube-scheduler` service:
|
||||||
--kubeconfig=kube-scheduler.kubeconfig
|
--kubeconfig=kube-scheduler.kubeconfig
|
||||||
|
|
||||||
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
|
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
Results:
|
Results:
|
||||||
|
|
||||||
```
|
~~~
|
||||||
kube-scheduler.kubeconfig
|
kube-scheduler.kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### The admin Kubernetes Configuration File
|
### The admin Kubernetes Configuration File
|
||||||
|
|
||||||
Generate a kubeconfig file for the `admin` user:
|
Generate a kubeconfig file for the `admin` user:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
--certificate-authority=ca.pem \
|
--certificate-authority=ca.pem \
|
||||||
--embed-certs=true \
|
--embed-certs=true \
|
||||||
|
@@ -179,14 +173,13 @@ Generate a kubeconfig file for the `admin` user:
|
||||||
--kubeconfig=admin.kubeconfig
|
--kubeconfig=admin.kubeconfig
|
||||||
|
|
||||||
kubectl config use-context default --kubeconfig=admin.kubeconfig
|
kubectl config use-context default --kubeconfig=admin.kubeconfig
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
Results:
|
Results:
|
||||||
|
|
||||||
```
|
~~~
|
||||||
admin.kubeconfig
|
admin.kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
|
|
||||||
## Distribute the Kubernetes Configuration Files
|
## Distribute the Kubernetes Configuration Files
|
||||||
|
@@ -195,18 +188,18 @@ admin.kubeconfig
|
||||||
|
|
||||||
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
|
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
for instance in worker-0 worker-1 worker-2; do
|
for instance in worker-0 worker-1 worker-2; do
|
||||||
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
|
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
|
||||||
done
|
done
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
|
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
for instance in controller-0 controller-1 controller-2; do
|
for instance in controller-0 controller-1 controller-2; do
|
||||||
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
|
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
|
||||||
done
|
done
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
|
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
|
||||||
|
|
|
@@ -8,15 +8,15 @@ In this lab you will generate an encryption key and an [encryption config](https
|
||||||
|
|
||||||
Generate an encryption key:
|
Generate an encryption key:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
||||||
```
|
~~~
|
||||||
|
|
||||||
## The Encryption Config File
|
## The Encryption Config File
|
||||||
|
|
||||||
Create the `encryption-config.yaml` encryption config file:
|
Create the `encryption-config.yaml` encryption config file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat > encryption-config.yaml <<EOF
|
cat > encryption-config.yaml <<EOF
|
||||||
kind: EncryptionConfig
|
kind: EncryptionConfig
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
|
@@ -30,14 +30,14 @@ resources:
|
||||||
secret: ${ENCRYPTION_KEY}
|
secret: ${ENCRYPTION_KEY}
|
||||||
- identity: {}
|
- identity: {}
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Copy the `encryption-config.yaml` encryption config file to each controller instance:
|
Copy the `encryption-config.yaml` encryption config file to each controller instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
for instance in controller-0 controller-1 controller-2; do
|
for instance in controller-0 controller-1 controller-2; do
|
||||||
gcloud compute scp encryption-config.yaml ${instance}:~/
|
gcloud compute scp encryption-config.yaml ${instance}:~/
|
||||||
done
|
done
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
|
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
|
||||||
|
|
|
@@ -6,9 +6,9 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
|
||||||
|
|
||||||
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
|
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
gcloud compute ssh controller-0
|
gcloud compute ssh controller-0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Running commands in parallel with tmux
|
### Running commands in parallel with tmux
|
||||||
|
|
||||||
|
@@ -20,45 +20,41 @@ gcloud compute ssh controller-0
|
||||||
|
|
||||||
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
|
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
|
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Extract and install the `etcd` server and the `etcdctl` command line utility:
|
Extract and install the `etcd` server and the `etcdctl` command line utility:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
|
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
|
||||||
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
|
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
### Configure the etcd Server
|
### Configure the etcd Server
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo mkdir -p /etc/etcd /var/lib/etcd
|
sudo mkdir -p /etc/etcd /var/lib/etcd
|
||||||
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
|
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
The instance's internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
|
The instance's internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
|
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
|
||||||
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
|
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
|
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
ETCD_NAME=$(hostname -s)
|
ETCD_NAME=$(hostname -s)
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `etcd.service` systemd unit file:
|
Create the `etcd.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
|
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=etcd
|
Description=etcd
|
||||||
|
@@ -89,17 +85,15 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Start the etcd Server
|
### Start the etcd Server
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable etcd
|
sudo systemctl enable etcd
|
||||||
sudo systemctl start etcd
|
sudo systemctl start etcd
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
||||||
|
|
||||||
|
@@ -107,20 +101,20 @@ EOF
|
||||||
|
|
||||||
List the etcd cluster members:
|
List the etcd cluster members:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo ETCDCTL_API=3 etcdctl member list \
|
sudo ETCDCTL_API=3 etcdctl member list \
|
||||||
--endpoints=https://127.0.0.1:2379 \
|
--endpoints=https://127.0.0.1:2379 \
|
||||||
--cacert=/etc/etcd/ca.pem \
|
--cacert=/etc/etcd/ca.pem \
|
||||||
--cert=/etc/etcd/kubernetes.pem \
|
--cert=/etc/etcd/kubernetes.pem \
|
||||||
--key=/etc/etcd/kubernetes-key.pem
|
--key=/etc/etcd/kubernetes-key.pem
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> output
|
> output
|
||||||
|
|
||||||
```
|
~~~
|
||||||
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
|
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
|
||||||
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
|
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
|
||||||
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
|
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
|
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
|
||||||
|
|
|
@@ -6,9 +6,9 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
|
||||||
|
|
||||||
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
|
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
gcloud compute ssh controller-0
|
gcloud compute ssh controller-0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Running commands in parallel with tmux
|
### Running commands in parallel with tmux
|
||||||
|
|
||||||
|
@@ -18,53 +18,49 @@ gcloud compute ssh controller-0
|
||||||
|
|
||||||
Create the Kubernetes configuration directory:
|
Create the Kubernetes configuration directory:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mkdir -p /etc/kubernetes/config
|
sudo mkdir -p /etc/kubernetes/config
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Download and Install the Kubernetes Controller Binaries
|
### Download and Install the Kubernetes Controller Binaries
|
||||||
|
|
||||||
Download the official Kubernetes release binaries:
|
Download the official Kubernetes release binaries:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
|
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
|
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Install the Kubernetes binaries:
|
Install the Kubernetes binaries:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
|
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
|
||||||
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
|
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
### Configure the Kubernetes API Server
|
### Configure the Kubernetes API Server
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo mkdir -p /var/lib/kubernetes/
|
sudo mkdir -p /var/lib/kubernetes/
|
||||||
|
|
||||||
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
|
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
|
||||||
service-account-key.pem service-account.pem \
|
service-account-key.pem service-account.pem \
|
||||||
encryption-config.yaml /var/lib/kubernetes/
|
encryption-config.yaml /var/lib/kubernetes/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
The instance's internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
|
The instance's internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
|
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
|
||||||
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
|
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-apiserver.service` systemd unit file:
|
Create the `kube-apiserver.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes API Server
|
Description=Kubernetes API Server
|
||||||
|
@@ -107,19 +103,19 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Configure the Kubernetes Controller Manager
|
### Configure the Kubernetes Controller Manager
|
||||||
|
|
||||||
Move the `kube-controller-manager` kubeconfig into place:
|
Move the `kube-controller-manager` kubeconfig into place:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-controller-manager.service` systemd unit file:
|
Create the `kube-controller-manager.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Controller Manager
|
Description=Kubernetes Controller Manager
|
||||||
|
@@ -145,19 +141,19 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Configure the Kubernetes Scheduler
|
### Configure the Kubernetes Scheduler
|
||||||
|
|
||||||
Move the `kube-scheduler` kubeconfig into place:
|
Move the `kube-scheduler` kubeconfig into place:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-scheduler.yaml` configuration file:
|
Create the `kube-scheduler.yaml` configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
|
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
|
||||||
apiVersion: componentconfig/v1alpha1
|
apiVersion: componentconfig/v1alpha1
|
||||||
kind: KubeSchedulerConfiguration
|
kind: KubeSchedulerConfiguration
|
||||||
|
@@ -166,11 +162,11 @@ clientConnection:
|
||||||
leaderElection:
|
leaderElection:
|
||||||
leaderElect: true
|
leaderElect: true
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-scheduler.service` systemd unit file:
|
Create the `kube-scheduler.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Scheduler
|
Description=Kubernetes Scheduler
|
||||||
|
@@ -186,17 +182,15 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Start the Controller Services
|
### Start the Controller Services
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
|
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
|
||||||
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
|
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
|
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
|
||||||
|
|
||||||
|
@@ -208,11 +202,11 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
|
||||||
|
|
||||||
Install a basic web server to handle HTTP health checks:
|
Install a basic web server to handle HTTP health checks:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo apt-get install -y nginx
|
sudo apt-get install -y nginx
|
||||||
```
|
~~~
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat > kubernetes.default.svc.cluster.local <<EOF
|
cat > kubernetes.default.svc.cluster.local <<EOF
|
||||||
server {
|
server {
|
||||||
listen 80;
|
listen 80;
|
||||||
|
@@ -224,47 +218,46 @@ server {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
|
~~~sh
|
||||||
|
|
||||||
```
|
|
||||||
{
|
|
||||||
sudo mv kubernetes.default.svc.cluster.local \
|
sudo mv kubernetes.default.svc.cluster.local \
|
||||||
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
|
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
|
||||||
|
|
||||||
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
|
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo systemctl restart nginx
|
sudo systemctl restart nginx
|
||||||
```
|
~~~
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo systemctl enable nginx
|
sudo systemctl enable nginx
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Verification
|
### Verification
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
kubectl get componentstatuses --kubeconfig admin.kubeconfig
|
kubectl get componentstatuses --kubeconfig admin.kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
```
|
~~~
|
||||||
NAME STATUS MESSAGE ERROR
|
NAME STATUS MESSAGE ERROR
|
||||||
controller-manager Healthy ok
|
controller-manager Healthy ok
|
||||||
scheduler Healthy ok
|
scheduler Healthy ok
|
||||||
etcd-2 Healthy {"health": "true"}
|
etcd-2 Healthy {"health": "true"}
|
||||||
etcd-0 Healthy {"health": "true"}
|
etcd-0 Healthy {"health": "true"}
|
||||||
etcd-1 Healthy {"health": "true"}
|
etcd-1 Healthy {"health": "true"}
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Test the nginx HTTP health check proxy:
|
Test the nginx HTTP health check proxy:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
|
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
|
||||||
```
|
~~~
|
||||||
|
|
||||||
```
|
~~~
|
||||||
HTTP/1.1 200 OK
|
HTTP/1.1 200 OK
|
||||||
Server: nginx/1.14.0 (Ubuntu)
|
Server: nginx/1.14.0 (Ubuntu)
|
||||||
Date: Sun, 30 Sep 2018 17:44:24 GMT
|
Date: Sun, 30 Sep 2018 17:44:24 GMT
|
||||||
|
@@ -273,7 +266,7 @@ Content-Length: 2
|
||||||
Connection: keep-alive
|
Connection: keep-alive
|
||||||
|
|
||||||
ok
|
ok
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
|
||||||
|
|
||||||
|
@@ -283,13 +276,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
|
||||||
|
|
||||||
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
|
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
gcloud compute ssh controller-0
|
gcloud compute ssh controller-0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
|
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
||||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||||
kind: ClusterRole
|
kind: ClusterRole
|
||||||
|
@@ -311,13 +304,13 @@ rules:
|
||||||
verbs:
|
verbs:
|
||||||
- "*"
|
- "*"
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
|
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
|
||||||
|
|
||||||
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
|
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
||||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||||
kind: ClusterRoleBinding
|
kind: ClusterRoleBinding
|
||||||
|
@@ -333,7 +326,7 @@ subjects:
|
||||||
kind: User
|
kind: User
|
||||||
name: kubernetes
|
name: kubernetes
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
## The Kubernetes Frontend Load Balancer
|
## The Kubernetes Frontend Load Balancer
|
||||||
|
|
||||||
|
@@ -346,8 +339,7 @@ In this section you will provision an external load balancer to front the Kubern
|
||||||
|
|
||||||
Create the external load balancer network resources:
|
Create the external load balancer network resources:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
||||||
--region $(gcloud config get-value compute/region) \
|
--region $(gcloud config get-value compute/region) \
|
||||||
--format 'value(address)')
|
--format 'value(address)')
|
||||||
|
@@ -373,28 +365,27 @@ Create the external load balancer network resources:
|
||||||
--ports 6443 \
|
--ports 6443 \
|
||||||
--region $(gcloud config get-value compute/region) \
|
--region $(gcloud config get-value compute/region) \
|
||||||
--target-pool kubernetes-target-pool
|
--target-pool kubernetes-target-pool
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
### Verification
|
### Verification
|
||||||
|
|
||||||
Retrieve the `kubernetes-the-hard-way` static IP address:
|
Retrieve the `kubernetes-the-hard-way` static IP address:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
|
||||||
--region $(gcloud config get-value compute/region) \
|
--region $(gcloud config get-value compute/region) \
|
||||||
--format 'value(address)')
|
--format 'value(address)')
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Make an HTTP request for the Kubernetes version info:
|
Make an HTTP request for the Kubernetes version info:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
|
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> output
|
> output
|
||||||
|
|
||||||
```
|
~~~
|
||||||
{
|
{
|
||||||
"major": "1",
|
"major": "1",
|
||||||
"minor": "12",
|
"minor": "12",
|
||||||
|
@@ -406,6 +397,6 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
|
||||||
"compiler": "gc",
|
"compiler": "gc",
|
||||||
"platform": "linux/amd64"
|
"platform": "linux/amd64"
|
||||||
}
|
}
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
|
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
|
||||||
|
|
|
@@ -6,9 +6,9 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
|
||||||
|
|
||||||
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:
|
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
gcloud compute ssh worker-0
|
gcloud compute ssh worker-0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Running commands in parallel with tmux
|
### Running commands in parallel with tmux
|
||||||
|
|
||||||
|
@@ -18,18 +18,16 @@ gcloud compute ssh worker-0
|
||||||
|
|
||||||
Install the OS dependencies:
|
Install the OS dependencies:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo apt-get update
|
sudo apt-get update
|
||||||
sudo apt-get -y install socat conntrack ipset
|
sudo apt-get -y install socat conntrack ipset
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
> The socat binary enables support for the `kubectl port-forward` command.
|
> The socat binary enables support for the `kubectl port-forward` command.
|
||||||
|
|
||||||
### Download and Install Worker Binaries
|
### Download and Install Worker Binaries
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
|
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
|
||||||
https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
|
https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
|
||||||
|
@@ -39,11 +37,11 @@ wget -q --show-progress --https-only --timestamping \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
|
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
|
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
|
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the installation directories:
|
Create the installation directories:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mkdir -p \
|
sudo mkdir -p \
|
||||||
/etc/cni/net.d \
|
/etc/cni/net.d \
|
||||||
/opt/cni/bin \
|
/opt/cni/bin \
|
||||||
|
@ -51,12 +49,11 @@ sudo mkdir -p \
|
||||||
/var/lib/kube-proxy \
|
/var/lib/kube-proxy \
|
||||||
/var/lib/kubernetes \
|
/var/lib/kubernetes \
|
||||||
/var/run/kubernetes
|
/var/run/kubernetes
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Install the worker binaries:
|
Install the worker binaries:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
|
sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
|
||||||
sudo mv runc.amd64 runc
|
sudo mv runc.amd64 runc
|
||||||
chmod +x kubectl kube-proxy kubelet runc runsc
|
chmod +x kubectl kube-proxy kubelet runc runsc
|
||||||
|
@ -64,21 +61,20 @@ Install the worker binaries:
|
||||||
sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
|
sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
|
||||||
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
|
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
|
||||||
sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
|
sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
### Configure CNI Networking
|
### Configure CNI Networking
|
||||||
|
|
||||||
Retrieve the Pod CIDR range for the current compute instance:
|
Retrieve the Pod CIDR range for the current compute instance:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
|
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
|
||||||
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
|
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `bridge` network configuration file:
|
Create the `bridge` network configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
|
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
|
||||||
{
|
{
|
||||||
"cniVersion": "0.3.1",
|
"cniVersion": "0.3.1",
|
||||||
|
@ -96,28 +92,28 @@ cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `loopback` network configuration file:
|
Create the `loopback` network configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
|
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
|
||||||
{
|
{
|
||||||
"cniVersion": "0.3.1",
|
"cniVersion": "0.3.1",
|
||||||
"type": "loopback"
|
"type": "loopback"
|
||||||
}
|
}
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Configure containerd
|
### Configure containerd
|
||||||
|
|
||||||
Create the `containerd` configuration file:
|
Create the `containerd` configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mkdir -p /etc/containerd/
|
sudo mkdir -p /etc/containerd/
|
||||||
```
|
~~~
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat << EOF | sudo tee /etc/containerd/config.toml
|
cat << EOF | sudo tee /etc/containerd/config.toml
|
||||||
[plugins]
|
[plugins]
|
||||||
[plugins.cri.containerd]
|
[plugins.cri.containerd]
|
||||||
|
@ -135,13 +131,13 @@ cat << EOF | sudo tee /etc/containerd/config.toml
|
||||||
runtime_engine = "/usr/local/bin/runsc"
|
runtime_engine = "/usr/local/bin/runsc"
|
||||||
runtime_root = "/run/containerd/runsc"
|
runtime_root = "/run/containerd/runsc"
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> Untrusted workloads will be run using the gVisor (runsc) runtime.
|
> Untrusted workloads will be run using the gVisor (runsc) runtime.
|
||||||
|
|
||||||
Create the `containerd.service` systemd unit file:
|
Create the `containerd.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
|
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=containerd container runtime
|
Description=containerd container runtime
|
||||||
|
@ -163,21 +159,19 @@ LimitCORE=infinity
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Configure the Kubelet
|
### Configure the Kubelet
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
|
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
|
||||||
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
|
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
|
||||||
sudo mv ca.pem /var/lib/kubernetes/
|
sudo mv ca.pem /var/lib/kubernetes/
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
Create the `kubelet-config.yaml` configuration file:
|
Create the `kubelet-config.yaml` configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
||||||
kind: KubeletConfiguration
|
kind: KubeletConfiguration
|
||||||
apiVersion: kubelet.config.k8s.io/v1beta1
|
apiVersion: kubelet.config.k8s.io/v1beta1
|
||||||
|
@ -199,13 +193,13 @@ runtimeRequestTimeout: "15m"
|
||||||
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
|
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
|
||||||
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
|
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
|
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
|
||||||
|
|
||||||
Create the `kubelet.service` systemd unit file:
|
Create the `kubelet.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Kubelet
|
Description=Kubernetes Kubelet
|
||||||
|
@ -229,17 +223,17 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Configure the Kubernetes Proxy
|
### Configure the Kubernetes Proxy
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-proxy-config.yaml` configuration file:
|
Create the `kube-proxy-config.yaml` configuration file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
||||||
kind: KubeProxyConfiguration
|
kind: KubeProxyConfiguration
|
||||||
apiVersion: kubeproxy.config.k8s.io/v1alpha1
|
apiVersion: kubeproxy.config.k8s.io/v1alpha1
|
||||||
|
@ -248,11 +242,11 @@ clientConnection:
|
||||||
mode: "iptables"
|
mode: "iptables"
|
||||||
clusterCIDR: "10.200.0.0/16"
|
clusterCIDR: "10.200.0.0/16"
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Create the `kube-proxy.service` systemd unit file:
|
Create the `kube-proxy.service` systemd unit file:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes Kube Proxy
|
Description=Kubernetes Kube Proxy
|
||||||
|
@ -267,17 +261,15 @@ RestartSec=5
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=multi-user.target
|
WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Start the Worker Services
|
### Start the Worker Services
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
{
|
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable containerd kubelet kube-proxy
|
sudo systemctl enable containerd kubelet kube-proxy
|
||||||
sudo systemctl start containerd kubelet kube-proxy
|
sudo systemctl start containerd kubelet kube-proxy
|
||||||
}
|
~~~
|
||||||
```
|
|
||||||
|
|
||||||
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
|
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
|
||||||
|
|
||||||
|
@ -287,18 +279,18 @@ EOF
|
||||||
|
|
||||||
List the registered Kubernetes nodes:
|
List the registered Kubernetes nodes:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
gcloud compute ssh controller-0 \
|
gcloud compute ssh controller-0 \
|
||||||
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
|
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
|
||||||
```
|
~~~
|
||||||
|
|
||||||
> output
|
> output
|
||||||
|
|
||||||
```
|
~~~
|
||||||
NAME STATUS ROLES AGE VERSION
|
NAME STATUS ROLES AGE VERSION
|
||||||
worker-0 Ready <none> 35s v1.12.0
|
worker-0 Ready <none> 35s v1.12.0
|
||||||
worker-1 Ready <none> 36s v1.12.0
|
worker-1 Ready <none> 36s v1.12.0
|
||||||
worker-2 Ready <none> 36s v1.12.0
|
worker-2 Ready <none> 36s v1.12.0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
|
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
|
||||||
|
|
|
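A quick aside on the Pod CIDR step above: each worker's range follows a fixed worker-index to subnet mapping. The sketch below mirrors that `10.200.N.0/24` layout in plain shell (the helper name is hypothetical, not part of the tutorial):

```shell
#!/bin/sh
# Derive the expected pod CIDR for a worker index, mirroring the
# worker-N -> 10.200.N.0/24 layout used throughout the tutorial.
pod_cidr_for_worker() {
  i="$1"                       # worker index: 0, 1, 2
  echo "10.200.${i}.0/24"
}

# Print the mapping for all three workers.
for i in 0 1 2; do
  echo "worker-${i} $(pod_cidr_for_worker "$i")"
done
```

On GCP the same value comes from the `pod-cidr` instance metadata attribute shown above; this helper is only useful for checking expectations offline.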
@@ -10,8 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 
 Generate a kubeconfig file suitable for authenticating as the `admin` user:
 
-```
-{
+~~~sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -30,41 +29,40 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
   --user=admin
 
 kubectl config use-context kubernetes-the-hard-way
-}
-```
+~~~
 
 ## Verification
 
 Check the health of the remote Kubernetes cluster:
 
-```
+~~~sh
 kubectl get componentstatuses
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                 STATUS    MESSAGE             ERROR
 controller-manager   Healthy   ok
 scheduler            Healthy   ok
 etcd-1               Healthy   {"health":"true"}
 etcd-2               Healthy   {"health":"true"}
 etcd-0               Healthy   {"health":"true"}
-```
+~~~
 
 List the nodes in the remote Kubernetes cluster:
 
-```
+~~~sh
 kubectl get nodes
-```
+~~~
 
 > output
 
-```
+~~~
 NAME       STATUS   ROLES    AGE    VERSION
 worker-0   Ready    <none>   117s   v1.12.0
 worker-1   Ready    <none>   118s   v1.12.0
 worker-2   Ready    <none>   118s   v1.12.0
-```
+~~~
 
 Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
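Before the captured `KUBERNETES_PUBLIC_ADDRESS` is baked into a kubeconfig as above, a defensive wrapper script might sanity-check its shape. A minimal sketch; the helper name and the placeholder address are illustrative, not part of the tutorial:

```shell
#!/bin/sh
# Hypothetical guard: check that a value captured from
# `gcloud ... --format 'value(address)'` looks like a dotted-quad
# IPv4 address before using it in `--server=https://...:6443`.
looks_like_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

KUBERNETES_PUBLIC_ADDRESS="203.0.113.10"   # placeholder, not a real cluster IP
if looks_like_ipv4 "$KUBERNETES_PUBLIC_ADDRESS"; then
  echo "address looks sane"
fi
```

An empty result from `gcloud` (for example, a typo in the address name) would otherwise silently produce a kubeconfig pointing at `https://:6443`.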
@@ -12,49 +12,49 @@ In this section you will gather the information required to create routes in the
 
 Print the internal IP address and Pod CIDR range for each worker instance:
 
-```
+~~~sh
 for instance in worker-0 worker-1 worker-2; do
   gcloud compute instances describe ${instance} \
     --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
 done
-```
+~~~
 
 > output
 
-```
+~~~
 10.240.0.20 10.200.0.0/24
 10.240.0.21 10.200.1.0/24
 10.240.0.22 10.200.2.0/24
-```
+~~~
 
 ## Routes
 
 Create network routes for each worker instance:
 
-```
+~~~sh
 for i in 0 1 2; do
   gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
     --network kubernetes-the-hard-way \
     --next-hop-address 10.240.0.2${i} \
     --destination-range 10.200.${i}.0/24
 done
-```
+~~~
 
 List the routes in the `kubernetes-the-hard-way` VPC network:
 
-```
+~~~sh
 gcloud compute routes list --filter "network: kubernetes-the-hard-way"
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
 default-route-081879136902de56  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
 default-route-55199a5aa126d7aa  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
 kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
 kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
 kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
-```
+~~~
 
 Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
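The route-creation loop above encodes the destination CIDR directly into the route name (`10.200.${i}.0/24` becomes `kubernetes-route-10-200-${i}-0-24`). That scheme can be derived mechanically, which keeps names and destinations from drifting apart; a sketch with a hypothetical helper name:

```shell
#!/bin/sh
# Derive the route name used above from its destination CIDR by
# replacing '.' and '/' with '-': 10.200.1.0/24 -> 10-200-1-0-24.
route_name_for() {
  cidr="$1"                                   # e.g. 10.200.1.0/24
  echo "kubernetes-route-$(echo "$cidr" | tr './' '--')"
}

for i in 0 1 2; do
  route_name_for "10.200.${i}.0/24"
done
```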
@@ -6,76 +6,76 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
 
 Deploy the `coredns` cluster add-on:
 
-```
+~~~sh
 kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
-```
+~~~
 
 > output
 
-```
+~~~
 serviceaccount/coredns created
 clusterrole.rbac.authorization.k8s.io/system:coredns created
 clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
 configmap/coredns created
 deployment.extensions/coredns created
 service/kube-dns created
-```
+~~~
 
 List the pods created by the `kube-dns` deployment:
 
-```
+~~~sh
 kubectl get pods -l k8s-app=kube-dns -n kube-system
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                       READY   STATUS    RESTARTS   AGE
 coredns-699f8ddd77-94qv9   1/1     Running   0          20s
 coredns-699f8ddd77-gtcgb   1/1     Running   0          20s
-```
+~~~
 
 ## Verification
 
 Create a `busybox` deployment:
 
-```
+~~~sh
 kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
-```
+~~~
 
 List the pod created by the `busybox` deployment:
 
-```
+~~~sh
 kubectl get pods -l run=busybox
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                      READY   STATUS    RESTARTS   AGE
 busybox-bd8fb7cbd-vflm9   1/1     Running   0          10s
-```
+~~~
 
 Retrieve the full name of the `busybox` pod:
 
-```
+~~~sh
 POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
-```
+~~~
 
 Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
 
-```
+~~~sh
 kubectl exec -ti $POD_NAME -- nslookup kubernetes
-```
+~~~
 
 > output
 
-```
+~~~
 Server:    10.32.0.10
 Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
 
 Name:      kubernetes
 Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
-```
+~~~
 
 Next: [Smoke Test](13-smoke-test.md)
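The lookup above resolves names of the form `<service>.<namespace>.svc.<cluster-domain>`, where `cluster.local` is the standard default cluster domain (assumed here to match the kubelet's `clusterDomain` setting). A sketch of how those FQDNs are assembled, with a hypothetical helper name:

```shell
#!/bin/sh
# Build the in-cluster DNS name for a Service, matching the names
# returned by the nslookup output above.
service_fqdn() {
  name="$1"
  namespace="$2"
  echo "${name}.${namespace}.svc.cluster.local"
}

service_fqdn kubernetes default
service_fqdn kube-dns kube-system
```

Inside a pod, the short name `kubernetes` resolves because the pod's `resolv.conf` search path appends these suffixes automatically.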
@@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
 
 Create a generic secret:
 
-```
+~~~sh
 kubectl create secret generic kubernetes-the-hard-way \
   --from-literal="mykey=mydata"
-```
+~~~
 
 Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
 
-```
+~~~sh
 gcloud compute ssh controller-0 \
   --command "sudo ETCDCTL_API=3 etcdctl get \
   --endpoints=https://127.0.0.1:2379 \
@@ -23,11 +23,11 @@ gcloud compute ssh controller-0 \
   --cert=/etc/etcd/kubernetes.pem \
   --key=/etc/etcd/kubernetes-key.pem \
   /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
-```
+~~~
 
 > output
 
-```
+~~~
 00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
 00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
 00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
@@ -44,7 +44,7 @@ gcloud compute ssh controller-0 \
 000000d0  18 28 f4 33 42 d9 57 d9  e3 e9 1c 38 e3 bc 1e c3  |.(.3B.W....8....|
 000000e0  d2 47 f3 20 60 be b8 57  a7 0a                    |.G. `..W..|
 000000ea
-```
+~~~
 
 The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
 
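The prefix check described above can be scripted instead of eyeballed in a hexdump. A sketch that matches the `k8s:enc:<provider>:v1:<key name>` layout; the helper name and sample value are illustrative, not the real etcd bytes:

```shell
#!/bin/sh
# Hypothetical check: an etcd value encrypted at rest begins with
# k8s:enc:<provider>:v1:<key name>: followed by ciphertext.
is_encrypted_with() {
  value="$1"
  provider="$2"
  key="$3"
  case "$value" in
    "k8s:enc:${provider}:v1:${key}:"*) return 0 ;;
    *)                                 return 1 ;;
  esac
}

sample="k8s:enc:aescbc:v1:key1:<ciphertext>"   # stand-in for the etcd bytes
is_encrypted_with "$sample" aescbc key1 && echo "encrypted with key1"
```

An unencrypted secret would instead start with plain `/registry/secrets/...` protobuf data, and the check above would fail.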
@@ -54,22 +54,22 @@ In this section you will verify the ability to create and manage [Deployments](h
 
 Create a deployment for the [nginx](https://nginx.org/en/) web server:
 
-```
+~~~sh
 kubectl run nginx --image=nginx
-```
+~~~
 
 List the pod created by the `nginx` deployment:
 
-```
+~~~sh
 kubectl get pods -l run=nginx
-```
+~~~
 
 > output
 
-```
+~~~
 NAME                    READY   STATUS    RESTARTS   AGE
 nginx-dbddb74b8-6lxg2   1/1     Running   0          10s
-```
+~~~
 
 ### Port Forwarding
 
@@ -77,32 +77,32 @@ In this section you will verify the ability to access applications remotely usin
 
 Retrieve the full name of the `nginx` pod:
 
-```
+~~~sh
 POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
-```
+~~~
 
 Forward port `8080` on your local machine to port `80` of the `nginx` pod:
 
-```
+~~~sh
 kubectl port-forward $POD_NAME 8080:80
-```
+~~~
 
 > output
 
-```
+~~~
 Forwarding from 127.0.0.1:8080 -> 80
 Forwarding from [::1]:8080 -> 80
-```
+~~~
 
 In a new terminal make an HTTP request using the forwarding address:
 
-```
+~~~sh
 curl --head http://127.0.0.1:8080
-```
+~~~
 
 > output
 
-```
+~~~
 HTTP/1.1 200 OK
 Server: nginx/1.15.4
 Date: Sun, 30 Sep 2018 19:23:10 GMT
@@ -112,16 +112,16 @@ Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
 Connection: keep-alive
 ETag: "5baa4e63-264"
 Accept-Ranges: bytes
-```
+~~~
 
 Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
 
-```
+~~~sh
 Forwarding from 127.0.0.1:8080 -> 80
 Forwarding from [::1]:8080 -> 80
 Handling connection for 8080
 ^C
-```
+~~~
 
 ### Logs
 
@@ -129,15 +129,15 @@ In this section you will verify the ability to [retrieve container logs](https:/
 
 Print the `nginx` pod logs:
 
-```
+~~~sh
 kubectl logs $POD_NAME
-```
+~~~
 
 > output
 
-```
+~~~
 127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
-```
+~~~
 
 ### Exec
 
@@ -145,15 +145,15 @@ In this section you will verify the ability to [execute commands in a container]
 
 Print the nginx version by executing the `nginx -v` command in the `nginx` container:
 
-```
+~~~sh
 kubectl exec -ti $POD_NAME -- nginx -v
-```
+~~~
 
 > output
 
-```
+~~~
 nginx version: nginx/1.15.4
-```
+~~~
 
 ## Services
 
@@ -161,43 +161,43 @@ In this section you will verify the ability to expose applications using a [Serv
 
 Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
 
-```
+~~~sh
 kubectl expose deployment nginx --port 80 --type NodePort
-```
+~~~
 
 > The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
 
 Retrieve the node port assigned to the `nginx` service:
 
-```
+~~~sh
 NODE_PORT=$(kubectl get svc nginx \
   --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
-```
+~~~
 
 Create a firewall rule that allows remote access to the `nginx` node port:
 
-```
+~~~sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
   --allow=tcp:${NODE_PORT} \
   --network kubernetes-the-hard-way
-```
+~~~
 
 Retrieve the external IP address of a worker instance:
 
-```
+~~~sh
 EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
   --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
-```
+~~~
 
 Make an HTTP request using the external IP address and the `nginx` node port:
 
-```
+~~~sh
 curl -I http://${EXTERNAL_IP}:${NODE_PORT}
-```
+~~~
 
 > output
 
-```
+~~~
 HTTP/1.1 200 OK
 Server: nginx/1.15.4
 Date: Sun, 30 Sep 2018 19:25:40 GMT
@@ -207,7 +207,7 @@ Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
 Connection: keep-alive
 ETag: "5baa4e63-264"
 Accept-Ranges: bytes
-```
+~~~
 
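A guard that fits before the firewall-rule step earlier: node ports are allocated from the API server's `--service-node-port-range`, which defaults to 30000-32767, so a script can validate the captured value before interpolating it into `--allow=tcp:...`. The helper name and sample port below are illustrative:

```shell
#!/bin/sh
# Check a port against the default Kubernetes NodePort range
# (30000-32767) before using it in a firewall rule.
valid_node_port() {
  p="$1"
  [ "$p" -ge 30000 ] && [ "$p" -le 32767 ]
}

NODE_PORT=31234   # placeholder for the value from `kubectl get svc nginx`
valid_node_port "$NODE_PORT" && echo "--allow=tcp:${NODE_PORT}"
```

An empty or malformed `NODE_PORT` would otherwise produce a rule like `--allow=tcp:` that `gcloud` rejects with a less obvious error.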
## Untrusted Workloads
|
## Untrusted Workloads
|
||||||
|
|
||||||
|
@ -215,7 +215,7 @@ This section will verify the ability to run untrusted workloads using [gVisor](h
|
||||||
|
|
||||||
Create the `untrusted` pod:
|
Create the `untrusted` pod:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
cat <<EOF | kubectl apply -f -
|
cat <<EOF | kubectl apply -f -
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: Pod
|
kind: Pod
|
||||||
|
@ -228,7 +228,7 @@ spec:
|
||||||
- name: webserver
|
- name: webserver
|
||||||
image: gcr.io/hightowerlabs/helloworld:2.0.0
|
image: gcr.io/hightowerlabs/helloworld:2.0.0
|
||||||
EOF
|
EOF
|
||||||
```
|
~~~
|
||||||
|
|
||||||
### Verification
|
### Verification
|
||||||
|
|
||||||
|
@ -236,35 +236,35 @@ In this section you will verify the `untrusted` pod is running under gVisor (run
|
||||||
|
|
||||||
Verify the `untrusted` pod is running:
|
Verify the `untrusted` pod is running:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
kubectl get pods -o wide
|
kubectl get pods -o wide
|
||||||
```
|
~~~
|
||||||
```
|
~~~
|
||||||
NAME READY STATUS RESTARTS AGE IP NODE
|
NAME READY STATUS RESTARTS AGE IP NODE
|
||||||
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
|
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
|
||||||
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
|
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
|
||||||
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
|
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
|
||||||
```
|
~~~
|
||||||
|
|
||||||
|
|
||||||
Get the node name where the `untrusted` pod is running:
|
Get the node name where the `untrusted` pod is running:
|
||||||
|
|
||||||
```
|
~~~sh
|
||||||
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
|
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
|
||||||
```
|
~~~
|

SSH into the worker node:

~~~sh
gcloud compute ssh ${INSTANCE_NAME}
~~~

List the containers running under gVisor:

~~~sh
sudo runsc --root /run/containerd/runsc/k8s.io list
~~~

> output

~~~
I0930 19:27:13.255142   20832 x:0] ***************************
I0930 19:27:13.255326   20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0930 19:27:13.255386   20832 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd   20449   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd   0001-01-01T00:00:00Z
af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5   20510   running   /run/containerd/io.containerd.runtime.v1.linux/k8s.io/af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5   0001-01-01T00:00:00Z
I0930 19:27:13.259733   20832 x:0] Exiting with status: 0
~~~

Get the ID of the `untrusted` pod:

~~~sh
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)
~~~

Get the ID of the `webserver` container running in the `untrusted` pod:

~~~sh
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)
~~~
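
The two lookups chain: the `POD_ID` captured by the first command is interpolated into the second. Sketched locally with `printf` placeholders instead of `crictl` (the IDs below are made up):

~~~sh
# Each captured ID feeds the next lookup; printf stands in for crictl.
POD_ID=$(printf 'pod-0123abcd')
CONTAINER_ID=$(printf '%s/webserver' "${POD_ID}")
echo "${CONTAINER_ID}"
~~~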

Use the gVisor `runsc` command to display the processes running inside the `webserver` container:

~~~sh
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
~~~

> output

~~~
I0930 19:31:31.419765   21217 x:0] ***************************
I0930 19:31:31.419907   21217 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5]
I0930 19:31:31.419959   21217 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0930 19:31:31.420676   21217 x:0] ***************************
UID       PID       PPID      C         STIME     TIME      CMD
0         1         0         0         19:26     10ms      app
I0930 19:31:31.422022   21217 x:0] Exiting with status: 0
~~~

Next: [Cleaning Up](14-cleanup.md)

In this lab you will delete the compute resources created during this tutorial.

Delete the controller and worker compute instances:

~~~sh
gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2
~~~
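
Since the instance names follow a fixed `role-index` convention, the argument list can also be generated rather than typed out; a hypothetical sketch (it only prints the resulting command instead of running it):

~~~sh
# Build the six instance names from the naming convention, then show
# the delete command line instead of executing it.
instances=""
for role in controller worker; do
  for i in 0 1 2; do
    instances="${instances} ${role}-${i}"
  done
done
echo "gcloud -q compute instances delete${instances}"
~~~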

## Networking

Delete the external load balancer network resources:

~~~sh
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)

  gcloud -q compute http-health-checks delete kubernetes

  gcloud -q compute addresses delete kubernetes-the-hard-way
}
~~~
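
The surrounding `{ ...; }` groups the deletions so the whole block can be pasted and run as a single unit in the current shell (unlike `( ... )`, it does not spawn a subshell). A minimal local illustration of the grouping:

~~~sh
# { } groups commands in the current shell; the group's combined stdout
# is piped downstream as one stream.
{
  echo 'deleting forwarding rule'
  echo 'deleting http health check'
  echo 'deleting static address'
} | grep -c 'deleting'
~~~

This prints `3`: all three lines flowed through the single pipe.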

Delete the `kubernetes-the-hard-way` firewall rules:

~~~sh
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check
~~~

Delete the `kubernetes-the-hard-way` network VPC:

~~~sh
{
  gcloud -q compute routes delete \
    kubernetes-route-10-200-0-0-24 \
    kubernetes-route-10-200-1-0-24 \
    kubernetes-route-10-200-2-0-24

  gcloud -q compute networks subnets delete kubernetes

  gcloud -q compute networks delete kubernetes-the-hard-way
}
~~~