update docs

pull/345/head
Kelsey Hightower 2018-05-14 03:19:24 +00:00
parent 0c4be49b9d
commit 5218813bee
9 changed files with 424 additions and 391 deletions

.gitignore

@@ -2,6 +2,7 @@ admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem


@@ -6,9 +6,11 @@ In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/w
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Generate the CA configuration file, certificate, and private key:
```
{

cat > ca-config.json <<EOF
{
  "signing": {
@@ -24,11 +26,7 @@ cat > ca-config.json <<EOF
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
@@ -47,12 +45,10 @@ cat > ca-csr.json <<EOF
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}
```
Results:
@@ -68,9 +64,11 @@ In this section you will generate client and server certificates for each Kubern
### The Admin Client Certificate
Generate the `admin` client certificate and private key:
```
{

cat > admin-csr.json <<EOF
{
  "CN": "admin",
@@ -89,17 +87,15 @@ cat > admin-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

}
```
Results:
@@ -165,9 +161,11 @@ worker-2.pem
### The Controller Manager Client Certificate
Generate the `kube-controller-manager` client certificate and private key:
```
{

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
@@ -186,17 +184,15 @@ cat > kube-controller-manager-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}
```
Results:
@@ -209,9 +205,11 @@ kube-controller-manager.pem
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
@@ -230,17 +228,15 @@ cat > kube-proxy-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

}
```
Results:
@@ -252,9 +248,11 @@ kube-proxy.pem
### The Scheduler Client Certificate
Generate the `kube-scheduler` client certificate and private key:
```
{

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
@@ -273,17 +271,15 @@ cat > kube-scheduler-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}
```
Results:
@@ -298,17 +294,15 @@ kube-scheduler.pem
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key:
```
{

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
@@ -327,11 +321,7 @@ cat > kubernetes-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
@@ -339,6 +329,8 @@ cfssl gencert \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}
```
Results:
@@ -352,9 +344,11 @@ kubernetes.pem
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
{

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
@@ -373,17 +367,15 @@ cat > service-account-csr.json <<EOF
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

}
```
Results:
@@ -394,7 +386,6 @@ service-account.pem
```
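As an optional sanity check (not part of the original lab steps), confirm that each client and server certificate chains back to the CA, and that the API server certificate carries the expected subject alternative names:
```
{
# Every certificate should print "OK" against the CA generated above.
openssl verify -CAfile ca.pem \
  admin.pem kube-controller-manager.pem kube-proxy.pem \
  kube-scheduler.pem kubernetes.pem service-account.pem

# The SAN list should match the values passed to -hostname above.
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'
}
```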
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:


@@ -60,30 +60,26 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
Results:
@@ -97,30 +93,26 @@ kube-proxy.kubeconfig
Generate a kubeconfig file for the `kube-controller-manager` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
Results:
@@ -135,30 +127,26 @@ kube-controller-manager.kubeconfig
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
Results:
@@ -167,6 +155,41 @@ Results:
kube-scheduler.kubeconfig
```
### The admin Kubernetes Configuration File
Generate a kubeconfig file for the `admin` user:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```
Results:
```
admin.kubeconfig
```
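Any of the generated kubeconfig files can be inspected with `kubectl config view` (an optional check; client key material is redacted in the output):
```
kubectl config view --kubeconfig=admin.kubeconfig
```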
## Distribute the Kubernetes Configuration Files
@@ -182,7 +205,7 @@ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig f
```
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```


@@ -28,21 +28,19 @@ wget -q --show-progress --https-only --timestamping \
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
tar -xvf etcd-v3.3.5-linux-amd64.tar.gz
sudo mv etcd-v3.3.5-linux-amd64/etcd* /usr/local/bin/
}
```
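A quick optional check that the binaries are installed and on the `PATH`:
```
etcd --version
ETCDCTL_API=3 etcdctl version
```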
### Configure the etcd Server
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
@@ -61,7 +59,7 @@ ETCD_NAME=$(hostname -s)
Create the `etcd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
@@ -96,19 +94,11 @@ EOF
### Start the etcd Server
```
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
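Once etcd is running on all three controllers, cluster membership can be verified from any one of them. A sketch using the certificate paths configured above:
```
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```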


@@ -37,23 +37,22 @@ wget -q --show-progress --https-only --timestamping \
Install the Kubernetes binaries:
```
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
```
### Configure the Kubernetes API Server
```
{
sudo mkdir -p /var/lib/kubernetes/

sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
  service-account-key.pem service-account.pem \
  encryption-config.yaml /var/lib/kubernetes/
}
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
@@ -66,7 +65,7 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
Create the `kube-apiserver.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
@@ -121,7 +120,7 @@ sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the `kube-controller-manager.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
@@ -159,7 +158,7 @@ sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
@@ -169,14 +168,10 @@ leaderElection:
EOF
```
Create the `kube-scheduler.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
@@ -196,28 +191,62 @@ EOF
### Start the Controller Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
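An optional check that all three control plane services started cleanly:
```
sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
```
Each unit should report `active`.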
### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
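This can be confirmed directly from a controller instance before configuring the proxy (an optional check; the CA certificate was moved to `/var/lib/kubernetes/` above):
```
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
```
The API server should respond with `ok`.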
Install a basic web server to handle HTTP health checks:
```
sudo apt-get install -y nginx
```
```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen 80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
    proxy_pass https://127.0.0.1:6443/healthz;
    proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
  /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
```
sudo systemctl restart nginx
```
```
sudo systemctl enable nginx
```
### Verification
```
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
```
@@ -229,6 +258,23 @@ etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
Test the nginx HTTP health check proxy:
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
```
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 13 May 2018 15:03:03 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
ok
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## RBAC for Kubelet Authorization
@@ -244,7 +290,7 @@ gcloud compute ssh controller-0
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
@@ -272,7 +318,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
@@ -293,106 +339,41 @@ EOF
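To confirm both objects were created, list them through the `admin.kubeconfig` generated earlier (an optional check):
```
kubectl get clusterroles,clusterrolebindings --kubeconfig admin.kubeconfig | grep kube-apiserver
```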
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

gcloud compute http-health-checks create kubernetes \
  --description "Kubernetes Health Check" \
  --host "kubernetes.default.svc.cluster.local" \
  --request-path "/healthz"

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
  --network kubernetes-the-hard-way \
  --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
  --allow tcp

gcloud compute target-pools create kubernetes-target-pool \
  --http-health-check kubernetes

gcloud compute target-pools add-instances kubernetes-target-pool \
  --instances controller-0,controller-1,controller-2

gcloud compute forwarding-rules create kubernetes-forwarding-rule \
  --address ${KUBERNETES_PUBLIC_ADDRESS} \
  --ports 6443 \
  --region $(gcloud config get-value compute/region) \
  --target-pool kubernetes-target-pool
}
```
### Verification
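One way to exercise the load balancer end to end is to request the version endpoint through the public address (a sketch, reusing `ca.pem` and the `KUBERNETES_PUBLIC_ADDRESS` variable set above):
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```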


@@ -19,11 +19,10 @@ gcloud compute ssh worker-0
Install the OS dependencies:
```
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```
> The socat binary enables support for the `kubectl port-forward` command.
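For example, once the cluster is up, `port-forward` tunnels a local port to a pod through the API server (the pod name below is hypothetical):
```
# Forward local port 8080 to port 80 of a pod named "nginx" (hypothetical).
kubectl port-forward nginx 8080:80
```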
@@ -57,27 +56,14 @@ sudo mkdir -p \
Install the worker binaries:
```
{
chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
sudo mv runc.amd64 runc
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /
}
```
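An optional check that the worker binaries landed on the `PATH`:
```
kubelet --version
runc --version
```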
### Configure CNI Networking
@@ -92,7 +78,7 @@ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
Create the `bridge` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
@@ -114,7 +100,7 @@ EOF
Create the `loopback` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
@@ -122,12 +108,6 @@ cat > 99-loopback.conf <<EOF
EOF
```
### Configure containerd
Create the `containerd` configuration file:
@@ -148,16 +128,16 @@ cat << EOF | sudo tee /etc/containerd/config.toml
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/local/bin/runsc"
        runtime_root = "/run/containerd/runsc"
EOF
```
> Untrusted workloads will be run using the gVisor (runsc) runtime.
Create the `containerd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
@@ -183,21 +163,17 @@ EOF
### Configure the Kubelet
```
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
```
Create the `kubelet-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
@@ -219,14 +195,10 @@ tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
Create the `kubelet.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
@@ -260,7 +232,7 @@ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Create the `kube-proxy-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
@@ -270,14 +242,10 @@ clusterCIDR: "10.200.0.0/16"
EOF
```
Create the `kube-proxy.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
@@ -296,19 +264,11 @@ EOF
### Start the Worker Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
@@ -317,18 +277,11 @@ sudo systemctl start containerd kubelet kube-proxy
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
List the registered Kubernetes nodes:
```
gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
> output


@@ -8,37 +8,29 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way
}
```
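At this point `kubectl` targets the remote cluster; the active context can be confirmed with:
```
kubectl config current-context
```
The output should be `kubernetes-the-hard-way`.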
## Verification


@@ -209,4 +209,118 @@ ETag: "5acb8e45-264"
Accept-Ranges: bytes
```
## Untrusted Workloads
This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).
Create the `untrusted` pod:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
```
### Verification
In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.
Verify the `untrusted` pod is running:
```
kubectl get pods -o wide
```
```
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
busybox-68654f944b-brmrj   1/1       Running   0          7m        10.200.0.2   worker-0
nginx-65899c769f-4lpzz     1/1       Running   0          6m        10.200.1.2   worker-1
untrusted                  1/1       Running   0          2m        10.200.0.3   worker-0
```
Get the node name where the `untrusted` pod is running:
```
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```
SSH into the worker node:
```
gcloud compute ssh ${INSTANCE_NAME}
```
List the containers running under gVisor:
```
sudo runsc --root /run/containerd/runsc/k8s.io list
```
```
I0514 12:57:57.906145 18629 x:0] ***************************
I0514 12:57:57.906472 18629 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0514 12:57:57.906537 18629 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 12:57:57.906584 18629 x:0] PID: 18629
I0514 12:57:57.906632 18629 x:0] UID: 0, GID: 0
I0514 12:57:57.906680 18629 x:0] Configuration:
I0514 12:57:57.906723 18629 x:0] RootDir: /run/containerd/runsc/k8s.io
I0514 12:57:57.906814 18629 x:0] Platform: ptrace
I0514 12:57:57.906918 18629 x:0] FileAccess: proxy, overlay: false
I0514 12:57:57.907005 18629 x:0] Network: sandbox, logging: false
I0514 12:57:57.907084 18629 x:0] Strace: false, max size: 1024, syscalls: []
I0514 12:57:57.907161 18629 x:0] ***************************
ID PID STATUS BUNDLE CREATED OWNER
5a25ef793aaa302edc5407c34723287de36609e0fc189a6c0621c65bb10eea58 18068 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/5a25ef793aaa302edc5407c34723287de36609e0fc189a6c0621c65bb10eea58 2018-05-14T12:56:53.588006482Z
5cd21d56570a6134ea6975b6e4f7df6e79d26a3deebc6558b0feb6b06d7ed819 18017 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/5cd21d56570a6134ea6975b6e4f7df6e79d26a3deebc6558b0feb6b06d7ed819 2018-05-14T12:56:53.480795974Z
I0514 12:57:57.909120 18629 x:0] Exiting with status: 0
```
Get the ID of the `untrusted` pod:
```
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)
```
Get the ID of the `webserver` container running in the `untrusted` pod:
```
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)
```
Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
```
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```
> output
```
I0514 06:48:48.154040 18401 x:0] ***************************
I0514 06:48:48.154263 18401 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 5a25ef793aaa302edc5407c34723287de36609e0fc189a6c0621c65bb10eea58]
I0514 06:48:48.154332 18401 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 06:48:48.154380 18401 x:0] PID: 18401
I0514 06:48:48.154431 18401 x:0] UID: 0, GID: 0
I0514 06:48:48.154474 18401 x:0] Configuration:
I0514 06:48:48.154508 18401 x:0] RootDir: /run/containerd/runc/k8s.io
I0514 06:48:48.154585 18401 x:0] Platform: ptrace
I0514 06:48:48.154681 18401 x:0] FileAccess: proxy, overlay: false
I0514 06:48:48.154764 18401 x:0] Network: sandbox, logging: false
I0514 06:48:48.154844 18401 x:0] Strace: false, max size: 1024, syscalls: []
I0514 06:48:48.155015 18401 x:0] ***************************
UID PID PPID C STIME TIME CMD
0 1 0 0 06:34 10ms app
I0514 06:48:48.156130 18401 x:0] Exiting with status: 0
```
Next: [Cleaning Up](14-cleanup.md)


@@ -17,22 +17,16 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region $(gcloud config get-value compute/region)

gcloud -q compute target-pools delete kubernetes-target-pool

gcloud -q compute http-health-checks delete kubernetes

gcloud -q compute addresses delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` firewall rules:
@@ -45,23 +39,17 @@ gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-health-check
```
Delete the `kubernetes-the-hard-way` network VPC:
```
{
gcloud -q compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24

gcloud -q compute networks subnets delete kubernetes

gcloud -q compute networks delete kubernetes-the-hard-way
}
```