Use RuntimeClass for supporting sandboxed pods

pull/400/head
Ian Lewis 2018-10-19 04:10:04 +00:00
parent bf2850974e
commit ae54691294
3 changed files with 36 additions and 10 deletions


@@ -100,6 +100,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
   --service-node-port-range=30000-32767 \\
   --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
   --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
+  --feature-gates=RuntimeClass=true \\
   --v=2
 Restart=on-failure
 RestartSec=5
@@ -335,6 +336,35 @@ subjects:
 EOF
 ```
 
+## Configure the RuntimeClass Custom Resource for gVisor
+
+The [RuntimeClass](https://kubernetes.io/docs/concepts/containers/runtime-class/)
+custom resource lets you discover the container runtimes supported in your
+cluster and select a specific runtime for each pod. In this lab you will use it
+to create pods that run under gVisor.
+
+The RuntimeClass API is in *Alpha*, so it is not yet a built-in resource; you
+will need a custom resource definition to use it. First, create the custom
+resource definition:
+
+```
+kubectl apply --kubeconfig admin.kubeconfig \
+  -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.12.0/cluster/addons/runtimeclass/runtimeclass_crd.yaml
+```
+
+Create the RuntimeClass for gVisor:
+
+```
+cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
+apiVersion: node.k8s.io/v1alpha1
+kind: RuntimeClass
+metadata:
+  name: gvisor
+spec:
+  runtimeHandler: runsc
+EOF
+```
+
 ## The Kubernetes Frontend Load Balancer
 
 In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
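Once the changes above are applied, you can confirm that the new resource is available. A quick check, assuming the CRD registers the resource as `runtimeclasses` in the `node.k8s.io` API group:

```
kubectl get runtimeclasses --kubeconfig admin.kubeconfig
```

The output should include the `gvisor` runtime class created above.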


@@ -35,7 +35,7 @@ wget -q --show-progress --https-only --timestamping \
   https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
   https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
   https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
-  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
+  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.2/containerd-1.2.0-rc.2.linux-amd64.tar.gz \
   https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
   https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
   https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
@@ -63,7 +63,7 @@ Install the worker binaries:
   sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
   sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
   sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
-  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
+  sudo tar -xvf containerd-1.2.0-rc.2.linux-amd64.tar.gz -C /
 }
 ```
@@ -126,18 +126,14 @@ cat << EOF | sudo tee /etc/containerd/config.toml
     runtime_type = "io.containerd.runtime.v1.linux"
     runtime_engine = "/usr/local/bin/runc"
     runtime_root = ""
-  [plugins.cri.containerd.untrusted_workload_runtime]
-    runtime_type = "io.containerd.runtime.v1.linux"
-    runtime_engine = "/usr/local/bin/runsc"
-    runtime_root = "/run/containerd/runsc"
-  [plugins.cri.containerd.gvisor]
+  [plugins.cri.containerd.runtimes.runsc]
     runtime_type = "io.containerd.runtime.v1.linux"
     runtime_engine = "/usr/local/bin/runsc"
    runtime_root = "/run/containerd/runsc"
 EOF
 ```
-> Untrusted workloads will be run using the gVisor (runsc) runtime.
+> This registers a `runsc` runtime handler with containerd. Pods that request this handler through their RuntimeClass will run using the gVisor (runsc) runtime.
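Once containerd is up, you can sanity-check that the handler was registered. One way to do this, assuming `crictl` is configured to talk to containerd's CRI socket and that containerd exposes its configuration through the CRI status, is:

```
sudo crictl info | grep -A 3 runsc
```

This should show the `runsc` runtime entry from `/etc/containerd/config.toml`.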
 Create the `containerd.service` systemd unit file:
@@ -222,6 +218,7 @@ ExecStart=/usr/local/bin/kubelet \\
   --kubeconfig=/var/lib/kubelet/kubeconfig \\
   --network-plugin=cni \\
   --register-node=true \\
+  --feature-gates=RuntimeClass=true \\
   --v=2
 Restart=on-failure
 RestartSec=5


@@ -221,9 +221,8 @@ apiVersion: v1
 kind: Pod
 metadata:
   name: untrusted
-  annotations:
-    io.kubernetes.cri.untrusted-workload: "true"
 spec:
+  runtimeClassName: gvisor
   containers:
   - name: webserver
     image: gcr.io/hightowerlabs/helloworld:2.0.0
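To verify that the pod is actually sandboxed, you can list the containers managed by runsc on the worker node where the pod was scheduled. A sketch, assuming containerd appends its `k8s.io` namespace to the `runtime_root` configured above:

```
sudo runsc --root /run/containerd/runsc/k8s.io list
```

The `untrusted` pod's webserver container should appear in the list, confirming it is running under gVisor.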