chg: Hostnames In Documentation Continued

Updated more references to old hostnames in the documentation to reflect the new naming convention.

pull/882/head
parent 1889697098
commit 398f5a73c0

# Set Up The Jumpbox

In this lab you will set up one of the four machines to be a `jumpbox`. This
machine will be used to run commands throughout this tutorial. While a dedicated
machine is being used to ensure consistency, these commands can also be run from
just about any machine, including your personal workstation running macOS or
Linux.

Think of the `jumpbox` as the administration machine that you will use as a
home base when setting up your Kubernetes cluster from the ground up. Before
we get started, we need to install a few command line utilities and clone the
Kubernetes The Hard Way git repository, which contains some additional
configuration files that will be used to configure various Kubernetes
components throughout this tutorial.

Log in to the `jumpbox`:

```bash
ssh root@jumpbox
```

All commands will be run as the `root` user. This is being done for the sake
of convenience, and will help reduce the number of commands required to set
everything up.

### Install Command Line Utilities

---

Copy the appropriate certificates and private keys to the `node01` and `node02`
machines:

```bash
for host in node01 node02; do
  ssh root@${host} mkdir -p /var/lib/kubelet/

  scp ca.crt root@${host}:/var/lib/kubelet/
done
```

Copy the appropriate certificates and private keys to the `controlplane`
machine:

```bash
scp \
  ca.key ca.crt \
  kube-apiserver.key kube-apiserver.crt \
  service-accounts.key service-accounts.crt \
  root@controlplane:~/
```

> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.

---

# Generating Kubernetes Configuration Files for Authentication

In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
typically called kubeconfigs, which configure Kubernetes clients to connect
and authenticate to Kubernetes API Servers.

## Client Authentication Configs

In this section you will generate kubeconfig files for the `kubelet` and the
`admin` user.

### The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets, the client certificate matching
the Kubelet's node name must be used. This ensures Kubelets are properly
authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).
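
To see what that naming convention looks like, the sketch below creates a throwaway certificate with a `system:node:<nodeName>` subject and reads it back. The `node01` name and the `demo-*` file names are illustrative, not the certificates generated in the previous lab:

```shell
# Illustration: the Node Authorizer matches kubelet client certificates whose
# subject is CN=system:node:<nodeName> with O=system:nodes. Files are throwaway.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-node01.key -out demo-node01.crt \
  -subj "/CN=system:node:node01/O=system:nodes" 2>/dev/null
openssl x509 -in demo-node01.crt -noout -subject
```

The printed subject carries the `system:node:node01` identity the authorizer keys on.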

> The following commands must be run in the same directory used to generate
> the SSL certificates during the
> [Generating TLS Certificates](04-certificate-authority.md) lab.

Generate a kubeconfig file for the `node01` and `node02` worker nodes:

```bash
for host in node01 node02; do
  ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"

  scp kube-proxy.kubeconfig \
    root@${host}:/var/lib/kube-proxy/kubeconfig

  scp ${host}.kubeconfig \
    root@${host}:/var/lib/kubelet/kubeconfig
done
```

Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the
`controlplane` machine:

```bash
scp admin.kubeconfig \
  kube-controller-manager.kubeconfig \
  kube-scheduler.kubeconfig \
  root@controlplane:~/
```

Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

---

# Generating the Data Encryption Config and Key

Kubernetes stores a variety of data including cluster state, application
configurations, and secrets. Kubernetes supports the ability to [encrypt]
cluster data at rest.

In this lab you will generate an encryption key and an [encryption config]
suitable for encrypting Kubernetes Secrets.

## The Encryption Key
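
The config rendered by `envsubst` below consumes a key from the environment. A minimal sketch of producing one, assuming the template reads a variable named `ENCRYPTION_KEY` and that a 32-byte random key, base64-encoded, is what the provider expects:

```shell
# Assumptions: the template expects ENCRYPTION_KEY, and the aescbc provider
# wants 32 random bytes, base64-encoded.
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "${ENCRYPTION_KEY}"
```

Decoding the value back should yield exactly 32 bytes.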

```bash
envsubst < configs/encryption-config.yaml \
  > encryption-config.yaml
```

Copy the `encryption-config.yaml` encryption config file to each controller
instance:

```bash
scp encryption-config.yaml root@controlplane:~/
```

Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

---

[encrypt]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data
[encryption config]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration

# Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in [etcd]. In this
lab you will bootstrap a single node etcd cluster.

## Prerequisites

Copy `etcd` binaries and systemd unit files to the `controlplane` machine:

```bash
scp \
  downloads/controller/etcd \
  downloads/client/etcdctl \
  units/etcd.service \
  root@controlplane:~/
```

The commands in this lab must be run on the `controlplane` machine. Log in to
the `controlplane` machine using the `ssh` command. Example:

```bash
ssh root@controlplane
```

## Bootstrapping an etcd Cluster

Extract and install the `etcd` server and the `etcdctl` command line utility:

```bash
{
  mkdir -p /etc/etcd /var/lib/etcd
  chmod 700 /var/lib/etcd
  cp ca.crt kube-apiserver.key kube-apiserver.crt \
    /etc/etcd/
}
```

Each etcd member must have a unique name within an etcd cluster. Set the etcd
name to match the hostname of the current compute instance:
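
One way to derive it, assuming the machine's short hostname is the intended member name (as it is for `controlplane` here):

```shell
# Assumption: the short hostname equals the desired etcd member name
ETCD_NAME=$(hostname -s)
echo "${ETCD_NAME}"
```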

Create the `etcd.service` systemd unit file:

List the etcd cluster members:

```bash
etcdctl member list
```

Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

---

[etcd]: https://github.com/etcd-io/etcd

# Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `controlplane` machine: Kubernetes API Server, Scheduler, and Controller Manager.

## Prerequisites

Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `controlplane` machine:

```bash
scp \
  units/kube-scheduler.service \
  configs/kube-scheduler.yaml \
  configs/kube-apiserver-to-kubelet.yaml \
  root@controlplane:~/
```

The commands in this lab must be run on the `controlplane` machine. Log in to the `controlplane` machine using the `ssh` command. Example:

```bash
ssh root@controlplane
```

## Provision the Kubernetes Control Plane

Install the Kubernetes binaries:

```bash
mkdir -p /var/lib/kubernetes/

mv ca.crt ca.key \
  kube-apiserver.key kube-apiserver.crt \
  service-accounts.key service-accounts.crt \
  encryption-config.yaml \
  /var/lib/kubernetes/
```

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node.

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.

The commands in this section will affect the entire cluster and only need to be run on the `controlplane` machine.

```bash
ssh root@controlplane
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```

At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working:

Make an HTTP request for the Kubernetes version info:

```bash
curl --cacert ca.crt \
  https://controlplane.kubernetes.local:6443/version
```

---

# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap two Kubernetes worker nodes. The following
components will be installed: [runc], [container networking plugins],
[containerd], [kubelet], and [kube-proxy].

## Prerequisites

```bash
for HOST in node01 node02; do
  scp -r \
    downloads/cni-plugins/ \
    root@${HOST}:~/cni-plugins/
done
```

The commands in the next section must be run on each worker instance: `node01`,
`node02`. Log in to the worker instance using the `ssh` command. Example:

```bash
ssh root@node01
```

Disable Swap

Kubernetes has limited support for the use of swap memory, as it is difficult
to provide guarantees and account for pod memory utilization when swap is
involved.

Verify if swap is disabled:

```bash
swapon --show
```

If output is empty then swap is disabled. If swap is enabled, run the following
command to disable swap immediately:

```bash
swapoff -a
```

> To ensure swap remains off after reboot consult your Linux distro
> documentation.
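
On many distros that means commenting out the swap entry in `/etc/fstab`. A hedged sketch of that edit, demonstrated on a sample file so nothing real is touched:

```shell
# Sketch: comment out swap entries in an fstab-style file. On a real node the
# sed would target /etc/fstab itself; the sample file here is illustrative.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > fstab.sample
sed -i '/swap/ s/^[^#]/#&/' fstab.sample
grep swap fstab.sample
```
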
Create the installation directories:

Install the worker binaries:

```bash
{
  mv crictl kube-proxy kubelet /usr/local/bin/
  mv runc /usr/local/sbin/
  mv containerd ctr containerd-shim-runc-v2 containerd-stress /bin/
  mv cni-plugins/* /opt/cni/bin/
}
```

Create the `bridge` network configuration file:

```bash
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

To ensure network traffic crossing the CNI `bridge` network is processed by
`iptables`, load and configure the `br-netfilter` kernel module:
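
A sketch of what this step typically looks like, assuming the standard `bridge-nf-call` sysctl keys; the config is written to a local file here for illustration, while on a node it would live under `/etc/sysctl.d/` and be applied with `modprobe br-netfilter` and `sysctl --system`:

```shell
# Sketch (assumed standard prerequisites): make bridged IPv4/IPv6 traffic
# visible to iptables. CONF is a local stand-in for /etc/sysctl.d/kubernetes.conf.
CONF=./kubernetes.conf
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
grep -c 'bridge-nf-call' "$CONF"
```
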

Run the following commands from the `jumpbox` machine.

List the registered Kubernetes nodes:

```bash
ssh root@controlplane \
  "kubectl get nodes \
  --kubeconfig admin.kubeconfig"
```

```text
node02   Ready    <none>   10s   v1.32.3
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

---

[runc]: https://github.com/opencontainers/runc
[container networking plugins]: https://github.com/containernetworking/cni
[containerd]: https://github.com/containerd/containerd
[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet
[kube-proxy]: https://kubernetes.io/docs/concepts/cluster-administration/proxies

# Provisioning Pod Network Routes

Pods scheduled to a node receive an IP address from the node's Pod CIDR range.
At this point pods can not communicate with other pods running on different
nodes due to missing network [routes].

In this lab you will create a route for each worker node that maps the node's
Pod CIDR range to the node's internal IP address.

> There are [other ways] to implement the Kubernetes networking model.

## The Routing Table

In this section you will gather the information required to create routes in
the `kubernetes-the-hard-way` VPC network.

Print the internal IP address and Pod CIDR range for each worker instance:

```bash
{
  SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
  NODE_0_IP=$(grep node01 machines.txt | cut -d " " -f 1)
  NODE_0_SUBNET=$(grep node01 machines.txt | cut -d " " -f 4)
  NODE_1_IP=$(grep node02 machines.txt | cut -d " " -f 1)
  NODE_1_SUBNET=$(grep node02 machines.txt | cut -d " " -f 4)
}
```

```bash
ssh root@controlplane <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```

## Verification

```bash
ssh root@controlplane ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

Next: [Smoke Test](12-smoke-test.md)

---

[routes]: https://cloud.google.com/compute/docs/vpc/routes
[other ways]: https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this

---

Run `k` on the command line to make sure it is not already in use. You should
get an error that it is an unknown command. Then run:

```shell
echo "alias k='kubectl'" | tee -a ~/.bashrc && source ~/.bashrc
```

In this lab you will complete a series of tasks to ensure your Kubernetes
cluster is functioning correctly.

## Data Encryption

In this section you will verify the ability to [encrypt secret data at rest].

Create a generic secret:

Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```bash
ssh root@controlplane \
  'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
```

```text
0000015a
```

The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates
the `aescbc` provider was used to encrypt the data with the `key1` encryption
key.
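
The same check can be scripted. The value below is an illustrative stand-in for the bytes returned by `etcdctl` above:

```shell
# Sketch: assert the stored value carries the aescbc/key1 prefix.
# VALUE is an illustrative stand-in for the real etcd value.
VALUE='k8s:enc:aescbc:v1:key1:<ciphertext>'
case "$VALUE" in
  k8s:enc:aescbc:v1:key1:*) echo "encrypted with key1" ;;
  *) echo "NOT encrypted with key1" ;;
esac
```
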
## Deployments

In this section you will verify the ability to create and manage [Deployments].

Create a deployment for the [nginx] web server:

```bash
kubectl create deployment nginx \
  --image nginx:latest
```

### Logs

In this section you will verify the ability to [retrieve container logs].

Print the `nginx` pod logs:

```bash
kubectl logs $POD_NAME
```

### Exec

In this section you will verify the ability to
[execute commands in a container].

Print the nginx version by executing the `nginx -v` command in the `nginx`
container:

```bash
kubectl exec -ti $POD_NAME -- nginx -v
```

```text
nginx version: nginx/1.27.4
```

## Services

In this section you will verify the ability to expose applications using a
[Service].

Expose the `nginx` deployment using a [NodePort] service:

```bash
kubectl expose deployment nginx \
  --port 80 --type NodePort
```

> The LoadBalancer service type can not be used because your cluster is not
> configured with [cloud provider integration]. Setting up cloud provider
> integration is out of scope for this tutorial.

Retrieve the node port assigned to the `nginx` service:


```text
Accept-Ranges: bytes
```

Next: [Cleaning Up](13-cleanup.md)

---

[encrypt secret data at rest]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted
[Deployments]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[nginx]: https://nginx.org/en/
[retrieve container logs]: https://kubernetes.io/docs/concepts/cluster-administration/logging/
[execute commands in a container]: https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container
[Service]: https://kubernetes.io/docs/concepts/services-networking/service/
[NodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
[cloud provider integration]: https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider

---

In this lab you will delete the compute resources created during this tutorial.

## Compute Instances

Previous versions of this guide made use of GCP resources for various aspects
of compute and networking. The current version is agnostic, and all
configuration is performed on the `jumpbox`, `controlplane`, or nodes.


Clean up is as simple as deleting all virtual machines you created for this
exercise. If you used the provided virtual machines, cd into the
`virtual-machines` directory and run `vagrant destroy`.

Next: [Start Over](../README.md)

---

Quick access to information to help you when you run into trouble.

* [Installing containerd]
* [Generate Certificates Manually with OpenSSL]
* [Running Kubelet in Standalone Mode]
* [Using RBAC Authorization]
* [Using Node Authorization]

---

[Install and configure prerequisites]: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#install-and-configure-prerequisites
[Installing containerd]: https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd
[Running Kubelet in Standalone Mode]: https://v1-32.docs.kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/
[Generate Certificates Manually with OpenSSL]: https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
[Using RBAC Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[Using Node Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/node/

---

Excerpt from `units/etcd.service`:

```
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name controlplane \
  --initial-advertise-peer-urls http://127.0.0.1:2380 \
  --listen-peer-urls http://127.0.0.1:2380 \
  --listen-client-urls http://127.0.0.1:2379 \
```

Excerpt from `units/kube-apiserver.service`:

```
ExecStart=/usr/local/bin/kube-apiserver \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
  --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
  --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
  --service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
  --service-account-issuer=https://controlplane.kubernetes.local:6443 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
  --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
  --v=2
Restart=on-failure
RestartSec=5
```