Add basic scripts to configure Kubernetes cluster

pull/863/head
rsavchuk 2023-04-24 22:52:17 +02:00
parent 79a3f79b27
commit 180ed492f0
38 changed files with 3380 additions and 2472 deletions


@ -1,45 +1,25 @@
# Kubernetes The Hard Way
This tutorial is partially based on [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way).
The main focus of this tutorial is to explain why each Kubernetes component is needed. Because of that focus, there is no need to configure multiple instances of each component, which allows us to set up a single-node Kubernetes cluster. Of course, the resulting cluster can't be used as a production-ready Kubernetes cluster.
To configure the cluster, we will use Ubuntu Server 20.04 (the author uses a VM in Hetzner).
## Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
## Labs
* [Cluster architecture](./docs/00-kubernetes-architecture.md)
* [Container runtime](./docs/01-container-runtime.md)
* [Kubelet](./docs/02-kubelet.md)
* [Pod networking](./docs/03-pod-networking.md)
* [ETCD](./docs/04-etcd.md)
* [Api Server](./docs/05-apiserver.md)
* [Apiserver - Kubelet integration](./docs/06-apiserver-kubelet.md)
* [Controller manager](./docs/07-controller-manager.md)
* [Scheduler](./docs/08-scheduler.md)
* [Kube proxy](./docs/09-kubeproxy.md)
* [DNS in Kubernetes](./docs/10-dns.md)


@ -0,0 +1,32 @@
# Kubernetes architecture
Kubernetes architecture consists of a control plane and worker nodes, which together enable efficient container orchestration and management.
![image](./img/00_cluster_architecture.png "Kubernetes cluster")
## Control plane
From [documentation](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components)
> The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
The control plane comprises several components, each with specific responsibilities:
1. API Server: The Kubernetes API server acts as the front-end for the control plane, exposing the Kubernetes API. It processes RESTful requests to manage cluster resources and updates the cluster's state accordingly.
2. etcd: A distributed key-value store, etcd stores the configuration data and state of the cluster, providing a reliable and consistent source of truth. It is crucial for maintaining the overall health and stability of the cluster.
3. Controller Manager: This component runs various controllers, such as the Node Controller, Replication Controller, and Endpoints Controller. These controllers monitor the state of the cluster and make necessary changes to achieve and maintain the desired state.
4. Scheduler: The Kubernetes scheduler is responsible for allocating resources and workloads to worker nodes based on multiple factors, such as resource requirements, node affinity/anti-affinity rules, and other constraints. It ensures efficient resource utilization and workload distribution across the cluster.
## Worker nodes
From [documentation](https://kubernetes.io/docs/concepts/overview/components/#node-components)
> Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
Each worker node includes the following components:
1. Container Runtime: The software responsible for running and managing containers on the worker nodes. Examples of container runtimes include containerd and CRI-O. The container runtime is responsible for pulling images, creating containers, and managing container lifecycles.
2. Kubelet: A primary node agent, the Kubelet communicates with the control plane, ensuring that the containers are running as expected. Kubelet monitors the state of the containers and reports back to the control plane, and it is responsible for starting, stopping, and maintaining application containers as directed by the control plane.
3. Kube-proxy: A network proxy that runs on each worker node, maintaining network rules and facilitating communication between services within the cluster and external clients. Kube-proxy is responsible for implementing service abstractions, such as load balancing and forwarding client requests to the appropriate backend pods.
During the tutorial, we will configure each component one by one (in a more or less logical order) and try to understand each component's responsibilities in more depth.
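Looking ahead, in this single-node setup each of these components will end up running as an ordinary systemd service on the machine. As a rough sketch of where we are heading (the unit names below match what the following labs configure; treat them as illustrative until then):
```bash
# Once the whole tutorial is done, the cluster is just a set of systemd units
sudo systemctl status containerd kubelet etcd \
  kube-apiserver kube-controller-manager kube-scheduler kube-proxy
```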
Next: [Container runtime](./01-container-runtime.md)


@ -0,0 +1,366 @@
# Container runtime
In this part of the tutorial we will focus on the container runtime.
![image](./img/01_cluster_architecture_container_runtime.png "Container runtime")
First of all, a container runtime is a tool that other Kubernetes components (namely the kubelet) use to manage containers. Whenever two parts of a system communicate, they need a common specification, and that specification is the CRI (Container Runtime Interface).
> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
In this tutorial we will use [containerd](https://github.com/containerd/containerd) as the tool for managing containers on the node.
On the other hand, there is also a project under the Linux Foundation - the OCI (Open Container Initiative).
> The OCI is a project under the Linux Foundation that aims to develop open industry standards for container formats and runtimes. The primary goal of the OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, the Runtime Specification (runtime-spec) and the Image Specification (image-spec).
In this tutorial we will use [runc](https://github.com/opencontainers/runc) as the tool for running containers.
Now, we can start with the configuration.
## runc
Let's download the runc binary:
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64
```
Once the download is complete, we need to make the runc binary executable and move it to the bin folder:
```bash
{
sudo mv runc.amd64 runc
chmod +x runc
sudo mv runc /usr/local/bin/
}
```
Now that we have runc installed, we can run a busybox container:
```bash
{
mkdir -p ~/busybox-container/rootfs/bin
cd ~/busybox-container/rootfs/bin
wget https://www.busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-x86_64
chmod +x busybox-x86_64
./busybox-x86_64 --install .
cd ~/busybox-container
runc spec
sed -i 's/"sh"/"echo","Hello from container run by runc!"/' config.json
}
```
We have now created everything runc needs to run the container, including the container configuration (config.json) and the root filesystem that will be accessible from inside the container.
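If you are curious what the sed command above actually changed, you can peek at the generated OCI spec (an optional check; it assumes `jq` is installed, plain `cat config.json` works too):
```bash
# The process args should now be the echo command instead of the default "sh"
jq '.process.args' ~/busybox-container/config.json
```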
```bash
runc run busybox
```
Output:
```
Hello from container run by runc!
```
Great, everything works. Now let's clean up our workspace:
```bash
{
cd ~
rm -r busybox-container
}
```
## containerd
As already mentioned, the container runtime is the software responsible for running and managing containers on the worker nodes: it pulls images, creates containers, and manages container lifecycles.
Now, let's download containerd.
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz
```
Extract the containerd binaries into the bin directory:
```bash
{
mkdir containerd
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo mv containerd/bin/* /bin/
}
```
In contrast to runc, containerd is a long-running service that other components call to run containers.
Before we start the containerd service, we need to configure it.
```bash
{
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
}
```
As we can see, we configured containerd to use the runc we installed before to run containers.
Now we can create the containerd systemd service:
```bash
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
Start the service:
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable containerd
sudo systemctl start containerd
}
```
Ensure the service is in the running state:
```bash
sudo systemctl status containerd
```
We should see output like this:
```
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-04-15 21:04:43 UTC; 59s ago
Docs: https://containerd.io
Process: 1018 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 1031 (containerd)
Tasks: 9 (limit: 2275)
Memory: 22.0M
CGroup: /system.slice/containerd.service
└─1031 /bin/containerd
...
```
Now that the containerd service is running, we can run some containers.
To do that, we need a tool called ctr, which is distributed as part of containerd.
Let's pull the busybox image:
```bash
sudo ctr images pull docker.io/library/busybox:latest
```
And check that it is present on our server:
```bash
ctr images ls
```
Output:
```
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/busybox:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:b5d6fe0712636ceb7430189de28819e195e8966372edfc2d9409d79402a0dc16 2.5 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/riscv64,linux/s390x -
```
Now, let's start our container:
```bash
ctr run --rm --detach docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello from container run by containerd!"'
```
Output:
```bash
Hello from container run by containerd!
```
Now, let's clean up:
```bash
ctr task rm busybox-container
```
Next: [Kubelet](./02-kubelet.md)
Well then, let's install it.
We need to disable swap and make sure it is disabled:
```bash
sudo swapon --show
```
```bash
sudo swapoff -a
```
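Note that `swapoff -a` only disables swap until the next reboot. If you want it to stay off permanently, one common approach (optional, and assuming swap is configured via /etc/fstab) is to comment out the swap entries:
```bash
# Comment out swap entries so swap stays disabled after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```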
Now let's download all the binaries needed to install our kubelet:
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
Now let's go through what we have just downloaded and what it is for.
1. runc - the thing that actually runs containers; let's move it to where it belongs:
```bash
{
sudo mv runc.amd64 runc
chmod +x runc
sudo mv runc /usr/local/bin/
}
```
2. containerd - the thing the kubelet uses to run containers. It differs from runc in that, besides running containers, containerd can also pull images and do many other useful things. Let's configure it:
```bash
{
mkdir containerd
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo mv containerd/bin/* /bin/
}
```
Create the directory where the containerd config will live:
```bash
sudo mkdir -p /etc/containerd/
```
And the config itself, which says that we will use runc:
```bash
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
```
Now containerd is ready, and we can create the unit file to run it:
```bash
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
Now all that's left is to start the daemon itself:
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable containerd
sudo systemctl start containerd
}
```
To make sure everything started successfully, let's check that our service is running:
```bash
sudo systemctl status containerd
```
We should get something like this:
```
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-04-15 21:04:43 UTC; 59s ago
Docs: https://containerd.io
Process: 1018 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 1031 (containerd)
Tasks: 9 (limit: 2275)
Memory: 22.0M
CGroup: /system.slice/containerd.service
└─1031 /bin/containerd
...
```
What's interesting is that we already have an engine that can run containers, so we can start something.
Let's not waste time and run busybox (a thing that will accompany us for a long time). But first, download crictl - we will need it a bit later:
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
```
To start with, let's pull the image:
```bash
sudo ctr images pull docker.io/library/busybox:latest
```
This command should show something interesting:
```bash
ctr images ls
```
And now let's run our container:
```bash
sudo ctr run --rm --detach docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello, World!"'
```
You should see "Hello, World!" printed to the console.
Now let's clean up after ourselves:
```bash
ctr task rm busybox-container
```


@ -1,63 +0,0 @@
# Prerequisites
## Google Cloud Platform
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
[Estimated cost](https://cloud.google.com/products/calculator#id=873932bc-0840-4176-b0fa-a8cfd4ca61ae) to run this tutorial: $0.23 per hour ($5.50 per day).
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
## Google Cloud Platform SDK
### Install the Google Cloud SDK
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 338.0.0 or higher:
```
gcloud version
```
### Set a Default Compute Region and Zone
This tutorial assumes a default compute region and zone have been configured.
If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this:
```
gcloud init
```
Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
```
gcloud auth login
```
Next set a default compute region and compute zone:
```
gcloud config set compute/region us-west1
```
Set a default compute zone:
```
gcloud config set compute/zone us-west1-c
```
> Use the `gcloud compute zones list` command to view additional regions and zones.
## Running Commands in Parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
> The use of tmux is optional and not required to complete this tutorial.
![tmux screenshot](images/tmux-screenshot.png)
> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
Next: [Installing the Client Tools](02-client-tools.md)


@ -1,118 +0,0 @@
# Installing the Client Tools
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson`:
### OS X
```
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
```
```
chmod +x cfssl cfssljson
```
```
sudo mv cfssl cfssljson /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
```
brew install cfssl
```
### Linux
```
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
```
```
chmod +x cfssl cfssljson
```
```
sudo mv cfssl cfssljson /usr/local/bin/
```
### Verification
Verify `cfssl` and `cfssljson` version 1.4.1 or higher is installed:
```
cfssl version
```
> output
```
Version: 1.4.1
Runtime: go1.12.12
```
```
cfssljson --version
```
```
Version: 1.4.1
Runtime: go1.12.12
```
## Install kubectl
The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
### OS X
```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Verification
Verify `kubectl` version 1.21.0 or higher is installed:
```
kubectl version --client
```
> output
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

docs/02-kubelet.md Normal file

@ -0,0 +1,180 @@
# Static pods
![image](./img/02_cluster_architecture_kubelet.png "Kubelet")
As we can see, containers start and everything works.
Now it's time to deal with the kubelet.
First, we need to download it:
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
```bash
{
chmod +x kubelet
sudo mv kubelet /usr/local/bin/
}
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/bin/kubelet \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--file-check-frequency=10s \\
--pod-manifest-path='/etc/kubernetes/manifests/' \\
--v=10
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
}
```
```bash
sudo systemctl status kubelet
```
```
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-04-15 22:01:06 UTC; 2s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 16701 (kubelet)
Tasks: 7 (limit: 2275)
Memory: 14.8M
CGroup: /system.slice/kubelet.service
└─16701 /usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --v=2
...
```
## Run a static pod
We need to create a directory for the manifests:
```bash
mkdir /etc/kubernetes
mkdir /etc/kubernetes/manifests
```
```bash
cat <<EOF> /etc/kubernetes/manifests/static-pod.yml
apiVersion: v1
kind: Pod
metadata:
name: static-pod
labels:
app: static-pod
spec:
hostNetwork: true
containers:
- name: busybox
image: busybox
command: ["sh", "-c", "while true; do echo 'Hello from static pod'; sleep 5; done"]
EOF
```
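The kubelet rescans the manifest directory every 10 seconds (the `--file-check-frequency` flag above), so the pod should appear shortly. If you want to watch it happen, you can follow the kubelet logs in a separate terminal (optional):
```bash
# Follow the kubelet logs and watch it pick up the static pod manifest
sudo journalctl -u kubelet -f
```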
Now, while the kubelet finds and starts it, let's install crictl:
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
```
```bash
{
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
chmod +x crictl
sudo mv crictl /usr/local/bin/
}
```
Configure it a bit:
```bash
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```
By now, while everything was being installed, our pod should have come up.
Let's take a look:
```bash
crictl pods
```
Here we can see our pod (the server name is appended to its name):
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
884c50605b546 9 minutes ago Ready static-pod-example-server default 0 (default)
```
And the containers:
```bash
crictl ps
```
```
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4cc95ba71e9d6 7cfbbec8963d8 10 minutes ago Running busybox 0 884c50605b546
```
We can also take a look at the logs:
```bash
crictl logs $(crictl ps -q)
```
```
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
Hello from static pod
```
OK, at this point we need to clean up after ourselves:
```bash
rm /etc/kubernetes/manifests/static-pod.yml
```
Now the container should be removed, but we need to wait a bit:
```bash
crictl ps
```
Nothing should be listed here.
```bash
crictl pods
```
And this should also become empty after a while.
That's all for today.
Next, we will deal with the other Kubernetes components.


@ -1,227 +0,0 @@
# Provisioning Compute Resources
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
## Networking
The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
> Setting up network policies is out of scope for this tutorial.
### Virtual Private Cloud Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network:
```
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
```
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
### Firewall Rules
Create a firewall rule that allows internal communication across all protocols:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
```
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
> output
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp Fals
```
### Kubernetes Public IP Address
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
```
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```
> output
```
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane:
```
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
done
```
### Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
Create three compute instances which will host the Kubernetes worker nodes:
```
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
```
### Verification
List the compute instances in your default compute zone:
```
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
```
> output
```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING
```
## Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
Test SSH access to the `controller-0` compute instances:
```
gcloud compute ssh controller-0
```
If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
```
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```
At this point the generated SSH keys will be uploaded and stored in your project:
```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
|o. oS |
|=... .o .o o |
|+.+ =+=.+.X o |
|.+ ==O*B.B = . |
| .+.=EB++ o |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1042-gcp x86_64)
...
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
```
$USER@controller-0:~$ exit
```
> output
```
logout
Connection to XX.XX.XX.XXX closed
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

docs/03-pod-networking.md Normal file

@ -0,0 +1,355 @@
# A bit about networking in Kubernetes
Let's try to run nginx.
As before, we simply put the right file in the right place:
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
apiVersion: v1
kind: Pod
metadata:
name: static-nginx
labels:
app: static-nginx
spec:
hostNetwork: true
containers:
- name: nginx
image: nginx
EOF
```
We can even check whether the container started:
```bash
curl localhost
```
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
```bash
crictl pods
```
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
e8720dee2b08b About a minute ago Ready static-nginx-example-server default 0 (default)
```
Right, it's a container, everything is clear here - we can do something and it should just work.
Let's create one more:
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx-2.yml
apiVersion: v1
kind: Pod
metadata:
name: static-nginx-2
labels:
app: static-nginx-2
spec:
hostNetwork: true
containers:
- name: nginx
image: nginx
EOF
```
But wait, how do we reach this one? With the first one it was simple - we just went to port 80 (although what actually happened there is also a good question).
It is not entirely clear yet, but we have a handy debugging utility.
So let's see what containers exist:
```bash
crictl ps -a
```
```
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9e8cb98b87aed 6efc10a0510f1 42 seconds ago Exited nginx 3 b013eca0e9d33
0e47618b39c09 6efc10a0510f1 4 minutes ago Running nginx 0 e8720dee2b08b
```
Hmm, what's going on - why is one container not running? Let's investigate, and the first place to look is, of course, the logs:
```bash
crictl logs $(crictl ps -q -s Exited)
```
```
...
2023/04/18 20:49:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```
So, trouble: does this mean we can't run two nginx instances?
Should we all give up, since Docker is a hundred times better?
No, it's not that simple.
The thing is that, among other things, there is a standard that describes how to set up networking for a container - CNI.
We haven't configured it in any way yet, and strictly speaking all those containers shouldn't even have been created, if not for one cheat: we created them all in the host network, just as if we were simply running a program on the host. So let's now set up a network plugin:
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
```
```bash
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin
```
```bash
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
```
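Just to see what we got, list the unpacked plugins (the exact set depends on the release, but it should at least contain `bridge`, `host-local` and `loopback`, which we use below):
```bash
ls /opt/cni/bin
```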
```bash
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "10.240.1.0/24"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
```
```bash
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/bin/kubelet \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--file-check-frequency=10s \\
--network-plugin=cni \\
--pod-manifest-path='/etc/kubernetes/manifests/' \\
--v=10
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl restart kubelet
}
```
```bash
sudo systemctl status kubelet
```
Now, let's run something:
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx-2.yml
apiVersion: v1
kind: Pod
metadata:
name: static-nginx-2
labels:
app: static-nginx-2
spec:
# hostNetwork: true
containers:
- name: nginx
image: ubuntu/nginx
EOF
```
```bash
crictl pods
```
```bash
crictl ps
```
OK, that's all well and good, but how do we get the container's IP address?
Here we need to do a bit of digging:
```bash
PID=$(crictl pods --label app=static-nginx-2 -q)
CID=$(crictl ps -q --pod $PID)
crictl exec $CID ip a
```
```
...
3: cnio0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether c2:44:0d:6d:17:61 brd ff:ff:ff:ff:ff:ff
inet 10.240.1.1/24 brd 10.240.1.255 scope global cnio0
valid_lft forever preferred_lft forever
inet6 fe80::c044:dff:fe6d:1761/64 scope link
valid_lft forever preferred_lft forever
...
```
Remember that we set up the 10.240.1.0/24 subnet when configuring our plugin.
Since we have only one such container for now, we can even find its IP by simple trial, starting from .1:
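Guessing works here, but there is also a more direct way: the `host-local` IPAM plugin from our config records every allocation as a file on the host. Assuming the default state directory, you can simply list it (the subdirectory is named after the `"name"` field in 10-bridge.conf):
```bash
# Each allocated pod IP shows up as a file named after the address
ls /var/lib/cni/networks/bridge/
```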
```bash
curl 10.240.1.1
```
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
To be sure, let's create one more container:
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx-3.yml
apiVersion: v1
kind: Pod
metadata:
name: static-nginx-3
labels:
app: static-nginx-3
spec:
# hostNetwork: true
containers:
- name: nginx
image: ubuntu/nginx
EOF
```
```bash
PID=$(crictl pods --label app=static-nginx-3 -q)
CID=$(crictl ps -q --pod $PID)
crictl exec $CID ip a
```
```bash
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 26:24:72:70:b7:9f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.240.1.7/24 brd 10.240.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2424:72ff:fe70:b79f/64 scope link
valid_lft forever preferred_lft forever
```
We get the IP we need.
So now we know that one container has one IP and the other has a different one.
Now it's time to check whether they can communicate with each other:
```bash
crictl exec -it $CID bash
```
```bash
cat <<EOF> /etc/kubernetes/manifests/static-pod.yml
apiVersion: v1
kind: Pod
metadata:
name: static-pod
labels:
app: static-pod
spec:
hostNetwork: true
containers:
- name: busybox
image: busybox
command: ["sh", "-c", "while true; do echo 'Hello from static pod'; sleep 5; done"]
EOF
```
```bash
PID=$(crictl pods --label app=static-pod -q)
CID=$(crictl ps -q --pod $PID)
crictl exec -it $CID sh
wget -O - 10.240.1.7
exit
```
And now let's clean everything up after ourselves:
```bash
rm /etc/kubernetes/manifests/static-*
```


@ -1,414 +0,0 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Generate the CA configuration file, certificate, and private key:
```
{
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Results:
```
ca-key.pem
ca.pem
```
## Client and Server Certificates
In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.
### The Admin Client Certificate
Generate the `admin` client certificate and private key:
```
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
Results:
```
admin-key.pem
admin.pem
```
### The Kubelet Client Certificates
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
```
Results:
```
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
```
### The Controller Manager Client Certificate
Generate the `kube-controller-manager` client certificate and private key:
```
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
Results:
```
kube-controller-manager-key.pem
kube-controller-manager.pem
```
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
Results:
```
kube-proxy-key.pem
kube-proxy.pem
```
### The Scheduler Client Certificate
Generate the `kube-scheduler` client certificate and private key:
```
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
Results:
```
kube-scheduler-key.pem
kube-scheduler.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
Results:
```
kubernetes-key.pem
kubernetes.pem
```
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
Results:
```
service-account-key.pem
service-account.pem
```
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```
Copy the appropriate certificates and private keys to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
```
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

docs/04-etcd.md Normal file

@ -0,0 +1,193 @@
# Configuring the etcd cluster
![image](./img/04_cluster_architecture_etcd.png "Etcd")
This is all fun, of course, but it is time to start configuring a proper Kubernetes.
For that we need a database where everything Kubernetes needs can be stored.
So we should start with etcd.
First, install the tools we need for generating certificates:
```bash
{
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
}
```
Now we need to generate the CA certificate that we will use to sign all the other certificates:
```bash
{
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Result:
```
ca-key.pem
ca.csr
ca.pem
```
Now we need to generate the certificate that etcd itself will use (to be precise, not only etcd, but we will learn about that a bit later):
```bash
{
HOST_NAME=$(hostname -a)
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=worker,127.0.0.1,${KUBERNETES_HOSTNAMES},10.32.0.1 \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
Download etcd:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Extract it and move etcd to the /usr/local/bin/ directory:
```
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
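A quick optional check that the binary works:
```bash
etcd --version
```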
Create the etcd directories and copy the certificates into place:
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem \
kubernetes.pem kubernetes-key.pem \
/etc/etcd/
}
```
Create the etcd systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name etcd \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--client-cert-auth \\
--listen-client-urls https://127.0.0.1:2379 \\
--advertise-client-urls https://127.0.0.1:2379 \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
```bash
systemctl status etcd
```
```
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 10:55:03 UTC; 18s ago
Docs: https://github.com/coreos
Main PID: 12374 (etcd)
Tasks: 10 (limit: 2275)
Memory: 4.2M
CGroup: /system.slice/etcd.service
└─12374 /usr/local/bin/etcd --name etcd --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/
...
```
Verify that etcd works by listing the cluster members:
```
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
```
Result:
```bash
8e9e05c52164694d, started, etcd, http://localhost:2380, https://127.0.0.1:2379, false
```
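As one more optional sanity check, you can write and read back a test key using the same certificates:
```bash
# Store and fetch a key to confirm etcd actually persists data
sudo ETCDCTL_API=3 etcdctl put foo bar \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

sudo ETCDCTL_API=3 etcdctl get foo \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```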

docs/05-apiserver.md Normal file

@ -0,0 +1,268 @@
# API server
![image](./img/05_cluster_architecture_apiserver.png "Api server")
Since we already have the database configured, we can start setting up the kube-apiserver itself. First, generate the service account key pair and the admin client certificate (we will need both below):
```bash
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
```bash
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
Generate a data encryption key and create the encryption config (the API server will use it to encrypt Secrets at rest):
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
```bash
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
```
```
sudo mkdir -p /etc/kubernetes/config
```
Download the kube-apiserver binary:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
```
```
{
chmod +x kube-apiserver
sudo mv kube-apiserver /usr/local/bin/
}
```
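Optionally, make sure the binary runs:
```bash
kube-apiserver --version
```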
Create /var/lib/kubernetes and copy the certificates and the encryption config there:
```
{
sudo mkdir -p /var/lib/kubernetes/
sudo cp \
ca.pem \
kubernetes.pem kubernetes-key.pem \
encryption-config.yaml \
service-account-key.pem service-account.pem \
/var/lib/kubernetes/
}
```
Create the kube-apiserver systemd unit (the advertise address is taken from the host's first non-loopback IP):
```bash
ADVERTISE_IP=$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d'/')
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address='${ADVERTISE_IP}' \\
--allow-privileged='true' \\
--audit-log-maxage='30' \\
--audit-log-maxbackup='3' \\
--audit-log-maxsize='100' \\
--audit-log-path='/var/log/audit.log' \\
--authorization-mode='Node,RBAC' \\
--bind-address='0.0.0.0' \\
--client-ca-file='/var/lib/kubernetes/ca.pem' \\
--enable-admission-plugins='NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota' \\
--etcd-cafile='/var/lib/kubernetes/ca.pem' \\
--etcd-certfile='/var/lib/kubernetes/kubernetes.pem' \\
--etcd-keyfile='/var/lib/kubernetes/kubernetes-key.pem' \\
--etcd-servers='https://127.0.0.1:2379' \\
--event-ttl='1h' \\
--encryption-provider-config='/var/lib/kubernetes/encryption-config.yaml' \\
--kubelet-certificate-authority='/var/lib/kubernetes/ca.pem' \\
--kubelet-client-certificate='/var/lib/kubernetes/kubernetes.pem' \\
--kubelet-client-key='/var/lib/kubernetes/kubernetes-key.pem' \\
--runtime-config='api/all=true' \\
--service-account-key-file='/var/lib/kubernetes/service-account.pem' \\
--service-cluster-ip-range='10.32.0.0/24' \\
--service-node-port-range='30000-32767' \\
--tls-cert-file='/var/lib/kubernetes/kubernetes.pem' \\
--tls-private-key-file='/var/lib/kubernetes/kubernetes-key.pem' \\
--service-account-signing-key-file='/var/lib/kubernetes/service-account-key.pem' \\
--service-account-issuer='https://kubernetes.default.svc.cluster.local' \\
--api-audiences='https://kubernetes.default.svc.cluster.local' \\
--v='2'
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl restart kube-apiserver
}
```
```bash
sudo systemctl status kube-apiserver
```
```
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 11:04:29 UTC; 22s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 12566 (kube-apiserver)
Tasks: 8 (limit: 2275)
Memory: 291.6M
CGroup: /system.slice/kube-apiserver.service
└─12566 /usr/local/bin/kube-apiserver --advertise-address=91.107.220.4 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-m>
...
```
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
&& chmod +x kubectl \
&& sudo mv kubectl /usr/local/bin/
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context default
}
```
```bash
kubectl version
```
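Since the API server is up and the encryption config is in place, it is worth confirming that Secrets really are encrypted at rest (an optional check; the secret name `test-secret` is arbitrary). The value stored in etcd should start with the `k8s:enc:aescbc:v1:key1` prefix instead of plain text:
```bash
{
kubectl create secret generic test-secret --from-literal=mykey=mydata
# hexdump ships with bsdmainutils on Ubuntu; any hex viewer works here
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head -20
}
```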
Now we can start creating pods, and everything should work properly.
```bash
{
HOST_NAME=$(hostname -a)
cat <<EOF> pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec:
serviceAccountName: hello-world
containers:
- name: hello-world-container
image: busybox
command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done']
nodeName: ${HOST_NAME}
EOF
cat <<EOF> sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hello-world
automountServiceAccountToken: false
EOF
kubectl apply -f sa.yaml
kubectl apply -f pod.yaml
}
```
```bash
kubectl get pod
```
```
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 29s
```
So the pod exists, but it is stuck in the Pending state, which is not what we want.
In fact, although the kubelet is already running, it knows nothing about the API server, and the API server knows nothing about the kubelet.
We need to fix this.
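You can see the reason directly: the pod is pinned to our machine via `nodeName`, but no node with that name has registered with the API server yet, so the command below should report `No resources found`:
```bash
kubectl get nodes
```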

@ -1,214 +0,0 @@
# Generating Kubernetes Configuration Files for Authentication
In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
## Client Authentication Configs
In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
### Kubernetes Public IP Address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
Generate a kubeconfig file for each worker node:
```
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
```
Results:
```
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
```
### The kube-proxy Kubernetes Configuration File
Generate a kubeconfig file for the `kube-proxy` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
Results:
```
kube-proxy.kubeconfig
```
### The kube-controller-manager Kubernetes Configuration File
Generate a kubeconfig file for the `kube-controller-manager` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
Results:
```
kube-controller-manager.kubeconfig
```
### The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
Results:
```
kube-scheduler.kubeconfig
```
### The admin Kubernetes Configuration File
Generate a kubeconfig file for the `admin` user:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```
Results:
```
admin.kubeconfig
```
##
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

@ -0,0 +1,246 @@
# Apiserver - Kubelet integration
![image](./img/06_cluster_architecture_apiserver_kubelet.png "Apiserver - Kubelet")
```bash
{
HOST_NAME=$(hostname -a)
cat > kubelet-csr.json <<EOF
{
"CN": "system:node:${HOST_NAME}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,${HOST_NAME} \
-profile=kubernetes \
kubelet-csr.json | cfssljson -bare kubelet
}
```
This is the kubelet's kubeconfig file — it tells the kubelet how to reach the API server:
```bash
{
HOST_NAME=$(hostname -a)
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kubelet.kubeconfig
kubectl config set-credentials system:node:${HOST_NAME} \
--client-certificate=kubelet.pem \
--client-key=kubelet-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${HOST_NAME} \
--kubeconfig=kubelet.kubeconfig
kubectl config use-context default --kubeconfig=kubelet.kubeconfig
}
```
```bash
{
sudo cp kubelet-key.pem kubelet.pem /var/lib/kubelet/
sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ca.pem /var/lib/kubernetes/
}
```
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "10.240.1.0/24"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-key.pem"
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet
}
```
```bash
sudo systemctl status kubelet
```
Now for the moment of truth: let's check whether our node has shown up in the cluster.
```bash
kubectl get nodes
```
```
NAME STATUS ROLES AGE VERSION
example-server NotReady <none> 6s v1.21.0
```
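A freshly registered node usually reports `NotReady` for a short while until the kubelet finishes syncing with the API server. If it stays that way, the node conditions normally explain why (an optional look; `hostname -a` matches how the node name was derived earlier):
```bash
kubectl describe node $(hostname -a) | grep -A 10 'Conditions:'
```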
Since the node is there, the container should be there too.
```bash
kubectl get pod
```
```
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 8m1s
```
kubectl says the pod is running, but what is actually going on under the hood?
```bash
crictl pods
```
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
1719d0202a5ef 8 minutes ago Ready hello-world default 0 (default)
```
```bash
crictl ps
```
```
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
3f2b0a0d70377 7cfbbec8963d8 8 minutes ago Running hello-world-container 0 1719d0202a5ef
```
We can even look at the logs:
```bash
crictl logs $(crictl ps -q)
```
```
Hello, World!
Hello, World!
Hello, World!
Hello, World!
...
```
But inspecting logs this way is not very convenient.
Now that we know the API server works and reports the real state, we can switch to using kubectl only:
```bash
kubectl logs hello-world
```
```
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world)
```
The error occurs because the API server's `kubernetes` user has no permission to access the kubelet's `nodes/proxy` subresource.
Let's create the missing RBAC objects:
```bash
{
cat <<EOF> rbac.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: node-proxy-access
rules:
- apiGroups: [""]
resources: ["nodes/proxy"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-proxy-access-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: node-proxy-access
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
kubectl apply -f rbac.yml
}
```
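To double-check that the binding took effect, you can ask the API server whether the `kubernetes` user is now allowed to reach the kubelet subresource (the command should print `yes`):
```bash
kubectl auth can-i get nodes/proxy --as kubernetes
```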
```bash
kubectl logs hello-world
```
```
Hello, World!
Hello, World!
Hello, World!
Hello, World!
...
```
Now everything really works and the cluster can be used. But wait — some components on our diagram are still greyed out, so let's keep going.

@ -1,43 +0,0 @@
# Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
## The Encryption Key
Generate an encryption key:
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
## The Encryption Config File
Create the `encryption-config.yaml` encryption config file:
```
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
```
Copy the `encryption-config.yaml` encryption config file to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
```
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

@ -1,128 +0,0 @@
# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
```
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
### Configure the etcd Server
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
```
ETCD_NAME=$(hostname -s)
```
Create the `etcd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the etcd Server
```
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## Verification
List the etcd cluster members:
```
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
```
> output
```
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

@ -0,0 +1,203 @@
# Controller manager
![image](./img/07_cluster_architecture_controller_manager.png "Controller manager")
To get the full picture, let's step away from configuration for a moment and come back to Kubernetes abstractions.
Our next abstraction is the Deployment.
Let's create one:
```bash
cat <<EOF> deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
selector:
matchLabels:
app: hello-world-deployment
template:
metadata:
labels:
app: hello-world-deployment
spec:
serviceAccountName: hello-world
containers:
- name: hello-world-container
image: busybox
command: ['sh', '-c', 'while true; do echo "Hello, World from deployment!"; sleep 1; done']
EOF
kubectl apply -f deployment.yaml
```
```bash
kubectl get deploy
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/1 0 0 24s
```
Something went wrong: the pods are not being created, although they should be.
The controller manager is responsible for turning a Deployment into pods, and we don't have one running yet.
Let's fix that.
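Before installing it, you can confirm that nothing has acted on the Deployment yet: without a controller manager there is no ReplicaSet and therefore no pods, so the list below should be empty:
```bash
kubectl get replicaset
```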
```bash
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
```bash
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager"
```
```bash
{
chmod +x kube-controller-manager
sudo mv kube-controller-manager /usr/local/bin/
}
```
```bash
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
sudo cp ca-key.pem /var/lib/kubernetes/
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
}
```
```bash
sudo systemctl status kube-controller-manager
```
```
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 11:48:41 UTC; 30s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 14805 (kube-controller)
Tasks: 6 (limit: 2275)
Memory: 32.0M
CGroup: /system.slice/kube-controller-manager.service
└─14805 /usr/local/bin/kube-controller-manager --bind-address=0.0.0.0 --cluster-cidr=10.200.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/var/lib/kubernetes/c>
...
```
The controller manager has started, so maybe the pods have been created as well?
Let's check:
```bash
kubectl get pod
```
```
NAME READY STATUS RESTARTS AGE
hello-world 1/1 Running 0 27m
nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 79s
```
The pod has been created, but it is still stuck in Pending.
The reason why it is not running yet is simple:
```bash
kubectl get pod -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world 1/1 Running 0 31m 10.240.1.9 example-server <none> <none>
nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 5m22s <none> <none> <none> <none>
```
We can see that no node has been assigned to it, and without a node assignment the kubelet will not start the pod.
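The field in question is `spec.nodeName`, which a scheduler normally fills in. For the Deployment's pod it is still empty (a quick optional check; it prints an empty line until a node is assigned):
```bash
kubectl get pod -l app=hello-world-deployment -o jsonpath='{.items[0].spec.nodeName}{"\n"}'
```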

@ -1,426 +0,0 @@
# Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
```
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
```
sudo mkdir -p /etc/kubernetes/config
```
### Download and Install the Kubernetes Controller Binaries
Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
```
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
```
### Configure the Kubernetes API Server
```
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
```
REGION=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $REGION \
--format 'value(address)')
```
Create the `kube-apiserver.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:
```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
```
Create the `kube-scheduler.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the Controller Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
Install a basic web server to handle HTTP health checks:
```
sudo apt-get update
sudo apt-get install -y nginx
```
```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
listen 80;
server_name kubernetes.default.svc.cluster.local;
location /healthz {
proxy_pass https://127.0.0.1:6443/healthz;
proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
}
}
EOF
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
```
sudo systemctl restart nginx
```
```
sudo systemctl enable nginx
```
### Verification
```
kubectl cluster-info --kubeconfig admin.kubeconfig
```
```
Kubernetes control plane is running at https://127.0.0.1:6443
```
Test the nginx HTTP health check proxy:
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
```
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366
ok
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
```
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
EOF
```
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
```
## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
```
### Verification
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Make a HTTP request for the Kubernetes version info:
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
> output
```
{
"major": "1",
"minor": "21",
"gitVersion": "v1.21.0",
"gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
"gitTreeState": "clean",
"buildDate": "2021-04-08T16:25:06Z",
"goVersion": "go1.16.1",
"compiler": "gc",
"platform": "linux/amd64"
}
```
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)

docs/08-scheduler.md Normal file
@ -0,0 +1,149 @@
# Scheduler
![image](./img/08_cluster_architecture_scheduler.png "Scheduler")
```bash
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
```bash
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler"
```
```bash
{
chmod +x kube-scheduler
sudo mv kube-scheduler /usr/local/bin/
}
```
```bash
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
```bash
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
}
```
```bash
sudo systemctl status kube-scheduler
```
```
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 11:57:44 UTC; 16s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 15134 (kube-scheduler)
Tasks: 7 (limit: 2275)
Memory: 13.7M
CGroup: /system.slice/kube-scheduler.service
└─15134 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2
...
```
```bash
kubectl get pod -o wide
```
```
hello-world 1/1 Running 0 35m 10.240.1.9 example-server <none> <none>
nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 9m34s 10.240.1.10 example-server <none> <none>
```
```bash
kubectl logs nginx-deployment-5d9cbcf759-x4pk8
```
```
Hello, World from deployment!
Hello, World from deployment!
Hello, World from deployment!
...
```

@ -1,313 +0,0 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:
```
gcloud compute ssh worker-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provisioning a Kubernetes Worker Node
Install the OS dependencies:
```
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```
> The socat binary enables support for the `kubectl port-forward` command.
### Disable Swap
By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Verify if swap is enabled:
```
sudo swapon --show
```
If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
```
sudo swapoff -a
```
> To ensure swap remains off after reboot consult your Linux distro documentation.
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
Create the installation directories:
```
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
```
Install the worker binaries:
```
{
mkdir containerd
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
```
### Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
Create the `bridge` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
```
Create the `loopback` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
EOF
```
### Configure containerd
Create the `containerd` configuration file:
```
sudo mkdir -p /etc/containerd/
```
```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
```
Create the `containerd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubelet
```
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
```
Create the `kubelet-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
Create the `kubelet.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubernetes Proxy
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
Create the `kube-proxy.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
### Start the Worker Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
## Verification
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
List the registered Kubernetes nodes:
```
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 22s v1.21.0
worker-1 Ready <none> 22s v1.21.0
worker-2 Ready <none> 22s v1.21.0
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

docs/09-kubeproxy.md Normal file
@ -0,0 +1,311 @@
# Kube proxy
![image](./img/09_cluster_architecture_proxy.png "Kube proxy")
First, let's create an nginx Deployment that we can later expose through a Service:
```bash
cat <<EOF> nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.21.3
ports:
- containerPort: 80
EOF
kubectl apply -f nginx-deployment.yml
```
```bash
kubectl get pod -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world 1/1 Running 0 109m 10.240.1.9 example-server <none> <none>
nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 84m 10.240.1.14 example-server <none> <none>
```
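If you would rather not copy the pod IP from the table by hand, it can be captured from the pod status (a small convenience snippet; `POD_IP` is just a helper variable you can use instead of the literal address below):
```bash
POD_IP=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].status.podIP}')
echo ${POD_IP}
```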
We need the IP address of the nginx pod from the Deployment — in my case 10.240.1.14 — so note it down (or grab it with the snippet above).
One more preparation step: `kubectl exec` is proxied through the kubelet API, so the API server's `kubernetes` user needs permission to create `nodes/proxy` requests. The following manifest grants exactly that:
```bash
cat <<EOF> rbac-create.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-user-clusterrole
rules:
- apiGroups: [""]
resources: ["nodes/proxy"]
verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-user-clusterrolebinding
subjects:
- kind: User
name: kubernetes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: kubernetes-user-clusterrole
apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f rbac-create.yml
```
```bash
kubectl exec hello-world -- wget -O - 10.240.1.14
```
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Connecting to 10.240.1.14 (10.240.1.14:80)
writing to stdout
- 100% |********************************| 615 0:00:00 ETA
written to stdout
```
But this is not great: I want to reach the nginx Deployment by a stable address, no matter which pod serves the request.
We know Services exist for exactly this, so let's create one:
```bash
{
cat <<EOF> nginx-service.yml
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
EOF
kubectl apply -f nginx-service.yml
}
```
```bash
kubectl get service
```
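The ClusterIP can also be captured programmatically instead of copying it from the output (a convenience sketch; `SERVICE_IP` is just a helper variable):
```bash
SERVICE_IP=$(kubectl get service nginx-service -o jsonpath='{.spec.clusterIP}')
echo ${SERVICE_IP}
```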
Now take the ClusterIP of that service (10.32.0.95 in my case)
and try to repeat the same request:
```bash
kubectl exec hello-world -- wget -O - 10.32.0.95
```
And nothing happens. (We could also dig into Endpoints here, but that would take too long.)
The main reason it does not work at this stage is that one more important component is not running yet:
kube-proxy.
```bash
{
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy
```
```bash
sudo mkdir -p \
/var/lib/kube-proxy
```
```bash
{
chmod +x kube-proxy
sudo mv kube-proxy /usr/local/bin/
}
```
```bash
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
```bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
}
```
```bash
sudo systemctl status kube-proxy
```
```
● kube-proxy.service - Kubernetes Kube Proxy
Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 13:37:27 UTC; 23s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 19873 (kube-proxy)
Tasks: 5 (limit: 2275)
Memory: 10.0M
CGroup: /system.slice/kube-proxy.service
└─19873 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml
...
```
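Before repeating the request, you can peek at what kube-proxy actually programmed: in `iptables` mode it writes NAT rules for every Service into the `KUBE-SERVICES` chain (an optional look; the rule comment should mention `default/nginx-service`):
```bash
sudo iptables -t nat -L KUBE-SERVICES -n | grep nginx-service
```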
kube-proxy is up, so it is time to verify:
```bash
kubectl exec hello-world -- wget -O - 10.32.0.95
```
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Connecting to 10.32.0.95 (10.32.0.95:80)
writing to stdout
- 100% |********************************| 615 0:00:00 ETA
written to stdout
```
It works — the Service is now reachable through its ClusterIP.

@ -1,66 +0,0 @@
# Configuring kubectl for Remote Access
In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
> Run the commands in this lab from the same directory used to generate the admin client certificates.
## The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
```
## Verification
Check the version of the remote Kubernetes cluster:
```
kubectl version
```
> output
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
List the nodes in the remote Kubernetes cluster:
```
kubectl get nodes
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 2m35s v1.21.0
worker-1 Ready <none> 2m35s v1.21.0
worker-2 Ready <none> 2m35s v1.21.0
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

docs/10-dns.md Normal file

@ -0,0 +1,52 @@
# DNS
Reaching a pod by its IP address is nice, but it should also be possible to reach it by service name. Let's try:
```bash
kubectl exec hello-world -- wget -O - nginx-service
```
That does not work - something went wrong. The reason is that we have not installed the DNS add-on. No problem, let's fix that now:
```bash
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
```
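Check whether the CoreDNS pods have come up (the add-on labels them `k8s-app=kube-dns`):
```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
```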
In my case it still did not work right away - the kubelet configuration needs to be updated:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet
}
```
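Once the kubelet has been restarted and the CoreDNS pods are running, the earlier lookup by service name should succeed:
```bash
kubectl exec hello-world -- wget -O - nginx-service
```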


@ -1,60 +0,0 @@
# Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
## The Routing Table
In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.
Print the internal IP address and Pod CIDR range for each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
```
> output
```
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
```
## Routes
Create network routes for each worker instance:
```
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
```
List the routes in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
> output
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-1606ba68df692422 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0
default-route-615e3652a8b74e4d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
```
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)


@ -1,81 +0,0 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `coredns` cluster add-on:
```
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
```
> output
```
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
```
List the pods created by the `kube-dns` deployment:
```
kubectl get pods -l k8s-app=kube-dns -n kube-system
```
> output
```
NAME READY STATUS RESTARTS AGE
coredns-8494f9c688-hh7r2 1/1 Running 0 10s
coredns-8494f9c688-zqrj2 1/1 Running 0 10s
```
## Verification
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
```
kubectl get pods -l run=busybox
```
> output
```
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s
```
Retrieve the full name of the `busybox` pod:
```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
```
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```
> output
```
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
```
Next: [Smoke Test](13-smoke-test.md)


@ -1,220 +0,0 @@
# Smoke Test
In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
## Data Encryption
In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
Create a generic secret:
```
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem\
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
> output
```
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
0000015a
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
## Deployments
In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
kubectl create deployment nginx --image=nginx
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l app=nginx
```
> output
```
NAME READY STATUS RESTARTS AGE
nginx-f89759699-kpn5m 1/1 Running 0 10s
```
### Port Forwarding
In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
kubectl port-forward $POD_NAME 8080:80
```
> output
```
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```
In a new terminal make an HTTP request using the forwarding address:
```
curl --head http://127.0.0.1:8080
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
```
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
```
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C
```
### Logs
In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
Print the `nginx` pod logs:
```
kubectl logs $POD_NAME
```
> output
```
...
127.0.0.1 - - [02/May/2021:05:29:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
```
### Exec
In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
kubectl exec -ti $POD_NAME -- nginx -v
```
> output
```
nginx version: nginx/1.19.10
```
## Services
In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
```
kubectl expose deployment nginx --port 80 --type NodePort
```
> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
Retrieve the node port assigned to the `nginx` service:
```
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
Create a firewall rule that allows remote access to the `nginx` node port:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```
Retrieve the external IP address of a worker instance:
```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
Make an HTTP request using the external IP address and the `nginx` node port:
```
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
```
Next: [Cleaning Up](14-cleanup.md)


@ -1,63 +0,0 @@
# Cleaning Up
In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2 \
--zone $(gcloud config get-value compute/zone)
```
## Networking
Delete the external load balancer network resources:
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` firewall rules:
```
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
```
Delete the `kubernetes-the-hard-way` network VPC:
```
{
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` compute address:
```
gcloud -q compute addresses delete kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```

docs/Untitled-1.md Normal file

@ -0,0 +1,50 @@
db changes after release:
- table: subscripsion
changes:
- EnableLoggingFunctionality remove
- SendLogNotifications remove
- table: FragmentSettings
changes: remove
- table: FragmentResults
changes: remove
- table: PrecalculatedFragmentResults
changes: remove
- table: Components
changes: remove
- table: ScoreProductResults
changes: remove
- table: PrecalculatedScoreResults
changes: remove
- table: DatasetInsights
changes: remove
- table: PrecalculatedDatasetInsights
changes: remove
- table: ScoringEngineVerifications
changes: remove
- table: ScoringEngineVerificationItems
changes: remove
- table: Profiles
changes: remove
- table: ProfileFields
changes: remove
- table: WebDatasetChunks
changes: removed
- table: WebEnvironments
changes: removed
- table: WebDatasets
changes: remove
- table: MobileDatasets
changes:
- FileSize remove
- SdkIdentifier remove
- table: Datasets
changes:
- IX_JsonId - remove index
- JsonId - remove column

docs/single-node-cluster.md Normal file

@ -0,0 +1,959 @@
```
{
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
}
```
```bash
{
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
The following files are generated:
```
ca-key.pem
ca.csr
ca.pem
```
```bash
{
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=worker,127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
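Optionally, you can inspect the generated certificate and confirm its subject alternative names (this assumes openssl is available on the host):
```bash
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
```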
Download etcd:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Extract the archive and move the etcd binaries to the /usr/local/bin/ directory:
```
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem \
kubernetes.pem kubernetes-key.pem \
/etc/etcd/
}
```
```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name etcd \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--client-cert-auth \\
--listen-client-urls https://127.0.0.1:2379 \\
--advertise-client-urls https://127.0.0.1:2379 \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
```
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
```
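If the member list command fails, check the etcd service itself first:
```bash
sudo systemctl status etcd
```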
# api server
```bash
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
```bash
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
```
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
```
```
sudo mkdir -p /etc/kubernetes/config
```
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
```
```
{
chmod +x kube-apiserver
sudo mv kube-apiserver /usr/local/bin/
}
```
```
{
sudo mkdir -p /var/lib/kubernetes/
sudo cp \
ca.pem \
kubernetes.pem kubernetes-key.pem \
encryption-config.yaml \
service-account-key.pem service-account.pem \
/var/lib/kubernetes/
}
```
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address='91.107.220.4' \\
--allow-privileged='true' \\
--apiserver-count='3' \\
--audit-log-maxage='30' \\
--audit-log-maxbackup='3' \\
--audit-log-maxsize='100' \\
--audit-log-path='/var/log/audit.log' \\
--authorization-mode='Node,RBAC' \\
--bind-address='0.0.0.0' \\
--client-ca-file='/var/lib/kubernetes/ca.pem' \\
--enable-admission-plugins='NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota' \\
--etcd-cafile='/var/lib/kubernetes/ca.pem' \\
--etcd-certfile='/var/lib/kubernetes/kubernetes.pem' \\
--etcd-keyfile='/var/lib/kubernetes/kubernetes-key.pem' \\
--etcd-servers='https://127.0.0.1:2379' \\
--event-ttl='1h' \\
--encryption-provider-config='/var/lib/kubernetes/encryption-config.yaml' \\
--kubelet-certificate-authority='/var/lib/kubernetes/ca.pem' \\
--kubelet-client-certificate='/var/lib/kubernetes/kubernetes.pem' \\
--kubelet-client-key='/var/lib/kubernetes/kubernetes-key.pem' \\
--runtime-config='api/all=true' \\
--service-account-key-file='/var/lib/kubernetes/service-account.pem' \\
--service-cluster-ip-range='10.32.0.0/24' \\
--service-node-port-range='30000-32767' \\
--tls-cert-file='/var/lib/kubernetes/kubernetes.pem' \\
--tls-private-key-file='/var/lib/kubernetes/kubernetes-key.pem' \\
--service-account-signing-key-file='/var/lib/kubernetes/service-account-key.pem' \\
--service-account-issuer='https://kubernetes.default.svc.cluster.local' \\
--api-audiences='https://kubernetes.default.svc.cluster.local' \\
--v='2'
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
}
```
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
&& chmod +x kubectl \
&& sudo mv kubectl /usr/local/bin/
```
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```
```bash
kubectl version --kubeconfig=admin.kubeconfig
```
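As an extra sanity check, you can query the API server health endpoint through kubectl:
```bash
kubectl get --raw='/healthz' --kubeconfig=admin.kubeconfig
```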
```bash
{
cat <<EOF> pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec:
serviceAccountName: hello-world
containers:
- name: hello-world-container
image: busybox
command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done']
nodeName: worker
EOF
cat <<EOF> sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hello-world
automountServiceAccountToken: false
EOF
kubectl apply -f sa.yaml --kubeconfig=admin.kubeconfig
kubectl apply -f pod.yaml --kubeconfig=admin.kubeconfig
}
```
# kubelet
TODO: we probably also need to issue the certificates for the public IP address. For now, map the `worker` hostname to the loopback address:
```bash
echo "127.0.0.1 worker" | sudo tee -a /etc/hosts
```
```bash
{
cat > kubelet-csr.json <<EOF
{
"CN": "system:node:worker",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1 \
-profile=kubernetes \
kubelet-csr.json | cfssljson -bare kubelet
}
```
```bash
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```
```bash
sudo swapon --show
```
```bash
sudo swapoff -a
```
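Note that `swapoff -a` only disables swap until the next reboot. To keep it disabled you can also comment out the swap entries in `/etc/fstab` (a sketch - review your fstab before running it):
```bash
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```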
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
```bash
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
```
```bash
{
mkdir containerd
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x kube-proxy kubelet runc
sudo mv kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
```
```bash
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "10.240.1.0/24"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
```
```bash
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
EOF
```
```bash
sudo mkdir -p /etc/containerd/
```
```bash
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kubelet.kubeconfig
kubectl config set-credentials system:node:worker \
--client-certificate=kubelet.pem \
--client-key=kubelet-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:worker \
--kubeconfig=kubelet.kubeconfig
kubectl config use-context default --kubeconfig=kubelet.kubeconfig
}
```
```bash
{
sudo cp kubelet-key.pem kubelet.pem /var/lib/kubelet/
sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ca.pem /var/lib/kubernetes/
}
```
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "10.240.1.0/24"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-key.pem"
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--hostname-override=worker \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
}
```
```bash
kubectl get nodes --kubeconfig=admin.kubeconfig
```
```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```
```bash
cat <<EOF> nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
spec:
serviceAccountName: hello-world
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
nodeName: worker
EOF
kubectl apply -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
```
```bash
kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}'
```
```bash
curl $(kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}')
```
```bash
kubectl delete -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
kubectl delete -f pod.yaml --kubeconfig=admin.kubeconfig
kubectl delete -f sa.yaml --kubeconfig=admin.kubeconfig
```
```bash
cat <<EOF> nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml --kubeconfig=admin.kubeconfig
```
```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```
```bash
kubectl get deployment --kubeconfig=admin.kubeconfig
```
So the Deployment exists but there are no pods - that is because the controller manager, which creates the ReplicaSet and Pods for a Deployment, is not running yet.
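You can confirm that nothing was created below the Deployment - there is no ReplicaSet yet either:
```bash
kubectl get replicaset --kubeconfig=admin.kubeconfig
```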
# controller manager
```bash
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
```bash
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager"
```
```bash
{
chmod +x kube-controller-manager
sudo mv kube-controller-manager /usr/local/bin/
}
```
```bash
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
sudo cp ca-key.pem /var/lib/kubernetes/
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
}
```
```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```
Now we can see that our pods have been created, but they never start - they stay Pending because nothing schedules them onto the node yet.
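A quick way to see this is to list the pods with `-o wide` - the NODE column should show `<none>` because no scheduler has assigned them:
```bash
kubectl get pod -o wide --kubeconfig=admin.kubeconfig
```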
# kube scheduler
```bash
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
```bash
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler"
```
```bash
{
chmod +x kube-scheduler
sudo mv kube-scheduler /usr/local/bin/
}
```
```bash
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
```bash
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
```
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
}
```
```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```
Finally our pods are Running, and we can even check that they respond:
```bash
curl $(kubectl get pods -l app=nginx --kubeconfig=admin.kubeconfig -o=jsonpath='{.items[0].status.podIP}')
```
Great - nginx is up and responding.
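If you want to tidy up after the check, the test deployment can be removed:
```bash
kubectl delete -f nginx-deployment.yaml --kubeconfig=admin.kubeconfig
```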