+
+
+Welcome to nginx!
+
+
+
+Welcome to nginx!
+If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.
+
+For online documentation and support please refer to
+nginx.org.
+Commercial support is available at
+nginx.com.
+
+Thank you for using nginx.
+
+
+Connecting to 10.32.0.95 (10.32.0.95:80)
+writing to stdout
+- 100% |********************************| 615 0:00:00 ETA
+written to stdout
+```
+Wow, it all worked!
\ No newline at end of file
diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md
deleted file mode 100644
index 601fe28..0000000
--- a/docs/10-configuring-kubectl.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Configuring kubectl for Remote Access
-
-In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
-
-> Run the commands in this lab from the same directory used to generate the admin client certificates.
-
-## The Admin Kubernetes Configuration File
-
-Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
-
-Generate a kubeconfig file suitable for authenticating as the `admin` user:
-
-```
-{
- KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
- --region $(gcloud config get-value compute/region) \
- --format 'value(address)')
-
- kubectl config set-cluster kubernetes-the-hard-way \
- --certificate-authority=ca.pem \
- --embed-certs=true \
- --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
-
- kubectl config set-credentials admin \
- --client-certificate=admin.pem \
- --client-key=admin-key.pem
-
- kubectl config set-context kubernetes-the-hard-way \
- --cluster=kubernetes-the-hard-way \
- --user=admin
-
- kubectl config use-context kubernetes-the-hard-way
-}
-```
-
-## Verification
-
-Check the version of the remote Kubernetes cluster:
-
-```
-kubectl version
-```
-
-> output
-
-```
-Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
-Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
-```
-
-List the nodes in the remote Kubernetes cluster:
-
-```
-kubectl get nodes
-```
-
-> output
-
-```
-NAME STATUS ROLES AGE VERSION
-worker-0 Ready 2m35s v1.21.0
-worker-1 Ready 2m35s v1.21.0
-worker-2 Ready 2m35s v1.21.0
-```
-
-Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
diff --git a/docs/10-dns.md b/docs/10-dns.md
new file mode 100644
index 0000000..9a23f05
--- /dev/null
+++ b/docs/10-dns.md
@@ -0,0 +1,52 @@
+# dns
+
+OK, it's neat that we can reach things by IP, but I read that you can also address a Service by its name.
+
+```bash
+kubectl exec hello-world -- wget -O - nginx-service
+```
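For context on why a bare Service name can work at all: the kubelet writes `search` domains into each pod's `/etc/resolv.conf`, so short names get tried against suffixes of the form `<namespace>.svc.<cluster-domain>`. A sketch of the expansion, assuming the default `cluster.local` cluster domain:

```shell
# Hypothetical values: the Service queried above and its namespace.
SVC=nginx-service
NS=default

# The fully qualified name the resolver ultimately looks up:
echo "${SVC}.${NS}.svc.cluster.local"
```

Once DNS works, `wget -O - nginx-service.default.svc.cluster.local` inside the pod should behave the same as the short form.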
+
+That doesn't really work; something went wrong.
+
+That's because we haven't installed the DNS add-on.
+No worries, we'll fix that now.
+
+```bash
+kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
+```
+
+Well, it didn't really work for me either.
+We need to make changes to the kubelet.
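Specifically, the kubelet has to hand every pod the DNS server's cluster IP. A sketch of the relevant `KubeletConfiguration` fields - the values are assumptions matching the kubernetes-the-hard-way defaults (`kube-dns` Service at `10.32.0.10`, cluster domain `cluster.local`):

```yaml
# Fragment of the kubelet's KubeletConfiguration file (sketch, not complete).
# These end up in each pod's /etc/resolv.conf as nameserver and search path.
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
```

After changing the config, restart the kubelet and recreate the pod so it picks up the new `resolv.conf`.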
+```bash
+cat <
+```
diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md
deleted file mode 100644
--- a/docs/11-pod-network-routes.md
+++ /dev/null
-> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
-
-## The Routing Table
-
-In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.
-
-Print the internal IP address and Pod CIDR range for each worker instance:
-
-```
-for instance in worker-0 worker-1 worker-2; do
- gcloud compute instances describe ${instance} \
- --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
-done
-```
-
-> output
-
-```
-10.240.0.20 10.200.0.0/24
-10.240.0.21 10.200.1.0/24
-10.240.0.22 10.200.2.0/24
-```
-
-## Routes
-
-Create network routes for each worker instance:
-
-```
-for i in 0 1 2; do
- gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
- --network kubernetes-the-hard-way \
- --next-hop-address 10.240.0.2${i} \
- --destination-range 10.200.${i}.0/24
-done
-```
-
-List the routes in the `kubernetes-the-hard-way` VPC network:
-
-```
-gcloud compute routes list --filter "network: kubernetes-the-hard-way"
-```
-
-> output
-
-```
-NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
-default-route-1606ba68df692422 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0
-default-route-615e3652a8b74e4d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
-kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
-kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
-kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
-```
-
-Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md
deleted file mode 100644
index be81ef6..0000000
--- a/docs/12-dns-addon.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# Deploying the DNS Cluster Add-on
-
-In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
-
-## The DNS Cluster Add-on
-
-Deploy the `coredns` cluster add-on:
-
-```
-kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
-```
-
-> output
-
-```
-serviceaccount/coredns created
-clusterrole.rbac.authorization.k8s.io/system:coredns created
-clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
-configmap/coredns created
-deployment.apps/coredns created
-service/kube-dns created
-```
-
-List the pods created by the `kube-dns` deployment:
-
-```
-kubectl get pods -l k8s-app=kube-dns -n kube-system
-```
-
-> output
-
-```
-NAME READY STATUS RESTARTS AGE
-coredns-8494f9c688-hh7r2 1/1 Running 0 10s
-coredns-8494f9c688-zqrj2 1/1 Running 0 10s
-```
-
-## Verification
-
-Create a `busybox` deployment:
-
-```
-kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
-```
-
-List the pod created by the `busybox` deployment:
-
-```
-kubectl get pods -l run=busybox
-```
-
-> output
-
-```
-NAME READY STATUS RESTARTS AGE
-busybox 1/1 Running 0 3s
-```
-
-Retrieve the full name of the `busybox` pod:
-
-```
-POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
-```
-
-Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
-
-```
-kubectl exec -ti $POD_NAME -- nslookup kubernetes
-```
-
-> output
-
-```
-Server: 10.32.0.10
-Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
-
-Name: kubernetes
-Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
-```
-
-Next: [Smoke Test](13-smoke-test.md)
diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md
deleted file mode 100644
index 566ace7..0000000
--- a/docs/13-smoke-test.md
+++ /dev/null
@@ -1,220 +0,0 @@
-# Smoke Test
-
-In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
-
-## Data Encryption
-
-In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
-
-Create a generic secret:
-
-```
-kubectl create secret generic kubernetes-the-hard-way \
- --from-literal="mykey=mydata"
-```
-
-Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
-
-```
-gcloud compute ssh controller-0 \
- --command "sudo ETCDCTL_API=3 etcdctl get \
- --endpoints=https://127.0.0.1:2379 \
- --cacert=/etc/etcd/ca.pem \
- --cert=/etc/etcd/kubernetes.pem \
- --key=/etc/etcd/kubernetes-key.pem\
- /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
-```
-
-> output
-
-```
-00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
-00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
-00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
-00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
-00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
-00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
-00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
-00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
-00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
-00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
-000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
-000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
-000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
-000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
-000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
-000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
-00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
-00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
-00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
-00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
-00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
-00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
-0000015a
-```
-
-The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
-
-## Deployments
-
-In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
-
-Create a deployment for the [nginx](https://nginx.org/en/) web server:
-
-```
-kubectl create deployment nginx --image=nginx
-```
-
-List the pod created by the `nginx` deployment:
-
-```
-kubectl get pods -l app=nginx
-```
-
-> output
-
-```
-NAME READY STATUS RESTARTS AGE
-nginx-f89759699-kpn5m 1/1 Running 0 10s
-```
-
-### Port Forwarding
-
-In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
-
-Retrieve the full name of the `nginx` pod:
-
-```
-POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
-```
-
-Forward port `8080` on your local machine to port `80` of the `nginx` pod:
-
-```
-kubectl port-forward $POD_NAME 8080:80
-```
-
-> output
-
-```
-Forwarding from 127.0.0.1:8080 -> 80
-Forwarding from [::1]:8080 -> 80
-```
-
-In a new terminal make an HTTP request using the forwarding address:
-
-```
-curl --head http://127.0.0.1:8080
-```
-
-> output
-
-```
-HTTP/1.1 200 OK
-Server: nginx/1.19.10
-Date: Sun, 02 May 2021 05:29:25 GMT
-Content-Type: text/html
-Content-Length: 612
-Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
-Connection: keep-alive
-ETag: "6075b537-264"
-Accept-Ranges: bytes
-```
-
-Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
-
-```
-Forwarding from 127.0.0.1:8080 -> 80
-Forwarding from [::1]:8080 -> 80
-Handling connection for 8080
-^C
-```
-
-### Logs
-
-In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
-
-Print the `nginx` pod logs:
-
-```
-kubectl logs $POD_NAME
-```
-
-> output
-
-```
-...
-127.0.0.1 - - [02/May/2021:05:29:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
-```
-
-### Exec
-
-In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
-
-Print the nginx version by executing the `nginx -v` command in the `nginx` container:
-
-```
-kubectl exec -ti $POD_NAME -- nginx -v
-```
-
-> output
-
-```
-nginx version: nginx/1.19.10
-```
-
-## Services
-
-In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
-
-Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
-
-```
-kubectl expose deployment nginx --port 80 --type NodePort
-```
-
-> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
-
-Retrieve the node port assigned to the `nginx` service:
-
-```
-NODE_PORT=$(kubectl get svc nginx \
- --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
-```
-
-Create a firewall rule that allows remote access to the `nginx` node port:
-
-```
-gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
- --allow=tcp:${NODE_PORT} \
- --network kubernetes-the-hard-way
-```
-
-Retrieve the external IP address of a worker instance:
-
-```
-EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
- --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
-```
-
-Make an HTTP request using the external IP address and the `nginx` node port:
-
-```
-curl -I http://${EXTERNAL_IP}:${NODE_PORT}
-```
-
-> output
-
-```
-HTTP/1.1 200 OK
-Server: nginx/1.19.10
-Date: Sun, 02 May 2021 05:31:52 GMT
-Content-Type: text/html
-Content-Length: 612
-Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
-Connection: keep-alive
-ETag: "6075b537-264"
-Accept-Ranges: bytes
-```
-
-Next: [Cleaning Up](14-cleanup.md)
diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md
deleted file mode 100644
index 5ab908c..0000000
--- a/docs/14-cleanup.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Cleaning Up
-
-In this lab you will delete the compute resources created during this tutorial.
-
-## Compute Instances
-
-Delete the controller and worker compute instances:
-
-```
-gcloud -q compute instances delete \
- controller-0 controller-1 controller-2 \
- worker-0 worker-1 worker-2 \
- --zone $(gcloud config get-value compute/zone)
-```
-
-## Networking
-
-Delete the external load balancer network resources:
-
-```
-{
- gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
- --region $(gcloud config get-value compute/region)
-
- gcloud -q compute target-pools delete kubernetes-target-pool
-
- gcloud -q compute http-health-checks delete kubernetes
-
- gcloud -q compute addresses delete kubernetes-the-hard-way
-}
-```
-
-Delete the `kubernetes-the-hard-way` firewall rules:
-
-```
-gcloud -q compute firewall-rules delete \
- kubernetes-the-hard-way-allow-nginx-service \
- kubernetes-the-hard-way-allow-internal \
- kubernetes-the-hard-way-allow-external \
- kubernetes-the-hard-way-allow-health-check
-```
-
-Delete the `kubernetes-the-hard-way` network VPC:
-
-```
-{
- gcloud -q compute routes delete \
- kubernetes-route-10-200-0-0-24 \
- kubernetes-route-10-200-1-0-24 \
- kubernetes-route-10-200-2-0-24
-
- gcloud -q compute networks subnets delete kubernetes
-
- gcloud -q compute networks delete kubernetes-the-hard-way
-}
-```
-
-Delete the `kubernetes-the-hard-way` compute address:
-
-```
-gcloud -q compute addresses delete kubernetes-the-hard-way \
- --region $(gcloud config get-value compute/region)
-```
diff --git a/docs/Untitled-1.md b/docs/Untitled-1.md
new file mode 100644
index 0000000..a80baf0
--- /dev/null
+++ b/docs/Untitled-1.md
@@ -0,0 +1,50 @@
+db changes after release:
+  - table: Subscriptions
+ changes:
+ - EnableLoggingFunctionality remove
+ - SendLogNotifications remove
+
+ - table: FragmentSettings
+ changes: remove
+ - table: FragmentResults
+ changes: remove
+ - table: PrecalculatedFragmentResults
+ changes: remove
+
+ - table: Components
+ changes: remove
+ - table: ScoreProductResults
+ changes: remove
+ - table: PrecalculatedScoreResults
+ changes: remove
+
+ - table: DatasetInsights
+ changes: remove
+ - table: PrecalculatedDatasetInsights
+ changes: remove
+
+ - table: ScoringEngineVerifications
+ changes: remove
+ - table: ScoringEngineVerificationItems
+ changes: remove
+ - table: Profiles
+ changes: remove
+ - table: ProfileFields
+ changes: remove
+
+ - table: WebDatasetChunks
+ changes: removed
+ - table: WebEnvironments
+ changes: removed
+ - table: WebDatasets
+ changes: remove
+
+ - table: MobileDatasets
+ changes:
+ - FileSize remove
+ - SdkIdentifier remove
+
+ - table: Datasets
+ changes:
+ - IX_JsonId - remove index
+ - JsonId - remove column
diff --git a/docs/images/tmux-screenshot.png b/docs/images/tmux-screenshot.png
deleted file mode 100644
index bf23b11..0000000
Binary files a/docs/images/tmux-screenshot.png and /dev/null differ
diff --git a/docs/img/00_cluster_architecture.png b/docs/img/00_cluster_architecture.png
new file mode 100644
index 0000000..c6a8a59
Binary files /dev/null and b/docs/img/00_cluster_architecture.png differ
diff --git a/docs/img/01_cluster_architecture_container_runtime.png b/docs/img/01_cluster_architecture_container_runtime.png
new file mode 100644
index 0000000..200e2c0
Binary files /dev/null and b/docs/img/01_cluster_architecture_container_runtime.png differ
diff --git a/docs/img/02_cluster_architecture_kubelet.png b/docs/img/02_cluster_architecture_kubelet.png
new file mode 100644
index 0000000..e4d90be
Binary files /dev/null and b/docs/img/02_cluster_architecture_kubelet.png differ
diff --git a/docs/img/04_cluster_architecture_etcd.png b/docs/img/04_cluster_architecture_etcd.png
new file mode 100644
index 0000000..97ac8c1
Binary files /dev/null and b/docs/img/04_cluster_architecture_etcd.png differ
diff --git a/docs/img/05_cluster_architecture_apiserver.png b/docs/img/05_cluster_architecture_apiserver.png
new file mode 100644
index 0000000..ff7fd59
Binary files /dev/null and b/docs/img/05_cluster_architecture_apiserver.png differ
diff --git a/docs/img/06_cluster_architecture_apiserver_kubelet.png b/docs/img/06_cluster_architecture_apiserver_kubelet.png
new file mode 100644
index 0000000..2e0420b
Binary files /dev/null and b/docs/img/06_cluster_architecture_apiserver_kubelet.png differ
diff --git a/docs/img/07_cluster_architecture_controller_manager.png b/docs/img/07_cluster_architecture_controller_manager.png
new file mode 100644
index 0000000..84bd1c9
Binary files /dev/null and b/docs/img/07_cluster_architecture_controller_manager.png differ
diff --git a/docs/img/08_cluster_architecture_scheduler.png b/docs/img/08_cluster_architecture_scheduler.png
new file mode 100644
index 0000000..e475961
Binary files /dev/null and b/docs/img/08_cluster_architecture_scheduler.png differ
diff --git a/docs/img/09_cluster_architecture_proxy.png b/docs/img/09_cluster_architecture_proxy.png
new file mode 100644
index 0000000..c6a8a59
Binary files /dev/null and b/docs/img/09_cluster_architecture_proxy.png differ
diff --git a/docs/single-node-cluster.md b/docs/single-node-cluster.md
new file mode 100644
index 0000000..9ddc52b
--- /dev/null
+++ b/docs/single-node-cluster.md
@@ -0,0 +1,959 @@
+```
+{
+wget -q --show-progress --https-only --timestamping \
+ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
+ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
+chmod +x cfssl cfssljson
+sudo mv cfssl cfssljson /usr/local/bin/
+}
+```
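Optionally, a quick sanity check that both binaries are now resolvable from the `PATH` (prints `ok` or `missing` for each tool):

```shell
# Report whether each of the two tools can be found on the PATH.
for bin in cfssl cfssljson; do
  if command -v "${bin}" > /dev/null 2>&1; then
    echo "${bin}: ok"
  else
    echo "${bin}: missing"
  fi
done
```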
+
+
+```bash
+{
+
+cat > ca-config.json < ca-csr.json < kubernetes-csr.json < service-account-csr.json < admin-csr.json < encryption-config.yaml < pod.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: hello-world
+spec:
+ serviceAccountName: hello-world
+ containers:
+ - name: hello-world-container
+ image: busybox
+ command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done']
+ nodeName: worker
+EOF
+
+cat <<EOF > sa.yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: hello-world
+automountServiceAccountToken: false
+EOF
+
+kubectl apply -f sa.yaml --kubeconfig=admin.kubeconfig
+kubectl apply -f pod.yaml --kubeconfig=admin.kubeconfig
+}
+```
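One thing worth flagging about the pod manifest above: there is no scheduler in this cluster yet, and the pod still gets placed because of this line:

```yaml
# Setting spec.nodeName skips scheduling entirely: the kubelet that
# registered as "worker" picks the pod up directly from the API server.
nodeName: worker
```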
+
+# kubelet
+
+Hmm, we probably also need to issue the certificates for the public IP.
+```bash
+echo "127.0.0.1 worker" | sudo tee -a /etc/hosts
+```
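As an aside, a plain `sudo echo "..." >> /etc/hosts` would not work from a non-root shell: the `>>` redirection is opened by the calling shell before `sudo` ever runs. Piping through `sudo tee -a` moves the file write under sudo. The pattern, demonstrated on a throwaway file (the file is made up for the demo, not the real `/etc/hosts`):

```shell
# Work on a temporary file instead of the real /etc/hosts.
HOSTS_COPY=$(mktemp)
echo "127.0.0.1 localhost" > "${HOSTS_COPY}"

# tee -a opens the file for appending itself, so when prefixed with sudo
# the write runs with root privileges (sudo omitted for this demo).
echo "127.0.0.1 worker" | tee -a "${HOSTS_COPY}" > /dev/null

cat "${HOSTS_COPY}"
```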
+
+```bash
+cat <<EOF > nginx-pod.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx-pod
+spec:
+ serviceAccountName: hello-world
+ containers:
+ - name: nginx-container
+ image: nginx
+ ports:
+ - containerPort: 80
+ nodeName: worker
+EOF
+
+
+kubectl apply -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
+```
+
+```bash
+kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}'
+```
+
+```bash
+curl $(kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}')
+```
+
+```bash
+kubectl delete -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
+kubectl delete -f pod.yaml --kubeconfig=admin.kubeconfig
+kubectl delete -f sa.yaml --kubeconfig=admin.kubeconfig
+```
+
+```bash
+cat <<EOF > nginx-deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx-container
+ image: nginx
+ ports:
+ - containerPort: 80
+EOF
+
+kubectl apply -f nginx-deployment.yaml --kubeconfig=admin.kubeconfig
+```
+
+```bash
+kubectl get pod --kubeconfig=admin.kubeconfig
+```
+
+```bash
+kubectl get deployment --kubeconfig=admin.kubeconfig
+```
+OK, so the deployment is there but there are no pods - an outrage. It makes sense though: nothing in this cluster is turning the Deployment into pods yet, and that is the controller manager's job.
+
+# controller manager
+
+```bash
+{
+cat > kube-controller-manager-csr.json < kube-scheduler-csr.json <