Generate and update cert authority

pull/758/head
Tom English 2023-12-21 16:07:38 -05:00
parent 58735937f3
commit 4c9d0cd225
1 changed file with 73 additions and 30 deletions


## Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Generate the CA configuration file, certificate, and private key:
```
cat > ca-config.json <<EOF
{
"signing": {
...
EOF

cat > ca-csr.json <<EOF
...
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
Results:
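The `cfssljson -bare ca` step writes `ca.pem`, `ca-key.pem`, and `ca.csr` to the working directory. As an optional sanity check (a minimal sketch, assuming `openssl` is installed), you can confirm the new certificate is a self-signed CA:
```
# The subject and issuer should be identical because the root CA signs itself,
# and the basic constraints extension should report CA:TRUE.
openssl x509 -in ca.pem -noout -subject -issuer
openssl x509 -in ca.pem -noout -text | grep -A 1 "Basic Constraints"
```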
## Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.

### The Admin Client Certificate
Generate the `admin` client certificate and private key:
```
cat > admin-csr.json <<EOF
{
"CN": "admin",
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
```
Results:
admin-key.pem
admin.pem
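Before moving on, you can optionally verify the result. The sketch below (again assuming `openssl` is available) checks that `admin.pem` chains back to `ca.pem` and prints its subject, which should carry the `admin` common name from `admin-csr.json`:
```
# "admin.pem: OK" means the certificate was signed by the CA generated above.
openssl verify -CAfile ca.pem admin.pem

# The subject should include CN = admin.
openssl x509 -in admin.pem -noout -subject
```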
### The Kubelet Client Certificates
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/reference/access-authn-authz/node/) called Node Authorizer that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
```gcloud```
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
...
EOF

cfssl gencert \
...
done
```
```az```
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(az vm show --name ${instance} -d --query publicIps -o tsv)
INTERNAL_IP=$(az vm show --name ${instance} -d --query privateIps -o tsv)
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
```
Results:
```
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
```
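The Node Authorizer requirements described above can be checked directly on the generated certificates. This optional snippet (assuming `openssl` is installed) prints each worker certificate's subject, which should show the `system:nodes` organization and a `system:node:<nodeName>` common name:
```
for instance in worker-0 worker-1 worker-2; do
  # Expect something like: O = system:nodes, CN = system:node:worker-0
  openssl x509 -in ${instance}.pem -noout -subject
done
```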
### The Controller Manager Client Certificate

Generate the `kube-controller-manager` client certificate and private key:
```
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```
Results:
kube-controller-manager-key.pem
kube-controller-manager.pem
### The Kube Proxy Client Certificate

Generate the `kube-proxy` client certificate and private key:
```
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Results:
kube-proxy-key.pem
kube-proxy.pem
### The Scheduler Client Certificate

Generate the `kube-scheduler` client certificate and private key:
```
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```
Results:
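The `cfssljson -bare kube-scheduler` step writes `kube-scheduler-key.pem` and `kube-scheduler.pem`. If you want to double-check all three control plane client certificates at once, the optional loop below (assuming `openssl` is installed) prints their common names, which should match the `system:kube-controller-manager`, `system:kube-proxy`, and `system:kube-scheduler` identities used in the CSRs above:
```
for component in kube-controller-manager kube-proxy kube-scheduler; do
  # Each subject should end with CN = system:<component name>.
  openssl x509 -in ${component}.pem -noout -subject
done
```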
### The Kubernetes API Server Certificate

The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key:
```gcloud```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
```az```
```
KUBERNETES_PUBLIC_ADDRESS=$(az network public-ip show --name kubernetes-the-hard-way --query ipAddress -o tsv)
```
```
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal DNS name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
Results:

kubernetes-key.pem
kubernetes.pem
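Since remote clients validate the API server against one of these names or addresses, it is worth confirming they all made it into the certificate. A minimal check, assuming `openssl` is installed:
```
# The output should list 10.32.0.1, the controller node addresses, 127.0.0.1,
# the kubernetes-the-hard-way public address, and the KUBERNETES_HOSTNAMES entries.
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
```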
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
...
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
```
Results:
service-account-key.pem
service-account.pem
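The private key (`service-account-key.pem`) is what signs service account tokens, while the certificate carries the matching public key that the API server can use to verify them. If you want to inspect that public key (optional, assuming `openssl` is installed):
```
# Prints the PEM-encoded public key embedded in the service-account certificate.
openssl x509 -in service-account.pem -noout -pubkey
```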
## Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:
```gcloud```
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```
```az```
```
for instance in worker-0 worker-1 worker-2; do
IP=$(az vm show -d --name ${instance} --query "publicIps" -o tsv)
scp ca.pem azureuser@${IP}:/home/azureuser
scp ${instance}-key.pem azureuser@${IP}:/home/azureuser
scp ${instance}.pem azureuser@${IP}:/home/azureuser
done
```
Copy the appropriate certificates and private keys to each controller instance:
```gcloud```
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
```
```az```
```
for instance in controller-0 controller-1 controller-2; do
IP=$(az vm show -d --name ${instance} --query "publicIps" -o tsv)
scp ca.pem azureuser@${IP}:/home/azureuser
scp ca-key.pem azureuser@${IP}:/home/azureuser
scp kubernetes-key.pem azureuser@${IP}:/home/azureuser
scp kubernetes.pem azureuser@${IP}:/home/azureuser
scp service-account-key.pem azureuser@${IP}:/home/azureuser
scp service-account.pem azureuser@${IP}:/home/azureuser
done
```
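To confirm the files landed where expected, you can log in to one of the controllers and list them. This is an optional check and assumes the same `azureuser` account and public IP lookup used above:
```
IP=$(az vm show -d --name controller-0 --query "publicIps" -o tsv)

# All six files copied above should be present in the home directory.
ssh azureuser@${IP} ls -l ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem \
  service-account.pem service-account-key.pem
```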
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)