diff --git a/.gitignore b/.gitignore deleted file mode 100644 index 424ffc2..0000000 --- a/.gitignore +++ /dev/null @@ -1,34 +0,0 @@ -admin-csr.json -admin-key.pem -admin.csr -admin.pem -ca-config.json -ca-csr.json -ca-key.pem -ca.csr -ca.pem -encryption-config.yaml -kube-proxy-csr.json -kube-proxy-key.pem -kube-proxy.csr -kube-proxy.kubeconfig -kube-proxy.pem -kubernetes-csr.json -kubernetes-key.pem -kubernetes.csr -kubernetes.pem -worker-0-csr.json -worker-0-key.pem -worker-0.csr -worker-0.kubeconfig -worker-0.pem -worker-1-csr.json -worker-1-key.pem -worker-1.csr -worker-1.kubeconfig -worker-1.pem -worker-2-csr.json -worker-2-key.pem -worker-2.csr -worker-2.kubeconfig -worker-2.pem diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index 78bff75..0000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,18 +0,0 @@ -This project is made possible by contributors like YOU! While all contributions are welcomed, please be sure and follow the following suggestions to help your PR get merged. - -## License - -This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch. - -## Review and merge process - -Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial. - -Here are some examples of the review and justification process: -- [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208) -- [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282) - -## Notes on minutiae - -If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time. - diff --git a/LICENSE b/LICENSE deleted file mode 100644 index d645695..0000000 --- a/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. 
- - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/README.md b/README.md deleted file mode 100644 index 33836d3..0000000 --- a/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Kubernetes The Hard Way - -This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/). - -Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster. - -> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning! 
- -## Target Audience - -The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together. - -## Cluster Details - -Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication. - -* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.9.0 -* [cri-containerd Container Runtime](https://github.com/kubernetes-incubator/cri-containerd) 1.0.0-beta.0 -* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0 -* [etcd](https://github.com/coreos/etcd) 3.2.11 - -## Labs - -This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms. - -* [Prerequisites](docs/01-prerequisites.md) -* [Installing the Client Tools](docs/02-client-tools.md) -* [Provisioning Compute Resources](docs/03-compute-resources.md) -* [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md) -* [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md) -* [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md) -* [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md) -* [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md) -* [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md) -* [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md) -* [Provisioning Pod Network Routes](docs/11-pod-network-routes.md) -* [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md) -* [Smoke Test](docs/13-smoke-test.md) -* [Cleaning Up](docs/14-cleanup.md) diff --git a/deployments/kube-dns.yaml b/deployments/kube-dns.yaml deleted file mode 100644 index 5e19117..0000000 --- a/deployments/kube-dns.yaml +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright 2016 The Kubernetes Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -apiVersion: v1 -kind: Service -metadata: - name: kube-dns - namespace: kube-system - labels: - k8s-app: kube-dns - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: Reconcile - kubernetes.io/name: "KubeDNS" -spec: - selector: - k8s-app: kube-dns - clusterIP: 10.32.0.10 - ports: - - name: dns - port: 53 - protocol: UDP - - name: dns-tcp - port: 53 - protocol: TCP ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: kube-dns - namespace: kube-system - labels: - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: Reconcile ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: kube-dns - namespace: kube-system - labels: - addonmanager.kubernetes.io/mode: EnsureExists ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: kube-dns - namespace: kube-system - labels: - k8s-app: kube-dns - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: Reconcile -spec: - # replicas: not specified here: - # 1. In order to make Addon Manager do not reconcile this replicas parameter. - # 2. Default is 1. - # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on. - strategy: - rollingUpdate: - maxSurge: 10% - maxUnavailable: 0 - selector: - matchLabels: - k8s-app: kube-dns - template: - metadata: - labels: - k8s-app: kube-dns - annotations: - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - tolerations: - - key: "CriticalAddonsOnly" - operator: "Exists" - volumes: - - name: kube-dns-config - configMap: - name: kube-dns - optional: true - containers: - - name: kubedns - image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 - resources: - # TODO: Set memory limits when we've profiled the container for large - # clusters, then set request = limit to keep this container in - # guaranteed class. Currently, this container falls into the - # "burstable" category so the kubelet doesn't backoff from restarting it. - limits: - memory: 170Mi - requests: - cpu: 100m - memory: 70Mi - livenessProbe: - httpGet: - path: /healthcheck/kubedns - port: 10054 - scheme: HTTP - initialDelaySeconds: 60 - timeoutSeconds: 5 - successThreshold: 1 - failureThreshold: 5 - readinessProbe: - httpGet: - path: /readiness - port: 8081 - scheme: HTTP - # we poll on pod startup for the Kubernetes master service and - # only setup the /readiness HTTP server once that's available. - initialDelaySeconds: 3 - timeoutSeconds: 5 - args: - - --domain=cluster.local. 
- - --dns-port=10053 - - --config-dir=/kube-dns-config - - --v=2 - env: - - name: PROMETHEUS_PORT - value: "10055" - ports: - - containerPort: 10053 - name: dns-local - protocol: UDP - - containerPort: 10053 - name: dns-tcp-local - protocol: TCP - - containerPort: 10055 - name: metrics - protocol: TCP - volumeMounts: - - name: kube-dns-config - mountPath: /kube-dns-config - - name: dnsmasq - image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 - livenessProbe: - httpGet: - path: /healthcheck/dnsmasq - port: 10054 - scheme: HTTP - initialDelaySeconds: 60 - timeoutSeconds: 5 - successThreshold: 1 - failureThreshold: 5 - args: - - -v=2 - - -logtostderr - - -configDir=/etc/k8s/dns/dnsmasq-nanny - - -restartDnsmasq=true - - -- - - -k - - --cache-size=1000 - - --no-negcache - - --log-facility=- - - --server=/cluster.local/127.0.0.1#10053 - - --server=/in-addr.arpa/127.0.0.1#10053 - - --server=/ip6.arpa/127.0.0.1#10053 - ports: - - containerPort: 53 - name: dns - protocol: UDP - - containerPort: 53 - name: dns-tcp - protocol: TCP - # see: https://github.com/kubernetes/kubernetes/issues/29055 for details - resources: - requests: - cpu: 150m - memory: 20Mi - volumeMounts: - - name: kube-dns-config - mountPath: /etc/k8s/dns/dnsmasq-nanny - - name: sidecar - image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 - livenessProbe: - httpGet: - path: /metrics - port: 10054 - scheme: HTTP - initialDelaySeconds: 60 - timeoutSeconds: 5 - successThreshold: 1 - failureThreshold: 5 - args: - - --v=2 - - --logtostderr - - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV - - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV - ports: - - containerPort: 10054 - name: metrics - protocol: TCP - resources: - requests: - memory: 20Mi - cpu: 10m - dnsPolicy: Default # Don't use cluster DNS. - serviceAccountName: kube-dns diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md deleted file mode 100644 index 79ff65a..0000000 --- a/docs/01-prerequisites.md +++ /dev/null @@ -1,47 +0,0 @@ -# Prerequisites - -## Google Cloud Platform - -This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits. - -[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day). - -> The compute resources required for this tutorial exceed the Google Cloud Platform free tier. - -## Google Cloud Platform SDK - -### Install the Google Cloud SDK - -Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility. - -Verify the Google Cloud SDK version is 183.0.0 or higher: - -``` -gcloud version -``` - -### Set a Default Compute Region and Zone - -This tutorial assumes a default compute region and zone have been configured. - -If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: - -``` -gcloud init -``` - -Otherwise set a default compute region: - -``` -gcloud config set compute/region us-west1 -``` - -Set a default compute zone: - -``` -gcloud config set compute/zone us-west1-c -``` - -> Use the `gcloud compute zones list` command to view additional regions and zones. 
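To confirm the defaults took effect, you can read them back; the output should match the region and zone set above:

```
gcloud config get-value compute/region
gcloud config get-value compute/zone
```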
- -Next: [Installing the Client Tools](02-client-tools.md) diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md deleted file mode 100644 index e6b728d..0000000 --- a/docs/02-client-tools.md +++ /dev/null @@ -1,111 +0,0 @@ -# Installing the Client Tools - -In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). - - -## Install CFSSL - -The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates. - -Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org): - -### OS X - -``` -curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 -curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64 -``` - -``` -chmod +x cfssl cfssljson -``` - -``` -sudo mv cfssl cfssljson /usr/local/bin/ -``` - -### Linux - -``` -wget -q --show-progress --https-only --timestamping \ - https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \ - https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -``` - -``` -chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 -``` - -``` -sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl -``` - -``` -sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson -``` - -### Verification - -Verify `cfssl` version 1.2.0 or higher is installed: - -``` -cfssl version -``` - -> output - -``` -Version: 1.2.0 -Revision: dev -Runtime: go1.6 -``` - -> The cfssljson command line utility does not provide a way to print its version. - -## Install kubectl - -The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries: - -### OS X - -``` -curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/darwin/amd64/kubectl -``` - -``` -chmod +x kubectl -``` - -``` -sudo mv kubectl /usr/local/bin/ -``` - -### Linux - -``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl -``` - -``` -chmod +x kubectl -``` - -``` -sudo mv kubectl /usr/local/bin/ -``` - -### Verification - -Verify `kubectl` version 1.9.0 or higher is installed: - -``` -kubectl version --client -``` - -> output - -``` -Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} -``` - -Next: [Provisioning Compute Resources](03-compute-resources.md) diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md deleted file mode 100644 index 16f1b7f..0000000 --- a/docs/03-compute-resources.md +++ /dev/null @@ -1,162 +0,0 @@ -# Provisioning Compute Resources - -Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones). - -> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab. 
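It is also worth confirming that `gcloud` is pointed at the intended project, since every resource created in this lab is billed to it (an optional sanity check):

```
gcloud config get-value project
```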
- -## Networking - -The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints. - -> Setting up network policies is out of scope for this tutorial. - -### Virtual Private Cloud Network - -In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster. - -Create the `kubernetes-the-hard-way` custom VPC network: - -``` -gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom -``` - -A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster. - -Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: - -``` -gcloud compute networks subnets create kubernetes \ - --network kubernetes-the-hard-way \ - --range 10.240.0.0/24 -``` - -> The `10.240.0.0/24` IP address range can host up to 254 compute instances. - -### Firewall Rules - -Create a firewall rule that allows internal communication across all protocols: - -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ - --allow tcp,udp,icmp \ - --network kubernetes-the-hard-way \ - --source-ranges 10.240.0.0/24,10.200.0.0/16 -``` - -Create a firewall rule that allows external SSH, ICMP, and HTTPS: - -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ - --allow tcp:22,tcp:6443,icmp \ - --network kubernetes-the-hard-way \ - --source-ranges 0.0.0.0/0 -``` - -> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients. - -List the firewall rules in the `kubernetes-the-hard-way` VPC network: - -``` -gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" -``` - -> output - -``` -NAME NETWORK DIRECTION PRIORITY ALLOW DENY -kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp -kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp -``` - -### Kubernetes Public IP Address - -Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: - -``` -gcloud compute addresses create kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) -``` - -Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: - -``` -gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" -``` - -> output - -``` -NAME REGION ADDRESS STATUS -kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED -``` - -## Compute Instances - -The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [cri-containerd container runtime](https://github.com/containerd/cri-containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process. 
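Before creating any instances, you can optionally verify that the Ubuntu image family is available to your project (the exact image name changes as new point releases are published):

```
gcloud compute images list --filter="family:ubuntu-1604-lts"
```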
- -### Kubernetes Controllers - -Create three compute instances which will host the Kubernetes control plane: - -``` -for i in 0 1 2; do - gcloud compute instances create controller-${i} \ - --async \ - --boot-disk-size 200GB \ - --can-ip-forward \ - --image-family ubuntu-1604-lts \ - --image-project ubuntu-os-cloud \ - --machine-type n1-standard-1 \ - --private-network-ip 10.240.0.1${i} \ - --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ - --subnet kubernetes \ - --tags kubernetes-the-hard-way,controller -done -``` - -### Kubernetes Workers - -Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime. - -> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets. - -Create three compute instances which will host the Kubernetes worker nodes: - -``` -for i in 0 1 2; do - gcloud compute instances create worker-${i} \ - --async \ - --boot-disk-size 200GB \ - --can-ip-forward \ - --image-family ubuntu-1604-lts \ - --image-project ubuntu-os-cloud \ - --machine-type n1-standard-1 \ - --metadata pod-cidr=10.200.${i}.0/24 \ - --private-network-ip 10.240.0.2${i} \ - --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ - --subnet kubernetes \ - --tags kubernetes-the-hard-way,worker -done -``` - -### Verification - -List the compute instances in your default compute zone: - -``` -gcloud compute instances list -``` - -> output - -``` -NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS -controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING -controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING -controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX RUNNING -worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING -worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING -worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING -``` - -Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md) diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md deleted file mode 100644 index 7229356..0000000 --- a/docs/04-certificate-authority.md +++ /dev/null @@ -1,283 +0,0 @@ -# Provisioning a CA and Generating TLS Certificates - -In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy. - -## Certificate Authority - -In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates. - -Create the CA configuration file: - -``` -cat > ca-config.json < ca-csr.json < admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements. 
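After running the loop below, it may be worth spot-checking one of the generated certificates to confirm it meets the naming convention described above: the subject should include `system:nodes` as the organization and `system:node:worker-0` as the common name (assuming `openssl` is installed locally):

```
openssl x509 -in worker-0.pem -noout -subject
```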
- -Generate a certificate and private key for each Kubernetes worker node: - -``` -for instance in worker-0 worker-1 worker-2; do -cat > ${instance}-csr.json < kube-proxy-csr.json < kubernetes-csr.json < The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab. - -Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md) diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md deleted file mode 100644 index 0b8974b..0000000 --- a/docs/05-kubernetes-configuration-files.md +++ /dev/null @@ -1,101 +0,0 @@ -# Generating Kubernetes Configuration Files for Authentication - -In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers. - -## Client Authentication Configs - -In this section you will generate kubeconfig files for the `kubelet` and `kube-proxy` clients. - -> The `scheduler` and `controller manager` access the Kubernetes API Server locally over an insecure API port which does not require authentication. The Kubernetes API Server's insecure port is only enabled for local access. - -### Kubernetes Public IP Address - -Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. - -Retrieve the `kubernetes-the-hard-way` static IP address: - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') -``` - -### The kubelet Kubernetes Configuration File - -When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/). 
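Each kubeconfig generated below embeds the CA certificate and client credentials, so the file is self-contained and can be copied to its worker node as-is. Once created, a kubeconfig can be inspected with `kubectl config view`; embedded certificate data is redacted in the output:

```
kubectl config view --kubeconfig=worker-0.kubeconfig
```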
- -Generate a kubeconfig file for each worker node: - -``` -for instance in worker-0 worker-1 worker-2; do - kubectl config set-cluster kubernetes-the-hard-way \ - --certificate-authority=ca.pem \ - --embed-certs=true \ - --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ - --kubeconfig=${instance}.kubeconfig - - kubectl config set-credentials system:node:${instance} \ - --client-certificate=${instance}.pem \ - --client-key=${instance}-key.pem \ - --embed-certs=true \ - --kubeconfig=${instance}.kubeconfig - - kubectl config set-context default \ - --cluster=kubernetes-the-hard-way \ - --user=system:node:${instance} \ - --kubeconfig=${instance}.kubeconfig - - kubectl config use-context default --kubeconfig=${instance}.kubeconfig -done -``` - -Results: - -``` -worker-0.kubeconfig -worker-1.kubeconfig -worker-2.kubeconfig -``` - -### The kube-proxy Kubernetes Configuration File - -Generate a kubeconfig file for the `kube-proxy` service: - -``` -kubectl config set-cluster kubernetes-the-hard-way \ - --certificate-authority=ca.pem \ - --embed-certs=true \ - --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ - --kubeconfig=kube-proxy.kubeconfig -``` - -``` -kubectl config set-credentials kube-proxy \ - --client-certificate=kube-proxy.pem \ - --client-key=kube-proxy-key.pem \ - --embed-certs=true \ - --kubeconfig=kube-proxy.kubeconfig -``` - -``` -kubectl config set-context default \ - --cluster=kubernetes-the-hard-way \ - --user=kube-proxy \ - --kubeconfig=kube-proxy.kubeconfig -``` - -``` -kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig -``` - -## Distribute the Kubernetes Configuration Files - -Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance: - -``` -for instance in worker-0 worker-1 worker-2; do - gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ -done -``` - -Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md) diff --git a/docs/06-data-encryption-keys.md b/docs/06-data-encryption-keys.md deleted file mode 100644 index 233bce2..0000000 --- a/docs/06-data-encryption-keys.md +++ /dev/null @@ -1,43 +0,0 @@ -# Generating the Data Encryption Config and Key - -Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest. - -In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets. - -## The Encryption Key - -Generate an encryption key: - -``` -ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) -``` - -## The Encryption Config File - -Create the `encryption-config.yaml` encryption config file: - -``` -cat > encryption-config.yaml < etcd.service < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. 
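Before verifying cluster membership, you can confirm the `etcd` service is active on each controller; etcd may log leader election messages until all three members have started:

```
sudo systemctl status etcd --no-pager
```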
- -## Verification - -List the etcd cluster members: - -``` -ETCDCTL_API=3 etcdctl member list -``` - -> output - -``` -3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379 -f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379 -ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379 -``` - -Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md) diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md deleted file mode 100644 index 06012d9..0000000 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ /dev/null @@ -1,315 +0,0 @@ -# Bootstrapping the Kubernetes Control Plane - -In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager. - -## Prerequisites - -The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: - -``` -gcloud compute ssh controller-0 -``` - -## Provision the Kubernetes Control Plane - -### Download and Install the Kubernetes Controller Binaries - -Download the official Kubernetes release binaries: - -``` -wget -q --show-progress --https-only --timestamping \ - "https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-apiserver" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-scheduler" \ - "https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl" -``` - -Install the Kubernetes binaries: - -``` -chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl -``` - -``` -sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ -``` - -### Configure the Kubernetes API Server - -``` -sudo mkdir -p /var/lib/kubernetes/ -``` - -``` -sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/ -``` - -The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: - -``` -INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ - http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) -``` - -Create the `kube-apiserver.service` systemd unit file: - -``` -cat > kube-apiserver.service < kube-controller-manager.service < kube-scheduler.service < Allow up to 10 seconds for the Kubernetes API Server to fully initialize. - -### Verification - -``` -kubectl get componentstatuses -``` - -``` -NAME STATUS MESSAGE ERROR -controller-manager Healthy ok -scheduler Healthy ok -etcd-2 Healthy {"health": "true"} -etcd-0 Healthy {"health": "true"} -etcd-1 Healthy {"health": "true"} -``` - -> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. 
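If any component reports an unhealthy status, the systemd journal on the affected controller is the first place to look. For example, to review recent API Server log entries:

```
sudo journalctl -u kube-apiserver --no-pager | tail -n 20
```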
- -## RBAC for Kubelet Authorization - -In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods. - -> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization. - -``` -gcloud compute ssh controller-0 -``` - -Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: - -``` -cat < The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. - -Create the external load balancer network resources: - -``` -gcloud compute target-pools create kubernetes-target-pool -``` - -``` -gcloud compute target-pools add-instances kubernetes-target-pool \ - --instances controller-0,controller-1,controller-2 -``` - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(name)') -``` - -``` -gcloud compute forwarding-rules create kubernetes-forwarding-rule \ - --address ${KUBERNETES_PUBLIC_ADDRESS} \ - --ports 6443 \ - --region $(gcloud config get-value compute/region) \ - --target-pool kubernetes-target-pool -``` - -### Verification - -Retrieve the `kubernetes-the-hard-way` static IP address: - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') -``` - -Make a HTTP request for the Kubernetes version info: - -``` -curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version -``` - -> output - -``` -{ - "major": "1", - "minor": "9", - "gitVersion": "v1.9.0", - "gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05", - "gitTreeState": "clean", - "buildDate": "2017-12-15T20:55:30Z", - "goVersion": "go1.9.2", - "compiler": "gc", - "platform": "linux/amd64" -} -``` - -Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md) diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md deleted file mode 100644 index a4e8624..0000000 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ /dev/null @@ -1,235 +0,0 @@ -# Bootstrapping the Kubernetes Worker Nodes - -In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/containerd/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies). - -## Prerequisites - -The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. 
Example: - -``` -gcloud compute ssh worker-0 -``` - -## Provisioning a Kubernetes Worker Node - -Install the OS dependencies: - -``` -sudo apt-get -y install socat -``` - -> The socat binary enables support for the `kubectl port-forward` command. - -### Download and Install Worker Binaries - -``` -wget -q --show-progress --https-only --timestamping \ - https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \ - https://github.com/containerd/cri-containerd/releases/download/v1.0.0-beta.1/cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz \ - https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl \ - https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy \ - https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet -``` - -Create the installation directories: - -``` -sudo mkdir -p \ - /etc/cni/net.d \ - /opt/cni/bin \ - /var/lib/kubelet \ - /var/lib/kube-proxy \ - /var/lib/kubernetes \ - /var/run/kubernetes -``` - -Install the worker binaries: - -``` -sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ -``` - -``` -sudo tar -xvf cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz -C / -``` - -``` -chmod +x kubectl kube-proxy kubelet -``` - -``` -sudo mv kubectl kube-proxy kubelet /usr/local/bin/ -``` - -### Configure CNI Networking - -Retrieve the Pod CIDR range for the current compute instance: - -``` -POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ - http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) -``` - -Create the `bridge` network configuration file: - -``` -cat > 10-bridge.conf < 99-loopback.conf < kubelet.service < kube-proxy.service < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`. - -## Verification - -Login to one of the controller nodes: - -``` -gcloud compute ssh controller-0 -``` - -List the registered Kubernetes nodes: - -``` -kubectl get nodes -``` - -> output - -``` -NAME STATUS ROLES AGE VERSION -worker-0 Ready 18s v1.9.0 -worker-1 Ready 18s v1.9.0 -worker-2 Ready 18s v1.9.0 -``` - -Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md) diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md deleted file mode 100644 index 3d63825..0000000 --- a/docs/10-configuring-kubectl.md +++ /dev/null @@ -1,78 +0,0 @@ -# Configuring kubectl for Remote Access - -In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials. - -> Run the commands in this lab from the same directory used to generate the admin client certificates. - -## The Admin Kubernetes Configuration File - -Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. 
- -Retrieve the `kubernetes-the-hard-way` static IP address: - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') -``` - -Generate a kubeconfig file suitable for authenticating as the `admin` user: - -``` -kubectl config set-cluster kubernetes-the-hard-way \ - --certificate-authority=ca.pem \ - --embed-certs=true \ - --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 -``` - -``` -kubectl config set-credentials admin \ - --client-certificate=admin.pem \ - --client-key=admin-key.pem -``` - -``` -kubectl config set-context kubernetes-the-hard-way \ - --cluster=kubernetes-the-hard-way \ - --user=admin -``` - -``` -kubectl config use-context kubernetes-the-hard-way -``` - -## Verification - -Check the health of the remote Kubernetes cluster: - -``` -kubectl get componentstatuses -``` - -> output - -``` -NAME STATUS MESSAGE ERROR -controller-manager Healthy ok -scheduler Healthy ok -etcd-2 Healthy {"health": "true"} -etcd-0 Healthy {"health": "true"} -etcd-1 Healthy {"health": "true"} -``` - -List the nodes in the remote Kubernetes cluster: - -``` -kubectl get nodes -``` - -> output - -``` -NAME STATUS ROLES AGE VERSION -worker-0 Ready 1m v1.9.0 -worker-1 Ready 1m v1.9.0 -worker-2 Ready 1m v1.9.0 -``` - -Next: [Provisioning Pod Network Routes](11-pod-network-routes.md) diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md deleted file mode 100644 index f0d39be..0000000 --- a/docs/11-pod-network-routes.md +++ /dev/null @@ -1,60 +0,0 @@ -# Provisioning Pod Network Routes - -Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes). - -In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address. - -> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model. - -## The Routing Table - -In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network. 
- -Print the internal IP address and Pod CIDR range for each worker instance: - -``` -for instance in worker-0 worker-1 worker-2; do - gcloud compute instances describe ${instance} \ - --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' -done -``` - -> output - -``` -10.240.0.20 10.200.0.0/24 -10.240.0.21 10.200.1.0/24 -10.240.0.22 10.200.2.0/24 -``` - -## Routes - -Create network routes for each worker instance: - -``` -for i in 0 1 2; do - gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ - --network kubernetes-the-hard-way \ - --next-hop-address 10.240.0.2${i} \ - --destination-range 10.200.${i}.0/24 -done -``` - -List the routes in the `kubernetes-the-hard-way` VPC network: - -``` -gcloud compute routes list --filter "network: kubernetes-the-hard-way" -``` - -> output - -``` -NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY -default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 -default-route-df77b1e818a56b30 kubernetes-the-hard-way 10.240.0.0/24 1000 -kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000 -kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000 -kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000 -``` - -Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md) diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md deleted file mode 100644 index 482701b..0000000 --- a/docs/12-dns-addon.md +++ /dev/null @@ -1,78 +0,0 @@ -# Deploying the DNS Cluster Add-on - -In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster. - -## The DNS Cluster Add-on - -Deploy the `kube-dns` cluster add-on: - -``` -kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml -``` - -> output - -``` -serviceaccount "kube-dns" created -configmap "kube-dns" created -service "kube-dns" created -deployment "kube-dns" created -``` - -List the pods created by the `kube-dns` deployment: - -``` -kubectl get pods -l k8s-app=kube-dns -n kube-system -``` - -> output - -``` -NAME READY STATUS RESTARTS AGE -kube-dns-3097350089-gq015 3/3 Running 0 20s -``` - -## Verification - -Create a `busybox` deployment: - -``` -kubectl run busybox --image=busybox --command -- sleep 3600 -``` - -List the pod created by the `busybox` deployment: - -``` -kubectl get pods -l run=busybox -``` - -> output - -``` -NAME READY STATUS RESTARTS AGE -busybox-2125412808-mt2vb 1/1 Running 0 15s -``` - -Retrieve the full name of the `busybox` pod: - -``` -POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") -``` - -Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: - -``` -kubectl exec -ti $POD_NAME -- nslookup kubernetes -``` - -> output - -``` -Server: 10.32.0.10 -Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local - -Name: kubernetes -Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local -``` - -Next: [Smoke Test](13-smoke-test.md) diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md deleted file mode 100644 index 7e91805..0000000 --- a/docs/13-smoke-test.md +++ /dev/null @@ -1,207 +0,0 @@ -# Smoke Test - -In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly. 
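The commands in this lab assume `kubectl` has been configured for remote access as described in the earlier lab. A quick connectivity check before starting (optional):

```
kubectl get nodes
```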
- -## Data Encryption - -In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted). - -Create a generic secret: - -``` -kubectl create secret generic kubernetes-the-hard-way \ - --from-literal="mykey=mydata" -``` - -Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: - -``` -gcloud compute ssh controller-0 \ - --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" -``` - -> output - -``` -00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| -00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| -00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| -00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc| -00000040 3a 76 31 3a 6b 65 79 31 3a ea 7c 76 32 43 62 6f |:v1:key1:.|v2Cbo| -00000050 44 02 02 8c b7 ca fe 95 a5 33 f6 a1 18 6c 3d 53 |D........3...l=S| -00000060 e7 9c 51 ee 32 f6 e4 17 ea bb 11 d5 2f e2 40 00 |..Q.2......./.@.| -00000070 ae cf d9 e7 ba 7f 68 18 d3 c1 10 10 93 43 35 bd |......h......C5.| -00000080 24 dd 66 b4 f8 f9 82 77 4a d5 78 03 19 41 1e bc |$.f....wJ.x..A..| -00000090 94 3f 17 41 ad cc 8c ba 9f 8f 8e 56 97 7e 96 fb |.?.A.......V.~..| -000000a0 8f 2e 6a a5 bf 08 1f 0b c3 4b 2b 93 d1 ec f8 70 |..j......K+....p| -000000b0 c1 e4 1d 1a d2 0d f8 74 3a a1 4f 3c e0 c9 6d 3f |.......t:.O<..m?| -000000c0 de a3 f5 fd 76 aa 5e bc 27 d9 3c 6b 8f 54 97 45 |....v.^.'. output - -``` -NAME READY STATUS RESTARTS AGE -nginx-4217019353-b5gzn 1/1 Running 0 15s -``` - -### Port Forwarding - -In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). - -Retrieve the full name of the `nginx` pod: - -``` -POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") -``` - -Forward port `8080` on your local machine to port `80` of the `nginx` pod: - -``` -kubectl port-forward $POD_NAME 8080:80 -``` - -> output - -``` -Forwarding from 127.0.0.1:8080 -> 80 -Forwarding from [::1]:8080 -> 80 -``` - -In a new terminal make an HTTP request using the forwarding address: - -``` -curl --head http://127.0.0.1:8080 -``` - -> output - -``` -HTTP/1.1 200 OK -Server: nginx/1.13.7 -Date: Mon, 18 Dec 2017 14:50:36 GMT -Content-Type: text/html -Content-Length: 612 -Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT -Connection: keep-alive -ETag: "5a1437f4-264" -Accept-Ranges: bytes -``` - -Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: - -``` -Forwarding from 127.0.0.1:8080 -> 80 -Forwarding from [::1]:8080 -> 80 -Handling connection for 8080 -^C -``` - -### Logs - -In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/). - -Print the `nginx` pod logs: - -``` -kubectl logs $POD_NAME -``` - -> output - -``` -127.0.0.1 - - [18/Dec/2017:14:50:36 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-" -``` - -### Exec - -In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container). 
- -Print the nginx version by executing the `nginx -v` command in the `nginx` container: - -``` -kubectl exec -ti $POD_NAME -- nginx -v -``` - -> output - -``` -nginx version: nginx/1.13.7 -``` - -## Services - -In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/). - -Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: - -``` -kubectl expose deployment nginx --port 80 --type NodePort -``` - -> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial. - -Retrieve the node port assigned to the `nginx` service: - -``` -NODE_PORT=$(kubectl get svc nginx \ - --output=jsonpath='{range .spec.ports[0]}{.nodePort}') -``` - -Create a firewall rule that allows remote access to the `nginx` node port: - -``` -gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ - --allow=tcp:${NODE_PORT} \ - --network kubernetes-the-hard-way -``` - -Retrieve the external IP address of a worker instance: - -``` -EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ - --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') -``` - -Make an HTTP request using the external IP address and the `nginx` node port: - -``` -curl -I http://${EXTERNAL_IP}:${NODE_PORT} -``` - -> output - -``` -HTTP/1.1 200 OK -Server: nginx/1.13.7 -Date: Mon, 18 Dec 2017 14:52:09 GMT -Content-Type: text/html -Content-Length: 612 -Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT -Connection: keep-alive -ETag: "5a1437f4-264" -Accept-Ranges: bytes -``` - -Next: [Cleaning Up](14-cleanup.md) diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md deleted file mode 100644 index 620284d..0000000 --- a/docs/14-cleanup.md +++ /dev/null @@ -1,62 +0,0 @@ -# Cleaning Up - -In this lab you will delete the compute resources created during this tutorial. - -## Compute Instances - -Delete the controller and worker compute instances: - -``` -gcloud -q compute instances delete \ - controller-0 controller-1 controller-2 \ - worker-0 worker-1 worker-2 -``` - -## Networking - -Delete the external load balancer network resources: - -``` -gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ - --region $(gcloud config get-value compute/region) -``` - -``` -gcloud -q compute target-pools delete kubernetes-target-pool -``` - -Delete the `kubernetes-the-hard-way` static IP address: - -``` -gcloud -q compute addresses delete kubernetes-the-hard-way -``` - -Delete the `kubernetes-the-hard-way` firewall rules: - -``` -gcloud -q compute firewall-rules delete \ - kubernetes-the-hard-way-allow-nginx-service \ - kubernetes-the-hard-way-allow-internal \ - kubernetes-the-hard-way-allow-external -``` - -Delete the Pod network routes: - -``` -gcloud -q compute routes delete \ - kubernetes-route-10-200-0-0-24 \ - kubernetes-route-10-200-1-0-24 \ - kubernetes-route-10-200-2-0-24 -``` - -Delete the `kubernetes` subnet: - -``` -gcloud -q compute networks subnets delete kubernetes -``` - -Delete the `kubernetes-the-hard-way` network VPC: - -``` -gcloud -q compute networks delete kubernetes-the-hard-way -```
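To confirm the cleanup completed, you can list any remaining compute resources; neither command should return `kubernetes-the-hard-way` entries (an optional final check):

```
gcloud compute instances list
gcloud compute networks list --filter="name:kubernetes-the-hard-way"
```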