make kube-dns work again

pull/137/head
Kelsey Hightower 2017-03-24 08:31:17 -07:00
parent c72849f7e3
commit 6827ce575e
6 changed files with 75 additions and 54 deletions

View File

@@ -13,7 +13,7 @@ This tutorial is optimized for learning, which means taking the long route to he
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together. After completing this tutorial I encourage you to automate away the manual steps presented in this guide.
-* This tutorial is for educational purposes only. There is much more configuration required for a production ready cluster.
+> This tutorial is for educational purposes only. There is much more configuration required for a production ready cluster.
## Cluster Details
@@ -23,8 +23,10 @@ The target audience for this tutorial is someone planning to support a productio
* [CNI Based Networking](https://github.com/containernetworking/cni)
* Secure communication between all components (etcd, control plane, workers)
* Default Service Account and Secrets
-* RBAC
+* [RBAC authorization enabled](https://kubernetes.io/docs/admin/authorization)
+* [TLS client certificate bootstrapping for kubelets](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping)
-* Cloud provider integration
+* DNS add-on
### What's Missing
@@ -33,27 +35,12 @@ The resulting cluster will be missing the following items:
* [Cluster add-ons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
* [Logging](http://kubernetes.io/docs/user-guide/logging)
-### Assumptions
-GCP
-* The us-central1 region will be used
-```
-gcloud config set compute/region us-central1
-```
-AWS
-* The us-west-2 region will be used
-* ``jq`` parsing requires [AWS CLI output format](http://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html) to be ``json``
## Platforms
This tutorial assumes you have access to one of the following:
-* [Google Cloud Platform](https://cloud.google.com) and the [Google Cloud SDK](https://cloud.google.com/sdk/) (125.0.0+)
-* [Amazon Web Services](https://aws.amazon.com), the [AWS CLI](https://aws.amazon.com/cli) (1.10.63+), and [jq](https://stedolan.github.io/jq) (1.5+)
+* [Google Cloud Platform](https://cloud.google.com) and the [Google Cloud SDK](https://cloud.google.com/sdk/) (148.0.0+)
+* [Amazon Web Services](https://aws.amazon.com) and the [AWS CLI](https://aws.amazon.com/cli) (1.11.66+)
## Labs
@@ -61,9 +48,9 @@ While GCP or AWS will be used for basic infrastructure needs, the things learned
* [Cloud Infrastructure Provisioning](docs/01-infrastructure.md)
* [Setting up a CA and TLS Cert Generation](docs/02-certificate-authority.md)
-* [Setting up authentication](docs/03-authentication.md)
-* [Bootstrapping an H/A etcd cluster](docs/04-etcd.md)
-* [Bootstrapping an H/A Kubernetes Control Plane](docs/05-kubernetes-controller.md)
+* [Setting up TLS Client Bootstrap and RBAC Authentication](docs/03-authentication.md)
+* [Bootstrapping a H/A etcd cluster](docs/04-etcd.md)
+* [Bootstrapping a H/A Kubernetes Control Plane](docs/05-kubernetes-controller.md)
* [Bootstrapping Kubernetes Workers](docs/06-kubernetes-worker.md)
* [Configuring the Kubernetes Client - Remote Access](docs/07-kubectl.md)
* [Managing the Container Network Routes](docs/08-network.md)
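The bumped tool requirements in the Platforms section can be checked up front; a quick sketch, assuming the Google Cloud SDK and the AWS CLI are already installed:

```
# Should report 148.0.0 or newer
gcloud version

# Should report aws-cli/1.11.66 or newer
aws --version
```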

View File

@@ -15,31 +15,34 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
-  name: kube-dns-v20
+  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
-    version: v20
    kubernetes.io/cluster-service: "true"
spec:
-  replicas: 2
+  # replicas: not specified here:
+  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
+  # 2. Default is 1.
+  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
+  strategy:
+    rollingUpdate:
+      maxSurge: 10%
+      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
-      version: v20
  template:
    metadata:
      labels:
        k8s-app: kube-dns
-        version: v20
-        kubernetes.io/cluster-service: "true"
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
-        image: gcr.io/google_containers/kubedns-amd64:1.8
+        image: gcr.io/google_containers/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
@@ -69,9 +72,15 @@ spec:
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
-        # command = "/kube-dns"
-        - --domain=cluster.local
+        - --domain=cluster.local.
        - --dns-port=10053
+        - --config-map=kube-dns
+        # This should be set to v=2 only after the new image (cut from 1.5) has
+        # been released, otherwise we will flood the logs.
+        - --v=0
+        env:
+        - name: PROMETHEUS_PORT
+          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
@@ -79,6 +88,9 @@ spec:
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
+        - containerPort: 10055
+          name: metrics
+          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
        livenessProbe:
@@ -102,6 +114,32 @@ spec:
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
+        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
+        resources:
+          requests:
+            cpu: 150m
+            memory: 10Mi
+      - name: dnsmasq-metrics
+        image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
+        livenessProbe:
+          httpGet:
+            path: /metrics
+            port: 10054
+            scheme: HTTP
+          initialDelaySeconds: 60
+          timeoutSeconds: 5
+          successThreshold: 1
+          failureThreshold: 5
+        args:
+        - --v=2
+        - --logtostderr
+        ports:
+        - containerPort: 10054
+          name: metrics
+          protocol: TCP
+        resources:
+          requests:
+            memory: 10Mi
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.2
        resources:
@@ -109,6 +147,10 @@ spec:
            memory: 50Mi
          requests:
            cpu: 10m
+            # Note that this container shouldn't really need 50Mi of memory. The
+            # limits are set higher than expected pending investigation on #29688.
+            # The extra memory was stolen from the kubedns container to keep the
+            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
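To sanity-check the reworked manifest, it can be created and the pod watched until all four containers (kubedns, dnsmasq, dnsmasq-metrics, healthz) report ready; a minimal sketch, assuming the manifest is saved as deployments/kube-dns.yaml as in the tutorial layout:

```
# Create the kube-dns deployment; the namespace (kube-system) comes from the manifest
kubectl create -f deployments/kube-dns.yaml

# Expect a single replica (the default) showing 4/4 containers ready
kubectl get pods -n kube-system -l k8s-app=kube-dns
```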

View File

@@ -88,18 +88,11 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
-```
-CLOUD_PROVIDER=gce
-```
#### AWS
```
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
```
-```
-CLOUD_PROVIDER=aws
-```
---
@@ -124,7 +117,6 @@ ExecStart=/usr/bin/kube-apiserver \\
  --authorization-mode=RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
-  --cloud-provider=${CLOUD_PROVIDER} \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
@@ -181,7 +173,6 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
ExecStart=/usr/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --allocate-node-cidrs=true \\
-  --cloud-provider=${CLOUD_PROVIDER} \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" \\
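After regenerating the unit files without the cloud-provider flag, the control plane services have to be restarted before the change takes effect; a rough sketch, assuming the systemd unit names match the binaries used in this guide (kube-apiserver, kube-controller-manager):

```
# Pick up the edited unit files and restart the affected services
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver kube-controller-manager

# All components should report Healthy
kubectl get componentstatuses
```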

View File

@@ -166,7 +166,6 @@ Requires=docker.service
ExecStart=/usr/bin/kubelet \\
  --api-servers=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --allow-privileged=true \\
-  --cloud-provider=auto-detect \\
  --cluster-dns=10.32.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=docker \\
@@ -215,6 +214,8 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-proxy \\
+  --cluster-cidr=10.200.0.0/16 \\
+  --masquerade-all=true \\
  --master=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --kubeconfig=/var/lib/kubelet/kube-proxy.kubeconfig \\
  --proxy-mode=iptables \\
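The new kube-proxy flags only apply after the worker units are reloaded; a short sketch of one way to roll them out and confirm, assuming the kubelet and kube-proxy unit names used in this guide:

```
# Restart the worker services with the updated flags
sudo systemctl daemon-reload
sudo systemctl restart kubelet kube-proxy

# Confirm kube-proxy picked up --cluster-cidr and --masquerade-all
ps aux | grep [k]ube-proxy

# From a configured client, the workers should register as Ready
kubectl get nodes
```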

View File

@@ -7,6 +7,12 @@ In this lab you will deploy the DNS add-on which is required for every Kubernete
## Cluster DNS Add-on
+```
+kubectl create clusterrolebinding serviceaccounts-cluster-admin \
+  --clusterrole=cluster-admin \
+  --group=system:serviceaccounts
+```
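The binding added above grants the cluster-admin role to every service account, which is what lets the kube-dns pod reach the API server now that RBAC is enabled; it is a deliberately broad grant kept simple for the tutorial. Once the add-on is running, name resolution can be spot-checked from a throwaway pod; a minimal sketch using an assumed busybox test pod:

```
# Start a short-lived pod, wait for it to be Running, then resolve the kubernetes service
kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local

# Remove the test pod when done
kubectl delete pod busybox
```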
### Create the `kubedns` service:
```

View File

@@ -12,12 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# This file should be kept in sync with cluster/images/hyperkube/dns-svc.yaml
-# TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.*
-# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base
apiVersion: v1
kind: Service
metadata: