# Kubernetes 1.5.1 Deployment

> kubernetes 1.5.0 configuration guide

# 1 Initialize the environment

## 1.1 Environment

 

| Node   | IP         |
|--------|------------|
| node-1 | 10.6.0.140 |
| node-2 | 10.6.0.187 |
| node-3 | 10.6.0.188 |

## 1.2 Set the hostname

```
hostnamectl --static set-hostname <hostname>
```

 

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
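
For example, run the command on each node according to the table:

```
# on 10.6.0.140
hostnamectl --static set-hostname k8s-node-1
# on 10.6.0.187
hostnamectl --static set-hostname k8s-node-2
# on 10.6.0.188
hostnamectl --static set-hostname k8s-node-3
```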

 

## 1.3 Configure hosts

```
vi /etc/hosts
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
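
The matching entries to append to /etc/hosts on every node:

```
10.6.0.140  k8s-node-1
10.6.0.187  k8s-node-2
10.6.0.188  k8s-node-3
```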

 

# 2.0 Deploy the kubernetes master

 

## 2.1 Add the yum repository

 

```
# Use a friend's yum mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```

 

## 2.2 Install docker

 

```
wget -qO- https://get.docker.com/ | sh
systemctl enable docker
systemctl start docker
```

 

## 2.3 Install the etcd cluster

```
yum -y install etcd

# Create the etcd data directory
mkdir -p /opt/etcd/data
chown -R etcd:etcd /opt/etcd/

# Edit /etc/etcd/etcd.conf and change the following parameters:
ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```
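
The other two nodes need the same file with their own name and addresses. A sketch for etcd2 on 10.6.0.187 (etcd3 on 10.6.0.188 follows the same pattern):

```
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"
```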

 

 

```
# Patch the etcd unit file so the extra cluster options are passed through
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
```

 

 

```
# Start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd

# Check the cluster health
etcdctl cluster-health
```

 

 

## 2.4 Pull the images

 

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

```
# If pulls are slow, add a registry mirror to the docker startup options:
# --registry-mirror="http://b438f72b.m.daocloud.io"
```
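
One way to apply that mirror, sketched here on the assumption that the docker version installed above reads /etc/docker/daemon.json (otherwise append --registry-mirror to the ExecStart line of the unit file):

```
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["http://b438f72b.m.daocloud.io"]
}
EOF
systemctl restart docker
```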

 

 

 

## 2.5 Start kubernetes

 

```
systemctl enable kubelet
systemctl start kubelet
```

 

 

## 2.6 Create the cluster

 

```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16
```

```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

 

 

## 2.7 Record the token

 

```
You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

 

 

## 2.8 Configure the network

 

 

```
# Pull the image in advance, otherwise it may fail to download later
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64

# Or pull from a mirror and re-tag it
docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
```

```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.
# Flannel is used here; when choosing Flannel, kubeadm init must be run with --pod-network-cidr
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
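
Once the manifest is applied, the flannel DaemonSet pods should appear in kube-system (they also show up in the listing in section 3.6). A quick check:

```
kubectl get pods -n kube-system -o wide | grep flannel
```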

 

 

## 2.9 Check the kubelet status

 

```
systemctl status kubelet
```

 

 

 

# 3.0 Deploy the kubernetes nodes

## 3.1 Install docker

 

```
wget -qO- https://get.docker.com/ | sh
systemctl enable docker
systemctl start docker
```

 

 

## 3.2 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

 

## 3.3 Start kubernetes

 

```
systemctl enable kubelet
systemctl start kubelet
```

 

## 3.4 Join the cluster

 

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

 

```
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```

 

 

## 3.5 Check the cluster status

 

```
[root@k8s-node-1 ~]#kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```

 

## 3.6 Check the service status

 

```
[root@k8s-node-1 ~]#kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-qrp68               1/1       Running   1          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   2          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   2          1h
kube-system   kube-discovery-1769846148-g2lpc      1/1       Running   1          1h
kube-system   kube-dns-2924299975-xbhv4            4/4       Running   3          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   2          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   2          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   2          1h
kube-system   kube-proxy-16c50                     1/1       Running   2          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   2          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   2          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   2          1h
```

 

 

 

# 4.0 Configure kubernetes

## 4.1 Control the cluster from another host

 

```
# Back up the master's config file /etc/kubernetes/admin.conf
# Copy it to another machine and use it to control the cluster
kubectl --kubeconfig ./admin.conf get nodes
```
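
A minimal sketch of that workflow, assuming ssh access from the workstation to the master:

```
# on the workstation
scp root@10.6.0.140:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get nodes
```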

 

## 4.2 Configure the dashboard

 

```
# Download the yaml file; importing it as-is pulls the images from the official registry
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Edit the yaml file
vi kubernetes-dashboard.yaml

# change
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
# to
#   image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
# and change
#   imagePullPolicy: Always
# to
#   imagePullPolicy: IfNotPresent
```

```
kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```

```
# Check the NodePort, i.e. the port exposed for external access
kubectl describe svc kubernetes-dashboard --namespace=kube-system
NodePort:               31736/TCP
```

```
# Access the dashboard
http://10.6.0.140:31736
```

 

 

 

 

# 5.0 Deploying applications on kubernetes

## 5.1 Deploy an nginx rc

> Write an nginx yaml

 

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
```

```
[root@k8s-node-1 ~]#kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   2         2         2         2m

[root@k8s-node-1 ~]#kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-rc-2s8k9   1/1       Running   0          10m       10.32.0.3   k8s-node-1
nginx-rc-s16cm   1/1       Running   0          10m       10.40.0.1   k8s-node-2
```
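
The output above assumes the manifest was saved as nginx-rc.yaml (the filename is not given in the original) and created with:

```
kubectl create -f nginx-rc.yaml
```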

 

 

> Write an nginx service so containers inside the cluster can reach it (ClusterIP)

 

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]#kubectl create -f nginx-svc.yaml
service "nginx-svc" created

[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d        <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s       name=nginx
```

> Write a curl pod

```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```

```
# Test communication between pods
[root@k8s-node-1 ~]#kubectl exec curl curl nginx-svc
```

```
# From any node, the service can be reached by its cluster IP
[root@k8s-node-2 ~]# curl 10.6.164.79
[root@k8s-node-3 ~]# curl 10.6.164.79
```
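
If the in-cluster name resolution ever needs a sanity check, an extra step (not part of the original walkthrough) can be run from the curl pod:

```
kubectl exec curl -- nslookup nginx-svc
```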

 

 

> Write an nginx service that is reachable from outside the cluster (NodePort)

 

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
```

```
[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP   2d        <none>
nginx-svc        10.6.164.79    <none>        80/TCP    29m       name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80/TCP    17s       name=nginx

[root@k8s-node-1 ~]#kubectl describe svc nginx-svc-node |grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```

```
# Access via any node's physical IP plus the NodePort
http://10.6.0.140:32669
http://10.6.0.187:32669
http://10.6.0.188:32669
```

 

 

 

## 5.2 Deploy a zookeeper cluster

> Write a zookeeper-cluster.yaml

 

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-1
    spec:
      containers:
        - name: zookeeper-1
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: NODES
            value: "0.0.0.0,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
        - name: zookeeper-2
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: NODES
            value: "zookeeper-1,0.0.0.0,zookeeper-3"
          ports:
          - containerPort: 2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
        - name: zookeeper-3
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: NODES
            value: "zookeeper-1,zookeeper-2,0.0.0.0"
          ports:
          - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
  labels:
    name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector:
    name: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector:
    name: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector:
    name: zookeeper-3
```

```
[root@k8s-node-1 ~]#kubectl create -f zookeeper-cluster.yaml --record

[root@k8s-node-1 ~]#kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP          NODE
zookeeper-1-2149121414-cfyt4   1/1       Running   0          51m       10.32.0.3   k8s-node-2
zookeeper-2-2653289864-0bxee   1/1       Running   0          51m       10.40.0.1   k8s-node-3
zookeeper-3-3158769034-5csqy   1/1       Running   0          51m       10.40.0.2   k8s-node-3

[root@k8s-node-1 ~]#kubectl get deployment -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m

[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-3
```
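
A quick liveness check against one of the pods, assuming the zk:alpine image ships nc and the ZooKeeper four-letter-word commands are enabled:

```
# should answer "imok" if the instance is healthy
kubectl exec zookeeper-1-2149121414-cfyt4 -- sh -c 'echo ruok | nc 127.0.0.1 2181'
```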

 

 

## 5.3 Deploy a kafka cluster

> Write a kafka-cluster.yaml

 

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-1
    spec:
      containers:
        - name: kafka-1
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2
    spec:
      containers:
        - name: kafka-2
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3
    spec:
      containers:
        - name: kafka-3
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  labels:
    name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
      protocol: TCP
  selector:
    name: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-2
  labels:
    name: kafka-2
spec:
  ports:
    - name: client
      port: 9092
      protocol: TCP
  selector:
    name: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-3
  labels:
    name: kafka-3
spec:
  ports:
    - name: client
      port: 9092
      protocol: TCP
  selector:
    name: kafka-3
```
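
As with the zookeeper manifests, create the kafka cluster and check the result (this step isn't shown in the original; output omitted):

```
kubectl create -f kafka-cluster.yaml --record
kubectl get pods -o wide
kubectl get svc -o wide
```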

 

 http://www.cnblogs.com/jicki/p/6208271.html

 http://blog.csdn.net/wenwst/article/details/54409205

 

 

# FAQ:

## kube-discovery error

 

```
failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
```

Fix by resetting and re-running the init:

```
kubeadm reset
kubeadm init
```

Reposted from: https://my.oschina.net/xiaominmin/blog/1599750
