6. k8s -- Ingress Study Notes

迷南。 2023-08-17 16:33

Contents

  • Layer-7 proxy options in k8s
  • Deploying ingress-nginx

    • Deploying the Ingress controller
    • Deploying the Ingress Service (NodePort)
    • Ingress experiment


Layer-7 proxy options in k8s

  • Traefik
  • Envoy
  • Nginx

The difference and relationship between Ingress and Service

A Service exposes a set of pods behind a stable virtual IP and load-balances at layer 4, so it cannot route by HTTP host or path. An Ingress is a set of layer-7 routing rules (host- and path-based) that an ingress controller -- itself running in the cluster and exposed through a Service -- implements by proxying requests to the backing Services.

Deploying ingress-nginx

Deploying the Ingress controller

```
mkdir nginx-ingress
cd nginx-ingress
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.25.1/deploy/static/mandatory.yaml
kubectl apply -f mandatory.yaml
kubectl get ns
kubectl get pod -n ingress-nginx
```
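Before moving on, it helps to confirm that the controller pod actually reached Ready rather than just being scheduled. A minimal sketch, assuming the namespace and labels used by the nginx-0.25.1 mandatory.yaml manifest (adjust if yours differ):

```shell
# Namespace and pod label from the nginx-0.25.1 mandatory.yaml manifest.
NS=ingress-nginx
SELECTOR=app.kubernetes.io/name=ingress-nginx

# Block until the controller pod reports Ready, or give up after 120s.
kubectl wait --namespace "$NS" \
  --for=condition=Ready pod \
  --selector "$SELECTOR" \
  --timeout=120s
```

If the wait times out, `kubectl describe pod -n ingress-nginx` usually shows why (image pull failures are the most common cause on a fresh cluster).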

Deploying the Ingress Service (NodePort)

```
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
cat service-nodeport.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080  # fixed nodePort: easy to remember, but risks colliding with another service
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443  # fixed nodePort: easy to remember, but risks colliding with another service
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

```
kubectl apply -f service-nodeport.yaml
kubectl get pod -n ingress-nginx
kubectl get svc -n ingress-nginx
```
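With the NodePort Service in place, the controller is reachable from outside the cluster even before any Ingress rule exists -- it answers from its default backend. A quick check, assuming a hypothetical worker-node IP of 192.168.1.101:

```shell
# Hypothetical worker-node IP -- substitute one of your own nodes.
NODE_IP=192.168.1.101
# The fixed HTTP nodePort from service-nodeport.yaml above.
HTTP_PORT=30080

# Expect "404" from the controller's default backend: the NodePort
# path works even though no Ingress rule has been created yet.
curl -s --connect-timeout 2 -o /dev/null -w '%{http_code}\n' \
  "http://${NODE_IP}:${HTTP_PORT}/"
```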

Ingress experiment

1. myapp

Create the Deployment and its Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
    - name: http
      targetPort: 80
      port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          ports:
            - name: http
              containerPort: 80
```
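Assuming the manifest above is saved as myapp.yaml (the filename is arbitrary), applying it and confirming that the Service discovered both replicas looks like:

```shell
# Service name from the manifest above.
SVC=myapp

# Apply the Service + Deployment pair.
kubectl apply -f myapp.yaml

# Both replicas should appear, and once they are Running the Service
# should list two pod IPs under ENDPOINTS.
kubectl get pods -l app=myapp,release=canary -n default
kubectl get endpoints "$SVC" -n default
```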

Create the Ingress rule for the Service

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: myapp.magedu.com
      http:
        paths:
          - path:
            backend:
              serviceName: myapp
              servicePort: 80
```
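Note that the manifest above uses the extensions/v1beta1 Ingress API, which was removed in Kubernetes 1.22. On Kubernetes 1.19 and later the same rule is written against networking.k8s.io/v1, roughly as follows -- a sketch, where pathType is now required and the ingressClassName field replaces the kubernetes.io/ingress.class annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
spec:
  ingressClassName: nginx      # replaces the kubernetes.io/ingress.class annotation
  rules:
    - host: myapp.magedu.com
      http:
        paths:
          - path: /
            pathType: Prefix   # mandatory in the v1 API
            backend:
              service:
                name: myapp
                port:
                  number: 80
```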

2. tomcat

Pull the image

```
docker pull tomcat:8.5.32-jre8-alpine
```

Create the Deployment and its Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
    - name: http
      targetPort: 8080
      port: 8080
    - name: ajp
      targetPort: 8009
      port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
        - name: tomcat
          image: tomcat:8.5.32-jre8-alpine
          ports:
            - name: http
              containerPort: 8080
            - name: ajp
              containerPort: 8009
```

Create the Ingress rule for the Service

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: tomcat.magedu.com
      http:
        paths:
          - path:
            backend:
              serviceName: tomcat
              servicePort: 8080
```
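Once both rules are applied, the two host names route to their own Services through the same NodePort. Name resolution for the test domains can be faked per request with curl's --resolve flag instead of editing /etc/hosts, again assuming a hypothetical node IP:

```shell
NODE_IP=192.168.1.101   # hypothetical node IP -- substitute your own
HTTP_PORT=30080         # the fixed nodePort from service-nodeport.yaml

# --resolve pins each test host name to the node, so each request
# should reach its own backend: myapp's page and tomcat's page.
curl -s --connect-timeout 2 \
  --resolve "myapp.magedu.com:${HTTP_PORT}:${NODE_IP}" \
  "http://myapp.magedu.com:${HTTP_PORT}/"
curl -s --connect-timeout 2 \
  --resolve "tomcat.magedu.com:${HTTP_PORT}:${NODE_IP}" \
  "http://tomcat.magedu.com:${HTTP_PORT}/" | head -n 5
```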

Reference: https://zhuanlan.zhihu.com/p/62623207

Reposted from: https://www.cnblogs.com/peitianwang/p/11475786.html
