What is Ingress

In Kubernetes, Ingress is the component that manages external access to the cluster: it lets outside traffic reach services running inside. Simply put, it is a gateway that allows users outside the Kubernetes cluster to access the applications and services running within it.

Using Ingress has the following benefits:

  1. Simplified network configuration: with Ingress you no longer need a separate load balancer or proxy for every service; it provides a single entry point and routes traffic to the services inside the cluster.
  2. Virtual hosts and path-based routing: Ingress can route traffic to different services based on hostname and URL path, which makes traffic much easier to manage.
  3. TLS support: Ingress can be configured with TLS certificates, so traffic is encrypted and kept secure.

Overall, Ingress simplifies network configuration and improves the accessibility and security of services, making it easier and more efficient to run applications in a Kubernetes cluster.

Kubernetes Services, whether ClusterIP or NodePort, only provide layer-4 load balancing. To get layer-7 load balancing for services inside the cluster you need Ingress. There are many Ingress controller implementations, such as NGINX, Contour, HAProxy, Traefik, and Istio. A feature comparison and selection guide for the common ingress controllers can be found here.

Ingress-nginx is a layer-7 load balancer that provides a single, unified entry point for external requests to Services in the k8s cluster.

It mainly consists of:

  • ingress-nginx-controller: watches the Ingress rules you write (the Ingress YAML you create), dynamically rewrites the nginx configuration accordingly, and reloads nginx so the changes take effect (this is fully automated, implemented with Lua scripts);
  • The Ingress resource object: abstracts the nginx configuration into an Ingress object, for example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  ingressClassName: nginx
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "bar.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80


How Ingress works

1) The ingress controller talks to the Kubernetes API and dynamically watches for changes to the Ingress rules in the cluster.

2) It then reads the Ingress rules (each rule states which domain maps to which service) and, following them, generates a block of nginx configuration.

3) That configuration is written into the nginx-ingress-controller pod. The pod runs an nginx server, and the controller writes the generated configuration into /etc/nginx/nginx.conf.

4) Finally nginx is reloaded so the configuration takes effect. This is how per-domain configuration and dynamic updates are achieved.
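
To watch this loop in action, you can tail the controller's logs while creating or changing an Ingress (a minimal sketch; the resource names assume the default ingress-nginx install used in this chapter):

# follow the controller log and watch for sync/reload messages
kubectl -n ingress-nginx logs -f deploy/ingress-nginx-controller | grep -iE 'sync|reload'

# in another terminal, create or edit any Ingress and watch the messages above appear
kubectl get ingress -A -w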

Installing ingress-nginx

https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

# Download the deployment manifest
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml

# Edit the manifest
## Change where the controller runs and how its container is configured
$ vim deploy.yaml
504         volumeMounts:
505         - mountPath: /usr/local/certificates/
506           name: webhook-cert
507           readOnly: true
508       dnsPolicy: ClusterFirst
509       nodeSelector:
510         ingress: "true"  # change the node selector; here we only run ingress on the master
511       hostNetwork: true  # use the host network; by default a LoadBalancer-type Service is created instead
512       serviceAccountName: ingress-nginx
513       terminationGracePeriodSeconds: 300
514       volumes:



# Label the master node so ingress will run on it
[root@k8s-master ~/k8s-all]#kubectl label node k8s-master ingress=true
node/k8s-master labeled


# Replace the image addresses
# with teacher Yuchao's Alibaba Cloud mirror images
sed -i 's#registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f#registry.cn-beijing.aliyuncs.com/yuchao-k8s/ingress-webhook-certgen:v1.3.0#g' deploy.yaml

sed -i 's#registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143#registry.cn-beijing.aliyuncs.com/yuchao-k8s/ingress-nginx-controller:v1.4.0#g' deploy.yaml

# Replace the node selector (kubernetes.io/os: linux) with the ingress label for every workload in the manifest
sed -i 's#kubernetes.io/os: linux#ingress: "true"#g' deploy.yaml
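
Before applying the manifest, it is worth confirming that the sed replacements actually took effect (a quick check):

# the image references should now point at the mirror registry,
# and every nodeSelector should use the ingress label
grep -n 'image:' deploy.yaml
grep -n 'ingress: "true"' deploy.yaml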




# Apply the yaml to create ingress-nginx
# either create or apply works
[root@k8s-master ~/k8s-all]#kubectl apply -f deploy.yaml 

# Check the result
# Two pods are one-off admission jobs; once finished they show Completed and can be deleted (they generate the admission webhook certificate)
# When the last pod is Running, ingress is up
[root@k8s-master ~]# kubectl -n ingress-nginx get po -owide
NAME                                        READY   STATUS      RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-p65bg        0/1     Completed   0          15s   10.244.0.9    k8s-master   <none>           <none>
ingress-nginx-admission-patch-mq98l         0/1     Completed   0          15s   10.244.0.10   k8s-master   <none>           <none>
ingress-nginx-controller-8569c89c99-xv9m4   1/1     Running     0          15s   10.0.0.10     k8s-master   <none>           <none>




# Note that the controller's IP is the k8s-master host IP, because we set hostNetwork: true for ingress

Inspect the ingress-nginx container

[root@k8s-master ~]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-8569c89c99-xv9m4 -- bash
bash-5.1$ ps -ef
PID   USER     TIME  COMMAND
    1 www-data  0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingress-nginx --ingress-class=nginx --configmap=ingress-nginx/ingres
    6 www-data  0:00 /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --controller-class=k8s.io/ingress-nginx --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --v
   27 www-data  0:00 nginx: master process /usr/bin/nginx -c /etc/nginx/nginx.conf
   31 www-data  0:00 nginx: worker process
   32 www-data  0:00 nginx: worker process
   33 www-data  0:00 nginx: worker process
   34 www-data  0:00 nginx: worker process
   35 www-data  0:00 nginx: cache manager process
   36 www-data  0:00 nginx: cache loader process
  166 www-data  0:00 bash
  171 www-data  0:00 ps -ef
bash-5.1$ 


# Check the listening ports
bash-5.1$ netstat -tunlp |grep -E '80|443'



# ingress-nginx answers on both 80 and 443; note this is the k8s-master IP

bash-5.1$ curl -k  https://10.0.0.80:443 -I
HTTP/2 404 
date: Tue, 28 Mar 2023 15:53:33 GMT
content-type: text/html
content-length: 146
strict-transport-security: max-age=15724800; includeSubDomains


bash-5.1$ curl 10.0.0.80 -I
HTTP/1.1 404 Not Found
Date: Tue, 28 Mar 2023 15:53:46 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive

Publishing the application under a domain (backend)

In the past, publishing an application meant installing nginx and configuring virtual hosts for each domain.

The idea with ingress-nginx is exactly the same, except that we work with Ingress objects and the nginx configuration is generated for us.

Previously we reached eladmin-api through its Service IP, which proxies to the pods.

That is still not ideal: a Service IP can change, and in a real project you would publish through a domain that is easier to maintain, such as eladmin-api.yuchao.com.

The change looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eladmin-api
  namespace: yuchao
spec:
  ingressClassName: nginx # explicitly use the nginx ingress class; other controllers exist
  rules:
  - host: eladmin-api.yuchao.com
    http:
      paths:
      - path: / # match the root path
        pathType: Prefix
        backend: # proxy to the backend svc
          service: 
            name: eladmin-api # the name of the svc
            port:
              number: 8000 # forward to port 8000 of the svc

Parameter explanation

ingressClassName: nginx specifies which Ingress controller class should handle this object.
In Kubernetes, an Ingress controller is the component that routes external traffic into the cluster.
The Ingress class name identifies a particular controller, so that Kubernetes knows which controller should pick up a given Ingress.
Here, ingressClassName: nginx says that the controller named nginx handles this traffic.
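
You can list the classes registered in the cluster to see which value is valid here (a quick check; output will vary, the sample line reflects the ingress-nginx install from this chapter):

kubectl get ingressclass
# NAME    CONTROLLER             PARAMETERS   AGE
# nginx   k8s.io/ingress-nginx   <none>       ...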

Create and inspect

[root@k8s-master ~/k8s-all]#kubectl create -f ingress-eladmin-api.yaml 
ingress.networking.k8s.io/eladmin-api created
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get ingress
NAME          CLASS   HOSTS                    ADDRESS   PORTS   AGE
eladmin-api   nginx   eladmin-api.yuchao.com             80      5s

[root@k8s-master ~/k8s-all]#
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get ingress eladmin-api -oyaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2023-03-27T13:09:37Z"
  generation: 1
  name: eladmin-api
  namespace: yuchao
  resourceVersion: "1281989"
  uid: 5bee86d5-f6cf-4d54-a36c-dfe6ff955d32
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin-api.yuchao.com
    http:
      paths:
      - backend:
          service:
            name: eladmin-api
            port:
              number: 8000
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

A look at how ingress-nginx works

Under the hood it is just generated nginx configuration and upstreams.

[root@k8s-master ~/k8s-all]#kubectl -n ingress-nginx exec -it ingress-nginx-controller-9ccddfb4f-xlh7t  -- bash
bash-5.1$ ps -ef
PID   USER     TIME  COMMAND
    1 www-data  0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --
    6 www-data  0:01 /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-co
   27 www-data  0:00 nginx: master process /usr/bin/nginx -c /etc/nginx/nginx.conf
  192 www-data  0:00 nginx: worker process
  193 www-data  0:00 nginx: worker process
  194 www-data  0:00 nginx: worker process
  195 www-data  0:00 nginx: worker process
  196 www-data  0:00 nginx: cache manager process
  325 www-data  0:00 bash
  331 www-data  0:00 ps -ef



# 1. deploy ingress-nginx   2. write and create the Ingress yaml   3. nginx load-balancing rules are generated automatically
bash-5.1$ cat /etc/nginx/nginx.conf|grep eladmin-api -A10 -B1

    ## start server eladmin-api.yuchao.com
    server {
        server_name eladmin-api.yuchao.com ;

        listen 80  ;
        listen [::]:80  ;
        listen 443  ssl http2 ;
        listen [::]:443  ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
--
            set $namespace      "yuchao";
            set $ingress_name   "eladmin-api";
            set $service_name   "eladmin-api";
            set $service_port   "8000";
            set $location_path  "/";
            set $global_rate_limit_exceeding n;

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    preserve_trailing_slash = false,
--
            set $balancer_ewma_score -1;
            set $proxy_upstream_name "yuchao-eladmin-api-8000";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

--
    }
    ## end server eladmin-api.yuchao.com

    # backend for when default-backend-service is not configured or it does not have endpoints
    server {
        listen 8181 default_server reuseport backlog=511;
        listen [::]:8181 default_server reuseport backlog=511;
        set $proxy_upstream_name "internal";

        access_log off;

        location / {
bash-5.1$

Bind the domain in hosts and test

Now you only need to point the domain eladmin-api.yuchao.com at the ingress IP to reach the eladmin-api service.

Ingress supports multiple replicas. In a highly available production setup you put a load balancer in front (an F5 on a private network, or a public-cloud ELB/SLB/CLB) that forwards to the ingress nodes, and then point the domain at the LB address.

# Edit the Windows hosts file; use the IP of the node where the ingress-controller runs
# kubectl -n ingress-nginx get po -owide
# shows it was scheduled on the k8s-master node

10.0.0.80 eladmin-api.yuchao.com
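
If you just want to test from the master itself without touching any hosts file, you can pass the Host header directly to the ingress node IP (a quick check, equivalent to the hosts entry above):

curl -H 'Host: eladmin-api.yuchao.com' http://10.0.0.80/ -I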

What the ingress-controller created

This is also the path your requests take through ingress, and where to look when troubleshooting.

1. List all resources created in the ingress-nginx namespace

[root@k8s-master ~/k8s-all]#kubectl -n ingress-nginx get all
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-h5l9c       0/1     Completed   0          40m
pod/ingress-nginx-admission-patch-zx2kn        0/1     Completed   0          40m
pod/ingress-nginx-controller-9ccddfb4f-xlh7t   1/1     Running     0          40m

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.119.237    <pending>     80:31876/TCP,443:31753/TCP   40m
service/ingress-nginx-controller-admission   ClusterIP      10.100.185.199   <none>        443/TCP                      40m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           40m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-9ccddfb4f   1         1         1       40m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           4s         40m
job.batch/ingress-nginx-admission-patch    1/1           4s         40m
[root@k8s-master ~/k8s-all]#

The ingress-nginx workflow in one diagram

(diagram omitted)

Try accessing ingress

# Look at the Service created for ingress; because ingress-nginx runs with hostNetwork: true, you can hit port 80 on the host directly

[root@k8s-master ~/k8s-all]#kubectl -n ingress-nginx describe svc ingress-nginx-controller 
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.4.0
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.119.237
IPs:                      10.96.119.237
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31876/TCP
Endpoints:                10.0.0.80:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31753/TCP
Endpoints:                10.0.0.80:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32701
Events:                   <none>
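
Because of hostNetwork: true, you can also confirm on k8s-master itself that the controller's nginx is bound to the host's ports (a quick check, run on the master):

# the controller's nginx processes listen on the host's 80/443
ss -lntp | grep nginx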

HTTPS is supported too

ingress-nginx generates a self-signed certificate for you.

(screenshot omitted)
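
You can check which certificate the controller presents with openssl (a quick check; 10.0.0.80 is the ingress node in this setup). The verbose curl below shows the same fake certificate:

echo | openssl s_client -connect 10.0.0.80:443 -servername eladmin-api.yuchao.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
# expect: CN = Kubernetes Ingress Controller Fake Certificate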

[root@k8s-master ~/k8s-all]#curl -v -k  https://eladmin-api.yuchao.com/who
* About to connect() to eladmin-api.yuchao.com port 443 (#0)
*   Trying 10.0.0.80...
* Connected to eladmin-api.yuchao.com (10.0.0.80) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*     subject: CN=Kubernetes Ingress Controller Fake Certificate,O=Acme Co
*     start date: Mar 28 15:17:27 2023 GMT
*     expire date: Mar 27 15:17:27 2024 GMT
*     common name: Kubernetes Ingress Controller Fake Certificate
*     issuer: CN=Kubernetes Ingress Controller Fake Certificate,O=Acme Co
> GET /who HTTP/1.1
> User-Agent: curl/7.29.0
> Host: eladmin-api.yuchao.com
> Accept: */*
> 
< HTTP/1.1 200 
< Date: Tue, 28 Mar 2023 16:03:30 GMT
< Content-Type: application/json;charset=UTF-8
< Content-Length: 72
< Connection: keep-alive
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=15724800; includeSubDomains
< 
* Connection #0 to host eladmin-api.yuchao.com left intact
"Who are you ? I am teacher yuchao and my website is www.yuchaoit.cn ! "

Frontend changes (ingress)

With the frontend and backend separated, a production deployment serves both under domains;

we have already configured the backend, so the frontend still needs to be set up;

and there is more than one deployment scheme to choose from.

Option 1

Desired access: backend API at eladmin-api.yuchao.com, frontend at eladmin.yuchao.com.

If this is how you plan the backend URL, be careful: check whether the address written into the frontend configuration file needs to change. See below.

Note that the frontend image has to be rebuilt; teacher Yuchao uses a separate Docker build server for this.

[root@docker01 ~]#ls
anaconda-ks.cfg  eladmin  eladmin-web  init2.sh  init.sh  tags.sh
[root@docker01 ~]#
[root@docker01 ~]#cd eladmin-web/
[root@docker01 ~/eladmin-web]#vim .env.production 

[root@docker01 ~/eladmin-web]#cat .env.production 
ENV = 'production'

# If the backend API is proxied through Nginx, change this to '/'; see the Nginx config in the Docker deployment chapter
# API address; mind the protocol: if you have not configured SSL, change https to http
VUE_APP_BASE_API  = 'http://eladmin-api.yuchao.com'
# If the API is plain http, change wss to ws
VUE_APP_WS_API = 'ws://eladmin-api.yuchao.com'
[root@docker01 ~/eladmin-web]#

The code changed, so rebuild the frontend image

[root@docker01 ~/eladmin-web]#docker build . -t 10.0.0.66:5000/eladmin/eladmin-web:v2
[+] Building 1.5s (2/4)                                                                                                          
 => [internal] load build definition from Dockerfile                                                                        0.0s
 => => transferring dockerfile: 32B                                                                                         0.0s
 => [internal] load .dockerignore                                                                                           0.0s
 => => transferring context: 2B                                                                                             0.0s
 => [internal] load metadata for docker.io/library/nginx:alpine                                                             1.5s
 => [internal] load metadata for docker.io/codemantn/vue-node:latest                                                        1.5s

# Push to the private registry
[root@docker01 ~/eladmin-web]#docker push  10.0.0.66:5000/eladmin/eladmin-web:v2
The push refers to repository [10.0.0.66:5000/eladmin/eladmin-web]
bdc5cc58a14b: Pushed 
5f70bf18a086: Mounted from last-eladmin-web 
419df8b60032: Mounted from last-eladmin-web 
0e835d02c1b5: Mounted from last-eladmin-web 
5ee3266a70bd: Mounted from last-eladmin-web 
3f87f0a06073: Mounted from last-eladmin-web 
1c9c1e42aafa: Mounted from last-eladmin-web 
8d3ac3489996: Mounted from last-eladmin-web 
v2: digest: sha256:2c18b5ebeb18e008adfd155c4bad3c12f6bb3dc2ab4e457d2a35f7b77bf06630 size: 1985
[root@docker01 ~/eladmin-web]#

Prepare the frontend eladmin-web resource yaml

Then prepare the Deployment, Service, and Ingress manifests for eladmin-web;

to create several resources at once, separate the different k8s resource types with '---', like this:

eladmin-web-all.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  replicas: 1   # number of Pod replicas
  selector:             # Pod selector
    matchLabels:
      app: eladmin-web
  template:
    metadata:
      labels:   # labels applied to the Pod
        app: eladmin-web
    spec:
      imagePullSecrets:
      - name: registry-10.0.0.66
      containers:
      - name: eladmin-web
        image: 10.0.0.66:5000/eladmin/eladmin-web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: 200Mi
            cpu: 50m
          limits:
            memory: 2Gi
            cpu: 2
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15  # seconds to wait after the container starts before the first probe
          periodSeconds: 15     # how often to probe
          timeoutSeconds: 3             # probe timeout
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 3
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: eladmin-web
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin.yuchao.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: eladmin-web
            port:
              number: 80

Create and inspect

[root@k8s-master ~/k8s-all]#kubectl create -f eladmin-web-all.yaml 
deployment.apps/eladmin-web created
service/eladmin-web created
ingress.networking.k8s.io/eladmin-web created
[root@k8s-master ~/k8s-all]#

Result

[root@k8s-master ~/k8s-all]#kubectl -n yuchao get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
eladmin-api   ClusterIP   10.110.57.234    <none>        8000/TCP   7d2h
eladmin-web   ClusterIP   10.97.98.201     <none>        80/TCP     55s
mysql         ClusterIP   10.106.247.171   <none>        3306/TCP   7d23h
redis         ClusterIP   10.105.65.167    <none>        6379/TCP   7d23h
[root@k8s-master ~/k8s-all]#
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get ingress
NAME          CLASS   HOSTS                    ADDRESS   PORTS   AGE
eladmin-api   nginx   eladmin-api.yuchao.com             80      29m
eladmin-web   nginx   eladmin.yuchao.com                 80      58s
[root@k8s-master ~/k8s-all]#
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get po
NAME                           READY   STATUS    RESTARTS       AGE
eladmin-api-9446bcc45-jrpk4    1/1     Running   2 (4d1h ago)   7d2h
eladmin-api-9446bcc45-mhrmp    1/1     Running   2 (4d1h ago)   7d1h
eladmin-api-9446bcc45-p5q2r    1/1     Running   2 (4d1h ago)   7d2h
eladmin-api-9446bcc45-pkpjr    1/1     Running   2 (4d1h ago)   7d2h
eladmin-web-584cb499d5-h5hzd   1/1     Running   0              62s
mysql-7c7cf8495f-5w5bk         1/1     Running   1 (4d1h ago)   8d
ngx01                          1/1     Running   1 (4d1h ago)   6d2h
redis-7957d49f44-cxj8z         1/1     Running   1 (4d1h ago)   7d23h
[root@k8s-master ~/k8s-all]#

Check the eladmin-web ingress

[root@k8s-master ~/k8s-all]#kubectl -n yuchao describe ingress eladmin-web 
Name:             eladmin-web
Labels:           <none>
Namespace:        yuchao
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                Path  Backends
  ----                ----  --------
  eladmin.yuchao.com  
                      /   eladmin-web:80 (10.244.2.38:80) # make sure backends are listed here; they are the backend pods
Annotations:          <none>
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    2m3s  nginx-ingress-controller  Scheduled for sync

Domain binding

eladmin-web goes through the same ingress-controller, which uses the host network, so the domain is still bound to k8s-master.

# for a Windows test, edit the Windows hosts file
10.0.0.80  eladmin-api.yuchao.com eladmin.yuchao.com

Final access and login

There is a small bug: refreshing the URL does not work. That is an issue in the eladmin application itself and can be ignored.

(screenshot omitted)

Deployed successfully

Test the frontend/backend interaction: modify some data and check that it can be submitted.

(screenshot omitted)

Option 2

Another situation you will meet in a company is having to expose URLs like the following.

Planned access addresses:

Project    Address
eladmin-web    http://eladmin.yuchao.com
eladmin-api    http://eladmin.yuchao.com:8000

Check the frontend API config file again; it needs to be changed to the following.

[root@docker01 ~/eladmin-web]#cat .env.production 
ENV = 'production'

# If the backend API is proxied through Nginx, change this to '/'; see the Nginx config in the Docker deployment chapter
# API address; mind the protocol: if you have not configured SSL, change https to http
VUE_APP_BASE_API  = 'http://eladmin-api.yuchao.com:8000'
# If the API is plain http, change wss to ws
VUE_APP_WS_API = 'ws://eladmin-api.yuchao.com:8000'

ingress-nginx also needs changes

Here is the catch: http://eladmin.yuchao.com:8000 now points at ingress-nginx, but the controller only forwards requests arriving on ports 80 and 443; it does not expose port 8000, so by default this URL cannot be reached.
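
A quick check before changing anything confirms it (expect the connection to be refused, since nothing listens on 8000 on the ingress node yet):

curl --connect-timeout 3 http://10.0.0.80:8000/ -I
# curl: (7) Failed to connect to 10.0.0.80 port 8000: Connection refused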

Configure TCP forwarding on the ingress-nginx-controller

431     spec:
432       containers:
433       - args:
434         - /nginx-ingress-controller
435         - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
436         - --election-id=ingress-controller-leader
437         - --controller-class=k8s.io/ingress-nginx
438         - --ingress-class=nginx
439         - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
440         - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
441         - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
442         - --validating-webhook=:8443
443         - --validating-webhook-certificate=/usr/local/certificates/cert
444         - --validating-webhook-key=/usr/local/certificates/key

# add the TCP and UDP forwarding arguments shown on lines 440 and 441

Parameter explanation

Both are command-line arguments of the Ingress Nginx Controller and point at the configuration for TCP and UDP services. Specifically:

--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services: the ConfigMap holding the TCP service configuration. $(POD_NAMESPACE) is the namespace of the controller Pod, and tcp-services is the name of the ConfigMap. This tells the controller where to read its TCP service configuration from.

--udp-services-configmap=$(POD_NAMESPACE)/udp-services: the ConfigMap holding the UDP service configuration, in the same format. This tells the controller where to read its UDP service configuration from.

These ConfigMaps define which TCP/UDP services should be proxied and to which backend services. Because the controller reads and updates them dynamically, you can add, change, or remove TCP/UDP services in the cluster without rebuilding anything.

Recreate the ingress-nginx-controller

# By default a Deployment does a rolling update: start a new pod first, then stop the old one.
# But because our ingress-nginx uses hostNetwork mode, the host's port 80 cannot be bound twice.
# So the procedure is to scale the pod down and recreate it.
$ kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas 0

# read the new deploy configuration
[root@k8s-master ~/k8s-all]#kubectl apply -f deploy.yaml 

# then bring a pod back by scaling the replicas up
$ kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas 1

# check that the ingress-nginx-controller now carries the tcp forwarding argument
[root@k8s-master ~/k8s-all]#kubectl -n ingress-nginx get po ingress-nginx-controller-5cf5b685c5-c7k65 -oyaml|grep tcp
    - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

Create the TCP port 8000 forwarding config

This is a fixed procedure; just follow it.

$ cat tcp-services.cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  8000: "yuchao/eladmin-api:8000"
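
The file above is only shown with cat; it still needs to be applied, after which the controller should render a matching listener for port 8000 (a minimal sketch, checked inside the controller pod):

# create the ConfigMap that maps host port 8000 to the eladmin-api Service
kubectl apply -f tcp-services.cm.yaml

# the generated nginx config should now contain a listener for port 8000
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- grep -n '8000' /etc/nginx/nginx.conf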

Test the TCP 8000 forwarding

[root@k8s-master ~/k8s-all]#curl 10.0.0.80:8000/who
"Who are you ? I am teacher yuchao and my website is www.yuchaoit.cn ! "

# test passed

eladmin-web changes

The frontend API config file is changed as follows to test the service.

[root@docker01 ~/eladmin-web]#cat .env.production 
ENV = 'production'

# If the backend API is proxied through Nginx, change this to '/'; see the Nginx config in the Docker deployment chapter
# API address; mind the protocol: if you have not configured SSL, change https to http
VUE_APP_BASE_API  = 'http://eladmin-api.yuchao.com:8000'
# If the API is plain http, change wss to ws
VUE_APP_WS_API = 'ws://eladmin-api.yuchao.com:8000'
[root@docker01 ~/eladmin-web]#

Rebuild

[root@docker01 ~/eladmin-web]#docker build . -t 10.0.0.66:5000/eladmin/eladmin-web:v3
[root@docker01 ~/eladmin-web]#docker push 10.0.0.66:5000/eladmin/eladmin-web:v3

Frontend k8s deployment

We verified that port 8000 on the ingress-controller forwards to the backend service, so now we create the frontend ingress resources:

Then prepare the Deployment, Service, and Ingress manifests for eladmin-web:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  replicas: 1   # number of Pod replicas
  selector:             # Pod selector
    matchLabels:
      app: eladmin-web
  template:
    metadata:
      labels:   # labels applied to the Pod
        app: eladmin-web
    spec:
      imagePullSecrets:
      - name: registry-10.0.0.66
      containers:
      - name: eladmin-web
        image: 10.0.0.66:5000/eladmin/eladmin-web:v3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: 200Mi
            cpu: 50m
          limits:
            memory: 2Gi
            cpu: 2
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15  # seconds to wait after the container starts before the first probe
          periodSeconds: 15     # how often to probe
          timeoutSeconds: 3             # probe timeout
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 3
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: eladmin-web
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin.yuchao.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: eladmin-web
            port:
              number: 80

Create

[root@k8s-master ~/k8s-all]#kubectl create -f all-tcp-eladmin-web.yaml 
deployment.apps/eladmin-web created
service/eladmin-web created
ingress.networking.k8s.io/eladmin-web created

[root@k8s-master ~/k8s-all]#kubectl -n yuchao get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
eladmin-api   ClusterIP   10.110.57.234    <none>        8000/TCP   7d4h
eladmin-web   ClusterIP   10.96.188.248    <none>        80/TCP     16s
mysql         ClusterIP   10.106.247.171   <none>        3306/TCP   8d
redis         ClusterIP   10.105.65.167    <none>        6379/TCP   8d
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get ingress
NAME          CLASS   HOSTS                    ADDRESS   PORTS   AGE
eladmin-api   nginx   eladmin-api.yuchao.com             80      125m
eladmin-web   nginx   eladmin.yuchao.com                 80      19s
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get po
NAME                           READY   STATUS    RESTARTS       AGE
eladmin-api-9446bcc45-jrpk4    1/1     Running   2 (4d3h ago)   7d3h
eladmin-api-9446bcc45-mhrmp    1/1     Running   3 (81m ago)    7d3h
eladmin-api-9446bcc45-p5q2r    1/1     Running   2 (4d3h ago)   7d3h
eladmin-api-9446bcc45-pkpjr    1/1     Running   2 (4d3h ago)   7d4h
eladmin-web-59d54654fd-k7jwq   1/1     Running   0              23s
mysql-7c7cf8495f-5w5bk         1/1     Running   1 (4d3h ago)   8d
ngx01                          1/1     Running   1 (4d3h ago)   6d4h
redis-7957d49f44-cxj8z         1/1     Running   1 (4d3h ago)   8d
[root@k8s-master ~/k8s-all]#

Done

(screenshot omitted)

A packet capture shows the form of the API addresses the frontend uses to talk to the backend.

Option 3 (the tricky one)

For reference, the site below shows a very common API design pattern.

(screenshot omitted)

The company's backend API has been updated again, and this time the URL path changed. Traditionally that would mean editing nginx's layer-7 proxy rules.

In k8s, it is done as follows.

Planned access addresses:
the backend no longer has its own domain; it is served under the frontend domain with a path suffix.
So, the conclusion is:
the frontend domain stays unchanged, while the backend moves under it in the form of a URL-path rewrite, expressed in Ingress syntax.

Project    Address
eladmin-web    http://eladmin.yuchao.com
eladmin-api    http://eladmin.yuchao.com/apis/

At this point you have to modify the frontend config file yet again.

With this scheme, the eladmin-api address differs from what the frontend currently points at (eladmin.yuchao.com:8000), so the frontend code needs to be adjusted.

[root@docker01 ~/eladmin-web]#cat .env.production 
ENV = 'production'

# If the backend API is proxied through Nginx, change this to '/'; see the Nginx config in the Docker deployment chapter
# API address; mind the protocol: if you have not configured SSL, change https to http
VUE_APP_BASE_API  = 'http://eladmin.yuchao.com/apis'
# If the API is plain http, change wss to ws
VUE_APP_WS_API = 'ws://eladmin.yuchao.com/apis'
[root@docker01 ~/eladmin-web]#

Rebuild

[root@docker01 ~/eladmin-web]#docker build . -t 10.0.0.66:5000/eladmin/eladmin-web:v4
docker push 10.0.0.66:5000/eladmin/eladmin-web:v4

Prepare the frontend resources again (including the API change)

Then prepare the Deployment, Service, and Ingress manifests for eladmin-web:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  replicas: 1   # number of Pod replicas
  selector:             # Pod selector
    matchLabels:
      app: eladmin-web
  template:
    metadata:
      labels:   # labels applied to the Pod
        app: eladmin-web
    spec:
      imagePullSecrets:
      - name: registry-10.0.0.66
      containers:
      - name: eladmin-web
        image: 10.0.0.66:5000/eladmin/eladmin-web:v4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: 200Mi
            cpu: 50m
          limits:
            memory: 2Gi
            cpu: 2
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15  # seconds to wait after the container starts before the first probe
          periodSeconds: 15     # how often to probe
          timeoutSeconds: 3             # probe timeout
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 3
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: eladmin-web
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eladmin-api # note: the frontend stays the same; it is the backend ingress that changes
  namespace: yuchao
  annotations: # the rewrite target /$1 refers to what (.*) captures below, i.e. the path after /apis/
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin.yuchao.com
    http:
      paths:
      - path: /apis/(.*) # the tricky part: an ingress-nginx layer-7 regex match rule
        pathType: Prefix
        backend:
          service: 
            name: eladmin-api
            port:
              number: 8000

Explanation of this ingress configuration

This is a Kubernetes Ingress manifest written in YAML. It describes how incoming HTTP traffic is routed to the backend service eladmin-api running on port 8000.

The metadata section provides basic information about the Ingress object, including its name and namespace.

The annotations section carries additional settings for the NGINX Ingress controller, here a rewrite rule that modifies the incoming request path. The nginx.ingress.kubernetes.io/rewrite-target annotation rewrites an incoming request path of /apis/<path> to /<path>.

The spec section defines the actual routing rule. In this case it says that all HTTP requests to eladmin.yuchao.com/apis/ should be routed to the eladmin-api service in the yuchao namespace. pathType is set to Prefix, so any URL path beginning with /apis/ is matched, and the regular expression (.*) captures the rest of the path and passes it to the rewrite rule defined in the annotations.

Finally, the ingressClassName field names the Ingress controller that should handle this object; here it is set to nginx.

Explanation of the Ingress URL-rewrite annotation

In this particular example the annotation nginx.ingress.kubernetes.io/rewrite-target: /$1 is added to the Ingress object.
It tells the Ingress controller to rewrite the matched URL path to the given target path before forwarding the request to the backend service.

/$1 refers to the first capture group of the path regular expression and becomes part of the target path.

This annotation is commonly used to solve a typical Ingress problem: forwarding all requests to a single backend service while adjusting the request URL on the way.
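
To make the rewrite concrete (a quick check against this setup; 10.0.0.80 is the ingress node): a request to /apis/who matches /apis/(.*), so $1 is who and the backend receives /who.

curl -H 'Host: eladmin.yuchao.com' http://10.0.0.80/apis/who
# the backend sees the path /who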

Frontend/backend recap

Check the frontend eladmin-web ingress

[root@k8s-master ~/k8s-all]#kubectl -n yuchao describe ingress eladmin-web 
Name:             eladmin-web
Labels:           <none>
Namespace:        yuchao
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                Path  Backends
  ----                ----  --------
  eladmin.yuchao.com  
                      /   eladmin-web:80 (10.244.1.24:80)
Annotations:          <none>
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    41s (x2 over 111s)  nginx-ingress-controller  Scheduled for sync

Access result

(screenshot omitted)

Check the backend eladmin-api ingress

[root@k8s-master ~/k8s-all]#kubectl -n yuchao describe ingress eladmin-api
Name:             eladmin-api
Labels:           <none>
Namespace:        yuchao
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                Path  Backends
  ----                ----  --------
  eladmin.yuchao.com  
                      /apis/(.*)   eladmin-api:8000 (10.244.0.24:8000,10.244.1.20:8000,10.244.2.33:8000 + 1 more...)
Annotations:          nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:               <none>
[root@k8s-master ~/k8s-all]#
[root@k8s-master ~/k8s-all]#kubectl -n yuchao get ingress -owide
NAME          CLASS   HOSTS                ADDRESS   PORTS   AGE
eladmin-api   nginx   eladmin.yuchao.com             80      16h
eladmin-web   nginx   eladmin.yuchao.com             80      7m59s

Test the backend path directly

[root@k8s-master ~/k8s-all]#curl eladmin.yuchao.com/apis/who
"Who are you ? I am teacher yuchao and my website is www.yuchaoit.cn ! "

Test the backend captcha endpoint

(screenshot omitted)

Ingress high availability

[root@k8s-master ~]#kubectl -n ingress-nginx get po 
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-h5l9c        0/1     Completed   0          23h
ingress-nginx-admission-patch-zx2kn         0/1     Completed   0          23h
ingress-nginx-controller-5cf5b685c5-c7k65   1/1     Running     0          20h

# In production, ingress usually needs multiple replicas behind a load balancer. Typical options:

# public cloud: an Alibaba Cloud SLB in front of several ingress nodes
# private cloud: F5 or LVS in front of the ingress nodes
# i.e. the LVS + keepalived setup covered earlier, or simply buy an SLB on Alibaba Cloud
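
With this manifest's nodeSelector plus hostNetwork approach, adding replicas means labelling more nodes and scaling the deployment (a minimal sketch; k8s-node1 is an example node name):

# allow the controller to be scheduled on another node (hostNetwork: at most one replica per node)
kubectl label node k8s-node1 ingress=true
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas 2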

Ingress with HTTPS

By default ingress-nginx serves its own self-signed certificate.

https://eladmin.yuchao.com/apis/who
is reachable, but it is not trusted by the browser, so it shows as insecure.

Self-signed certificate

# generate a self-signed certificate
$ openssl req -x509 -nodes -days 2920 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=*.yuchao.com/O=ingress-nginx"

# Store the certificate in a Secret; ingress-nginx reads the Secret, extracts the certificate and loads it into the nginx config
# there is a dedicated tls Secret type for this
$ kubectl -n yuchao create secret tls tls-eladmin --key tls.key --cert tls.crt
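
A quick way to confirm the Secret was created with the right type and both keys (output abbreviated, will vary):

kubectl -n yuchao get secret tls-eladmin
# NAME          TYPE                DATA   AGE
# tls-eladmin   kubernetes.io/tls   2      ...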

Update the frontend

Now both the frontend and the backend go over HTTPS, so update the frontend API config file.

[root@docker01 ~/eladmin-web]#cat .env.production 
ENV = 'production'

# If the backend API is proxied through Nginx, change this to '/'; see the Nginx config in the Docker deployment chapter
# API address; mind the protocol: if you have not configured SSL, change https to http
VUE_APP_BASE_API  = 'https://eladmin.yuchao.com/apis/'
# If the API is plain http, change wss to ws
VUE_APP_WS_API = 'wss://eladmin.yuchao.com/apis'

Build the image again

docker build . -t 10.0.0.66:5000/eladmin/eladmin-web:v5
docker push  10.0.0.66:5000/eladmin/eladmin-web:v5

Update the frontend ingress

Think of it the same way as running nginx directly on a host: enabling https there also comes down to editing the configuration.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eladmin-web
  namespace: yuchao
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin.yuchao.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: eladmin-web
            port:
              number: 80
  tls: # the only addition: this tls section, which references the Secret holding the certificate
  - hosts:
    - eladmin.yuchao.com
    secretName: tls-eladmin

Apply the new ingress configuration

root@k8s-master ~/k8s-all/https-ingress]#kubectl apply -f https-ingress-eladmin.yaml 
ingress.networking.k8s.io/eladmin-web configured
[root@k8s-master ~/k8s-all/https-ingress]#
[root@k8s-master ~/k8s-all/https-ingress]#kubectl -n yuchao get ingress
NAME          CLASS   HOSTS                ADDRESS   PORTS     AGE
eladmin-api   nginx   eladmin.yuchao.com             80        20h
eladmin-web   nginx   eladmin.yuchao.com             80, 443   5h3m
[root@k8s-master ~/k8s-all/https-ingress]#kubectl -n yuchao get ingress eladmin-web -oyaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"eladmin-web","namespace":"yuchao"},"spec":{"ingressClassName":"nginx","rules":[{"host":"eladmin.yuchao.com","http":{"paths":[{"backend":{"service":{"name":"eladmin-web","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["eladmin.yuchao.com"],"secretName":"tls-eladmin"}]}}
  creationTimestamp: "2023-03-29T11:14:55Z"
  generation: 3
  name: eladmin-web
  namespace: yuchao
  resourceVersion: "1565401"
  uid: b4d04178-ad2d-433a-bcc4-aa6054482eea
spec:
  ingressClassName: nginx
  rules:
  - host: eladmin.yuchao.com
    http:
      paths:
      - backend:
          service:
            name: eladmin-web
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - eladmin.yuchao.com
    secretName: tls-eladmin
status:
  loadBalancer: {}

Update the frontend pod

# edit the deployment to change the application image version
[root@k8s-master ~/k8s-all/https-ingress]#kubectl -n yuchao edit deployments.apps eladmin-web 
deployment.apps/eladmin-web edited



# switch the image to the https version
    spec:
      containers:
      - image: 10.0.0.66:5000/eladmin/eladmin-web:v5

Test access to eladmin

ingress-nginx now automatically redirects your http requests to https.

(screenshot omitted)

Additional ingress-nginx settings

For example, if you want to turn off the https redirect, or add extra nginx settings, configure them as follows.

nginx has many tunable parameters; in an Ingress they are usually set through annotations. Some common ones are shown below:

[root@k8s-master ~/k8s-all/https-ingress]#kubectl -n yuchao edit ingress eladmin-web 
ingress.networking.k8s.io/eladmin-web edited

# just edit the annotations section
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"eladmin-web","namespace":"yuchao"},"spec":{"ingressClassName":"nginx","rules":[{"host":"eladmin.yuchao.com","http":{"paths":[{"backend":{"service":{"name":"eladmin-web","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["eladmin.yuchao.com"],"secretName":"tls-eladmin"}]}}
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 1000m
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.org/client-max-body-size: 1000m 
  creationTimestamp: "2023-03-29T11:14:55Z"
  generation: 3
  name: eladmin-web
  namespace: yuchao
  resourceVersion: "1566552"
  uid: b4d04178-ad2d-433a-bcc4-aa6054482eea

Now the site can be reached over plain http without being redirected to https.
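
A quick check (expect the page to be served directly over http instead of the usual 308 redirect to https):

curl -H 'Host: eladmin.yuchao.com' http://10.0.0.80/ -I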

Diagram: the workflow of deploying a Java frontend/backend-separated system on k8s

(diagram omitted)

Ingress recap

(diagram omitted)
