
Deploying Harbor on Kubernetes with Domain-Based Access via Nginx-Ingress


Contents

1. Kubernetes cluster environment, installed and deployed via KubeSphere
1.1 Basic cluster information
1.2 Cluster node information
2. Installing Harbor
2.1 Add the Harbor repository with Helm
2.2 Generate certificates with openssl
2.3 Create the secret
2.4 Create the NFS storage directories
2.5 Create the PVs
2.6 Create the PVCs
2.7 The values.yaml configuration file
2.8 Run the deployment commands
2.9 Edit the Ingress files (kubectl edit, vim-style)
2.9.1 Deploy the nginx-ingress-controller
2.9.2 Check the Ingress configuration
3. Access
3.1 Configure the hosts file on Windows
3.2 Access URL


1. Kubernetes cluster environment, installed and deployed via KubeSphere

1.1 Basic cluster information

1.2 Cluster node information

2. Installing Harbor

2.1 Add the Harbor repository with Helm


  
helm repo add harbor https://helm.goharbor.io
helm pull harbor/harbor

Running the commands above downloads the file harbor-1.10.2.tgz; extract it and make sure the resulting directory is named harbor.
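A minimal sketch of the extraction step (hedged; the chart version in the filename may differ in your environment):

tar -zxvf harbor-1.10.2.tgz    # the chart archive normally unpacks into a directory named "harbor"
cd harbor && ls                # values.yaml, templates/, cert/, ...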

2.2 Generate certificates with openssl

The harbor directory contains a cert directory; run cp -r cert bak to back up the default cert files first.


  
cd cert
openssl genrsa -des3 -passout pass:over4chars -out tls.pass.key 2048
...
openssl rsa -passin pass:over4chars -in tls.pass.key -out tls.key
# Writing RSA key
rm -rf tls.pass.key
openssl req -new -key tls.key -out tls.csr
...
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:Beijing
Locality Name (eg, city) [Default City]:Beijing
Organization Name (eg, company) [Default Company Ltd]:liebe
Organizational Unit Name (eg, section) []:liebe
Common Name (eg, your name or your server's hostname) []:harbor.liebe.com.cn
Email Address []:<your email address>
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:talent
An optional company name []:liebe
# Generate the SSL certificate
# The self-signed certificate is produced from the private key (tls.key) and the CSR (tls.csr):
openssl x509 -req -sha256 -days 365 -in tls.csr -signkey tls.key -out tls.crt
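Before creating the Kubernetes secret, it is worth confirming the certificate's subject and validity period (standard openssl, no assumptions beyond the files generated above):

openssl x509 -in tls.crt -noout -subject -dates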

2.3 Create the secret

Run the command:

kubectl create secret tls harbor.liebe.com.cn --key tls.key --cert tls.crt -n pig-dev

Check the result:

kubectl get secret -n pig-dev
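If the create command fails because the namespace does not exist, create pig-dev first and re-run it (hedged; on a KubeSphere cluster the namespace may already have been created from the console):

kubectl create namespace pig-dev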

2.4 Create the NFS storage directories


  
mkdir -p /home/data/nfs-share/harbor/registry
mkdir -p /home/data/nfs-share/harbor/chartmuseum
mkdir -p /home/data/nfs-share/harbor/jobservice
mkdir -p /home/data/nfs-share/harbor/database
mkdir -p /home/data/nfs-share/harbor/redis
mkdir -p /home/data/nfs-share/harbor/trivy
mkdir -p /home/data/nfs-share/harbor/jobservicedata
mkdir -p /home/data/nfs-share/harbor/jobservicelog
chmod 777 /home/data/nfs-share/harbor/*
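The directories also have to be exported by the NFS server so that the cluster nodes can mount them. A minimal sketch of an /etc/exports entry, assuming the NFS server 10.10.10.89 referenced by the PVs below and a permissive export policy:

# /etc/exports on the NFS server
/home/data/nfs-share/harbor *(rw,sync,no_root_squash)
# reload the export table
exportfs -arv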

2.5 Create the PVs (harbor-pv.yaml)


  
# 1
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  namespace: pig-dev
  labels:
    app: harbor-registry
spec:
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/registry
    server: 10.10.10.89
---
# 2
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-chartmuseum
  namespace: pig-dev
  labels:
    app: harbor-chartmuseum
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/chartmuseum
    server: 10.10.10.89
---
# 3
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservicelog
  namespace: pig-dev
  labels:
    app: harbor-jobservicelog
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/jobservicelog
    server: 10.10.10.89
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservicedata
  namespace: pig-dev
  labels:
    app: harbor-jobservicedata
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/jobservicedata
    server: 10.10.10.89
---
# 4
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-database
  namespace: pig-dev
  labels:
    app: harbor-database
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/database
    server: 10.10.10.89
---
# 5
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-redis
  namespace: pig-dev
  labels:
    app: harbor-redis
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/redis
    server: 10.10.10.89
---
# 6
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-trivy
  namespace: pig-dev
  labels:
    app: harbor-trivy
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "managed-nfs-storage"
  mountOptions:
    - hard
  nfs:
    path: /home/data/nfs-share/harbor/trivy
    server: 10.10.10.89
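After applying the manifest in section 2.8, the volumes should show an Available (and later Bound) status:

kubectl get pv | grep harbor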

2.6 Create the PVCs (harbor-pvc.yaml)


  
# 1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-registry
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 150Gi
---
# 2
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-chartmuseum
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 10Gi
---
# 3
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-jobservicelog
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-jobservicedata
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 5Gi
---
# 4
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-database
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 10Gi
---
# 5
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-redis
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 10Gi
---
# 6
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-trivy
  namespace: pig-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 10Gi
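Once the PVs and PVCs are both applied, every claim in the pig-dev namespace should report a Bound status before Harbor is installed:

kubectl get pvc -n pig-dev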

2.7 The values.yaml configuration file

The full file used for this deployment follows; the key deviations from the chart defaults are the ingress host names, the TLS secret created in section 2.3, the externalURL, and the existingClaim references to the PVCs created above.


  
  1. expose:
  2. # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  3. # and fill the information in the corresponding section
  4. type: ingress
  5. tls:
  6. # Enable TLS or not.
  7. # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
  8. # Note: if the "expose.type" is "ingress" and TLS is disabled,
  9. # the port must be included in the command when pulling/pushing images.
  10. # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
  11. enabled: true
  12. # The source of the tls certificate. Set as "auto", "secret"
  13. # or "none" and fill the information in the corresponding section
  14. # 1) auto: generate the tls certificate automatically
  15. # 2) secret: read the tls certificate from the specified secret.
  16. # The tls certificate can be generated manually or by cert manager
  17. # 3) none: configure no tls certificate for the ingress. If the default
  18. # tls certificate is configured in the ingress controller, choose this option
  19. certSource: "secret"
  20. auto:
  21. # The common name used to generate the certificate, it's necessary
  22. # when the type isn't "ingress"
  23. commonName: ""
  24. secret:
  25. # The name of secret which contains keys named:
  26. # "tls.crt" - the certificate
  27. # "tls.key" - the private key
  28. secretName: "harbor.liebe.com.cn"
  29. # The name of secret which contains keys named:
  30. # "tls.crt" - the certificate
  31. # "tls.key" - the private key
  32. # Only needed when the "expose.type" is "ingress".
  33. notarySecretName: "harbor.liebe.com.cn"
  34. ingress:
  35. hosts:
  36. core: harbor.liebe.com.cn
  37. notary: notary-harbor.liebe.com.cn
  38. # set to the type of ingress controller if it has specific requirements.
  39. # leave as `default` for most ingress controllers.
  40. # set to `gce` if using the GCE ingress controller
  41. # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
  42. # set to `alb` if using the ALB ingress controller
  43. controller: default
  44. ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
  45. kubeVersionOverride: ""
  46. className: ""
  47. annotations:
  48. # note different ingress controllers may require a different ssl-redirect annotation
  49. # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
  50. ingress.kubernetes.io/ssl-redirect: "true"
  51. ingress.kubernetes.io/proxy-body-size: "1024m"
  52. #### For a traefik ingress, use the following annotations instead:
  53. # kubernetes.io/ingress.class: "traefik"
  54. # traefik.ingress.kubernetes.io/router.tls: 'true'
  55. # traefik.ingress.kubernetes.io/router.entrypoints: websecure
  56. #### For an nginx ingress, use the following annotations:
  57. nginx.ingress.kubernetes.io/ssl-redirect: "true"
  58. nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
  59. nginx.org/client-max-body-size: "1024m"
  60. notary:
  61. # notary ingress-specific annotations
  62. annotations: {}
  63. # notary ingress-specific labels
  64. labels: {}
  65. harbor:
  66. # harbor ingress-specific annotations
  67. annotations: {}
  68. # harbor ingress-specific labels
  69. labels: {}
  70. clusterIP:
  71. # The name of ClusterIP service
  72. name: harbor
  73. # Annotations on the ClusterIP service
  74. annotations: {}
  75. ports:
  76. # The service port Harbor listens on when serving HTTP
  77. httpPort: 80
  78. # The service port Harbor listens on when serving HTTPS
  79. httpsPort: 443
  80. # The service port Notary listens on. Only needed when notary.enabled
  81. # is set to true
  82. notaryPort: 4443
  83. nodePort:
  84. # The name of NodePort service
  85. name: harbor
  86. ports:
  87. http:
  88. # The service port Harbor listens on when serving HTTP
  89. port: 80
  90. # The node port Harbor listens on when serving HTTP
  91. nodePort: 30102
  92. https:
  93. # The service port Harbor listens on when serving HTTPS
  94. port: 443
  95. # The node port Harbor listens on when serving HTTPS
  96. nodePort: 30103
  97. # Only needed when notary.enabled is set to true
  98. notary:
  99. # The service port Notary listens on
  100. port: 4443
  101. # The node port Notary listens on
  102. nodePort: 30104
  103. loadBalancer:
  104. # The name of LoadBalancer service
  105. name: harbor
  106. # Set the IP if the LoadBalancer supports assigning IP
  107. IP: ""
  108. ports:
  109. # The service port Harbor listens on when serving HTTP
  110. httpPort: 80
  111. # The service port Harbor listens on when serving HTTPS
  112. httpsPort: 443
  113. # The service port Notary listens on. Only needed when notary.enabled
  114. # is set to true
  115. notaryPort: 4443
  116. annotations: {}
  117. sourceRanges: []
  118. # The external URL for Harbor core service. It is used to
  119. # 1) populate the docker/helm commands showed on portal
  120. # 2) populate the token service URL returned to docker/notary client
  121. #
  122. # Format: protocol://domain[:port]. Usually:
  123. # 1) if "expose.type" is "ingress", the "domain" should be
  124. # the value of "expose.ingress.hosts.core"
  125. # 2) if "expose.type" is "clusterIP", the "domain" should be
  126. # the value of "expose.clusterIP.name"
  127. # 3) if "expose.type" is "nodePort", the "domain" should be
  128. # the IP address of k8s node
  129. #
  130. # If Harbor is deployed behind the proxy, set it as the URL of proxy
  131. externalURL: https://harbor.liebe.com.cn
  132. # The internal TLS used for harbor components secure communicating. In order to enable https
  133. # in each components tls cert files need to provided in advance.
  134. internalTLS:
  135. # If internal TLS enabled
  136. enabled: true
  137. # There are three ways to provide tls
  138. # 1) "auto" will generate cert automatically
  139. # 2) "manual" need provide cert file manually in following value
  140. # 3) "secret" internal certificates from secret
  141. certSource: "auto"
  142. # The content of trust ca, only available when `certSource` is "manual"
  143. trustCa: ""
  144. # core related cert configuration
  145. core:
  146. # secret name for core's tls certs
  147. secretName: ""
  148. # Content of core's TLS cert file, only available when `certSource` is "manual"
  149. crt: ""
  150. # Content of core's TLS key file, only available when `certSource` is "manual"
  151. key: ""
  152. # jobservice related cert configuration
  153. jobservice:
  154. # secret name for jobservice's tls certs
  155. secretName: ""
  156. # Content of jobservice's TLS key file, only available when `certSource` is "manual"
  157. crt: ""
  158. # Content of jobservice's TLS key file, only available when `certSource` is "manual"
  159. key: ""
  160. # registry related cert configuration
  161. registry:
  162. # secret name for registry's tls certs
  163. secretName: ""
  164. # Content of registry's TLS key file, only available when `certSource` is "manual"
  165. crt: ""
  166. # Content of registry's TLS key file, only available when `certSource` is "manual"
  167. key: ""
  168. # portal related cert configuration
  169. portal:
  170. # secret name for portal's tls certs
  171. secretName: ""
  172. # Content of portal's TLS key file, only available when `certSource` is "manual"
  173. crt: ""
  174. # Content of portal's TLS key file, only available when `certSource` is "manual"
  175. key: ""
  176. # chartmuseum related cert configuration
  177. chartmuseum:
  178. # secret name for chartmuseum's tls certs
  179. secretName: ""
  180. # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
  181. crt: ""
  182. # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
  183. key: ""
  184. # trivy related cert configuration
  185. trivy:
  186. # secret name for trivy's tls certs
  187. secretName: ""
  188. # Content of trivy's TLS key file, only available when `certSource` is "manual"
  189. crt: ""
  190. # Content of trivy's TLS key file, only available when `certSource` is "manual"
  191. key: ""
  192. ipFamily:
  193. # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component
  194. ipv6:
  195. enabled: true
  196. # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component
  197. ipv4:
  198. enabled: true
  199. # The persistence is enabled by default and a default StorageClass
  200. # is needed in the k8s cluster to provision volumes dynamically.
  201. # Specify another StorageClass in the "storageClass" or set "existingClaim"
  202. # if you already have existing persistent volumes to use
  203. #
  204. # For storing images and charts, you can also use "azure", "gcs", "s3",
  205. # "swift" or "oss". Set it in the "imageChartStorage" section
  206. persistence:
  207. enabled: true
  208. # Setting it to "keep" to avoid removing PVCs during a helm delete
  209. # operation. Leaving it empty will delete PVCs after the chart deleted
  210. # (this does not apply for PVCs that are created for internal database
  211. # and redis components, i.e. they are never deleted automatically)
  212. resourcePolicy: "keep"
  213. persistentVolumeClaim:
  214. registry:
  215. # Use the existing PVC which must be created manually before bound,
  216. # and specify the "subPath" if the PVC is shared with other components
  217. existingClaim: "harbor-registry"
  218. # Specify the "storageClass" used to provision the volume. Or the default
  219. # StorageClass will be used (the default).
  220. # Set it to "-" to disable dynamic provisioning
  221. storageClass: "managed-nfs-storage"
  222. subPath: ""
  223. accessMode: ReadWriteOnce
  224. size: 150Gi
  225. annotations: {}
  226. chartmuseum:
  227. existingClaim: "harbor-chartmuseum"
  228. storageClass: "managed-nfs-storage"
  229. subPath: ""
  230. accessMode: ReadWriteOnce
  231. size: 10Gi
  232. annotations: {}
  233. jobservice:
  234. jobLog:
  235. existingClaim: "harbor-jobservicelog"
  236. storageClass: "managed-nfs-storage"
  237. subPath: ""
  238. accessMode: ReadWriteOnce
  239. size: 5Gi
  240. annotations: {}
  241. scanDataExports:
  242. existingClaim: "harbor-jobservicedata"
  243. storageClass: "managed-nfs-storage"
  244. subPath: ""
  245. accessMode: ReadWriteOnce
  246. size: 5Gi
  247. annotations: {}
  248. # If external database is used, the following settings for database will
  249. # be ignored
  250. database:
  251. existingClaim: "harbor-database"
  252. storageClass: "managed-nfs-storage"
  253. subPath: ""
  254. accessMode: ReadWriteOnce
  255. size: 10Gi
  256. annotations: {}
  257. # If external Redis is used, the following settings for Redis will
  258. # be ignored
  259. redis:
  260. existingClaim: "harbor-redis"
  261. storageClass: "managed-nfs-storage"
  262. subPath: ""
  263. accessMode: ReadWriteOnce
  264. size: 10Gi
  265. annotations: {}
  266. trivy:
  267. existingClaim: "harbor-trivy"
  268. storageClass: "managed-nfs-storage"
  269. subPath: ""
  270. accessMode: ReadWriteOnce
  271. size: 10Gi
  272. annotations: {}
  273. # Define which storage backend is used for registry and chartmuseum to store
  274. # images and charts. Refer to
  275. # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  276. # for the detail.
  277. imageChartStorage:
  278. # Specify whether to disable `redirect` for images and chart storage, for
  279. # backends which not supported it (such as using minio for `s3` storage type), please disable
  280. # it. To disable redirects, simply set `disableredirect` to `true` instead.
  281. # Refer to
  282. # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
  283. # for the detail.
  284. disableredirect: false
  285. # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
  286. # The secret must contain keys named "ca.crt" which will be injected into the trust store
  287. # of registry's and chartmuseum's containers.
  288. # caBundleSecretName:
  289. # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
  290. # "oss" and fill the information needed in the corresponding section. The type
  291. # must be "filesystem" if you want to use persistent volumes for registry
  292. # and chartmuseum
  293. type: filesystem
  294. filesystem:
  295. rootdirectory: /storage
  296. #maxthreads: 100
  297. azure:
  298. accountname: accountname
  299. accountkey: base64encodedaccountkey
  300. container: containername
  301. #realm: core.windows.net
  302. # To use existing secret, the key must be AZURE_STORAGE_ACCESS_KEY
  303. existingSecret: ""
  304. gcs:
  305. bucket: bucketname
  306. # The base64 encoded json file which contains the key
  307. encodedkey: base64-encoded-json-key-file
  308. #rootdirectory: /gcs/object/name/prefix
  309. #chunksize: "5242880"
  310. # To use existing secret, the key must be gcs-key.json
  311. existingSecret: ""
  312. useWorkloadIdentity: false
  313. s3:
  314. # Set an existing secret for S3 accesskey and secretkey
  315. # keys in the secret should be AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for chartmuseum
  316. # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry
  317. #existingSecret: ""
  318. region: us-west-1
  319. bucket: bucketname
  320. #accesskey: awsaccesskey
  321. #secretkey: awssecretkey
  322. #regionendpoint: http://myobjects.local
  323. #encrypt: false
  324. #keyid: mykeyid
  325. #secure: true
  326. #skipverify: false
  327. #v4auth: true
  328. #chunksize: "5242880"
  329. #rootdirectory: /s3/object/name/prefix
  330. #storageclass: STANDARD
  331. #multipartcopychunksize: "33554432"
  332. #multipartcopymaxconcurrency: 100
  333. #multipartcopythresholdsize: "33554432"
  334. swift:
  335. authurl: https://storage.myprovider.com/v3/auth
  336. username: username
  337. password: password
  338. container: containername
  339. #region: fr
  340. #tenant: tenantname
  341. #tenantid: tenantid
  342. #domain: domainname
  343. #domainid: domainid
  344. #trustid: trustid
  345. #insecureskipverify: false
  346. #chunksize: 5M
  347. #prefix:
  348. #secretkey: secretkey
  349. #accesskey: accesskey
  350. #authversion: 3
  351. #endpointtype: public
  352. #tempurlcontainerkey: false
  353. #tempurlmethods:
  354. oss:
  355. accesskeyid: accesskeyid
  356. accesskeysecret: accesskeysecret
  357. region: regionname
  358. bucket: bucketname
  359. #endpoint: endpoint
  360. #internal: false
  361. #encrypt: false
  362. #secure: true
  363. #chunksize: 10M
  364. #rootdirectory: rootdirectory
  365. imagePullPolicy: IfNotPresent
  366. # Use this set to assign a list of default pullSecrets
  367. imagePullSecrets:
  368. # - name: docker-registry-secret
  369. # - name: internal-registry-secret
  370. # The update strategy for deployments with persistent volumes(jobservice, registry
  371. # and chartmuseum): "RollingUpdate" or "Recreate"
  372. # Set it as "Recreate" when "RWM" for volumes isn't supported
  373. updateStrategy:
  374. type: RollingUpdate
  375. # debug, info, warning, error or fatal
  376. logLevel: info
  377. # The initial password of Harbor admin. Change it from portal after launching Harbor
  378. harborAdminPassword: "Harbor12345"
  379. # The name of the secret which contains key named "ca.crt". Setting this enables the
  380. # download link on portal to download the CA certificate when the certificate isn't
  381. # generated automatically
  382. caSecretName: ""
  383. # The secret key used for encryption. Must be a string of 16 chars.
  384. secretKey: "not-a-secure-key"
  385. # If using existingSecretSecretKey, the key must be sercretKey
  386. existingSecretSecretKey: ""
  387. # The proxy settings for updating trivy vulnerabilities from the Internet and replicating
  388. # artifacts from/to the registries that cannot be reached directly
  389. proxy:
  390. httpProxy:
  391. httpsProxy:
  392. noProxy: 127.0.0.1,localhost,.local,.internal
  393. components:
  394. - core
  395. - jobservice
  396. - trivy
  397. # Run the migration job via helm hook
  398. enableMigrateHelmHook: false
  399. # The custom ca bundle secret, the secret must contain key named "ca.crt"
  400. # which will be injected into the trust store for chartmuseum, core, jobservice, registry, trivy components
  401. # caBundleSecretName: ""
  402. ## UAA Authentication Options
  403. # If you're using UAA for authentication behind a self-signed
  404. # certificate you will need to provide the CA Cert.
  405. # Set uaaSecretName below to provide a pre-created secret that
  406. # contains a base64 encoded CA Certificate named `ca.crt`.
  407. # uaaSecretName:
  408. # If service exposed via "ingress", the Nginx will not be used
  409. nginx:
  410. image:
  411. repository: goharbor/nginx-photon
  412. tag: v2.6.2
  413. # set the service account to be used, default if left empty
  414. serviceAccountName: ""
  415. # mount the service account token
  416. automountServiceAccountToken: false
  417. replicas: 1
  418. revisionHistoryLimit: 10
  419. # resources:
  420. # requests:
  421. # memory: 256Mi
  422. # cpu: 100m
  423. nodeSelector: {}
  424. tolerations: []
  425. affinity: {}
  426. ## Additional deployment annotations
  427. podAnnotations: {}
  428. ## The priority class to run the pod as
  429. priorityClassName:
  430. portal:
  431. image:
  432. repository: goharbor/harbor-portal
  433. tag: v2.6.2
  434. # set the service account to be used, default if left empty
  435. serviceAccountName: ""
  436. # mount the service account token
  437. automountServiceAccountToken: false
  438. replicas: 1
  439. revisionHistoryLimit: 10
  440. # resources:
  441. # requests:
  442. # memory: 256Mi
  443. # cpu: 100m
  444. nodeSelector: {}
  445. tolerations: []
  446. affinity: {}
  447. ## Additional deployment annotations
  448. podAnnotations: {}
  449. ## The priority class to run the pod as
  450. priorityClassName:
  451. core:
  452. image:
  453. repository: goharbor/harbor-core
  454. tag: v2.6.2
  455. # set the service account to be used, default if left empty
  456. serviceAccountName: ""
  457. # mount the service account token
  458. automountServiceAccountToken: false
  459. replicas: 1
  460. revisionHistoryLimit: 10
  461. ## Startup probe values
  462. startupProbe:
  463. enabled: true
  464. initialDelaySeconds: 10
  465. # resources:
  466. # requests:
  467. # memory: 256Mi
  468. # cpu: 100m
  469. nodeSelector: {}
  470. tolerations: []
  471. affinity: {}
  472. ## Additional deployment annotations
  473. podAnnotations: {}
  474. # Secret is used when core server communicates with other components.
  475. # If a secret key is not specified, Helm will generate one.
  476. # Must be a string of 16 chars.
  477. secret: ""
  478. # Fill the name of a kubernetes secret if you want to use your own
  479. # TLS certificate and private key for token encryption/decryption.
  480. # The secret must contain keys named:
  481. # "tls.crt" - the certificate
  482. # "tls.key" - the private key
  483. # The default key pair will be used if it isn't set
  484. secretName: ""
  485. # The XSRF key. Will be generated automatically if it isn't specified
  486. xsrfKey: ""
  487. ## The priority class to run the pod as
  488. priorityClassName:
  489. # The time duration for async update artifact pull_time and repository
  490. # pull_count, the unit is second. Will be 10 seconds if it isn't set.
  491. # eg. artifactPullAsyncFlushDuration: 10
  492. artifactPullAsyncFlushDuration:
  493. gdpr:
  494. deleteUser: false
  495. jobservice:
  496. image:
  497. repository: goharbor/harbor-jobservice
  498. tag: v2.6.2
  499. replicas: 1
  500. revisionHistoryLimit: 10
  501. # set the service account to be used, default if left empty
  502. serviceAccountName: ""
  503. # mount the service account token
  504. automountServiceAccountToken: false
  505. maxJobWorkers: 10
  506. # The logger for jobs: "file", "database" or "stdout"
  507. jobLoggers:
  508. - file
  509. # - database
  510. # - stdout
  511. # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  512. loggerSweeperDuration: 14 #days
  513. # resources:
  514. # requests:
  515. # memory: 256Mi
  516. # cpu: 100m
  517. nodeSelector: {}
  518. tolerations: []
  519. affinity: {}
  520. ## Additional deployment annotations
  521. podAnnotations: {}
  522. # Secret is used when job service communicates with other components.
  523. # If a secret key is not specified, Helm will generate one.
  524. # Must be a string of 16 chars.
  525. secret: ""
  526. ## The priority class to run the pod as
  527. priorityClassName:
  528. registry:
  529. # set the service account to be used, default if left empty
  530. serviceAccountName: ""
  531. # mount the service account token
  532. automountServiceAccountToken: false
  533. registry:
  534. image:
  535. repository: goharbor/registry-photon
  536. tag: v2.6.2
  537. # resources:
  538. # requests:
  539. # memory: 256Mi
  540. # cpu: 100m
  541. controller:
  542. image:
  543. repository: goharbor/harbor-registryctl
  544. tag: v2.6.2
  545. # resources:
  546. # requests:
  547. # memory: 256Mi
  548. # cpu: 100m
  549. replicas: 1
  550. revisionHistoryLimit: 10
  551. nodeSelector: {}
  552. tolerations: []
  553. affinity: {}
  554. ## Additional deployment annotations
  555. podAnnotations: {}
  556. ## The priority class to run the pod as
  557. priorityClassName:
  558. # Secret is used to secure the upload state from client
  559. # and registry storage backend.
  560. # See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  561. # If a secret key is not specified, Helm will generate one.
  562. # Must be a string of 16 chars.
  563. secret: ""
  564. # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  565. relativeurls: false
  566. credentials:
  567. username: "harbor_registry_user"
  568. password: "harbor_registry_password"
  569. # If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWD
  570. existingSecret: ""
  571. # Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.
  572. # htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example string
  573. middleware:
  574. enabled: false
  575. type: cloudFront
  576. cloudFront:
  577. baseurl: example.cloudfront.net
  578. keypairid: KEYPAIRID
  579. duration: 3000s
  580. ipfilteredby: none
  581. # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
  582. # that allows access to CloudFront
  583. privateKeySecret: "my-secret"
  584. # enable purge _upload directories
  585. upload_purging:
  586. enabled: true
  587. # remove files in _upload directories which exist for a period of time, default is one week.
  588. age: 168h
  589. # the interval of the purge operations
  590. interval: 24h
  591. dryrun: false
  592. chartmuseum:
  593. enabled: true
  594. # set the service account to be used, default if left empty
  595. serviceAccountName: ""
  596. # mount the service account token
  597. automountServiceAccountToken: false
  598. # Harbor defaults ChartMuseum to returning relative urls, if you want using absolute url you should enable it by change the following value to 'true'
  599. absoluteUrl: false
  600. image:
  601. repository: goharbor/chartmuseum-photon
  602. tag: v2.6.2
  603. replicas: 1
  604. revisionHistoryLimit: 10
  605. # resources:
  606. # requests:
  607. # memory: 256Mi
  608. # cpu: 100m
  609. nodeSelector: {}
  610. tolerations: []
  611. affinity: {}
  612. ## Additional deployment annotations
  613. podAnnotations: {}
  614. ## The priority class to run the pod as
  615. priorityClassName:
  616. ## limit the number of parallel indexers
  617. indexLimit: 0
  618. trivy:
  619. # enabled the flag to enable Trivy scanner
  620. enabled: true
  621. image:
  622. # repository the repository for Trivy adapter image
  623. repository: goharbor/trivy-adapter-photon
  624. # tag the tag for Trivy adapter image
  625. tag: v2.6.2
  626. # set the service account to be used, default if left empty
  627. serviceAccountName: ""
  628. # mount the service account token
  629. automountServiceAccountToken: false
  630. # replicas the number of Pod replicas
  631. replicas: 1
  632. # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  633. debugMode: false
  634. # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  635. vulnType: "os,library"
  636. # severity a comma-separated list of severities to be checked
  637. severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  638. # ignoreUnfixed the flag to display only fixed vulnerabilities
  639. ignoreUnfixed: false
  640. # insecure the flag to skip verifying registry certificate
  641. insecure: false
  642. # gitHubToken the GitHub access token to download Trivy DB
  643. #
  644. # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  645. # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  646. # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  647. # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  648. # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  649. #
  650. # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  651. # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  652. # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  653. # https://developer.github.com/v3/#rate-limiting
  654. #
  655. # You can create a GitHub token by following the instructions in
  656. # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  657. gitHubToken: ""
  658. # skipUpdate the flag to disable Trivy DB downloads from GitHub
  659. #
  660. # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  661. # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  662. # `/home/scanner/.cache/trivy/db/trivy.db` path.
  663. skipUpdate: false
  664. # The offlineScan option prevents Trivy from sending API requests to identify dependencies.
  665. #
  666. # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  667. # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  668. # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  669. # It would work if all the dependencies are in local.
  670. # This option doesn’t affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.
  671. offlineScan: false
  672. # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  673. securityCheck: "vuln"
  674. # The duration to wait for scan completion
  675. timeout: 5m0s
  676. resources:
  677. requests:
  678. cpu: 200m
  679. memory: 512Mi
  680. limits:
  681. cpu: 1
  682. memory: 1Gi
  683. nodeSelector: {}
  684. tolerations: []
  685. affinity: {}
  686. ## Additional deployment annotations
  687. podAnnotations: {}
  688. ## The priority class to run the pod as
  689. priorityClassName:
  690. notary:
  691. enabled: true
  692. server:
  693. # set the service account to be used, default if left empty
  694. serviceAccountName: ""
  695. # mount the service account token
  696. automountServiceAccountToken: false
  697. image:
  698. repository: goharbor/notary-server-photon
  699. tag: v2.6.2
  700. replicas: 1
  701. # resources:
  702. # requests:
  703. # memory: 256Mi
  704. # cpu: 100m
  705. nodeSelector: {}
  706. tolerations: []
  707. affinity: {}
  708. ## Additional deployment annotations
  709. podAnnotations: {}
  710. ## The priority class to run the pod as
  711. priorityClassName:
  712. signer:
  713. # set the service account to be used, default if left empty
  714. serviceAccountName: ""
  715. # mount the service account token
  716. automountServiceAccountToken: false
  717. image:
  718. repository: goharbor/notary-signer-photon
  719. tag: v2.6.2
  720. replicas: 1
  721. # resources:
  722. # requests:
  723. # memory: 256Mi
  724. # cpu: 100m
  725. nodeSelector: {}
  726. tolerations: []
  727. affinity: {}
  728. ## Additional deployment annotations
  729. podAnnotations: {}
  730. ## The priority class to run the pod as
  731. priorityClassName:
  732. # Fill the name of a kubernetes secret if you want to use your own
  733. # TLS certificate authority, certificate and private key for notary
  734. # communications.
  735. # The secret must contain keys named ca.crt, tls.crt and tls.key that
  736. # contain the CA, certificate and private key.
  737. # They will be generated if not set.
  738. secretName: ""
  739. database:
  740. # if external database is used, set "type" to "external"
  741. # and fill the connection informations in "external" section
  742. type: internal
  743. internal:
  744. # set the service account to be used, default if left empty
  745. serviceAccountName: ""
  746. # mount the service account token
  747. automountServiceAccountToken: false
  748. image:
  749. repository: goharbor/harbor-db
  750. tag: v2.6.2
  751. # The initial superuser password for internal database
  752. password: "changeit"
  753. # The size limit for Shared memory, pgSQL use it for shared_buffer
  754. # More details see:
  755. # https://github.com/goharbor/harbor/issues/15034
  756. shmSizeLimit: 512Mi
  757. # resources:
  758. # requests:
  759. # memory: 256Mi
  760. # cpu: 100m
  761. nodeSelector: {}
  762. tolerations: []
  763. affinity: {}
  764. ## The priority class to run the pod as
  765. priorityClassName:
  766. initContainer:
  767. migrator: {}
  768. # resources:
  769. # requests:
  770. # memory: 128Mi
  771. # cpu: 100m
  772. permissions: {}
  773. # resources:
  774. # requests:
  775. # memory: 128Mi
  776. # cpu: 100m
  777. external:
  778. host: "postgresql"
  779. port: "5432"
  780. username: "gitlab"
  781. password: "passw0rd"
  782. coreDatabase: "registry"
  783. notaryServerDatabase: "notary_server"
  784. notarySignerDatabase: "notary_signer"
  785. # if using existing secret, the key must be "password"
  786. existingSecret: ""
  787. # "disable" - No SSL
  788. # "require" - Always SSL (skip verification)
  789. # "verify-ca" - Always SSL (verify that the certificate presented by the
  790. # server was signed by a trusted CA)
  791. # "verify-full" - Always SSL (verify that the certification presented by the
  792. # server was signed by a trusted CA and the server host name matches the one
  793. # in the certificate)
  794. sslmode: "disable"
  795. # The maximum number of connections in the idle connection pool per pod (core+exporter).
  796. # If it <=0, no idle connections are retained.
  797. maxIdleConns: 100
  798. # The maximum number of open connections to the database per pod (core+exporter).
  799. # If it <= 0, then there is no limit on the number of open connections.
  800. # Note: the default number of connections is 1024 for postgre of harbor.
  801. maxOpenConns: 900
  802. ## Additional deployment annotations
  803. podAnnotations: {}
  804. redis:
  805. # if external Redis is used, set "type" to "external"
  806. # and fill the connection informations in "external" section
  807. type: internal
  808. internal:
  809. # set the service account to be used, default if left empty
  810. serviceAccountName: ""
  811. # mount the service account token
  812. automountServiceAccountToken: false
  813. image:
  814. repository: goharbor/redis-photon
  815. tag: v2.6.2
  816. # resources:
  817. # requests:
  818. # memory: 256Mi
  819. # cpu: 100m
  820. nodeSelector: {}
  821. tolerations: []
  822. affinity: {}
  823. ## The priority class to run the pod as
  824. priorityClassName:
  825. external:
  826. # support redis, redis+sentinel
  827. # addr for redis: <host_redis>:<port_redis>
  828. # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
  829. addr: "192.168.0.2:6379"
  830. # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
  831. sentinelMasterSet: ""
  832. # The "coreDatabaseIndex" must be "0" as the library Harbor
  833. # used doesn't support configuring it
  834. coreDatabaseIndex: "0"
  835. jobserviceDatabaseIndex: "1"
  836. registryDatabaseIndex: "2"
  837. chartmuseumDatabaseIndex: "3"
  838. trivyAdapterIndex: "5"
  839. password: ""
  840. # If using existingSecret, the key must be REDIS_PASSWORD
  841. existingSecret: ""
  842. ## Additional deployment annotations
  843. podAnnotations: {}
  844. exporter:
  845. replicas: 1
  846. revisionHistoryLimit: 10
  847. # resources:
  848. # requests:
  849. # memory: 256Mi
  850. # cpu: 100m
  851. podAnnotations: {}
  852. serviceAccountName: ""
  853. # mount the service account token
  854. automountServiceAccountToken: false
  855. image:
  856. repository: goharbor/harbor-exporter
  857. tag: v2.6.2
  858. nodeSelector: {}
  859. tolerations: []
  860. affinity: {}
  861. cacheDuration: 23
  862. cacheCleanInterval: 14400
  863. ## The priority class to run the pod as
  864. priorityClassName:
  865. metrics:
  866. enabled: false
  867. core:
  868. path: /metrics
  869. port: 8001
  870. registry:
  871. path: /metrics
  872. port: 8001
  873. jobservice:
  874. path: /metrics
  875. port: 8001
  876. exporter:
  877. path: /metrics
  878. port: 8001
  879. ## Create prometheus serviceMonitor to scrape harbor metrics.
  880. ## This requires the monitoring.coreos.com/v1 CRD. Please see
  881. ## https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
  882. ##
  883. serviceMonitor:
  884. enabled: false
  885. additionalLabels: {}
  886. # Scrape interval. If not set, the Prometheus default scrape interval is used.
  887. interval: ""
  888. # Metric relabel configs to apply to samples before ingestion.
  889. metricRelabelings:
  890. []
  891. # - action: keep
  892. # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
  893. # sourceLabels: [__name__]
  894. # Relabel configs to apply to samples before ingestion.
  895. relabelings:
  896. []
  897. # - sourceLabels: [__meta_kubernetes_pod_node_name]
  898. # separator: ;
  899. # regex: ^(.*)$
  900. # targetLabel: nodename
  901. # replacement: $1
  902. # action: replace
  903. trace:
  904. enabled: false
  905. # trace provider: jaeger or otel
  906. # jaeger should be 1.26+
  907. provider: jaeger
  908. # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth
  909. sample_rate: 1
  910. # namespace used to differentiate different harbor services
  911. # namespace:
  912. # attributes is a key value dict contains user defined attributes used to initialize trace provider
  913. # attributes:
  914. # application: harbor
  915. jaeger:
  916. # jaeger supports two modes:
  917. # collector mode(uncomment endpoint and uncomment username, password if needed)
  918. # agent mode(uncomment agent_host and agent_port)
  919. endpoint: http://hostname:14268/api/traces
  920. # username:
  921. # password:
  922. # agent_host: hostname
  923. # export trace data by jaeger.thrift in compact mode
  924. # agent_port: 6831
  925. otel:
  926. endpoint: hostname:4318
  927. url_path: /v1/traces
  928. compression: false
  929. insecure: true
  930. timeout: 10s
  931. # cache layer configurations
  932. # if this feature enabled, harbor will cache the resource
  933. # `project/project_metadata/repository/artifact/manifest` in the redis
  934. # which help to improve the performance of high concurrent pulling manifest.
  935. cache:
  936. # default is not enabled.
  937. enabled: false
  938. # default keep cache for one day.
  939. expireHours: 24
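Maintaining the whole file is not strictly necessary: helm merges -f files over the chart defaults, so an override file containing only the values that differ achieves the same result. A hedged sketch of such a minimal override, using the host names, secret, and PVCs created above:

expose:
  type: ingress
  tls:
    enabled: true
    certSource: "secret"
    secret:
      secretName: "harbor.liebe.com.cn"
      notarySecretName: "harbor.liebe.com.cn"
  ingress:
    hosts:
      core: harbor.liebe.com.cn
      notary: notary-harbor.liebe.com.cn
externalURL: https://harbor.liebe.com.cn
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "harbor-registry"
    chartmuseum:
      existingClaim: "harbor-chartmuseum"
    jobservice:
      jobLog:
        existingClaim: "harbor-jobservicelog"
      scanDataExports:
        existingClaim: "harbor-jobservicedata"
    database:
      existingClaim: "harbor-database"
    redis:
      existingClaim: "harbor-redis"
    trivy:
      existingClaim: "harbor-trivy"
harborAdminPassword: "Harbor12345"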

2.8 Run the deployment commands


  
kubectl apply -f harbor-pv.yaml
kubectl apply -f harbor-pvc.yaml
helm install harbor ./ -f values.yaml -n pig-dev
kubectl get pv,pvc -A
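It usually takes a few minutes for all Harbor components to pull their images and pass the readiness probes; progress can be watched with standard commands:

kubectl get pods -n pig-dev -w
helm status harbor -n pig-dev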

Commands to tear the deployment back down:


  
helm list -A
helm uninstall harbor -n pig-dev
kubectl delete -f harbor-pvc.yaml
kubectl delete -f harbor-pv.yaml

2.9 Edit the Ingress files (kubectl edit, vim-style)


  
kubectl edit ingress -n pig-dev harbor-ingress
kubectl edit ingress -n pig-dev harbor-ingress-notary

Content to add: ingressClassName: nginx

Both Ingress resources need the ingressClassName: nginx line added under spec; a non-interactive alternative is sketched below.
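A hedged non-interactive equivalent of the vim edits, assuming the Ingress names created by the chart are the two shown above:

kubectl patch ingress harbor-ingress -n pig-dev --type merge -p '{"spec":{"ingressClassName":"nginx"}}'
kubectl patch ingress harbor-ingress-notary -n pig-dev --type merge -p '{"spec":{"ingressClassName":"nginx"}}'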

2.9.1 Deploy the nginx-ingress-controller


  
  1. apiVersion: v1
  2. kind: Namespace
  3. metadata:
  4. labels:
  5. app.kubernetes.io/instance: ingress-nginx
  6. app.kubernetes.io/name: ingress-nginx
  7. name: ingress-nginx
  8. ---
  9. apiVersion: v1
  10. automountServiceAccountToken: true
  11. kind: ServiceAccount
  12. metadata:
  13. labels:
  14. app.kubernetes.io/component: controller
  15. app.kubernetes.io/instance: ingress-nginx
  16. app.kubernetes.io/name: ingress-nginx
  17. app.kubernetes.io/part-of: ingress-nginx
  18. app.kubernetes.io/version: 1.2.1
  19. name: ingress-nginx
  20. namespace: pig-dev
  21. ---
  22. apiVersion: v1
  23. kind: ServiceAccount
  24. metadata:
  25. labels:
  26. app.kubernetes.io/component: admission-webhook
  27. app.kubernetes.io/instance: ingress-nginx
  28. app.kubernetes.io/name: ingress-nginx
  29. app.kubernetes.io/part-of: ingress-nginx
  30. app.kubernetes.io/version: 1.2.1
  31. name: ingress-nginx-admission
  32. namespace: pig-dev
  33. ---
  34. apiVersion: rbac.authorization.k8s.io/v1
  35. kind: Role
  36. metadata:
  37. labels:
  38. app.kubernetes.io/component: controller
  39. app.kubernetes.io/instance: ingress-nginx
  40. app.kubernetes.io/name: ingress-nginx
  41. app.kubernetes.io/part-of: ingress-nginx
  42. app.kubernetes.io/version: 1.2.1
  43. name: ingress-nginx
  44. namespace: pig-dev
  45. rules:
  46. - apiGroups:
  47. - ""
  48. resources:
  49. - namespaces
  50. verbs:
  51. - get
  52. - apiGroups:
  53. - ""
  54. resources:
  55. - configmaps
  56. - pods
  57. - secrets
  58. - endpoints
  59. verbs:
  60. - get
  61. - list
  62. - watch
  63. - apiGroups:
  64. - ""
  65. resources:
  66. - services
  67. verbs:
  68. - get
  69. - list
  70. - watch
  71. - apiGroups:
  72. - networking.k8s.io
  73. resources:
  74. - ingresses
  75. verbs:
  76. - get
  77. - list
  78. - watch
  79. - apiGroups:
  80. - networking.k8s.io
  81. resources:
  82. - ingresses/status
  83. verbs:
  84. - update
  85. - apiGroups:
  86. - networking.k8s.io
  87. resources:
  88. - ingressclasses
  89. verbs:
  90. - get
  91. - list
  92. - watch
  93. - apiGroups:
  94. - ""
  95. resourceNames:
  96. - ingress-controller-leader
  97. resources:
  98. - configmaps
  99. verbs:
  100. - get
  101. - update
  102. - apiGroups:
  103. - ""
  104. resources:
  105. - configmaps
  106. verbs:
  107. - create
  108. - apiGroups:
  109. - ""
  110. resources:
  111. - events
  112. verbs:
  113. - create
  114. - patch
  115. ---
  116. apiVersion: rbac.authorization.k8s.io/v1
  117. kind: Role
  118. metadata:
  119. labels:
  120. app.kubernetes.io/component: admission-webhook
  121. app.kubernetes.io/instance: ingress-nginx
  122. app.kubernetes.io/name: ingress-nginx
  123. app.kubernetes.io/part-of: ingress-nginx
  124. app.kubernetes.io/version: 1.2.1
  125. name: ingress-nginx-admission
  126. namespace: pig-dev
  127. rules:
  128. - apiGroups:
  129. - ""
  130. resources:
  131. - secrets
  132. verbs:
  133. - get
  134. - create
  135. ---
  136. apiVersion: rbac.authorization.k8s.io/v1
  137. kind: ClusterRole
  138. metadata:
  139. labels:
  140. app.kubernetes.io/instance: ingress-nginx
  141. app.kubernetes.io/name: ingress-nginx
  142. app.kubernetes.io/part-of: ingress-nginx
  143. app.kubernetes.io/version: 1.2.1
  144. name: ingress-nginx
  145. rules:
  146. - apiGroups:
  147. - ""
  148. resources:
  149. - configmaps
  150. - endpoints
  151. - nodes
  152. - pods
  153. - secrets
  154. - namespaces
  155. verbs:
  156. - list
  157. - watch
  158. - apiGroups:
  159. - ""
  160. resources:
  161. - nodes
  162. verbs:
  163. - get
  164. - apiGroups:
  165. - ""
  166. resources:
  167. - services
  168. verbs:
  169. - get
  170. - list
  171. - watch
  172. - apiGroups:
  173. - networking.k8s.io
  174. resources:
  175. - ingresses
  176. verbs:
  177. - get
  178. - list
  179. - watch
  180. - apiGroups:
  181. - ""
  182. resources:
  183. - events
  184. verbs:
  185. - create
  186. - patch
  187. - apiGroups:
  188. - networking.k8s.io
  189. resources:
  190. - ingresses/status
  191. verbs:
  192. - update
  193. - apiGroups:
  194. - networking.k8s.io
  195. resources:
  196. - ingressclasses
  197. verbs:
  198. - get
  199. - list
  200. - watch
  201. ---
  202. apiVersion: rbac.authorization.k8s.io/v1
  203. kind: ClusterRole
  204. metadata:
  205. labels:
  206. app.kubernetes.io/component: admission-webhook
  207. app.kubernetes.io/instance: ingress-nginx
  208. app.kubernetes.io/name: ingress-nginx
  209. app.kubernetes.io/part-of: ingress-nginx
  210. app.kubernetes.io/version: 1.2.1
  211. name: ingress-nginx-admission
  212. rules:
  213. - apiGroups:
  214. - admissionregistration.k8s.io
  215. resources:
  216. - validatingwebhookconfigurations
  217. verbs:
  218. - get
  219. - update
  220. ---
  221. apiVersion: rbac.authorization.k8s.io/v1
  222. kind: RoleBinding
  223. metadata:
  224. labels:
  225. app.kubernetes.io/component: controller
  226. app.kubernetes.io/instance: ingress-nginx
  227. app.kubernetes.io/name: ingress-nginx
  228. app.kubernetes.io/part-of: ingress-nginx
  229. app.kubernetes.io/version: 1.2.1
  230. name: ingress-nginx
  231. namespace: pig-dev
  232. roleRef:
  233. apiGroup: rbac.authorization.k8s.io
  234. kind: Role
  235. name: ingress-nginx
  236. subjects:
  237. - kind: ServiceAccount
  238. name: ingress-nginx
  239. namespace: pig-dev
  240. ---
  241. apiVersion: rbac.authorization.k8s.io/v1
  242. kind: RoleBinding
  243. metadata:
  244. labels:
  245. app.kubernetes.io/component: admission-webhook
  246. app.kubernetes.io/instance: ingress-nginx
  247. app.kubernetes.io/name: ingress-nginx
  248. app.kubernetes.io/part-of: ingress-nginx
  249. app.kubernetes.io/version: 1.2.1
  250. name: ingress-nginx-admission
  251. namespace: pig-dev
  252. roleRef:
  253. apiGroup: rbac.authorization.k8s.io
  254. kind: Role
  255. name: ingress-nginx-admission
  256. subjects:
  257. - kind: ServiceAccount
  258. name: ingress-nginx-admission
  259. namespace: pig-dev
  260. ---
  261. apiVersion: rbac.authorization.k8s.io/v1
  262. kind: ClusterRoleBinding
  263. metadata:
  264. labels:
  265. app.kubernetes.io/instance: ingress-nginx
  266. app.kubernetes.io/name: ingress-nginx
  267. app.kubernetes.io/part-of: ingress-nginx
  268. app.kubernetes.io/version: 1.2.1
  269. name: ingress-nginx
  270. roleRef:
  271. apiGroup: rbac.authorization.k8s.io
  272. kind: ClusterRole
  273. name: ingress-nginx
  274. subjects:
  275. - kind: ServiceAccount
  276. name: ingress-nginx
  277. namespace: pig-dev
  278. ---
  279. apiVersion: rbac.authorization.k8s.io/v1
  280. kind: ClusterRoleBinding
  281. metadata:
  282. labels:
  283. app.kubernetes.io/component: admission-webhook
  284. app.kubernetes.io/instance: ingress-nginx
  285. app.kubernetes.io/name: ingress-nginx
  286. app.kubernetes.io/part-of: ingress-nginx
  287. app.kubernetes.io/version: 1.2.1
  288. name: ingress-nginx-admission
  289. roleRef:
  290. apiGroup: rbac.authorization.k8s.io
  291. kind: ClusterRole
  292. name: ingress-nginx-admission
  293. subjects:
  294. - kind: ServiceAccount
  295. name: ingress-nginx-admission
  296. namespace: pig-dev
  297. ---
  298. apiVersion: v1
  299. data:
  300. allow-snippet-annotations: "true"
  301. kind: ConfigMap
  302. metadata:
  303. labels:
  304. app.kubernetes.io/component: controller
  305. app.kubernetes.io/instance: ingress-nginx
  306. app.kubernetes.io/name: ingress-nginx
  307. app.kubernetes.io/part-of: ingress-nginx
  308. app.kubernetes.io/version: 1.2.1
  309. name: ingress-nginx-controller
  310. namespace: pig-dev
  311. ---
  312. apiVersion: v1
  313. kind: Service
  314. metadata:
  315. labels:
  316. app.kubernetes.io/component: controller
  317. app.kubernetes.io/instance: ingress-nginx
  318. app.kubernetes.io/name: ingress-nginx
  319. app.kubernetes.io/part-of: ingress-nginx
  320. app.kubernetes.io/version: 1.2.1
  321. name: ingress-nginx-controller
  322. namespace: pig-dev
  323. spec:
  324. externalTrafficPolicy: Local
  325. ports:
  326. - appProtocol: http
  327. name: http
  328. port: 80
  329. protocol: TCP
  330. targetPort: http
  331. - appProtocol: https
  332. name: https
  333. port: 443
  334. protocol: TCP
  335. targetPort: https
  336. selector:
  337. app.kubernetes.io/component: controller
  338. app.kubernetes.io/instance: ingress-nginx
  339. app.kubernetes.io/name: ingress-nginx
  340. type: LoadBalancer
  341. ---
  342. apiVersion: v1
  343. kind: Service
  344. metadata:
  345. labels:
  346. app.kubernetes.io/component: controller
  347. app.kubernetes.io/instance: ingress-nginx
  348. app.kubernetes.io/name: ingress-nginx
  349. app.kubernetes.io/part-of: ingress-nginx
  350. app.kubernetes.io/version: 1.2.1
  351. name: ingress-nginx-controller-admission
  352. namespace: pig-dev
  353. spec:
  354. ports:
  355. - appProtocol: https
  356. name: https-webhook
  357. port: 443
  358. targetPort: webhook
  359. selector:
  360. app.kubernetes.io/component: controller
  361. app.kubernetes.io/instance: ingress-nginx
  362. app.kubernetes.io/name: ingress-nginx
  363. type: ClusterIP
  364. ---
  365. apiVersion: apps/v1
  366. #kind: Deployment
  367. kind: DaemonSet
  368. metadata:
  369. labels:
  370. app.kubernetes.io/component: controller
  371. app.kubernetes.io/instance: ingress-nginx
  372. app.kubernetes.io/name: ingress-nginx
  373. app.kubernetes.io/part-of: ingress-nginx
  374. app.kubernetes.io/version: 1.2.1
  375. name: ingress-nginx-controller
  376. namespace: pig-dev
  377. spec:
  378. minReadySeconds: 0
  379. revisionHistoryLimit: 10
  380. selector:
  381. matchLabels:
  382. app.kubernetes.io/component: controller
  383. app.kubernetes.io/instance: ingress-nginx
  384. app.kubernetes.io/name: ingress-nginx
  385. template:
  386. metadata:
  387. labels:
  388. app.kubernetes.io/component: controller
  389. app.kubernetes.io/instance: ingress-nginx
  390. app.kubernetes.io/name: ingress-nginx
  391. spec:
  392. hostNetwork: true
  393. containers:
  394. - args:
  395. - /nginx-ingress-controller
  396. - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
  397. - --election-id=ingress-controller-leader
  398. - --controller-class=k8s.io/ingress-nginx
  399. - --ingress-class=nginx
  400. - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  401. - --validating-webhook=:8443
  402. - --validating-webhook-certificate=/usr/local/certificates/cert
  403. - --validating-webhook-key=/usr/local/certificates/key
  404. env:
  405. - name: POD_NAME
  406. valueFrom:
  407. fieldRef:
  408. fieldPath: metadata.name
  409. - name: POD_NAMESPACE
  410. valueFrom:
  411. fieldRef:
  412. fieldPath: metadata.namespace
  413. - name: LD_PRELOAD
  414. value: /usr/local/lib/libmimalloc.so
  415. image: zhxl1989/ingress-nginx-controller:v1.2.1
  416. imagePullPolicy: IfNotPresent
  417. lifecycle:
  418. preStop:
  419. exec:
  420. command:
  421. - /wait-shutdown
  422. livenessProbe:
  423. failureThreshold: 5
  424. httpGet:
  425. path: /healthz
  426. port: 10254
  427. scheme: HTTP
  428. initialDelaySeconds: 10
  429. periodSeconds: 10
  430. successThreshold: 1
  431. timeoutSeconds: 1
  432. name: controller
  433. ports:
  434. - containerPort: 80
  435. name: http
  436. protocol: TCP
  437. - containerPort: 443
  438. name: https
  439. protocol: TCP
  440. - containerPort: 8443
  441. name: webhook
  442. protocol: TCP
  443. readinessProbe:
  444. failureThreshold: 3
  445. httpGet:
  446. path: /healthz
  447. port: 10254
  448. scheme: HTTP
  449. initialDelaySeconds: 10
  450. periodSeconds: 10
  451. successThreshold: 1
  452. timeoutSeconds: 1
  453. resources:
  454. requests:
  455. cpu: 100m
  456. memory: 90Mi
  457. securityContext:
  458. allowPrivilegeEscalation: true
  459. capabilities:
  460. add:
  461. - NET_BIND_SERVICE
  462. drop:
  463. - ALL
  464. runAsUser: 101
  465. volumeMounts:
  466. - mountPath: /usr/local/certificates/
  467. name: webhook-cert
  468. readOnly: true
  469. dnsPolicy: ClusterFirstWithHostNet
  470. nodeSelector:
  471. kubernetes.io/os: linux
  472. serviceAccountName: ingress-nginx
  473. terminationGracePeriodSeconds: 300
  474. volumes:
  475. - name: webhook-cert
  476. secret:
  477. secretName: ingress-nginx-admission
  478. ---
  479. apiVersion: batch/v1
  480. kind: Job
  481. metadata:
  482. labels:
  483. app.kubernetes.io/component: admission-webhook
  484. app.kubernetes.io/instance: ingress-nginx
  485. app.kubernetes.io/name: ingress-nginx
  486. app.kubernetes.io/part-of: ingress-nginx
  487. app.kubernetes.io/version: 1.2.1
  488. name: ingress-nginx-admission-create
  489. namespace: pig-dev
  490. spec:
  491. template:
  492. metadata:
  493. labels:
  494. app.kubernetes.io/component: admission-webhook
  495. app.kubernetes.io/instance: ingress-nginx
  496. app.kubernetes.io/name: ingress-nginx
  497. app.kubernetes.io/part-of: ingress-nginx
  498. app.kubernetes.io/version: 1.2.1
  499. name: ingress-nginx-admission-create
  500. spec:
  501. containers:
  502. - args:
  503. - create
  504. - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
  505. - --namespace=$(POD_NAMESPACE)
  506. - --secret-name=ingress-nginx-admission
  507. env:
  508. - name: POD_NAMESPACE
  509. valueFrom:
  510. fieldRef:
  511. fieldPath: metadata.namespace
  512. image: zhxl1989/ingress-nginx-kube-webhook-certgen:v1.1.1
  513. imagePullPolicy: IfNotPresent
  514. name: create
  515. securityContext:
  516. allowPrivilegeEscalation: false
  517. nodeSelector:
  518. kubernetes.io/os: linux
  519. restartPolicy: OnFailure
  520. securityContext:
  521. fsGroup: 2000
  522. runAsNonRoot: true
  523. runAsUser: 2000
  524. serviceAccountName: ingress-nginx-admission
  525. ---
  526. apiVersion: batch/v1
  527. kind: Job
  528. metadata:
  529. labels:
  530. app.kubernetes.io/component: admission-webhook
  531. app.kubernetes.io/instance: ingress-nginx
  532. app.kubernetes.io/name: ingress-nginx
  533. app.kubernetes.io/part-of: ingress-nginx
  534. app.kubernetes.io/version: 1.2.1
  535. name: ingress-nginx-admission-patch
  536. namespace: pig-dev
  537. spec:
  538. template:
  539. metadata:
  540. labels:
  541. app.kubernetes.io/component: admission-webhook
  542. app.kubernetes.io/instance: ingress-nginx
  543. app.kubernetes.io/name: ingress-nginx
  544. app.kubernetes.io/part-of: ingress-nginx
  545. app.kubernetes.io/version: 1.2.1
  546. name: ingress-nginx-admission-patch
  547. spec:
  548. containers:
  549. - args:
  550. - patch
  551. - --webhook-name=ingress-nginx-admission
  552. - --namespace=$(POD_NAMESPACE)
  553. - --patch-mutating=false
  554. - --secret-name=ingress-nginx-admission
  555. - --patch-failure-policy=Fail
  556. env:
  557. - name: POD_NAMESPACE
  558. valueFrom:
  559. fieldRef:
  560. fieldPath: metadata.namespace
  561. image: zhxl1989/ingress-nginx-kube-webhook-certgen:v1.1.1
  562. imagePullPolicy: IfNotPresent
  563. name: patch
  564. securityContext:
  565. allowPrivilegeEscalation: false
  566. nodeSelector:
  567. kubernetes.io/os: linux
  568. restartPolicy: OnFailure
  569. securityContext:
  570. fsGroup: 2000
  571. runAsNonRoot: true
  572. runAsUser: 2000
  573. serviceAccountName: ingress-nginx-admission
  574. ---
  575. apiVersion: networking.k8s.io/v1
  576. kind: IngressClass
  577. metadata:
  578. labels:
  579. app.kubernetes.io/component: controller
  580. app.kubernetes.io/instance: ingress-nginx
  581. app.kubernetes.io/name: ingress-nginx
  582. app.kubernetes.io/part-of: ingress-nginx
  583. app.kubernetes.io/version: 1.2.1
  584. name: nginx
  585. spec:
  586. controller: k8s.io/ingress-nginx
  587. ---
  588. apiVersion: admissionregistration.k8s.io/v1
  589. kind: ValidatingWebhookConfiguration
  590. metadata:
  591. labels:
  592. app.kubernetes.io/component: admission-webhook
  593. app.kubernetes.io/instance: ingress-nginx
  594. app.kubernetes.io/name: ingress-nginx
  595. app.kubernetes.io/part-of: ingress-nginx
  596. app.kubernetes.io/version: 1.2.1
  597. name: ingress-nginx-admission
  598. webhooks:
  599. - admissionReviewVersions:
  600. - v1
  601. clientConfig:
  602. service:
  603. name: ingress-nginx-controller-admission
  604. namespace: pig-dev
  605. path: /networking/v1/ingresses
  606. failurePolicy: Fail
  607. matchPolicy: Equivalent
  608. name: validate.nginx.ingress.kubernetes.io
  609. rules:
  610. - apiGroups:
  611. - networking.k8s.io
  612. apiVersions:
  613. - v1
  614. operations:
  615. - CREATE
  616. - UPDATE
  617. resources:
  618. - ingresses
  619. sideEffects: None
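After applying the manifest, confirm that the controller DaemonSet pods are running on the nodes and that the nginx IngressClass exists (standard kubectl, no assumptions beyond the names in the manifest):

kubectl get pods -n pig-dev -l app.kubernetes.io/name=ingress-nginx -o wide
kubectl get ingressclass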

2.9.2 Check the Ingress configuration

kubectl describe ingress/harbor-ingress -n pig-dev

3. Access

3.1 Configure the hosts file on Windows
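Because harbor.liebe.com.cn is not in public DNS, add an entry to C:\Windows\System32\drivers\etc\hosts that points the domain at a node running the ingress controller (the node IP is environment-specific, hence the placeholder):

# replace <node-ip> with the IP of a cluster node running ingress-nginx
<node-ip>  harbor.liebe.com.cn  notary-harbor.liebe.com.cn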

3.2 Access URL

https://harbor.liebe.com.cn/harbor/projects

Username: admin

Password: Harbor12345
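With the hosts entry in place, the registry can also be used from the Docker CLI; a hedged sketch (because the certificate is self-signed, the Docker daemon must either trust tls.crt or list harbor.liebe.com.cn under insecure-registries first):

docker login harbor.liebe.com.cn -u admin -p Harbor12345
docker tag nginx:latest harbor.liebe.com.cn/library/nginx:latest
docker push harbor.liebe.com.cn/library/nginx:latest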

 

 

 


Reposted from: https://blog.csdn.net/TT1024167802/article/details/128085646