Containerized Jenkins deployment and CI/CD for k8s applications

1. Installing MySQL with Helm:

The previous post covered installing Redis and RabbitMQ with Helm; next, let's look at installing MySQL the same way.

I am not especially familiar with MySQL itself: our company has a clear division of labor, the DBAs own the databases, and we on the ops side only maintain the applications.

As in the previous posts, the first step is to read the chart's documentation: https://github.com/helm/charts/tree/master/stable/mysql
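To work from the chart's default values file, you can pull the chart down locally first (a sketch, following the same helm fetch pattern used for the Jenkins chart later in this post):

helm search mysql
helm fetch stable/mysql --untar --untardir ./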

The values.yaml I put together from that documentation is as follows:

## mysql image version
## ref: https://hub.docker.com/r/library/mysql/tags/
##
image: "k8s.harbor.maimaiti.site/system/mysql"
imageTag: "5.7.14"

busybox:
  image: "k8s.harbor.maimaiti.site/system/busybox"
  tag: "1.29.3"

testFramework:
  image: "k8s.harbor.maimaiti.site/system/bats"
  tag: "0.4.0"

## Specify password for root user
##
## Default: random 10 character string
mysqlRootPassword: admin123

## Create a database user
##
mysqlUser: test
## Default: random 10 character string
mysqlPassword: test123

## Allow unauthenticated access, uncomment to enable
##
# mysqlAllowEmptyPassword: true

## Create a database
##
mysqlDatabase: test

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
##
imagePullPolicy: IfNotPresent

extraVolumes: |
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: |
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraInitContainers: |
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
# imagePullSecrets:
  # - name: myRegistryKeySecretName

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "dynamic"
  accessMode: ReadWriteOnce
  size: 8Gi
  annotations: {}

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m

# Custom mysql configuration files used to override default mysql settings
configurationFiles: {}
#  mysql.cnf: |-
#    [mysqld]
#    skip-name-resolve
#    ssl-ca=/ssl/ca.pem
#    ssl-cert=/ssl/server-cert.pem
#    ssl-key=/ssl/server-key.pem

# Custom mysql init SQL files used to initialize the database
initializationFiles: {}
#  first-db.sql: |-
#    CREATE DATABASE IF NOT EXISTS first DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
#  second-db.sql: |-
#    CREATE DATABASE IF NOT EXISTS second DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

metrics:
  enabled: true
  image: k8s.harbor.maimaiti.site/system/mysqld-exporter
  imageTag: v0.10.0
  imagePullPolicy: IfNotPresent
  resources: {}
  annotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "9104"
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 5
    timeoutSeconds: 1

## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  annotations: {}
  ## Specify a service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: ClusterIP
  port: 3306
  # nodePort: 32000

ssl:
  enabled: false
  secret: mysql-ssl-certs
  certificates:
#  - name: mysql-ssl-certs
#    ca: |-
#      -----BEGIN CERTIFICATE-----
#      ...
#      -----END CERTIFICATE-----
#    cert: |-
#      -----BEGIN CERTIFICATE-----
#      ...
#      -----END CERTIFICATE-----
#    key: |-
#      -----BEGIN RSA PRIVATE KEY-----
#      ...
#      -----END RSA PRIVATE KEY-----

## Populates the 'TZ' system timezone environment variable
## ref: https://dev.mysql.com/doc/refman/5.7/en/time-zone-support.html
##
## Default: nil (mysql will use image's default timezone, normally UTC)
## Example: 'Australia/Sydney'
# timezone:

# To be added to the database server pod(s)
podAnnotations: {}

podLabels: {}

## Set pod priorityClassName
# priorityClassName: {}

The main changes are:

  • pointed all the images at the private registry;
  • set the MySQL root password and an ordinary user account and password, and created a test database;
  • enabled persistent storage (see the check after this list).
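The persistence block sets storageClass: "dynamic", which assumes a StorageClass of that name already exists in the cluster; a quick pre-flight check (a sketch):

kubectl get storageclass dynamic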

Values can also be overridden on the command line at install time with --set (see the chart documentation for the full list of parameters), for example:

helm install --values=mysql.yaml --set mysqlRootPassword=abc123 --name r1 stable/mysql

After installation, check how to connect to MySQL; there is a my-mysql Service that applications can call directly:

[root@master mysql]# helm status my-mysql
LAST DEPLOYED: Thu Apr 25 15:08:27 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
my-mysql-5fd54bd9cb-948td  2/2    Running  3         6d22h

==> v1/Secret
NAME      TYPE    DATA  AGE
my-mysql  Opaque  2     6d22h

==> v1/ConfigMap
NAME           DATA  AGE
my-mysql-test  1     6d22h

==> v1/PersistentVolumeClaim
NAME      STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
my-mysql  Bound   pvc-ed8a9252-6728-11e9-8b25-480fcf659569  8Gi       RWO           dynamic       6d22h

==> v1/Service
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
my-mysql  ClusterIP  10.200.200.169  <none>       3306/TCP,9104/TCP  6d22h

==> v1beta1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
my-mysql  1        1        1           1          6d22h

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
my-mysql.kube-system.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace kube-system my-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h my-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/my-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

[root@master mysql]#

2. Monitoring the shared components with Prometheus:

2.1 Add scrape jobs to the Prometheus ConfigMap:

- job_name: "mysql"
  static_configs:
  - targets: [‘my-mysql:9104‘]
- job_name: "redis"
  static_configs:
  - targets: [‘my-redis-redis-ha:9121‘]

Then recreate the ConfigMap and hot-reload the Prometheus configuration:

kubectl replace -f configmap.yaml --force
curl -X POST "http://10.109.108.37:9090/-/reload"
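Once reloaded, the two new scrape jobs can be confirmed through the Prometheus HTTP API (a sketch; the address and port are the ones used above):

curl -s http://10.109.108.37:9090/api/v1/targets | grep -oE '"job":"(mysql|redis)"'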

3. Installing Jenkins with Helm

First, search for the Jenkins chart:

[root@master k8sdemo2]# helm search jenkins
NAME            CHART VERSION   APP VERSION DESCRIPTION
stable/jenkins  1.1.10          lts         Open source continuous integration server. It supports mu...
[root@master k8sdemo2]#

Then download and unpack it:

helm fetch stable/jenkins --untar --untardir ./

Running Jenkins in a Kubernetes cluster works roughly as follows:

  • The jenkins master runs as a pod backed by persistent storage, so the plugins and the job configuration made in the UI survive pod restarts and rescheduling;
  • jenkins slaves also run as pods: a slave pod is started for each job and destroyed automatically once the job finishes, which saves resources;
  • Since Jenkins is first of all a CI/CD tool, a typical job chains: pull the code from GitLab ---> compile and package with Maven ---> run Sonar code analysis (optional) ---> docker build the image ---> docker push it to the private registry --->
    deploy the application to the k8s cluster with kubectl (condensed into plain commands in the sketch after this list);
  • Releasing a k8s application generally requires a Dockerfile, k8s YAML manifests, and a Jenkins pipeline file;
  • The Dockerfile builds the image: it ADDs the war or jar produced by the Maven build and defines the image's start command, environment variables (such as JAVA_HOME), and so on;
  • The YAML manifests define how the application is deployed on k8s: how many replicas, what resource limits, whether the post-start health check is an HTTP curl or a TCP port probe, whether storage is mounted, and so on. Besides the Deployment,
    you usually also need a Service manifest so other applications can reach this one by service name; an Ingress if it must be reachable from outside; often a ConfigMap; and Secrets for the database and MQ credentials the application depends on.
    Given all of that, anyone comfortable with Helm is best served by having Jenkins drive the deployment through helm commands;
  • The pipeline file holds the Jenkins job definition. Pipeline mode is the recommended way to use Jenkins today, not least because it makes migrating Jenkins much easier. It defines the steps described above (packaging, building, image creation,
    k8s deployment), and every step executes inside the jenkins slave pod, which makes building the slave image the critical piece;
  • The jenkins slave image must contain the java, mvn, docker, and kubectl commands, and the pod must mount the docker.sock file, a kubectl config file, and so on.
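Condensed into plain commands, one slave-pod build boils down to something like this (a sketch only; the repository URL, image name, and manifest path are placeholders, and the real pipeline script appears later in this post):

git clone http://k8s.gitlab.example.site/root/demo.git && cd demo
mvn clean package
docker build -t k8s.harbor.example.site/demo/app:${BUILD_ID} .
docker push k8s.harbor.example.site/demo/app:${BUILD_ID}
kubectl apply -f k8s/deployment.yaml --record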

Next, edit values.yaml as described in the chart documentation: https://github.com/helm/charts/tree/master/stable/jenkins

# Default values for jenkins.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value

## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

master:
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-master"
  image: "k8s.harbor.maimaiti.site/system/jenkins"
  imageTag: "lts"
  imagePullPolicy: "Always"
  imagePullSecretName:
  # Optionally configure lifetime for master-container
  lifecycle:
  #  postStart:
  #    exec:
  #      command:
  #      - "uname"
  #      - "-a"
  numExecutors: 0
  # configAutoReload requires UseSecurity is set to true:
  useSecurity: true
  # Allows to configure different SecurityRealm using Jenkins XML
  securityRealm: |-
    <securityRealm class="hudson.security.LegacySecurityRealm"/>
  # Allows to configure different AuthorizationStrategy using Jenkins XML
  authorizationStrategy: |-
     <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
       <denyAnonymousReadAccess>true</denyAnonymousReadAccess>
     </authorizationStrategy>
  hostNetworking: false
  # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
  # Since the AdminUser is used by configAutoReload, in order to use configAutoReload you must change the
  # .master.adminUser to a valid username on your LDAP (or other) server.  This user does not need
  # to have administrator rights in Jenkins (the default Overall:Read is sufficient) nor will it be granted any
  # additional rights.  Failure to do this will cause the sidecar container to fail to authenticate via SSH and enter
  # a restart loop.  Likewise if you disable the non-Jenkins identity store and instead use the Jenkins internal one,
  # you should revert master.adminUser to your preferred admin user:
  adminUser: "admin"
  adminPassword: [email protected]
  # adminSshKey: <defaults to auto-generated>
  # If CasC auto-reload is enabled, an SSH (RSA) keypair is needed.  Can either provide your own, or leave unconfigured to allow a random key to be auto-generated.
  # If you supply your own, it is recommended that the values file that contains your key not be committed to source control in an unencrypted format
  rollingUpdate: {}
  # Ignored if Persistence is enabled
  # maxSurge: 1
  # maxUnavailable: 25%
  resources:
    requests:
      cpu: "2000m"
      memory: "2048Mi"
    limits:
      cpu: "2000m"
      memory: "4096Mi"
  # Environment variables that get added to the init container (useful for e.g. http_proxy)
  # initContainerEnv:
  #   - name: http_proxy
  #     value: "http://192.168.64.1:3128"
  # containerEnv:
  #   - name: http_proxy
  #     value: "http://192.168.64.1:3128"
  # Set min/max heap here if needed with:
  # javaOpts: "-Xms512m -Xmx512m"
  # jenkinsOpts: ""
  # jenkinsUrl: ""
  # If you set this prefix and use ingress controller then you might want to set the ingress path below
  # jenkinsUriPrefix: "/jenkins"
  # Enable pod security context (must be `true` if runAsUser or fsGroup are set)
  usePodSecurityContext: true
  # Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
  # When setting runAsUser to a different value than 0 also set fsGroup to the same value:
  # runAsUser: <defaults to 0>
  # fsGroup: <will be omitted in deployment if runAsUser is 0>
  servicePort: 8080
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # Use ClusterIP if your setup includes ingress controller
  serviceType: LoadBalancer
  # Jenkins master service annotations
  serviceAnnotations: {}
  # Jenkins master custom labels
  deploymentLabels: {}
  #   foo: bar
  #   bar: foo
  # Jenkins master service labels
  serviceLabels: {}
  #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
  # Put labels on Jenkins master pod
  podLabels: {}
  # Used to create Ingress record (should be used with ServiceType: ClusterIP)
  # hostName: jenkins.cluster.local
  # nodePort: <to set explicitly, choose port between 30000-32767>
  # Enable Kubernetes Liveness and Readiness Probes
  # ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout.
  healthProbes: true
  healthProbesLivenessTimeout: 90
  healthProbesReadinessTimeout: 60
  healthProbeReadinessPeriodSeconds: 10
  healthProbeLivenessFailureThreshold: 12
  slaveListenerPort: 50000
  slaveHostPort:
  disabledAgentProtocols:
    - JNLP-connect
    - JNLP2-connect
  csrf:
    defaultCrumbIssuer:
      enabled: true
      proxyCompatability: true
  cli: false
  # Kubernetes service type for the JNLP slave service
  # slaveListenerServiceType is the Kubernetes Service type for the JNLP slave service,
  # either 'LoadBalancer', 'NodePort', or 'ClusterIP'
  # Note if you set this to ‘LoadBalancer‘, you *must* define annotations to secure it. By default
  # this will be an external load balancer and allowing inbound 0.0.0.0/0, a HUGE
  # security risk:  https://github.com/kubernetes/charts/issues/1341
  slaveListenerServiceType: "ClusterIP"
  slaveListenerServiceAnnotations: {}
  slaveKubernetesNamespace:

  # Example of 'LoadBalancer' type of slave listener with annotations securing it
  # slaveListenerServiceType: LoadBalancer
  # slaveListenerServiceAnnotations:
  #   service.beta.kubernetes.io/aws-load-balancer-internal: "True"
  #   service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"

  # LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
  # set allowed inbound rules on the security group assigned to the master load balancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  # Optionally assign a known public LB IP
  # loadBalancerIP: 1.2.3.4
  # Optionally configure a JMX port
  # requires additional javaOpts, ie
  # javaOpts: >
  #   -Dcom.sun.management.jmxremote.port=4000
  #   -Dcom.sun.management.jmxremote.authenticate=false
  #   -Dcom.sun.management.jmxremote.ssl=false
  # jmxPort: 4000
  # Optionally configure other ports to expose in the master container
  extraPorts:
  # - name: BuildInfoProxy
  #   port: 9000

  # List of plugins to be installed during Jenkins master start
  installPlugins:
    - kubernetes:1.14.0
    - workflow-job:2.31
    - workflow-aggregator:2.6
    - credentials-binding:1.17
    - git:3.9.1

  # Enable to always override the installed plugins with the values of 'master.installPlugins' on upgrade or redeployment.
  # overwritePlugins: true
  # Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
  # The plugin is not installed by default, please update master.installPlugins.
  enableRawHtmlMarkupFormatter: false
  # Used to approve a list of groovy functions in pipelines using the script-security plugin. Can be viewed under /scriptApproval
  scriptApproval:
  #  - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
  #  - "new groovy.json.JsonSlurperClassic"
  # List of groovy init scripts to be executed during Jenkins master start
  initScripts:
  #  - |
  #    print 'adding global pipeline libraries, register properties, bootstrap jobs...'
  # Kubernetes secret that contains a 'credentials.xml' for Jenkins
  # credentialsXmlSecret: jenkins-credentials
  # Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
  # useful to manage encryption keys used for credentials.xml for instance (such as
  # master.key and hudson.util.Secret)
  # secretsFilesSecret: jenkins-secrets
  # Jenkins XML job configs to provision
  jobs:
  #  test: |-
  #    <<xml here>>

  # Below is the implementation of Jenkins Configuration as Code.  Add a key under configScripts for each configuration area,
  # where each corresponds to a plugin or section of the UI.  Each key (prior to | character) is just a label, and can be any value.
  # Keys are only used to give the section a meaningful name.  The only restriction is they may only contain RFC 1123 \ DNS label
  # characters: lowercase letters, numbers, and hyphens.  The keys become the name of a configuration yaml file on the master in
  # /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin.  The lines after each |
  # become the content of the configuration yaml file.  The first line after this is a JCasC root element, eg jenkins, credentials,
  # etc.  Best reference is https://<jenkins_url>/configuration-as-code/reference.  The example below creates a welcome message:
  JCasC:
    enabled: false
    pluginVersion: 1.5
    supportPluginVersion: 1.5
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to our CI\CD server.  This Jenkins is configured and managed 'as code'.

  # Optionally specify additional init-containers
  customInitContainers: []
  #   - name: CustomInit
  #     image: "alpine:3.7"
  #     imagePullPolicy: Always
  #     command: [ "uname", "-a" ]

  sidecars:
    configAutoReload:
      # If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot.  If false or not-specified,
      # jcasc changes will cause a reboot and will only be applied at the subsequent start-up.  Auto-reload uses the Jenkins CLI
      # over SSH to reapply config when changes to the configScripts are detected.  The admin user (or account you specify in
      # master.adminUser) will have a random SSH private key (RSA 4096) assigned unless you specify adminSshKey.  This will be saved to a k8s secret.
      enabled: false
      image: shadwell/k8s-sidecar:0.0.2
      imagePullPolicy: IfNotPresent
      resources:
        #   limits:
        #     cpu: 100m
        #     memory: 100Mi
        #   requests:
        #     cpu: 50m
        #     memory: 50Mi
      # SSH port value can be set to any unused TCP port.  The default, 1044, is a non-standard SSH port that has been chosen at random.
      # Is only used to reload jcasc config from the sidecar container running in the Jenkins master pod.
      # This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
      # accessible via SSH from outside of the pod.  Note if you use non-root pod privileges (runAsUser & fsGroup),
      # this must be > 1024:
      sshTcpPort: 1044
      # folder in the pod that should hold the collected dashboards:
      folder: "/var/jenkins_home/casc_configs"
      # If specified, the sidecar will search for JCasC config-maps inside this namespace.
      # Otherwise the namespace in which the sidecar is running will be used.
      # It's also possible to specify ALL to search in all namespaces:
      # searchNamespace:

    # Allows you to inject additional/other sidecars
    other:
    ## The example below runs the client for https://smee.io as sidecar container next to Jenkins,
    ## that allows to trigger build behind a secure firewall.
    ## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
    ##
    ## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
    # - name: smee
    #   image: docker.io/twalter/smee-client:1.0.2
    #   args: ["--port", "{{ .Values.master.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
    #   resources:
    #     limits:
    #       cpu: 50m
    #       memory: 128Mi
    #     requests:
    #       cpu: 10m
    #       memory: 32Mi
  # Node labels and tolerations for pod assignment
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  nodeSelector: {}
  tolerations: []
  # Leverage a priorityClass to ensure your pods survive resource shortages
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  # priorityClass: system-cluster-critical
  podAnnotations: {}

  # The below two configuration-related values are deprecated and replaced by Jenkins Configuration as Code (see above
  # JCasC key).  They will be deleted in an upcoming version.
  customConfigMap: false
  # By default, the configMap is only used to set the initial config the first time
  # that the chart is installed.  Setting `overwriteConfig` to `true` will overwrite
  # the jenkins config with the contents of the configMap every time the pod starts.
  # This will also overwrite all init scripts
  overwriteConfig: false

  # By default, the Jobs Map is only used to set the initial jobs the first time
  # that the chart is installed.  Setting `overwriteJobs` to `true` will overwrite
  # the jenkins jobs configuration with the contents of Jobs every time the pod starts.
  overwriteJobs: false

  ingress:
    enabled: true
    # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations:
      kubernetes.io/ingress.class: traefik
    # kubernetes.io/tls-acme: "true"
    # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
    # path: "/jenkins"
    hostName: k8s.jenkins.maimaiti.site
    tls:
    # - secretName: jenkins.cluster.local
    #   hosts:
    #     - jenkins.cluster.local

  # Openshift route
  route:
    enabled: false
    labels: {}
    annotations: {}
    # path: "/jenkins"

  additionalConfig: {}

  # master.hostAliases allows for adding entries to Pod /etc/hosts:
  # https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  hostAliases: []
  # - ip: 192.168.50.50
  #   hostnames:
  #     - something.local
  # - ip: 10.0.50.50
  #   hostnames:
  #     - other.local

agent:
  enabled: true
  image: "10.83.74.102/jenkins/jnlp"
  imageTag: "v11"
  customJenkinsLabels: []
  # name of the secret to be used for image pulling
  imagePullSecretName:
  componentName: "jenkins-slave"
  privileged: false
  resources:
    requests:
      cpu: "2000m"
      memory: "4096Mi"
    limits:
      cpu: "2000m"
      memory: "4096Mi"
  # You may want to change this to true while testing a new image
  alwaysPullImage: false
  # Controls how slave pods are retained after the Jenkins build completes
  # Possible values: Always, Never, OnFailure
  podRetention: "Never"
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
  # Pod-wide environment, these vars are visible to any container in the slave pod
  envVars:
  # - name: PATH
  #   value: /usr/local/bin
  volumes:
    - type: HostPath
      hostPath: /var/run/docker.sock
      mountPath: /var/run/docker.sock
    - type: HostPath
      hostPath: /root/.kube
      mountPath: /root/.kube
    - type: Nfs
      mountPath: /root/.m2
      serverAddress: 10.83.32.224
      serverPath: /data/m2
  # - type: Secret
  #   secretName: mysecret
  #   mountPath: /var/myapp/mysecret
  nodeSelector: {}
  # Key Value selectors. Ex:
  # jenkins-agent: v1

  # Executed command when side container gets started
  command:
  args:
  # Side container name
  sideContainerName: "jnlp"
  # Doesn't allocate pseudo TTY by default
  TTYEnabled: false
  # Max number of spawned agent
  containerCap: 10
  # Pod name
  podName: "jenkins-slave"

persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  existingClaim:
  ## jenkins data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "dynamic"
  annotations: {}
  accessMode: "ReadWriteOnce"
  size: "8Gi"
  volumes:
  #  - name: nothing
  #    emptyDir: {}
  mounts:
  #  - mountPath: /var/nothing
  #    name: nothing
  #    readOnly: true

networkPolicy:
  # Enable creation of NetworkPolicy resources.
  enabled: false
  # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
  # For Kubernetes v1.7, use 'networking.k8s.io/v1'
  apiVersion: networking.k8s.io/v1

## Install Default RBAC roles and bindings
rbac:
  create: true

serviceAccount:
  create: true
  # The name of the service account is autogenerated by default
  name:
  annotations: {}

## Backup cronjob configuration
## Ref: https://github.com/nuvo/kube-tasks
backup:
  # Backup must use RBAC
  # So by enabling backup you are enabling RBAC specific for backup
  enabled: false
  # Used for label app.kubernetes.io/component
  componentName: "backup"
  # Schedule to run jobs. Must be in cron time format
  # Ref: https://crontab.guru/
  schedule: "0 2 * * *"
  annotations:
    # Example for authorization to AWS S3 using kube2iam
    # Can also be done using environment variables
    iam.amazonaws.com/role: "jenkins"
  image:
    repository: "nuvo/kube-tasks"
    tag: "0.1.2"
  # Additional arguments for kube-tasks
  # Ref: https://github.com/nuvo/kube-tasks#simple-backup
  extraArgs: []
  # Add additional environment variables
  env:
  # Example environment variable required for AWS credentials chain
  - name: "AWS_REGION"
    value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage
  # Additional support can be added. Visit this repository for details
  # Ref: https://github.com/nuvo/skbn
  destination: "s3://nuvo-jenkins-data/backup"
checkDeprecation: true

I mainly changed the following in values.yaml:

  • Pointed all the images at the private registry. One point deserves special attention: the jenkins slave image. The official image alone
    is not enough; you have to build your own, containing the kubectl, docker, and mvn commands and so on;
  • Set the Jenkins login password;
  • Adjusted the resource limits. Note that the slave's default limits are too small, and slave pods are regularly killed for lack of resources, so raise them;
  • Configured the Ingress, since Jenkins must be reachable from outside the k8s cluster;
  • Most importantly, the agent section: use the self-built image, raise its resource quota, mount docker.sock and the kubectl config file, and put
    the slave's .m2 directory on NFS storage, so that a freshly started slave does not have to download every dependency jar from the central repositories on each build;
  • Enabled persistent storage, which holds the plugins and the job configuration made in the UI.
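With values.yaml edited, the install itself is a single command (a sketch; the release name and namespace match the ones used in the pitfalls below). If adminPassword were left unset, the chart would store a generated one in a Secret, retrievable roughly like this (the secret name is assumed to match the release name):

helm install --name jenkins --namespace kube-system -f jenkins/values.yaml ./jenkins
kubectl get secret --namespace kube-system jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode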

While deploying Jenkins to the k8s cluster I fell into a few pits; here they are, summarized for reference:

  1. While testing Jenkins I deleted a chart. Out of long habit I used

    helm delete jenkins --purge

    to remove the release, and then reinstalled with

    helm install --name jenkins --namespace=kube-system ./jenkins

    When I logged in to Jenkins again, every bit of configuration was gone, including the jobs I had configured and the plugins I had installed. So be careful with --purge on helm delete: with --purge, the PVC is deleted along with the release;
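    Before purging a release, it is worth checking which PVCs it owns, e.g. (a sketch; the exact label selector may differ by chart version):

    kubectl get pvc -n kube-system -l app=jenkins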

  2. With the jenkins slave image I built, every docker command run inside the slave pod kept failing like this:
+ docker version
Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:04:39 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/version: dial unix /var/run/docker.sock: connect: permission denied
Build step 'Execute shell' marked build as failure
Finished: FAILURE

The cause is that the official jenkins slave image runs as the jenkins user (uid 10010), while the /var/run/docker.sock file we mount into
the slave is only readable by root; the ownership check after the options below makes this visible. There are three ways to solve it:

  1. chmod -R 777 /var/run/docker.sock on every k8s host before mounting it into the pod; insecure, so not recommended;
  2. let the jenkins user inside the slave image run root-level commands via sudo:
    jenkins ALL=(ALL)   NOPASSWD: ALL
  3. simply have the slave image start as root, i.e. USER root;
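Here is the ownership check mentioned above (a sketch; on a typical Docker host the socket belongs to root:docker with mode 660, so uid 10010 cannot open it):

ls -l /var/run/docker.sock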

I went with the third option. The second is actually the safest, but my pipeline calls several docker plugin steps (docker build, docker login, and so on) and I do not know how to route the plugins through sudo. A custom jenkins slave Dockerfile can look like this:

FROM 10.83.74.102/jenkins/jnlp:v2
MAINTAINER Yang Gao "[email protected]"
USER root
ADD jdk /root/
ADD maven /root/maven
ENV JAVA_HOME /root/jdk/
ENV MAVEN_HOME /root/maven/
ENV PATH $PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin
RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main"           > /etc/apt/sources.list.d/docker.list       && apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80           --recv-keys 58118E89F3A912897C070ADBF76221572C52609D       && apt-get update       && apt-get install -y apt-transport-https       && apt-get install -y sudo       && apt-get install -y docker-engine       && rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose; chmod +x /usr/local/bin/docker-compose

Then build and push the image:

docker build -t harbor.k8s.maimaiti.site/system/jenkins-jnlp:v11 .
docker push harbor.k8s.maimaiti.site/system/jenkins-jnlp:v11
  3. The jenkins slave pod template can be configured in two places: in the chart's values.yaml (slave image version, mounts, and so on), and in
    the cloud section that the kubernetes plugin adds to the Jenkins UI. The catch: once installed, changing the slave parameters (including the image
    version) via helm upgrade does not take effect, because Jenkins keeps using the configuration held in the management UI. On the first login after
    helm install, the kubernetes plugin is already installed, and Manage Jenkins ---> Configure System already contains a cloud entry whose slave pod
    template matches values.yaml; later helm upgrade runs cannot update that UI-held copy, which is why slave changes appear to be silently ignored.


Having covered the pits I explored running Jenkins in Kubernetes, let's look at what Jenkins configuration a real k8s application release needs. The example below is one of our internal projects
with some details redacted; focus on the flow and the architecture, since the specifics will differ from company to company:

  1. Log in to Jenkins and install plugins: Manage Jenkins ---> Manage Plugins ---> Available ---> install the docker and kubernetes plugins, etc.
  2. Configure credentials, including the GitLab and Harbor accounts and passwords, which the pipeline references later.

  3. Create a pipeline job, configuring:
  • a parameter for the module(s) to release;
  • a parameter for the branch name to build;
  • the pipeline script itself.




The pipeline script is as follows:

node {
    try {
        stage('Checkout') {
            git branch: "${BranchName}", credentialsId: 'k8sgitlab', url: 'http://k8s.gitlab.test.site/root/test.git'
        }
        stage('Build') {
            if ("${MODULE}".contains('test-ui')) {
                dir('test-ui') {
                    sh "npm i"
                    sh "npm run sit"
                }
            } else {
                dir('test-parent') {
                    sh "mvn clean install -Psit"
                }
            }
        }
        def regPrefix = 'k8s.harbor.test.site/test/'
        stage('Build Image') {
            docker.withRegistry('http://k8s.harbor.test.site/', 'k8sharbor') {
                if ("${MODULE}".contains('test-admin')) {
                    dir('test-parent/test-admin/target') {
                        sh "cp ../Dockerfile . && cp -rf ../BOOT-INF ./ && cp -rf ../../pinpoint-agent ./"
                        sh "jar -uvf admin.jar BOOT-INF/classes/application.yml"
                        def imageName = docker.build("${regPrefix}admin:V1.0-${env.BUILD_ID}")
                        imageName.push("V1.0-${env.BUILD_ID}")
                        //imageName.push("latest")
                        sh "/usr/bin/docker rmi ${regPrefix}admin:V1.0-${env.BUILD_ID}"
                    }
                }
                if ("${MODULE}".contains('test-eureka')) {
                    dir('test-parent/test-eureka/target') {
                        sh "cp ../Dockerfile . && cp -rf ../BOOT-INF ./ && cp -rf ../../pinpoint-agent ./"
                        sh "jar -uvf testEurekaServer.jar BOOT-INF/classes/application.yml"
                        def imageName = docker.build("${regPrefix}eureka:V1.0-${env.BUILD_ID}")
                        imageName.push("V1.0-${env.BUILD_ID}")
                        //imageName.push("latest")
                        sh "/usr/bin/docker rmi ${regPrefix}eureka:V1.0-${env.BUILD_ID}"
                    }
                }
                if ("${MODULE}".contains('test-quality')) {
                    dir('test-parent/test-quality/target') {
                        sh "cp ../Dockerfile . && cp -rf ../BOOT-INF ./ && cp -rf ../../pinpoint-agent ./"
                        sh "jar -uvf quality.jar BOOT-INF/classes/application.yml"
                        def imageName = docker.build("${regPrefix}quality:V1.0-${env.BUILD_ID}")
                        imageName.push("V1.0-${env.BUILD_ID}")
                        //imageName.push("latest")
                        sh "/usr/bin/docker rmi ${regPrefix}quality:V1.0-${env.BUILD_ID}"
                    }
                }
                if ("${MODULE}".contains('test-schedule')) {
                    dir('test-parent/test-schedule/target') {
                        sh "cp ../Dockerfile . && cp -rf ../BOOT-INF ./ && cp -rf ../../pinpoint-agent ./"
                        sh "jar -uvf schedule.jar BOOT-INF/classes/application.yml"
                        def imageName = docker.build("${regPrefix}schedule:V1.0-${env.BUILD_ID}")
                        imageName.push("V1.0-${env.BUILD_ID}")
                        //imageName.push("latest")
                        sh "/usr/bin/docker rmi ${regPrefix}schedule:V1.0-${env.BUILD_ID}"
                    }
                }
                if ("${MODULE}".contains('test-zuul')) {
                    dir('test-parent/test-zuul/target') {
                        sh "cp ../Dockerfile . && cp -rf ../BOOT-INF ./ && cp -rf ../../pinpoint-agent ./"
                        sh "jar -uvf test-api.jar BOOT-INF/classes/application.yml"
                        def imageName = docker.build("${regPrefix}zuul:V1.0-${env.BUILD_ID}")
                        imageName.push("V1.0-${env.BUILD_ID}")
                        //imageName.push("latest")
                        sh "/usr/bin/docker rmi ${regPrefix}zuul:V1.0-${env.BUILD_ID}"
                    }
                }
            }
        }
        stage('Deploy') {
            if ("${MODULE}".contains('test-admin')) {
                sh "sed -i 's/latest/V1.0-${env.BUILD_ID}/g' test-parent/DockerCompose/test-admin.yml"
                sh "/usr/local/bin/kubectl --kubeconfig=test-parent/DockerCompose/config apply -f test-parent/DockerCompose/test-admin.yml --record"
            }
            if ("${MODULE}".contains('test-eureka')) {
                sh "sed -i 's/latest/V1.0-${env.BUILD_ID}/g' test-parent/DockerCompose/test-eureka.yml"
                sh "/usr/local/bin/kubectl apply -f test-parent/DockerCompose/test-eureka.yml --record"
            }
            if ("${MODULE}".contains('test-quality')) {
                sh "sed -i 's/latest/V1.0-${env.BUILD_ID}/g' test-parent/DockerCompose/test-quality.yml"
                sh "/usr/local/bin/kubectl apply -f test-parent/DockerCompose/test-quality.yml --record"
            }
            if ("${MODULE}".contains('test-schedule')) {
                sh "sed -i 's/latest/V1.0-${env.BUILD_ID}/g' test-parent/DockerCompose/test-schedule.yml"
                sh "/usr/local/bin/kubectl apply -f test-parent/DockerCompose/test-schedule.yml --record"
            }
            if ("${MODULE}".contains('test-zuul')) {
                sh "sed -i 's/latest/V1.0-${env.BUILD_ID}/g' test-parent/DockerCompose/test-zuul.yml"
                sh "/usr/local/bin/kubectl apply -f test-parent/DockerCompose/test-zuul.yml --record"
            }
        }
    } catch (any) {
        currentBuild.result = 'FAILURE'
        throw any
    }
}
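The deploy stage only fires kubectl apply and moves on; if you want the build to fail when the new pods never become ready, a rollout check can follow each apply (a sketch, not part of the pipeline above; the deployment name and namespace come from the manifests below):

kubectl rollout status deployment/test-admin -n sit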

The application Dockerfile mainly looks like this:

FROM 10.10.10.10/library/java:8-jdk-alpine
ENV TZ=Asia/Shanghai
VOLUME /tmp

ADD admin.jar /app/test/admin.jar
ADD skywalking-agent/ /app/test/skywalking-agent
ENTRYPOINT ["java","-javaagent:/app/test/skywalking-agent/skywalking-agent.jar","-Djava.security.egd=file:/dev/./urandom","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-jar","/app/test/admin.jar"]
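Before wiring the image into the pipeline, it can be smoke-tested locally (a sketch; the tag is arbitrary, and the port and health path are the ones declared in the manifests below):

docker build -t k8s.harbor.test.site/test/admin:smoke .
docker run -d --rm -p 8281:8281 k8s.harbor.test.site/test/admin:smoke
curl http://localhost:8281/admin/health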

The k8s YAML manifests mainly look like this:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-admin
  namespace: sit
  labels:
    k8s-app: test-admin

spec:
  replicas: 3
  revisionHistoryLimit: 3
  # during a rolling update, consider a pod ready 70s after it starts
  minReadySeconds: 70
  strategy:
    ## with replicas: 3, the pod count stays between 2 and 4 during an upgrade
    rollingUpdate:
      # at most 1 extra pod is started during the rolling update
      maxSurge: 1
      # maximum number of pods allowed to be unavailable during the update
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: test-admin
  template:
    metadata:
      labels:
        k8s-app: test-admin
    spec:
      containers:
      - name: test-admin
        image: k8s.harbor.test.site/test/admin:latest
        resources:
          # need more cpu upon initialization, therefore burstable class
          #limits:
          #  memory: 1024Mi
          #  cpu:  200m
          #requests:
          #  cpu: 100m
          #  memory:  256Mi
        ports:
        # container port
        - containerPort: 8281
          #name: ui
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /admin/health
            port: 8281
            scheme: HTTP
          initialDelaySeconds: 180
          timeoutSeconds: 5
          periodSeconds: 15
          successThreshold:  1
          failureThreshold:  2
        #volumeMounts:
        #- mountPath: "/download"
        #  name: data
      #volumes:
      #- name: data
      #  persistentVolumeClaim:
      #    claimName: download-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: test-admin
  namespace: sit
  labels:
    k8s-app: test-admin

spec:
  type: NodePort
  ports:
  # port on the cluster IP
  - port: 8281
    protocol: TCP
    # container port
    targetPort: 8281
    #nodePort: 28281
  selector:
    k8s-app: test-admin

If this was useful, consider following my personal WeChat public account "云时代IT运维", which periodically publishes application-ops documentation and tracks virtualization and container technology, CI/CD, automated operations, and other frontline ops trends.

Original post: https://blog.51cto.com/zgui2000/2388326
