Installing Istio with Helm

I. Official documentation references

https://istio.io/docs/setup/kubernetes/#downloading-the-release  # pre-installation preparation
https://istio.io/docs/setup/kubernetes/install/helm/  # official Helm installation guide

II. Preparing for the Istio installation

1. Go to the Istio release page to download the installation file corresponding to your OS. On a macOS or Linux system, you can run the following command to download and extract the release automatically (pinned to 1.2.5 here):
$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.5 sh -
2. Move to the Istio package directory. For example, if the package is istio-1.2.5:
$ cd istio-1.2.5

The installation directory contains:

Installation YAML files for Kubernetes in install/kubernetes
Sample applications in samples/
The istioctl client binary in the bin/ directory. istioctl is used when manually injecting Envoy as a sidecar proxy, as sketched below.
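For reference, manual injection typically looks like the following (deployment.yaml is a placeholder for your own manifest):
$ istioctl kube-inject -f deployment.yaml | kubectl apply -f -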
3. Add the istioctl client to your PATH environment variable; on a macOS or Linux system:
$ export PATH=$PWD/bin:$PATH
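You can then verify that the client is picked up from your PATH (the exact output format varies by release):
$ istioctl version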

III. Installing Istio 1.2.5 with Helm

1. Installing Helm and Tiller is not covered in detail here; many setup guides are available online, and a minimal sketch follows below.
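A minimal sketch of a typical Tiller setup, assuming RBAC is enabled and that granting Tiller cluster-admin is acceptable in your environment:
$ kubectl create serviceaccount tiller -n kube-system
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller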
2. Install the CRDs
$ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
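Once the istio-init job completes, you can verify that the CRDs were committed to the Kubernetes API server (the Istio 1.2 documentation expects 23 with the default options):
$ kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l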
3. Configure the Istio Helm Values.yaml parameters. The values below are only a reference and can be adjusted as needed. My Kubernetes cluster already runs an nginx ingress controller, so the ingress settings are enabled in this configuration; omit them if you do not need them.
For an explanation of the individual options, see https://istio.io/docs/reference/config/installation-options/#mixer-options
global:
  defaultResources:
    requests:
      cpu: 30m
      memory: 50Mi
    limits:
      cpu: 400m
      memory: 600Mi
  proxy:
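    # Only outbound traffic to these CIDRs is redirected through the sidecar;
    # traffic to any other address bypasses the mesh. The ranges below are
    # presumably this cluster's pod/service CIDRs; adjust them to your own.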
    includeIPRanges: 192.168.16.0/20,192.168.32.0/20
    # Whether automatic injection is enabled. If set to enabled, any pod is injected automatically unless it carries the annotation sidecar.istio.io/inject: "false". If set to disabled, a pod is injected only if it carries the annotation sidecar.istio.io/inject: "true" (see the opt-in example after this configuration).
    autoInject: disabled
    resources:
      requests:
        cpu: 30m
        memory: 50Mi
      limits:
        cpu: 400m
        memory: 500Mi
  mtls:
    enabled: false

sidecarInjectorWebhook:
  enabled: true
  # If true, automatic injection is enabled for all namespaces. If false, injection is enabled only for namespaces carrying the istio-injection label.
  enableNamespacesByDefault: false
  rewriteAppHTTPProbe: false

mixer:
  nodeSelector:
    label: test
  policy:
    enabled: false
  telemetry:
    enabled: true
    resources:
      requests:
        cpu: 100m
        memory: 300Mi
      limits:
        cpu: 1000m
        memory: 1024Mi

pilot:
  enabled: true
  nodeSelector:
    label: test
  resources:
    requests:
      cpu: 100m
      memory: 300Mi
    limits:
      cpu: 1000m
      memory: 1024Mi

gateways:
  enabled: true
  istio-ingressgateway:
    enabled: true
    type: NodePort
    nodeSelector:
      label: test
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 1000m
        memory: 1024Mi
  istio-egressgateway:
    enabled: false
    type: NodePort
    nodeSelector:
      label: test
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 1000m
        memory: 256Mi

tracing:
  enabled: true
  provider: jaeger
  jaeger:
    resources:
      limits:
        cpu: 300m
        memory: 900Mi
      requests:
        cpu: 30m
        memory: 100Mi
  zipkin:
    resources:
      limits:
        cpu: 300m
        memory: 900Mi
      requests:
        cpu: 30m
        memory: 100Mi
  nodeSelector:
    label: test
  contextPath: /
  ingress:
    enabled: true
    hosts:
      - tracing1.example.com

kiali:
  enabled: true
  resources:
    limits:
      cpu: 300m
      memory: 900Mi
    requests:
      cpu: 30m
      memory: 50Mi
  hub: kiali
  nodeSelector:
    label: test
  contextPath: /
  ingress:
    enabled: true
    hosts:
      - kiali1.example.com
  dashboard:
    grafanaURL: http://grafana1.example.com:8088
    jaegerURL: http://tracing1.example.com:8088

grafana:
  enabled: true
  persist: true
  storageClassName: grafana-data
  accessMode: ReadWriteMany
  resources:
    requests:
      cpu: 30m
      memory: 50Mi
    limits:
      cpu: 300m
      memory: 500Mi
  security:
    enabled: true
    secretName: grafana
    usernameKey: username
    passphraseKey: passphrase
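    # NOTE: the secret referenced above must already exist in the istio-system
    # namespace before installing the chart. A hypothetical example (the
    # credentials are placeholders, replace them with your own):
    #   kubectl create secret generic grafana -n istio-system \
    #     --from-literal=username=admin \
    #     --from-literal=passphrase='<password>'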
  nodeSelector:
    label: test
  contextPath: /
  ingress:
    enabled: true
    hosts:
      - grafana1.example.com

# enabled by default
prometheus:
  resources:
    requests:
      cpu: 30m
      memory: 50Mi
    limits:
      cpu: 500m
      memory: 1024Mi
  retention: 3d
  nodeSelector:
    label: test
  contextPath: /
  ingress:
    enabled: true
    hosts:
      - prometheus1.example.com

istio_cni:
  enabled: false
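With autoInject set to disabled and enableNamespacesByDefault set to false as above, sidecar injection is strictly opt-in: the target namespace must carry the istio-injection label, and each pod must carry the opt-in annotation. A hypothetical example (default and my-app are placeholders):
$ kubectl label namespace default istio-injection=enabled
$ kubectl patch deployment my-app -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'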
4. Install Istio with Helm
$ helm install ./install/kubernetes/helm/istio --name istio --namespace istio-system -f Values.yaml
5. Check the Istio release
$ helm status istio
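In addition to helm status, confirm that the expected components are up in the istio-system namespace:
$ kubectl get pods -n istio-system
$ kubectl get svc -n istio-system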


