Kubernetes Series, Part 2: Kubernetes Architecture Design and Deployment

1. Architecture and Environment Design

1.1. Architecture Design

  1. Deploy HAProxy to provide the endpoint access entry for Kubernetes
  2. Use Keepalived to bind the endpoint address to a virtual IP, with redundancy provided by deploying multiple nodes
  3. Use kubeadm to deploy a highly available Kubernetes cluster, with the endpoint IP set to the Keepalived virtual IP
  4. Use Prometheus as the cluster monitoring system, Grafana as the dashboard and visualization system, and Alertmanager as the alerting system
  5. Build a CI/CD system with Jenkins + GitLab + Harbor
  6. Use a dedicated domain for communication inside the Kubernetes cluster, with a DNS server on the internal network to resolve it

1.2. Environment Design

Hostname IP Role
kube-master-01.sk8s.io 192.168.0.201 k8s master, haproxy + keepalived (virtual IP: 192.168.0.254)
kube-master-02.sk8s.io 192.168.0.202 k8s master, haproxy + keepalived (virtual IP: 192.168.0.254)
kube-master-03.sk8s.io 192.168.0.203 k8s master, DNS, Storage, GitLab, Harbor
kube-node-01.sk8s.io 192.168.0.204 node
kube-node-02.sk8s.io 192.168.0.205 node

2. Operating System Initialization

2.1. Disable SELinux

[root@kube-master-01 ~]# setenforce 0
[root@kube-master-01 ~]# sed -i 's#^SELINUX=.*#SELINUX=disabled#' /etc/sysconfig/selinux
[root@kube-master-01 ~]# sed -i 's#^SELINUX=.*#SELINUX=disabled#' /etc/selinux/config
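
# Sanity check (an addition to the original steps): setenforce 0 takes effect
# immediately, so getenforce should now report Permissive (Disabled after reboot)
[root@kube-master-01 ~]# getenforce
Permissive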

2.2. Disable Unneeded Services

[root@kube-master-01 ~]# systemctl disable firewalld postfix auditd kdump NetworkManager
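
# Note (not in the original post): systemctl disable only affects the next boot;
# to stop the running services in the current session as well:
[root@kube-master-01 ~]# systemctl stop firewalld postfix auditd kdump NetworkManager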

2.3. Upgrade the System Kernel

[root@kube-master-01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@kube-master-01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@kube-master-01 ~]# yum -y --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-devel.x86_64 kernel-lt-headers.x86_64
[root@kube-master-01 ~]# yum -y remove kernel-tools-libs.x86_64 kernel-tools.x86_64
[root@kube-master-01 ~]# yum -y --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt-tools.x86_64
# Quote the heredoc delimiter so $(sed ...) is written to the file literally, for grub2-mkconfig to evaluate
[root@kube-master-01 ~]# cat << 'EOF' > /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_CMDLINE_LINUX="crashkernel=auto console=ttyS0 console=tty0 panic=5"
GRUB_DISABLE_RECOVERY="true"
GRUB_TERMINAL="serial console"
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_SERIAL_COMMAND="serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1"
EOF
[root@kube-master-01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
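
# After the next reboot (see 2.4), confirm the box is running the new kernel
# (kernel-lt tracks a long-term branch; the exact version will vary):
[root@kube-master-01 ~]# uname -r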

2.4. Standardize NIC Names

[root@kube-master-01 ~]# grub_config='GRUB_CMDLINE_LINUX="crashkernel=auto ipv6.disable=1 net.ifnames=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"'
[root@kube-master-01 ~]# sed -i "s#GRUB_CMDLINE_LINUX.*#${grub_config}#" /etc/default/grub
[root@kube-master-01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

# ATTR{address} is the NIC's MAC address (masked here); NAME is the new interface name
[root@kube-master-01 ~]# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ec:xx:yy:cc:b6:xx", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ec:xx:yy:cc:b6:xx", NAME="eth1"

# Update the NIC config files under /etc/sysconfig/network-scripts/ accordingly before rebooting
[root@kube-master-01 ~]# reboot

2.5. Miscellaneous Configuration

[root@kube-master-01 ~]# yum -y install vim net-tools lrzsz lbzip2 bzip2 ntpdate curl wget psmisc
[root@kube-master-01 ~]# timedatectl set-timezone Asia/Shanghai
[root@kube-master-01 ~]# echo "nameserver 223.5.5.5" > /etc/resolv.conf
[root@kube-master-01 ~]# echo "nameserver 114.114.114.114" >> /etc/resolv.conf
[root@kube-master-01 ~]# echo 'LANG="en_US.UTF-8"' > /etc/locale.conf
[root@kube-master-01 ~]# echo 'export LANG="en_US.UTF-8"' >> /etc/profile.d/custom.sh

[root@kube-master-01 ~]# cat >> /etc/security/limits.conf <<EOF
* soft nproc 65530
* hard nproc 65530
* soft nofile 65530
* hard nofile 65530
EOF
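
# The new limits apply to fresh login sessions; note the * wildcard in
# limits.conf does not cover root, so check from a non-root login
# (e.g. the huyuan user created in 2.7):
[huyuan@kube-master-01 ~]$ ulimit -n
65530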

[root@kube-master-01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_sack = 0
EOF

[root@kube-master-01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
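
# If sysctl -p errors on the net.bridge.* keys, the br_netfilter module is not
# loaded yet (a common gotcha not covered above); load it and persist it:
[root@kube-master-01 ~]# modprobe br_netfilter
[root@kube-master-01 ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@kube-master-01 ~]# sysctl -p /etc/sysctl.d/k8s.conf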

2.6. Configure hosts.allow and hosts.deny

[root@kube-master-01 ~]# echo "sshd:192.168.0." > /etc/hosts.allow
[root@kube-master-01 ~]# echo "sshd:ALL" > /etc/hosts.deny

2.7. SSH Configuration

# Create an admin user and generate an SSH key pair (download the private key and do not keep a copy on the server; append the public key to ~/.ssh/authorized_keys on the other servers)
[root@kube-master-01 ~]# useradd huyuan
[root@kube-master-01 ~]# echo "sycx123" | passwd --stdin huyuan
[root@kube-master-01 ~]# su - huyuan
[huyuan@kube-master-01 ~]$ ssh-keygen -b 4096
[huyuan@kube-master-01 ~]$ mv ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

# Return to the root user
[huyuan@kube-master-01 ~]$ exit

# Disable reverse DNS lookups to speed up SSH logins
[root@kube-master-01 ~]# sed -i 's/^#UseDNS.*/UseDNS no/' /etc/ssh/sshd_config

# Disable password authentication
[root@kube-master-01 ~]# sed -i 's/^PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Disable root login
[root@kube-master-01 ~]# sed -i 's/#PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config

# Allow only huyuan to log in; separate multiple users with spaces
[root@kube-master-01 ~]# echo "AllowUsers huyuan" >> /etc/ssh/sshd_config
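
# Validate the modified config before restarting; sshd -t prints nothing on success
[root@kube-master-01 ~]# sshd -t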

# Restart the service
[root@kube-master-01 ~]# systemctl restart sshd

2.8. Set a Uniform root Password

[root@kube-master-01 ~]# echo "xxxxx" | passwd --stdin root

2.9. Set Hostnames

[root@kube-master-01 ~]# hostnamectl set-hostname kube-master-01.sk8s.io
[root@kube-master-01 ~]# echo "192.168.0.201 kube-master-01.sk8s.io" >> /etc/hosts
[root@kube-master-01 ~]# echo "192.168.0.202 kube-master-02.sk8s.io" >> /etc/hosts
[root@kube-master-01 ~]# echo "192.168.0.203 kube-master-03.sk8s.io" >> /etc/hosts
[root@kube-master-01 ~]# echo "192.168.0.204 kube-node-01.sk8s.io" >> /etc/hosts
[root@kube-master-01 ~]# echo "192.168.0.205 kube-node-02.sk8s.io" >> /etc/hosts

3. Initialize the Kubernetes Cluster

3.1. Install and Configure Docker (all nodes)

[root@kube-master-01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@kube-master-01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kube-master-01 ~]# yum -y install docker-ce-18.09.6 docker-ce-cli-18.09.6
[root@kube-master-01 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://c7i79lkw.mirror.aliyuncs.com"],
"insecure-registries": ["122.228.208.72:9000"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"graph": "/opt/docker",
"log-opts": {
    "max-size": "100m"
},
"storage-driver": "overlay2"
}

[root@kube-master-01 ~]# systemctl enable docker
[root@kube-master-01 ~]# systemctl start docker
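
# daemon.json selects the systemd cgroup driver (kubeadm expects kubelet and
# docker to agree); confirm it took effect:
[root@kube-master-01 ~]# docker info | grep -i cgroup
Cgroup Driver: systemd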

3.2. Configure HAProxy as the ApiServer Proxy

# Install and configure on the kube-master-01 and kube-master-02 hosts
[root@kube-master-01 ~]# yum -y install haproxy
[root@kube-master-01 ~]# cat > /etc/haproxy/haproxy.cfg << EOF
global
    log 127.0.0.1  local2 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1
defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m
listen  admin_stats
    bind 0.0.0.0:8000
    mode http
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:tuitui99
    stats hide-version
    stats admin if TRUE
listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.0.201 192.168.0.201:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.0.202 192.168.0.202:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.0.203 192.168.0.203:6443 check inter 2000 fall 2 rise 2 weight 1
EOF

[root@kube-master-01 ~]# systemctl enable haproxy
[root@kube-master-01 ~]# systemctl start haproxy
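
# Quick check that haproxy is bound to the apiserver frontend (8443) and the
# stats page (8000):
[root@kube-master-01 ~]# ss -tnlp | grep haproxy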

3.3. Configure Keepalived as Master/Backup for HAProxy

# Install and configure on the kube-master-01 and kube-master-02 hosts
[root@kube-master-01 ~]# yum -y install keepalived
[root@kube-master-01 ~]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@kube-master-01 ~]# cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    # Unique identifier for this node (192.168.0.201)
    router_id master-192.168.0.201
    # User that runs the notify_master / notify_backup / notify_fault scripts
    script_user root
}
vrrp_script check-haproxy {
    # Check whether the haproxy process is still alive
    script "/bin/killall -0 haproxy &>/dev/null"
    interval 5
    weight -30
    user root
}
vrrp_instance k8s {
    state MASTER
    # Priority: 120 on the master, 100 on the backup
    priority 120
    dont_track_primary
    interface eth0
    virtual_router_id 80
    advert_int 3
    track_script {
        check-haproxy
    }
    authentication {
        auth_type PASS
        auth_pass tuitui99
    }
    virtual_ipaddress {
        # The virtual IP
        192.168.0.254
    }
    # For the notify script, see https://blog.51cto.com/hongchen99/2298896
    notify_master "/bin/python /etc/keepalived/notify_keepalived.py master"
    notify_backup "/bin/python /etc/keepalived/notify_keepalived.py backup"
    notify_fault "/bin/python /etc/keepalived/notify_keepalived.py fault"
}
EOF

# (create /etc/keepalived/notify_keepalived.py from the reference above first)
[root@kube-master-01 ~]# chmod +x /etc/keepalived/notify_keepalived.py
[root@kube-master-01 ~]# systemctl enable keepalived
[root@kube-master-01 ~]# systemctl start keepalived
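
# With both nodes up, the virtual IP should be held by the MASTER node (and
# move to the backup when haproxy dies there):
[root@kube-master-01 ~]# ip addr show eth0 | grep 192.168.0.254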

3.4. Configure HAProxy and Keepalived Logging

# Configure haproxy logging
[root@kube-master-01 ~]# echo "local2.*  /var/log/haproxy.log" >> /etc/rsyslog.conf

# Configure keepalived logging (quote the whole assignment so the inner double quotes survive into the file)
[root@kube-master-01 ~]# cp /etc/sysconfig/keepalived{,.bak}
[root@kube-master-01 ~]# echo 'KEEPALIVED_OPTIONS="-D -d -S 0"' > /etc/sysconfig/keepalived
[root@kube-master-01 ~]# echo "local0.*   /var/log/keepalived.log" >> /etc/rsyslog.conf

# haproxy ships its logs over UDP, so rsyslog's UDP input must be enabled: uncomment the following two lines in /etc/rsyslog.conf
[root@kube-master-01 ~]# cat /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514

[root@kube-master-01 ~]# systemctl restart rsyslog
[root@kube-master-01 ~]# systemctl restart haproxy
[root@kube-master-01 ~]# systemctl restart keepalived

3.5. Install kubelet, kubeadm, and kubectl

[root@kube-master-01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@kube-master-01 ~]# yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
[root@kube-master-01 ~]# systemctl enable kubelet

3.6. Initialize the Kubernetes Cluster

# Load the ipvs kernel modules
[root@kube-master-01 ~]# cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

[root@kube-master-01 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@kube-master-01 ~]# sh /etc/sysconfig/modules/ipvs.modules
[root@kube-master-01 ~]# lsmod | grep ip_vs

# Install ipvsadm to manage ipvs
[root@kube-master-01 ~]# yum -y install ipvsadm

# Write the init configuration file
[root@kube-master-01 ~]# cat > kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
# 192.168.0.254 is the virtual IP; 8443 is the port haproxy listens on
controlPlaneEndpoint: "192.168.0.254:8443"
# Registry to pull the control-plane images from
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16

apiServer:
  certSANs:
  - "kube-master-01.sk8s.io01"
  - "kube-master-01.sk8s.io02"
  - "kube-master-01.sk8s.io03"
  - "192.168.0.201"
  - "192.168.0.202"
  - "192.168.0.203"
  - "192.168.0.254"
  - "127.0.0.1"

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
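
# Optional: pre-pull the control-plane images so registry problems surface
# before the actual init
[root@kube-master-01 ~]# kubeadm config images pull --config kubeadm-init.yaml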

# Initialize the cluster; --experimental-upload-certs uploads the control-plane certificates so the other masters can fetch them when joining
[root@kube-master-01 ~]# kubeadm init --config=kubeadm-init.yaml --experimental-upload-certs

# Create a cluster admin user
[root@kube-master-01 ~]# groupadd -g 5000 kubelet
[root@kube-master-01 ~]# useradd -c "kubernetes-admin-user" -G docker -u 5000 -g 5000 kubelet
[root@kube-master-01 ~]# echo "kubelet" | passwd --stdin kubelet

# Copy the cluster config file to the admin user
[root@kube-master-01 ~]# mkdir /home/kubelet/.kube
[root@kube-master-01 ~]# cp -i /etc/kubernetes/admin.conf /home/kubelet/.kube/config
[root@kube-master-01 ~]# chown -R kubelet:kubelet /home/kubelet/.kube
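
# Sanity check with the copied kubeconfig; the master shows NotReady until a
# pod network add-on is deployed (see the note at the end of 3.8)
[root@kube-master-01 ~]# kubectl --kubeconfig /home/kubelet/.kube/config get nodes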

3.7. Update CoreDNS

[root@kube-master-01 ~]# su - kubelet
[kubelet@kube-master-01 ~]$ cat > coredns.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: coredns/coredns:1.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
EOF

[kubelet@kube-master-01 ~]$ kubectl apply -f coredns.yaml
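
# Watch the rollout; three pods (replicas: 3) should reach Running
[kubelet@kube-master-01 ~]$ kubectl -n kube-system get pods -l k8s-app=kube-dns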

3.8. Join the Remaining Nodes to the Cluster

# Join the other masters (the token, CA cert hash, and certificate key come from the kubeadm init output)
[root@kube-master-02 ~]# kubeadm join 192.168.0.254:8443 --token h4n7uy.5qibssxu27vveko5 --discovery-token-ca-cert-hash sha256:a27738a4457d57ee611dd1c0281aeaabd32bc834797fe307980b95755b052e41 --experimental-control-plane --certificate-key eb37e5810fe300a42c5b610117ad57acf682a92da928cf94435a135aa338bc12

# Join the worker nodes
[root@kube-node-01 ~]# kubeadm join 192.168.0.254:8443 --token h4n7uy.5qibssxu27vveko5 --discovery-token-ca-cert-hash sha256:a27738a4457d58ee611dd1c0281aeaabd34bc834797fe307980b95755b052e41
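
# Note: joined nodes stay NotReady until a pod network add-on is installed. The
# original post does not show this step; with podSubnet 10.244.0.0/16, flannel
# is the usual match (the manifest URL below is the canonical one for this era
# of flannel; verify it before use)
[kubelet@kube-master-01 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[kubelet@kube-master-01 ~]$ kubectl get nodes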

Original source: https://blog.51cto.com/hongchen99/2439671
