Setting up a Kubernetes cluster on Ubuntu 16.04

Introduction

The kube-up scripts that Kubernetes currently provides for Ubuntu do not support 15.10 or 16.04, the two releases that use systemd as the init system.

This article walks through manually installing and deploying Kubernetes on an Ubuntu 16.04 cluster, without running the components inside Docker.

The manual steps can easily be turned into an automated deployment script, and working through them is also a good way to gain a deeper understanding of the Kubernetes architecture and its components.

Environment

Versions

Component    Version
etcd         2.3.1
Flannel      0.5.5
Kubernetes   1.3.4

Hosts

Host         IP               OS
k8s-master   172.16.203.133   Ubuntu 16.04
k8s-node01   172.16.203.134   Ubuntu 16.04
k8s-node02   172.16.203.135   Ubuntu 16.04

Install Docker

Install the latest Docker Engine (currently 1.12) on every host - https://docs.docker.com/engine/installation/linux/ubuntulinux/
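
The linked guide is the authoritative reference; as a shortcut, Docker's convenience script installs the current Engine in one step. A minimal sketch, assuming each host has direct internet access (inspect the script before running it on production machines):

curl -fsSL https://get.docker.com/ | sh
docker --version                 # expect 1.12.x
sudo usermod -aG docker $USER    # optional: run docker without sudo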

Deploy the etcd cluster

We will install and deploy the etcd cluster across the three hosts.

Download etcd

Download etcd on the deployment machine, then push the binaries to the three hosts (replace user with your login account on those hosts):

ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz

tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done

Configure the etcd service

On each host, create the files /opt/config/etcd.conf and /lib/systemd/system/etcd.service as shown below (note: in the four *_URLS entries, replace 172.16.203.133 with each host's own IP address).

/opt/config/etcd.conf

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF

/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Then run the following on each host:

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
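
Before moving on, confirm that the three members actually formed a healthy cluster, using the etcdctl v2 client we just installed:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" cluster-health
# expected: "cluster is healthy" plus a "member ... is healthy" line per host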

Download Flannel

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp

Build K8s

Build K8s on the deployment machine; this requires Docker Engine (1.12) and Go (1.6.2):

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

Note

Besides linux/amd64, the build cross-compiles for the other supported platforms by default. To cut down the build time, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS and KUBE_TEST_PLATFORMS, as sketched below.
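
A sketch of the edit (the exact platform entries in your checkout may differ):

# hack/lib/golang.sh: keep only linux/amd64
readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm          # commented out to skip cross-compilation
)
readonly KUBE_CLIENT_PLATFORMS=(
  linux/amd64
  # darwin/amd64
  # windows/amd64
)
readonly KUBE_TEST_PLATFORMS=(
  linux/amd64
  # darwin/amd64
)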

Deploy the K8s Master

Copy the binaries

cd /tmp
scp kubernetes/server/bin/kube-apiserver kubernetes/server/bin/kube-controller-manager kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'

Create certificates

Run the following commands on the master host to create the certificates:

mkdir -p /srv/kubernetes/

cd /srv/kubernetes

export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 
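
A quick sanity check of the result before wiring it into the apiserver:

openssl verify -CAfile ca.crt server.crt          # expect: server.crt: OK
openssl x509 -in server.crt -noout -subject -dates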

Configure the kube-apiserver service

We will use the following Service cluster IP range and Flannel network:

SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

FLANNEL_NET=192.168.0.0/16

On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.16.203.133 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager service

On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler service

On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the flanneld service

On the master host, create /lib/systemd/system/flanneld.service with the following content:

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --iface=172.16.203.133 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the services

First write the Flannel network configuration into etcd (flanneld reads it from there), then enable and start everything:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" mk /coreos.com/network/config \
  ‘{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}‘

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld

sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld

Modify the Docker service

source /run/flannel/subnet.env

sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi

sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
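
After the restart, docker0 should have moved into the subnet that flanneld leased; a quick check:

source /run/flannel/subnet.env && echo $FLANNEL_SUBNET
ip addr show docker0 | grep inet    # the docker0 address should fall inside FLANNEL_SUBNET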

Deploy the K8s Nodes

Copy the binaries

cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done

Configure Flanneld and modify the Docker service

Follow the corresponding steps from the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to each node's own IP, as in the sketch below.
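
For example, on k8s-node01 the flanneld ExecStart would become (only --iface changes from node to node):

ExecStart=/opt/bin/flanneld \
  --etcd-endpoints=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --iface=172.16.203.134 \
  --ip-masq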

Configure the kubelet service

Create /lib/systemd/system/kubelet.service as follows, changing the IP addresses to match each node:

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet

Configure the kube-proxy service

Create /lib/systemd/system/kube-proxy.service as follows, changing the IP addresses to match each node:

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=172.16.203.133 \
  --master=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy

Configure and verify K8s

Generate the kubeconfig file

Run the following on the deployment machine:

KUBE_USER=admin
KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')

DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu

KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=http://172.16.203.133:8080 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --username=${KUBE_USER} --password=${KUBE_PASSWORD}
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}" 

Verify

$ kubectl get nodes
NAME             STATUS    AGE
172.16.203.133   Ready     2h
172.16.203.134   Ready     2h
172.16.203.135   Ready     2h

$ cat <<EOF > nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

$ kubectl create -f nginx.yml
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
my-nginx-1636613490-9ibg1   1/1       Running   0          13m       192.168.31.2   172.16.203.134
my-nginx-1636613490-erx98   1/1       Running   0          13m       192.168.56.3   172.16.203.133

$ kubectl expose deployment/my-nginx

$ kubectl get service my-nginx
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   172.18.28.48   <none>        80/TCP    37s

The nginx service can be reached from all three hosts, via either the pod IPs or the service IP:

$ curl http://172.18.28.48
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html> 

User authentication and security

When we generated the kubeconfig file in the last step, we created a username and password, but they are never enabled on the apiserver (that would require the --basic-auth-file flag). In other words, anyone who can reach 172.16.203.133:8080 can operate the k8s cluster. For an internal system with well-configured access rules, that can be acceptable.
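
If you do want basic auth, --basic-auth-file takes a CSV file with one "password,user,uid" entry per line; a sketch (the file path here is an arbitrary choice):

# /srv/kubernetes/basic_auth.csv
MyS3cretPassw0rd,admin,admin

# then append to the kube-apiserver ExecStart and restart the service:
#   --basic-auth-file=/srv/kubernetes/basic_auth.csv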

To tighten security, certificate authentication can be enabled in one of two ways: authenticate both the minions and external clients to the master, or only the clients.

Certificate generation and configuration for the minion nodes is covered at http://kubernetes.io/docs/getting-started-guides/scratch/#security-models and in the relevant parts of http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/.

Here we enable certificate authentication only between clients and the master. This is still reasonably safe: the minions and the master typically sit in the same data center, so access to HTTP port 8080 can be restricted to the data center, while outside clients must present a certificate and reach the api server over HTTPS.

Create a client certificate

Run the following commands on the master host:

cd /srv/kubernetes

export CLIENT_IP=172.16.203.1
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000

Copy client.crt and client.key to the deployment machine, then run the following commands to generate the kube config file:

DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu
KUBE_CERT=client.crt
KUBE_KEY=client.key

KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=https://172.16.203.133:6443 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --client-certificate=${KUBE_CERT} --client-key=${KUBE_KEY} --embed-certs=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}"

Deploy add-on components

Deploy DNS

DNS_SERVER_IP="172.18.8.8"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
KUBE_APISERVER_URL=http://172.16.203.133:8080

cat <<EOF > skydns.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --kube-master-url=$KUBE_APISERVER_URL
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

kubectl create -f skydns.yml

Then modify kubelet.service on each node to add --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local, as shown below, and restart the kubelet.
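
The resulting ExecStart (with each node's own IPs, as before) would look like:

ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.134 \
  --api-servers=http://172.16.203.133:8080 \
  --cluster-dns=172.18.8.8 \
  --cluster-domain=cluster.local \
  --logtostderr=true

sudo systemctl daemon-reload
sudo systemctl restart kubelet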

Deploy Dashboard

cat <<'EOF' > kube-dashboard.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.203.133:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF

kubectl create -f kube-dashboard.yml
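
Check that the dashboard pod and service are up; if they are, the UI should also be reachable through the apiserver proxy (the path below assumes the insecure port is still open):

kubectl --namespace=kube-system get pods -l app=kubernetes-dashboard
kubectl --namespace=kube-system get service kubernetes-dashboard
# in a browser: http://172.16.203.133:8080/ui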
