Running Kubernetes and e2e Tests on a Single Machine

1. Start a single-node Kubernetes cluster locally

/home/opama/workspace/k8s/src/k8s.io/kubernetes# hack/local-up-cluster.sh
Go version: go version go1.6.2 linux/amd64
+++ [1004 21:39:33] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
+++ [1004 21:39:33] Building go targets for linux/amd64:
    cmd/kubectl
    cmd/hyperkube
+++ [1004 21:41:08] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Detected host and ready to start services.  Doing some housekeeping first...
Using GO_OUT /home/opama/workspace/k8s/src/k8s.io/kubernetes/_output/local/bin/linux/amd64
Starting services now!
Starting etcd
etcd -addr 127.0.0.1:4001 -data-dir /tmp/tmp.S62jgzua1y --bind-addr 127.0.0.1:4001 --debug > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [1004 21:41:09] On try 1, etcd: :
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [1004 21:41:11] On try 2, apiserver: : {
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/pods",
    "resourceVersion": "10"
  },
  "items": []
}
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log
  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, open up another terminal/tab and run:

  export KUBERNETES_PROVIDER=local

  cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
  cluster/kubectl.sh config set-context local --cluster=local
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh
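The "Waiting for apiserver to come up ... On try N" messages above come from a simple poll-and-retry loop. A minimal self-contained sketch of that pattern is below; `probe` is a stand-in function so the snippet runs anywhere, and against the real cluster you would replace its body with `curl -fs http://127.0.0.1:8080/healthz`:

```shell
# Minimal readiness loop, modeled on the "On try N" log lines above.
# probe() is a stand-in so the sketch is self-contained; for a real
# cluster, replace its body with: curl -fs http://127.0.0.1:8080/healthz
probe() { echo ok; }

for try in 1 2 3 4 5; do
  if [ "$(probe)" = "ok" ]; then
    echo "apiserver ready on try $try"
    break
  fi
  sleep 1
done
```

With the stand-in probe this succeeds immediately; against a freshly started apiserver it typically takes a try or two, as in the log above.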

2. Check that the cluster is running

/home/opama/workspace/k8s/src/k8s.io/kubernetes/_output/local/bin# mkdir /home/opama/kube-release
/home/opama/workspace/k8s/src/k8s.io/kubernetes/_output/local/bin# mv linux /home/opama/kube-release/
/home/opama/kube-release/linux/amd64# cp -p kubectl /bin/
/home/opama/kube-release/linux/amd64# kubectl get node
NAME        STATUS    AGE
127.0.0.1   Ready     11m

3. Run an e2e test case locally

/home/opama/workspace/k8s/src/k8s.io/kubernetes# KUBERNETES_PROVIDER=local go run hack/e2e.go -v -test --test_args='--host=http://127.0.0.1:8080 --ginkgo.focus="Secrets"'
2016/10/04 22:14:42 e2e.go:212: Running: get status
Local doesn't need special preparations for e2e tests
Client Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.1-dirty", GitCommit:"fe4aa01af2e1ce3d464e11bc465237e38dbcff27", GitTreeState:"dirty", BuildDate:"2016-09-15T15:47:40Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
error: server version (&version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1.1+9ec21ad3522752", GitCommit:"9ec21ad3522752415c2f82db9f6aa1117dbe583d", GitTreeState:"clean", BuildDate:"2016-10-04T13:39:33Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}) differs from client version (version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.1-dirty", GitCommit:"fe4aa01af2e1ce3d464e11bc465237e38dbcff27", GitTreeState:"dirty", BuildDate:"2016-09-15T15:47:40Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"})!
2016/10/04 22:14:42 e2e.go:218: Error running get status: exit status 1
2016/10/04 22:14:42 e2e.go:214: Step 'get status' finished in 49.723737ms
2016/10/04 22:14:42 e2e.go:189: Testing requested, but e2e cluster not up!
exit status 1

The failure is a client/server version mismatch: the kubectl found on the PATH was built from an older (dirty) tree. Point the e2e harness at the freshly built kubectl via KUBECTL_PATH. The harness also enforces a minimum node count during setup; since this cluster has only one node, I simply changed minNodeCount to 1 in hack/e2e.go.
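The minNodeCount change is a one-line edit and can be scripted. The sketch below applies it to a stand-in file rather than the real hack/e2e.go, since the original value (assumed here to be 2) may differ in your tree; run the same sed against the real file after checking it:

```shell
# Demonstrate the one-line edit on a stand-in copy of hack/e2e.go.
# The assumed original value (2) is illustrative; check the real file
# before running this against the kubernetes source tree.
f=$(mktemp)
echo 'minNodeCount = 2' > "$f"
sed -i 's/minNodeCount = [0-9][0-9]*/minNodeCount = 1/' "$f"
patched=$(cat "$f")
echo "$patched"   # minNodeCount = 1
rm -f "$f"
```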

/home/opama/workspace/k8s/src/k8s.io/kubernetes# export KUBECTL_PATH=/bin/kubectl
/home/opama/workspace/k8s/src/k8s.io/kubernetes# cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1.1+9ec21ad3522752", GitCommit:"9ec21ad3522752415c2f82db9f6aa1117dbe583d", GitTreeState:"clean", BuildDate:"2016-10-04T13:39:33Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1.1+9ec21ad3522752", GitCommit:"9ec21ad3522752415c2f82db9f6aa1117dbe583d", GitTreeState:"clean", BuildDate:"2016-10-04T13:39:33Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Create the config file the e2e framework reads

vim /root/.kube/config
{
  "User": "root",
  "Password": ""
}
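The same file can also be written non-interactively. In this sketch the file goes into a temp directory so the snippet has no side effects; in practice the path is /root/.kube/config as above:

```shell
# Write the minimal user/password file shown above. A temp dir keeps
# this sketch self-contained; the real path is /root/.kube/config.
KUBECONFIG_DIR=$(mktemp -d)
cat > "$KUBECONFIG_DIR/config" <<'EOF'
{
  "User": "root",
  "Password": ""
}
EOF
grep '"User"' "$KUBECONFIG_DIR/config"
```

The empty password matches the local cluster's open insecure port; a secured cluster would need real credentials here.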

4. Run the Conformance e2e suite locally

/home/opama/workspace/k8s/src/k8s.io/kubernetes# KUBERNETES_PROVIDER=local go run hack/e2e.go -v -test  --test_args="--host=http://127.0.0.1:8080 --ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"
2016/10/04 23:09:41 e2e.go:212: Running: get status
Local doesn't need special preparations for e2e tests
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1.1+9ec21ad3522752", GitCommit:"9ec21ad3522752415c2f82db9f6aa1117dbe583d", GitTreeState:"clean", BuildDate:"2016-10-04T13:39:33Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1.1+9ec21ad3522752", GitCommit:"9ec21ad3522752415c2f82db9f6aa1117dbe583d", GitTreeState:"clean", BuildDate:"2016-10-04T13:39:33Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
2016/10/04 23:09:42 e2e.go:214: Step 'get status' finished in 49.630847ms
Local doesn't need special preparations for e2e tests
2016/10/04 23:09:42 e2e.go:212: Running: Ginkgo tests
Conformance test: not doing test setup.
I1004 23:09:42.300288    7565 e2e.go:243] Starting e2e run "93b93584-8a44-11e6-ba64-2c768adbb5f8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1475593782 - Will randomize all specs
Will run 95 of 324 specs

Oct  4 23:09:42.319: INFO: >>> kubeConfig: /root/.kube/config

Oct  4 23:09:42.322: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct  4 23:09:42.327: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct  4 23:09:42.328: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct  4 23:09:42.328: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct  4 23:09:42.329: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct  4 23:09:42.329: INFO: Dumping network health container logs from all nodes
SSSSSS
------------------------------
[k8s.io] Pods

Note that many of the default images hosted on gcr.io cannot be pulled from here, so those tests will fail; the image references still need to be changed, or the images mirrored locally.
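One common workaround is to pull each required image from a reachable mirror and re-tag it under its gcr.io name, so the kubelet finds it in the local image cache. The mirror host and image list below are placeholders, not real endpoints; the loop echoes the commands as a dry run, and dropping the `echo` executes them:

```shell
# Dry-run sketch: re-tag mirrored images under their gcr.io names.
# MIRROR and the image list are hypothetical placeholders; substitute a
# registry mirror you can actually reach and the images your tests need.
MIRROR=registry.example.com/google_containers
for img in pause-amd64:3.0 echoserver:1.4; do
  echo docker pull "$MIRROR/$img"
  echo docker tag "$MIRROR/$img" "gcr.io/google_containers/$img"
done
```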
