Kubernetes in Practice 6 (Commands): Share Process Namespace between Containers in a Pod & Translate a Docker Compose File to Kubernetes Resources

Share Process Namespace between Containers in a Pod

This page shows how to configure process namespace sharing for a pod.

When process namespace sharing is enabled, processes in a container are visible to all other containers in that pod.

You can use this feature to configure cooperating containers, such as a log handler sidecar container, or to troubleshoot container images that don't include debugging utilities like a shell.

Before you begin

Your Kubernetes server must be at or later than version v1.10. To check the version, enter kubectl version.

A special alpha feature gate PodShareProcessNamespace must be set to true across the system: --feature-gates=PodShareProcessNamespace=true
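A rough sketch of what that looks like on a self-managed (for example, kubeadm-provisioned) cluster; the file paths below are assumptions and vary with how the cluster was installed:

#Assumed kubeadm layout: add the gate to the API server static pod manifest
#/etc/kubernetes/manifests/kube-apiserver.yaml, under the container's command list:
#  - --feature-gates=PodShareProcessNamespace=true

#Assumed RHEL/CentOS kubeadm layout: pass the same gate to the kubelet, then restart it
$ echo 'KUBELET_EXTRA_ARGS=--feature-gates=PodShareProcessNamespace=true' | sudo tee /etc/sysconfig/kubelet
$ sudo systemctl restart kubelet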

Configure a Pod

Process namespace sharing is enabled using the shareProcessNamespace field of v1.PodSpec.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    stdin: true
    tty: true

  

#Create the pod nginx on your cluster:
$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/share-process-namespace.yaml

#Attach to the shell container and run ps:
$ kubectl attach -it nginx -c shell
If you don't see a command prompt, try pressing enter.

/ # ps ax
PID   USER     TIME  COMMAND
    1 root      0:00 /pause
    8 root      0:00 nginx: master process nginx -g daemon off;
   14 101       0:00 nginx: worker process
   15 root      0:00 sh
   21 root      0:00 ps ax

#You can signal processes in other containers. For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.

    / # kill -HUP 8
    / # ps ax
    PID   USER     TIME  COMMAND
        1 root      0:00 /pause
        8 root      0:00 nginx: master process nginx -g daemon off;
       15 root      0:00 sh
       22 101       0:00 nginx: worker process
       23 root      0:00 ps ax

#It's even possible to access another container's filesystem using the /proc/$pid/root link.

/ # head /proc/8/root/etc/nginx/nginx.conf

    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    events {
        worker_connections  1024;

  

Understanding Process Namespace Sharing

Pods share many resources, so it makes sense that they would also share a process namespace.

Some container images may expect to be isolated from other containers, though, so it’s important to understand these differences:

  1. The container process no longer has PID 1.

    • Some container images refuse to start without PID 1 (for example, containers using systemd) or run commands like kill -HUP 1 to signal the container process.
    • In pods with a shared process namespace, kill -HUP 1 will signal the pod sandbox. (/pause in the above example.)
  2. Processes are visible to other containers in the pod.
    • This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables (see the sketch after this list).
    • These are protected only by regular Unix permissions.
  3. Container filesystems are visible to other containers in the pod through the /proc/$pid/root link.
    • This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.
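
A quick sketch of points 2 and 3, run from the shell container of the example pod above (PID 8 is the nginx master process there; the PID on your system may differ):

#Environment variables of the nginx master process are readable from the shell container
/ # tr '\0' '\n' < /proc/8/environ
#The nginx container's filesystem is reachable through the /proc/$pid/root link
/ # ls /proc/8/root/etc/nginx/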

Translate a Docker Compose File to Kubernetes Resources

Kubernetes + Compose = Kompose

What’s Kompose?

It’s a conversion tool for all things compose (namely Docker Compose) to container orchestrators (Kubernetes or OpenShift).

More information can be found on the Kompose website at http://kompose.io.

#1. Take a sample docker-compose.yaml file

version: "2"

services:

  redis-master:
    image: k8s.gcr.io/redis:e2e
    ports:
      - "6379"

  redis-slave:
    image: gcr.io/google_samples/gb-redisslave:v1
    ports:
      - "6379"
    environment:
      - GET_HOSTS_FROM=dns

  frontend:
    image: gcr.io/google-samples/gb-frontend:v4
    ports:
      - "80:80"
    environment:
      - GET_HOSTS_FROM=dns
    labels:
      kompose.service.type: LoadBalancer

#2. Run kompose up in the same directory

$ kompose up
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Successfully created Service: redis
INFO Successfully created Service: web
INFO Successfully created Deployment: redis
INFO Successfully created Deployment: web         

Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.

#Alternatively, you can run kompose convert and deploy with kubectl
#2.1. Run kompose convert in the same directory
$ kompose convert
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created   

#2.2. And start it on Kubernetes!
$ kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
service "frontend" created
service "redis-master" created
service "redis-slave" created
deployment "frontend" created
deployment "redis-master" created
deployment "redis-slave" created

#Now that your service has been deployed, let’s access it.
#3. View the newly deployed service
#3.1 If you’re already using minikube for your development process:
$ minikube service frontend
#3.2 Otherwise, let’s look up what IP your service is using!
$ kubectl describe svc frontend
Name:                   frontend
Namespace:              default
Labels:                 service=frontend
Selector:               service=frontend
Type:                   LoadBalancer
IP:                     10.0.0.183
LoadBalancer Ingress:   123.45.67.89
Port:                   80      80/TCP
NodePort:               80      31144/TCP
Endpoints:              172.17.0.4:80
Session Affinity:       None
No events.

#If you’re using a cloud provider, your IP will be listed next to LoadBalancer Ingress.
$ curl http://123.45.67.89

  

Installation

We have multiple ways to install Kompose. Our preferred method is downloading the binary from the latest GitHub release.

GitHub release: Kompose is released via GitHub on a three-week cycle; you can see all current releases on the GitHub release page.
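
For example, on Linux you might grab a release binary like this (the version number is illustrative; substitute the latest release shown on the releases page):

#Illustrative only: replace v1.16.0 with the current release
$ curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-linux-amd64 -o kompose
$ chmod +x kompose
$ sudo mv ./kompose /usr/local/bin/kompose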

Go: Installing with go get pulls from the master branch with the latest development changes.
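
A minimal sketch of that route, assuming a working Go toolchain:

$ go get -u github.com/kubernetes/kompose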

CentOS:

  • Kompose is in the EPEL CentOS repository. If you don't already have the EPEL repository installed and enabled, you can do so by running sudo yum install epel-release.
  • If you have EPEL enabled on your system, you can install Kompose like any other package, as shown below.
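
For example, once EPEL is enabled:

$ sudo yum -y install kompose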

Fedora: Kompose is in the Fedora 24, 25, and 26 repositories. You can install it just like any other package.
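
For example, on Fedora:

$ sudo dnf -y install kompose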

macOS: On macOS you can install the latest release via Homebrew:
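
For example, assuming Homebrew is already installed:

$ brew install kompose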

User Guide

Kompose has support for two providers: OpenShift and Kubernetes.

You can choose a targeted provider using the global option --provider.

If no provider is specified, Kubernetes is set by default.
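
For example, to target OpenShift instead of Kubernetes (using the same sample file converted below):

$ kompose --provider openshift --file docker-voting.yml convert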

kompose convert

Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.

Kubernetes

$ kompose --file docker-voting.yml convert
WARN Unsupported key networks - ignoring
WARN Unsupported key build - ignoring
INFO Kubernetes file "worker-svc.yaml" created
INFO Kubernetes file "db-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "result-svc.yaml" created
INFO Kubernetes file "vote-svc.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created

$ ls
db-deployment.yaml  docker-compose.yml         docker-gitlab.yml  redis-deployment.yaml  result-deployment.yaml  vote-deployment.yaml  worker-deployment.yaml
db-svc.yaml         docker-voting.yml          redis-svc.yaml     result-svc.yaml        vote-svc.yaml           worker-svc.yaml

  

You can also provide multiple docker-compose files at the same time:

$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "mlbparks-service.yaml" created
INFO Kubernetes file "mongodb-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "mlbparks-deployment.yaml" created
INFO Kubernetes file "mongodb-deployment.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created   

$ ls
docker-compose.yml        mlbparks-service.yaml                       redis-master-service.yaml
docker-guestbook.yml      mongodb-claim0-persistentvolumeclaim.yaml   redis-slave-deployment.yaml
frontend-deployment.yaml  mongodb-deployment.yaml                     redis-slave-service.yaml
frontend-service.yaml     mongodb-service.yaml
mlbparks-deployment.yaml  redis-master-deployment.yaml

  

When multiple docker-compose files are provided, the configuration is merged.

Any configuration that is common to both files is overridden by the later file.
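
As a small, hypothetical illustration of that override behavior (the file names and image tags below are made up for the example):

#docker-compose.yml (first file) defines:
#  services:
#    web:
#      image: nginx:1.14
#
#docker-override.yml (second file) redefines:
#  services:
#    web:
#      image: nginx:1.15

$ kompose -f docker-compose.yml -f docker-override.yml convert
#The generated web Deployment uses nginx:1.15, the value from the later file.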

kompose up

Kompose supports a straightforward way to deploy your “composed” application to Kubernetes or OpenShift via kompose up.

Kubernetes

$ kompose --file ./examples/docker-guestbook.yml up
We are going to create Kubernetes deployments and services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Successfully created service: redis-master
INFO Successfully created service: redis-slave
INFO Successfully created service: frontend
INFO Successfully created deployment: redis-master
INFO Successfully created deployment: redis-slave
INFO Successfully created deployment: frontend    

Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.

$ kubectl get deployment,svc,pods
NAME                               DESIRED       CURRENT       UP-TO-DATE   AVAILABLE   AGE
deploy/frontend                    1             1             1            1           4m
deploy/redis-master                1             1             1            1           4m
deploy/redis-slave                 1             1             1            1           4m

NAME                               CLUSTER-IP    EXTERNAL-IP   PORT(S)      AGE
svc/frontend                       10.0.174.12   <none>        80/TCP       4m
svc/kubernetes                     10.0.0.1      <none>        443/TCP      13d
svc/redis-master                   10.0.202.43   <none>        6379/TCP     4m
svc/redis-slave                    10.0.1.85     <none>        6379/TCP     4m

NAME                               READY         STATUS        RESTARTS     AGE
po/frontend-2768218532-cs5t5       1/1           Running       0            4m
po/redis-master-1432129712-63jn8   1/1           Running       0            4m
po/redis-slave-2504961300-nve7b    1/1           Running       0            4m

  

Note:

- You must have a running Kubernetes cluster with a pre-configured kubectl context.

- Only deployments and services are generated and deployed to Kubernetes.

If you need different kinds of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

kompose down

Once you have deployed a “composed” application to Kubernetes, kompose down will remove it by deleting its deployments and services.

If you need to remove other resources, use the ‘kubectl’ command.

$ kompose --file docker-guestbook.yml down
INFO Successfully deleted service: redis-master
INFO Successfully deleted deployment: redis-master
INFO Successfully deleted service: redis-slave
INFO Successfully deleted deployment: redis-slave
INFO Successfully deleted service: frontend
INFO Successfully deleted deployment: frontend

  

Note:

- You must have a running Kubernetes cluster with a pre-configured kubectl context

Build and Push Docker Images

Kompose supports both building and pushing Docker images. When using the build key within your Docker Compose file, your image will:

  • Automatically be built with Docker using the image key specified within your file
  • Be pushed to the correct Docker repository using local credentials (located at .docker/config)

Using an example Docker Compose file:

version: "2"

services:
    foo:
        build: "./build"
        image: docker.io/foo/bar

Using kompose up with a build key:

$ kompose up
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
INFO Pushing image 'foo/bar:latest' to registry 'docker.io'
INFO Attempting authentication credentials 'https://index.docker.io/v1/'
INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io'
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

INFO Deploying application in "default" namespace
INFO Successfully created Service: foo
INFO Successfully created Deployment: foo         

Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.

  

To disable this functionality, or to use BuildConfig generation instead (with OpenShift), pass --build (local|build-config|none).

# Disable building/pushing Docker images
$ kompose up --build none

# Generate Build Config artifacts for OpenShift
$ kompose up --provider openshift --build build-config

  

Alternative Conversions

The default kompose transformation generates Kubernetes Deployments and Services in YAML format.

You can alternatively generate JSON with -j.

You can also generate Replication Controller objects, DaemonSets, or Helm charts.

$ kompose convert -j
INFO Kubernetes file "redis-svc.json" created
INFO Kubernetes file "web-svc.json" created
INFO Kubernetes file "redis-deployment.json" created
INFO Kubernetes file "web-deployment.json" created

  

The *-deployment.json files contain the Deployment objects.

$ kompose convert --replication-controller
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-replicationcontroller.yaml" created
INFO Kubernetes file "web-replicationcontroller.yaml" created


  The *-replicationcontroller.yaml files contain the Replication Controller objects. If you want to specify replicas (the default is 1), use the --replicas flag:

$ kompose convert --replication-controller --replicas 3

 

$ kompose convert --daemon-set
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-daemonset.yaml" created
INFO Kubernetes file "web-daemonset.yaml" created

  The *-daemonset.yaml files contain the DaemonSet objects.

 

If you want to generate a Chart to be used with Helm, simply do:

$ kompose convert -c
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
chart created in "./docker-compose/"

$ tree docker-compose/
docker-compose
├── Chart.yaml
├── README.md
└── templates
    ├── redis-deployment.yaml
    ├── redis-svc.yaml
    ├── web-deployment.yaml
    └── web-svc.yaml

  The chart structure is aimed at providing a skeleton for building your Helm charts.
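
As a small follow-up sketch, the generated chart could then be installed with a Helm client pointed at your cluster (Helm v2-style syntax assumed here):

$ helm install ./docker-compose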

Labels

kompose supports Kompose-specific labels within the docker-compose.yml file in order to explicitly define a service’s behavior upon conversion.

  • kompose.service.type defines the type of service to be created.
version: "2"
services:
  nginx:
    image: nginx
    dockerfile: foobar
    build: ./foobar
    cap_add:
      - ALL
    container_name: foobar
    labels:
      kompose.service.type: nodeport
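
As a rough sketch of the effect, and assuming the compose service also published a port (say "80"), the label above would make kompose emit a Service of type NodePort along these lines (names and port values are illustrative, not exact kompose output):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    service: nginx
spec:
  type: NodePort
  selector:
    service: nginx
  ports:
  - name: "80"
    port: 80
    targetPort: 80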

  

Original article: https://www.cnblogs.com/panpanwelcome/p/9141071.html
