1. This article describes how to interconnect Docker containers running on different hosts under CentOS 7, and how to map container ports to the host.
Environment:
docker1:192.168.1.230
docker2:192.168.1.231
a. Change the hostname of the two hosts to docker1 and docker2 respectively
# hostnamectl set-hostname docker1
# reboot
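Before moving on it helps to confirm the new names took effect. The hostnamectl check below is standard; the /etc/hosts entries are an optional addition (not part of the original walkthrough) that simply records the two IPs listed above so the hosts can resolve each other by name:

[root@docker1 ~]# hostnamectl status    # "Static hostname" should now read docker1 (or docker2)
# optional: let the two hosts resolve each other by name
[root@docker1 ~]# cat >> /etc/hosts <<'EOF'
192.168.1.230 docker1
192.168.1.231 docker2
EOF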
b. Install Docker on docker1 and docker2 with yum and start the service
[root@docker1 ~]# yum -y install docker
[root@docker1 ~]# service docker start
Redirecting to /bin/systemctl start docker.service
Check with ps -ef | grep docker that the daemon process is now running.
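So that Docker also comes back after a reboot, the service can be enabled at boot; a small sketch using standard systemd commands (not shown in the original):

[root@docker1 ~]# systemctl enable docker    # start the Docker daemon automatically at boot
[root@docker1 ~]# systemctl status docker    # confirm it is active (running)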
c. Install Open vSwitch and its build dependencies on docker1 and docker2
[root@docker1 ~]# yum -y install openssl-devel kernel-devel
[root@docker1 ~]# yum groupinstall "Development Tools"
[root@docker1 ~]# adduser ovswitch
[root@docker1 ~]# su - ovswitch
[ovswitch@docker1 ~]$ wget http://openvswitch.org/releases/openvswitch-2.3.0.tar.gz    # the download URL was dropped from the original; the official release tarball is assumed
[ovswitch@docker1 ~]$ tar -zxvpf openvswitch-2.3.0.tar.gz
[ovswitch@docker1 ~]$ mkdir -p ~/rpmbuild/SOURCES
[ovswitch@docker1 ~]$ sed 's/openvswitch-kmod, //g' openvswitch-2.3.0/rhel/openvswitch.spec > openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec
[ovswitch@docker1 ~]$ cp openvswitch-2.3.0.tar.gz rpmbuild/SOURCES/
[ovswitch@docker1 ~]$ rpmbuild -bb --without check ~/openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec
[ovswitch@docker1 ~]$ exit
[root@docker1 ~]# yum localinstall /home/ovswitch/rpmbuild/RPMS/x86_64/openvswitch-2.3.0-1.x86_64.rpm
[root@docker1 ~]# systemctl start openvswitch.service    # start OVS
[root@docker1 ~]# systemctl status openvswitch.service -l    # check the service status
● openvswitch.service - LSB: Open vSwitch switch
   Loaded: loaded (/etc/rc.d/init.d/openvswitch)
   Active: active (running) since Fri 2016-04-22 02:37:10 EDT; 9s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 24616 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openvswitch.service
           ├─24640 ovsdb-server: monitoring pid 24641 (healthy)
           ├─24641 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor
           ├─24652 ovs-vswitchd: monitoring pid 24653 (healthy)
           └─24653 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
Apr 22 02:37:10 docker1 openvswitch[24616]: /etc/openvswitch/conf.db does not exist ... (warning).
Apr 22 02:37:10 docker1 openvswitch[24616]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovsdb-server [ OK ]
Apr 22 02:37:10 docker1 ovs-vsctl[24642]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.6.0
Apr 22 02:37:10 docker1 ovs-vsctl[24647]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.3.0 "external-ids:system-id=\"7469bdac-d8b0-4593-b300-fd0931eacbc2\"" "system-type=\"unknown\"" "system-version=\"unknown\""
Apr 22 02:37:10 docker1 openvswitch[24616]: Configuring Open vSwitch system IDs [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Inserting openvswitch module [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovs-vswitchd [ OK ]
Apr 22 02:37:10 docker1 openvswitch[24616]: Enabling remote OVSDB managers [ OK ]
Apr 22 02:37:10 docker1 systemd[1]: Started LSB: Open vSwitch switch.
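A quick sanity check of the fresh installation can be done with the standard ovs-vsctl tool (these two commands are not in the original, but they use only stock OVS CLI calls):

[root@docker1 ~]# ovs-vsctl --version    # should report ovs-vsctl (Open vSwitch) 2.3.0
[root@docker1 ~]# ovs-vsctl show         # prints the (still empty) bridge configuration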
d. Create the bridges and routes on docker1 and docker2. Each host gets its own container subnet (192.168.100.0/24 on docker1, 192.168.101.0/24 on docker2), and the GRE remote_ip always points at the peer host. The listing below was captured on docker1; a sketch of docker2's mirrored configuration follows it.
[root@docker1 ~]# cat /proc/sys/net/ipv4/ip_forward    # IP forwarding must be enabled
1
[root@docker1 ~]# ovs-vsctl add-br obr0
[root@docker1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.231    # on docker1 the GRE peer is docker2
[root@docker1 ~]# brctl addbr kbr0
[root@docker1 ~]# brctl addif kbr0 obr0
[root@docker1 ~]# ip link set dev docker0 down
[root@docker1 ~]# ip link del dev docker0
[root@docker1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no
DEVICE=kbr0
[root@docker1 ~]# cat /etc/sysconfig/network-scripts/route-eth0
192.168.101.0/24 via 192.168.1.231 dev eth0
[root@docker1 ~]# systemctl restart network.service
[root@docker1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.2     0.0.0.0         UG    100    0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1007   0        0 kbr0
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 kbr0
192.168.101.0   192.168.1.231   255.255.255.0   UG    0      0        0 eth0
192.168.101.0   192.168.1.231   255.255.255.0   UG    100    0        0 eth0
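For completeness, the mirrored configuration on docker2 would look roughly like the sketch below. The addresses follow from the addressing scheme above (container subnet 192.168.101.0/24, GRE remote pointing back at docker1), but the block itself is an assumption, since the original only shows docker1:

[root@docker2 ~]# ovs-vsctl add-br obr0
[root@docker2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.230    # GRE peer is docker1
[root@docker2 ~]# brctl addbr kbr0
[root@docker2 ~]# brctl addif kbr0 obr0
[root@docker2 ~]# ip link set dev docker0 down
[root@docker2 ~]# ip link del dev docker0
# /etc/sysconfig/network-scripts/ifcfg-kbr0 on docker2: same as on docker1 but with IPADDR=192.168.101.10
# /etc/sysconfig/network-scripts/route-eth0 on docker2:
#   192.168.100.0/24 via 192.168.1.230 dev eth0
[root@docker2 ~]# systemctl restart network.service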
e. Bind Docker's virtual network to kbr0, then pull an image and start a container to test connectivity
[root@docker1 ~]# vi /etc/sysconfig/docker-network
# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="-b=kbr0"
[root@docker1 ~]# service docker restart
Redirecting to /bin/systemctl restart docker.service
Pull an image:
[root@docker1 ~]# docker search centos
[root@docker1 ~]# docker pull docker.io/nickistre/centos-lamp
[root@docker1 ~]# docker run -dti --name=mytest2 docker.io/nickistre/centos-lamp /bin/bash
[root@docker1 ~]# docker ps -l    # check the container status
CONTAINER ID  IMAGE                            COMMAND      CREATED         STATUS             PORTS                    NAMES
118479ccdebb  docker.io/nickistre/centos-lamp  "/bin/bash"  16 minutes ago  Up About a minute  22/tcp, 80/tcp, 443/tcp  mytest1
[root@docker1 ~]# docker attach 118479ccdebb    # enter the container
[root@118479ccdebb /]# ifconfig    # the container was automatically assigned an IP address
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:64:01
          inet addr:192.168.100.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:c0ff:fea8:6401/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7112 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3738 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12175213 (11.6 MiB)  TX bytes:249982 (244.1 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28 (28.0 b)  TX bytes:28 (28.0 b)
[root@118479ccdebb /]# ping 192.168.101.1    # 192.168.101.1 is the IP of the container running on docker2
PING 192.168.101.1 (192.168.101.1) 56(84) bytes of data.
64 bytes from 192.168.101.1: icmp_seq=1 ttl=62 time=1.30 ms
64 bytes from 192.168.101.1: icmp_seq=2 ttl=62 time=0.620 ms
64 bytes from 192.168.101.1: icmp_seq=3 ttl=62 time=0.582 ms
At this point containers on different hosts can communicate with each other. The remaining question is how to reach a service inside a container through the host's IP, which the next step addresses.
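Back on the host, the wiring can be sanity-checked with a few standard commands; a brief sketch (not part of the original transcript):

[root@docker1 ~]# ovs-vsctl show         # obr0 should list the gre0 port with the peer's remote_ip
[root@docker1 ~]# brctl show kbr0        # kbr0 should contain obr0 plus one veth* interface per running container
[root@docker1 ~]# ps -ef | grep docker   # the daemon command line should now include -b=kbr0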
f. Build an image with a Dockerfile so the service port can be published
[root@docker1 ~]# cat Dockerfile
# base image
FROM docker.io/nickistre/centos-lamp
# author
MAINTAINER PAIPX
# install Tomcat and the JDK
ADD apache-tomcat-6.0.43 /usr/local/apache-tomcat-6.0.43
RUN cd /usr/local/ && mv apache-tomcat-6.0.43 tomcat
ADD jdk-6u22-linux-x64.bin /root/
RUN cd /root/ && chmod +x jdk-6u22-linux-x64.bin && ./jdk-6u22-linux-x64.bin && mkdir -p /usr/java/ && cp -a jdk1.6.0_22 /usr/java/jdk
# set the environment variables
ENV JAVA_HOME /usr/java/jdk
ENV CLASSPATH $CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
ENV PATH $JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
# expose the Tomcat port
EXPOSE 8080
CMD ["catalina.sh", "run"]
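The two ADD instructions assume that the Tomcat directory and the JDK installer sit next to the Dockerfile in the build context; a quick look at what that directory is expected to contain (the listing is illustrative, only the two names come from the Dockerfile above):

[root@docker1 ~]# ls
Dockerfile  apache-tomcat-6.0.43  jdk-6u22-linux-x64.bin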
Build the new image:
[root@docker1 ~]# docker build -t tomcat2 .
Start a container from it, publishing container port 8080 on host port 8000:
[root@docker1 ~]# docker run -dti -p 8000:8080 --name=mytest4 tomcat2
The Tomcat instance can then be reached at http://<host IP>:8000.
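To confirm the mapping, the published port can be inspected and tested from the host; a brief sketch (mytest4 and the 8000:8080 mapping come from the run command above, the curl check against docker1's address is an added verification step):

[root@docker1 ~]# docker port mytest4              # should print 8080/tcp -> 0.0.0.0:8000
[root@docker1 ~]# curl -I http://192.168.1.230:8000/    # Tomcat should answer on the host IP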