Neutron Analysis (2): neutron-server Startup Process

An analysis of how neutron-server starts up, from the init script down to the REST and RPC services.

1. /etc/init.d/neutron-server

DAEMON=/usr/bin/neutron-server
DAEMON_ARGS="--log-file=$LOGFILE"
DAEMON_DIR=/var/run
...
case $1 in
    start)
        test "$ENABLED" = "true" || exit 0
        log_daemon_msg "Starting neutron server" "neutron-server"
        start-stop-daemon -Sbmv --pidfile $PIDFILE --chdir $DAEMON_DIR --exec $DAEMON -- $DAEMON_ARGS
        log_end_msg $?
        ;;
        ...
esac

2. /usr/bin/neutron-server

import sys
from neutron.server import main

if __name__ == "__main__":
    sys.exit(main())

3. neutron.server.main

def main():
    # the configuration will be read into the cfg.CONF global data structure
    config.init(sys.argv[1:])
    if not cfg.CONF.config_file:
        sys.exit(_("ERROR: Unable to find configuration file via the default"
                   " search paths (~/.neutron/, ~/, /etc/neutron/, /etc/) and"
                   " the '--config-file' option!"))
    try:
        pool = eventlet.GreenPool()

        # start the RESTful API as a green thread (coroutine)
        neutron_api = service.serve_wsgi(service.NeutronApiService)
        api_thread = pool.spawn(neutron_api.wait)

        # start the RPC service
        try:
            neutron_rpc = service.serve_rpc()
        except NotImplementedError:
            LOG.info(_("RPC was already started in parent process by plugin."))
        else:
            rpc_thread = pool.spawn(neutron_rpc.wait)

            # api and rpc should die together.  When one dies, kill the other.
            rpc_thread.link(lambda gt: api_thread.kill())
            api_thread.link(lambda gt: rpc_thread.kill())

        pool.waitall()
    except KeyboardInterrupt:
        pass
    except RuntimeError as e:
        sys.exit(_("ERROR: %s") % e)

4. First, neutron.service.serve_rpc()

The most important job of neutron.service.serve_rpc() is to start the plugin's RpcWorker:

def serve_rpc():
    plugin = manager.NeutronManager.get_plugin()

    try:
        rpc = RpcWorker(plugin)

        if cfg.CONF.rpc_workers < 1:
            rpc.start()
            return rpc
        else:
            launcher = common_service.ProcessLauncher(wait_interval=1.0)
            launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
            return launcher
    except Exception:
        # (error handling elided in this excerpt)
        raise
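Whether RPC runs in-process or in forked worker processes is controlled by the rpc_workers option in neutron.conf (api_workers plays the same role for the REST API, as _run_wsgi below shows). An illustrative snippet; the values are examples only:

[DEFAULT]
# rpc_workers < 1: run the RpcWorker inside the neutron-server process
# rpc_workers >= 1: fork that many child processes via ProcessLauncher
rpc_workers = 0
api_workers = 0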

RpcWorker's most important job, in turn, is to call the plugin's start_rpc_listeners() to start listening on the message queue:

def start(self):
    # We may have just forked from parent process.  A quick disposal of the
    # existing sql connections avoids producing errors later when they are
    # discovered to be broken.
    session.get_engine().pool.dispose()
    self._servers = self._plugin.start_rpc_listeners()
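The pool.dispose() call matters because ProcessLauncher may have just forked this process: pooled database connections created in the parent must not be reused in the child. A small SQLAlchemy sketch of the same idea, with an illustrative connection URL:

import os
from sqlalchemy import create_engine

engine = create_engine('sqlite:///example.db')  # stands in for the neutron DB

pid = os.fork()
if pid == 0:
    # Child: discard connections inherited from the parent.  Fresh ones are
    # opened lazily on first use, so parent and child never share a socket.
    engine.pool.dispose()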

5. Now for the REST API side

service.serve_wsgi(service.NeutronApiService)

def serve_wsgi(cls):

    try:
        service = cls.create()
        service.start()
    except Exception:
        with excutils.save_and_reraise_exception():
            LOG.exception(_('Unrecoverable error: please check log '
                            'for details.'))

    return service
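excutils.save_and_reraise_exception() logs the error and then re-raises the original exception with its traceback intact. A rough plain-Python equivalent of the pattern:

import logging

LOG = logging.getLogger(__name__)

def serve(cls):
    try:
        service = cls.create()
        service.start()
    except Exception:
        LOG.exception('Unrecoverable error: please check log for details.')
        raise   # re-raise the original exception, traceback preserved
    return service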

service.start() boils down to self.wsgi_app = _run_wsgi(self.app_name), and the most important job of _run_wsgi() is to load the WSGI app from api-paste.ini and start it:

def _run_wsgi(app_name):
    app = config.load_paste_app(app_name)
    if not app:
        LOG.error(_('No known API applications configured.'))
        return
    server = wsgi.Server("Neutron")
    server.start(app, cfg.CONF.bind_port, cfg.CONF.bind_host,
                 workers=cfg.CONF.api_workers)
    # Dump all option values here after all options are parsed
    cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
    LOG.info(_("Neutron service started, listening on %(host)s:%(port)s"),
             {'host': cfg.CONF.bind_host,
              'port': cfg.CONF.bind_port})
    return server
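config.load_paste_app() is essentially a thin wrapper around PasteDeploy. A sketch of the underlying call, with an illustrative config path:

from paste.deploy import loadapp

# Resolves the [composite:neutron] section of api-paste.ini into a WSGI app.
app = loadapp('config:/etc/neutron/api-paste.ini', name='neutron')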

6. api-paste.ini

[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = request_id catch_errors extensions neutronapiapp_v2_0
keystone = request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0

[filter:request_id]
paste.filter_factory = neutron.openstack.common.middleware.request_id:RequestIdMiddleware.factory

[filter:catch_errors]
paste.filter_factory = neutron.openstack.common.middleware.catch_errors:CatchErrorsMiddleware.factory

[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
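The [composite:neutronapi_v2_0] section above delegates to call:neutron.auth:pipeline_factory, which selects the pipeline line matching cfg.CONF.auth_strategy (noauth or keystone) and wraps the terminal app with each filter from right to left. A simplified sketch of that factory logic; treat the details as illustrative rather than the exact source:

from oslo.config import cfg

def pipeline_factory(loader, global_conf, **local_conf):
    # local_conf maps 'noauth'/'keystone' to the space-separated pipelines
    pipeline = local_conf[cfg.CONF.auth_strategy].split()
    # the last name is the app; everything before it is a filter
    app = loader.get_app(pipeline[-1])
    for name in reversed(pipeline[:-1]):
        app = loader.get_filter(name)(app)
    return app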

neutron.api.v2.router:APIRouter.factory registers, for each of the three resources network, subnet, and port, two collection actions (index and create) and three member actions (show, update, and delete). These actions are ultimately recorded in APIRouter._plugin_handlers:

{'create': 'create_subnet', 'delete': 'delete_subnet', 'list': 'get_subnets', 'update': 'update_subnet', 'show': 'get_subnet'}

{'create': 'create_network', 'delete': 'delete_network', 'list': 'get_networks', 'update': 'update_network', 'show': 'get_network'}

{'create': 'create_port', 'delete': 'delete_port', 'list': 'get_ports', 'update': 'update_port', 'show': 'get_port'}
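At request time the v2 controller uses these mappings to look up the plugin method to invoke; conceptually something like the following sketch (names are illustrative, not the actual controller code):

_plugin_handlers = {'create': 'create_network', 'delete': 'delete_network',
                    'list': 'get_networks', 'update': 'update_network',
                    'show': 'get_network'}

def handle(action, plugin, context, *args, **kwargs):
    # e.g. GET /v2.0/networks -> action='list' -> plugin.get_networks(...)
    method = getattr(plugin, _plugin_handlers[action])
    return method(context, *args, **kwargs)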

Before a request reaches APIRouter, it passes through several filters: RequestIdMiddleware (adds an openstack.request_id header to the request), CatchErrorsMiddleware (error handling), Keystone authentication, and plugin_aware_extension_middleware_factory. The first three are straightforward; plugin_aware_extension_middleware_factory builds handler mappings to the plugin for the extension resources:

{'create': 'create_router', 'delete': 'delete_router', 'list': 'get_routers', 'update': 'update_router', 'show': 'get_router'}

{'create': 'create_floatingip', 'delete': 'delete_floatingip', 'list': 'get_floatingips', 'update': 'update_floatingip', 'show': 'get_floatingip'}

{'create': 'create_agent', 'delete': 'delete_agent', 'list': 'get_agents', 'update': 'update_agent', 'show': 'get_agent'}
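Once neutron-server is up, every v2.0 request traverses this filter chain before reaching APIRouter and, ultimately, the plugin. An illustrative client call; the host and token are placeholders, and 9696 is the conventional default port:

import requests

resp = requests.get('http://controller:9696/v2.0/networks',
                    headers={'X-Auth-Token': '<token>'})
print(resp.status_code, resp.json())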
