The pitfall of nova boot from volume with multiple host availability zones

Test environment: three compute nodes, each belonging to its own availability zone

[[email protected] ~(keystone_admin)]# nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- controller2        |                                        |
| | |- nova-conductor   | enabled :-) 2016-08-20T14:57:07.000000 |
| | |- nova-scheduler   | enabled :-) 2016-08-20T14:57:06.000000 |
| | |- nova-consoleauth | enabled :-) 2016-08-20T14:57:08.000000 |
| | |- nova-cert        | enabled :-) 2016-08-20T14:57:07.000000 |
| nova                  | available                              |
| |- controller3        |                                        |
| | |- nova-compute     | enabled :-) 2016-08-20T14:57:04.000000 |
| ag1                   | available                              |
| |- controller1        |                                        |
| | |- nova-compute     | enabled :-) 2016-08-19T23:41:45.000000 |
| ag2                   | available                              |
| |- controller2        |                                        |
| | |- nova-compute     | enabled :-) 2016-08-20T14:57:06.000000 |
+-----------------------+----------------------------------------+

Test procedure: when launching an instance, choose "Boot from image (creates a new volume)".

Root cause: cinder is unaware of nova's multiple zones. The only zone information cinder can obtain is:

1. the zone that the cinder-volume service runs in;

2. the two parameters in the cinder.conf configuration file: storage_availability_zone = nova and default_availability_zone = nova.
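For reference, the relevant cinder.conf options look like this (values shown match this environment's defaults; allow_availability_zone_fallback comes up again at the end of the code walkthrough):

```ini
[DEFAULT]
# Zone reported by this cinder-volume service
storage_availability_zone = nova
# Zone used when a request does not specify one; falls back to
# storage_availability_zone when unset
default_availability_zone = nova
# If the requested zone is unknown to cinder, fall back to the
# default zone instead of failing (defaults to false)
allow_availability_zone_fallback = false
```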

A rough walk through the call chain (starting from where nova calls into cinder):

1. nova/virt/block_device.py -> class DriverImageBlockDevice, def attach: vol = volume_api.create

2. nova/volume/cinder.py -> class API, def create: item = client.volumes.create

3. cinder/api/v2/volumes.py -> class VolumeController, def create: new_volume = self.volume_api.create

4. cinder/volume/api.py -> class API, def create: flow_engine = create_volume.get_flow

In the create function, cinder gathers the zone information available to it:

raw_zones = self.list_availability_zones(enable_cache=True)
availability_zones = set([az['name'] for az in raw_zones])
if CONF.storage_availability_zone:
    availability_zones.add(CONF.storage_availability_zone)
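In the test environment above this set is easy to trace by hand. A minimal sketch of the same computation, with the service list mocked to match this deployment (cinder-volume runs in the default "nova" zone):

```python
# Mocked result of list_availability_zones(): only the zone that
# cinder-volume itself runs in is visible to cinder.
raw_zones = [{"name": "nova"}]
storage_availability_zone = "nova"  # cinder.conf value in this environment

availability_zones = set(az["name"] for az in raw_zones)
if storage_availability_zone:
    availability_zones.add(storage_availability_zone)

print(availability_zones)  # {'nova'} -- nova's zones ag1/ag2 are not here
```

Note that nothing in this computation ever consults nova, so the compute zones ag1 and ag2 can never appear in the set.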

list_availability_zones: services = objects.ServiceList.get_all_by_topic ->
cinder/objects/service.py (get_all_by_topic): db.service_get_all_by_topic ->
cinder/db/api.py (service_get_all_by_topic): IMPL.service_get_all_by_topic ->
cinder/db/sqlalchemy/api.py (service_get_all_by_topic):

@require_admin_context
def service_get_all_by_topic(context, topic, disabled=None):
    # models refers to cinder/db/sqlalchemy/models.py;
    # topic is 'cinder-volume' when called from list_availability_zones
    query = model_query(
        context, models.Service, read_deleted="no").filter_by(topic=topic)
    if disabled is not None:
        query = query.filter_by(disabled=disabled)
    return query.all()

This queries the database to find the zone that cinder-volume runs in.
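A toy reproduction of that lookup, using an in-memory SQLite table (the schema is trimmed down from cinder/db/sqlalchemy/models.py and the rows are invented to match the test environment; real cinder goes through SQLAlchemy):

```python
import sqlite3

# Minimal "services" table: host, topic, availability_zone, deleted
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services (host TEXT, topic TEXT, "
             "availability_zone TEXT, deleted INTEGER)")
conn.executemany(
    "INSERT INTO services VALUES (?, ?, ?, 0)",
    [("controller2", "cinder-scheduler", "nova"),
     ("controller2", "cinder-volume", "nova")])

# The equivalent of service_get_all_by_topic(ctx, 'cinder-volume')
rows = conn.execute("SELECT host, availability_zone FROM services "
                    "WHERE topic = 'cinder-volume' AND deleted = 0").fetchall()
print(rows)  # [('controller2', 'nova')] -- only cinder-volume's zone is found
```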

5. cinder/volume/flows/api/create_volume.py -> def get_flow

# This is taskflow; the interesting parts are what gets passed to add()
def get_flow(db_api, image_service_api, availability_zones, create_what,
             scheduler_rpcapi=None, volume_rpcapi=None):
    """Constructs and returns the api entrypoint flow.

    This flow will do the following:
    1. Inject keys & values for dependent tasks.
    2. Extracts and validates the input keys & values.
    3. Reserves the quota (reverts quota on any failures).
    4. Creates the database entry.
    5. Commits the quota.
    6. Casts to volume manager or scheduler for further processing.
    """
    flow_name = ACTION.replace(":", "_") + "_api"
    api_flow = linear_flow.Flow(flow_name)
    api_flow.add(ExtractVolumeRequestTask(
        image_service_api,
        availability_zones,
        rebind={'size': 'raw_size',
                'availability_zone': 'raw_availability_zone',
                'volume_type': 'raw_volume_type'}))

    api_flow.add(QuotaReserveTask(),
                 EntryCreateTask(db_api),
                 QuotaCommitTask())

    if scheduler_rpcapi and volume_rpcapi:
        # This will cast it out to either the scheduler or volume manager via
        # the rpc apis provided.
        api_flow.add(VolumeCastTask(scheduler_rpcapi, volume_rpcapi, db_api))

    # Now load (but do not run) the flow using the provided initial data.
    return taskflow.engines.load(api_flow, store=create_what)

What we care about lives in class ExtractVolumeRequestTask.

Start with the execute method, the task's entry point (if you have looked at how taskflow is used, you will know why execute is the place to look):

def execute(self, context, size, snapshot, image_id, source_volume,
            availability_zone, volume_type, metadata, key_manager,
            source_replica, consistencygroup, cgsnapshot):
    utils.check_exclusive_options(snapshot=snapshot,
                                  imageRef=image_id,
                                  source_volume=source_volume)
    policy.enforce_action(context, ACTION)
    # TODO(harlowja): what guarantee is there that the snapshot or source
    # volume will remain available after we do this initial verification??
    snapshot_id = self._extract_snapshot(snapshot)
    source_volid = self._extract_source_volume(source_volume)
    source_replicaid = self._extract_source_replica(source_replica)
    size = self._extract_size(size, source_volume, snapshot)
    consistencygroup_id = self._extract_consistencygroup(consistencygroup)
    cgsnapshot_id = self._extract_cgsnapshot(cgsnapshot)
    self._check_image_metadata(context, image_id, size)
    availability_zone = self._extract_availability_zone(availability_zone,  # the part we care about
                                                        snapshot,
                                                        source_volume)

# The _extract_availability_zone function
def _extract_availability_zone(self, availability_zone, snapshot,
                               source_volume):
    """Extracts and returns a validated availability zone.

    This function will extract the availability zone (if not provided) from
    the snapshot or source_volume and then performs a set of validation
    checks on the provided or extracted availability zone and then returns
    the validated availability zone.
    """

    # Try to extract the availability zone from the corresponding snapshot
    # or source volume if either is valid so that we can be in the same
    # availability zone as the source.
    if availability_zone is None:
        if snapshot:
            try:
                availability_zone = snapshot['volume']['availability_zone']
            except (TypeError, KeyError):
                pass
        if source_volume and availability_zone is None:
            try:
                availability_zone = source_volume['availability_zone']
            except (TypeError, KeyError):
                pass

    if availability_zone is None:
        if CONF.default_availability_zone:    # default_availability_zone wins if set
            availability_zone = CONF.default_availability_zone
        else:
            # For backwards compatibility use the storage_availability_zone
            availability_zone = CONF.storage_availability_zone

    # self.availability_zones is the cinder-volume zone plus the two
    # cinder.conf options described above
    if availability_zone not in self.availability_zones:
        if CONF.allow_availability_zone_fallback:    # this option is the key
            original_az = availability_zone
            availability_zone = (
                CONF.default_availability_zone or
                CONF.storage_availability_zone)
            LOG.warning(_LW("Availability zone '%(s_az)s' "
                            "not found, falling back to "
                            "'%(s_fallback_az)s'."),
                        {'s_az': original_az,
                         's_fallback_az': availability_zone})
        else:
            msg = _("Availability zone '%(s_az)s' is invalid.")
            msg = msg % {'s_az': availability_zone}
            # without allow_availability_zone_fallback enabled, this raises
            raise exception.InvalidInput(reason=msg)
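Putting it together: in the environment above cinder's known zones are just {'nova'}, so booting an instance into nova zone ag1 or ag2 makes nova request a volume in a zone cinder has never heard of. A simplified re-implementation of the validation logic (cinder.conf options become plain function parameters here, not oslo.config):

```python
# Simplified stand-in for _extract_availability_zone's validation branch.
def extract_availability_zone(requested_az, known_azs,
                              default_az="nova", storage_az="nova",
                              allow_fallback=False):
    # If no zone was requested, use the configured default (or storage zone).
    az = requested_az if requested_az is not None else (default_az or storage_az)
    if az not in known_azs:
        if allow_fallback:
            # allow_availability_zone_fallback=True: warn and fall back
            return default_az or storage_az
        raise ValueError("Availability zone '%s' is invalid." % az)
    return az

known = {"nova"}  # all cinder can see in this environment

print(extract_availability_zone("nova", known))        # 'nova' -- OK
print(extract_availability_zone("ag1", known,
                                allow_fallback=True))  # falls back to 'nova'
try:
    extract_availability_zone("ag1", known)            # fallback disabled
except ValueError as e:
    print(e)  # Availability zone 'ag1' is invalid.
```

So with the default allow_availability_zone_fallback = false, boot-from-volume into ag1 or ag2 fails with InvalidInput; enabling the fallback lets the volume land in cinder's zone instead.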
