OpenStack Nova Source Code Annotations, Part 1

As a newbie who has barely learned any Python, reading the source code is the best way to learn, so let's start with Nova. The notes below mainly annotate the _create_instance function in nova/compute/api.py.

   def _create_instance(self, context, instance_type,
               image_href, kernel_id, ramdisk_id,
               min_count, max_count,
               display_name, display_description,
               key_name, key_data, security_group,
               availability_zone, user_data, metadata,
               injected_files, admin_password,
               access_ip_v4, access_ip_v6,
               requested_networks, config_drive,
               block_device_mapping, auto_disk_config,
               reservation_id=None, create_instance_here=False,
               scheduler_hints=None):
        """Verify all the input parameters regardless of the provisioning
        strategy being performed and schedule the instance(s) for
        creation."""

        if not metadata:
            metadata = {}
        if not display_description:
            display_description = ''
        if not security_group:
            security_group = 'default'

        if not instance_type:
            instance_type = instance_types.get_default_instance_type()
        if not min_count:
            min_count = 1
        if not max_count:
            max_count = min_count
        if not metadata:
            metadata = {}

        block_device_mapping = block_device_mapping or []

        # Get the allowed number of instances for this request from the quota system.
        num_instances = quota.allowed_instances(context, max_count,
                                                instance_type)
        if num_instances < min_count:
            pid = context.project_id
            if num_instances <= 0:
                msg = _("Cannot run any more instances of this type.")
            else:
                msg = (_("Can only run %s more instances of this type.") %
                       num_instances)
            LOG.warn(_("Quota exceeded for %(pid)s,"
                  " tried to run %(min_count)s instances. " + msg) % locals())
            raise exception.QuotaError(code="InstanceLimitExceeded")
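        # (Added illustration, not from the original source; numbers are made
        # up.) If the quota allows 3 more instances of this type, a request
        # with min_count=5 raises QuotaError above, while a request with
        # min_count=2 and max_count=10 proceeds with num_instances == 3.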

        # Check the metadata properties, injected files and requested networks.
        self._check_metadata_properties_quota(context, metadata)
        self._check_injected_file_quota(context, injected_files)
        self._check_requested_networks(context, requested_networks)

        (image_service, image_id) = nova.image.get_image_service(context,
                                                                 image_href)
        # Use the image id to query the image's backing service; show()
        # returns an image dict.
        image = image_service.show(context, image_id)

        # image is a dict; get() is a dict method that returns the value for
        # the 'min_ram' key (or the default if it is missing).
        # memory_mb is the amount of memory the VM gets.
        if instance_type['memory_mb'] < int(image.get('min_ram') or 0):
            raise exception.InstanceTypeMemoryTooSmall()
        # root_gb: size of the VM's root disk. instance_type describes the
        # requested flavor/configuration; image is the actual existing image.
        if instance_type['root_gb'] < int(image.get('min_disk') or 0):
            raise exception.InstanceTypeDiskTooSmall()

        config_drive_id = None
        if config_drive and config_drive is not True:
            # config_drive is volume id
            config_drive, config_drive_id = None, config_drive

        os_type = None
        # 'properties' is a key of the image dict, and image['properties'] is
        # itself a dict.
        if 'properties' in image and 'os_type' in image['properties']:
            os_type = image['properties']['os_type']
        architecture = None
        if 'properties' in image and 'arch' in image['properties']:
            architecture = image['properties']['arch']
        vm_mode = None
        if 'properties' in image and 'vm_mode' in image['properties']:
            vm_mode = image['properties']['vm_mode']
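        # (Added illustration, not from the original source.) A hypothetical
        # shape of the image dict returned by image_service.show(), limited to
        # the keys read above -- the real contents depend on the image:
        #   {'min_ram': 512, 'min_disk': 10,
        #    'properties': {'os_type': 'linux', 'arch': 'x86_64',
        #                   'vm_mode': 'hvm', 'auto_disk_config': 'True',
        #                   'kernel_id': '...', 'ramdisk_id': '...'}}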

        # If instance doesn't have auto_disk_config overridden by request, use
        # whatever the image indicates
        if auto_disk_config is None:
            if ('properties' in image and
                'auto_disk_config' in image['properties']):
                # bool_from_str converts a string such as 'True'/'False' into a boolean.
                auto_disk_config = utils.bool_from_str(
                    image['properties']['auto_disk_config'])

        if kernel_id is None:
            kernel_id = image['properties'].get('kernel_id', None)
        if ramdisk_id is None:
            ramdisk_id = image['properties'].get('ramdisk_id', None)
        # FIXME(sirp): is there a way we can remove null_kernel?
        # No kernel and ramdisk for raw images
        if kernel_id == str(FLAGS.null_kernel):
            kernel_id = None
            ramdisk_id = None
            LOG.debug(_("Creating a raw instance"))
        # Make sure we have access to kernel and ramdisk (if not raw)
        # locals() returns a dict of local name/value pairs.
        LOG.debug(_("Using Kernel=%(kernel_id)s, Ramdisk=%(ramdisk_id)s")
                  % locals())
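        # (Added illustration, not from the original source; values are made
        # up.) The named %-substitution above pulls values from the locals()
        # dict, e.g. with kernel_id = 'k-1' and ramdisk_id = 'r-1' the message
        # becomes 'Using Kernel=k-1, Ramdisk=r-1'.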

        # What exactly does show() do here????
        if kernel_id:
            image_service.show(context, kernel_id)
        if ramdisk_id:
            image_service.show(context, ramdisk_id)
        if config_drive_id:
            image_service.show(context, config_drive_id)
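        # (Added note, my reading of the code.) show() fetches the image
        # metadata and fails if the id does not exist or is not accessible, so
        # these three calls act purely as existence/access checks -- their
        # return values are discarded.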

        # Check whether the default security group exists; if not, create it.
        self.ensure_default_security_group(context)

        # The SSH key name was supplied but the key data was not, so look the
        # public key up in the database.
        if key_data is None and key_name:
            key_pair = self.db.key_pair_get(context, context.user_id, key_name)
            key_data = key_pair['public_key']

        if reservation_id is None:
            reservation_id = utils.generate_uid('r')

        # Root device name.
        root_device_name = block_device.properties_root_device_name(
            image['properties'])

        # NOTE(vish): We have a legacy hack to allow admins to specify hosts
        #             via az using az:host. It might be nice to expose an
        #             api to specify specific hosts to force onto, but for
        #             now it just supports this legacy hack.
        host = None
        # str.partition() splits the string at the separator: if the separator
        # is found it returns a 3-tuple with the separator in the middle;
        # otherwise it returns the original string followed by two empty
        # strings. The code below sets up the availability zone / forced host
        # used for scheduling.
        if availability_zone:
            availability_zone, _x, host = availability_zone.partition(':')
        if not availability_zone:
            availability_zone = FLAGS.default_schedule_zone
        if context.is_admin and host:
            filter_properties = {'force_hosts': [host]}
        else:
            filter_properties = {}
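        # (Added illustration, not from the original source.) str.partition()
        # always returns a 3-tuple:
        #   'nova:compute-1'.partition(':')  ->  ('nova', ':', 'compute-1')
        #   'nova'.partition(':')            ->  ('nova', '', '')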

        filter_properties['scheduler_hints'] = scheduler_hints

        base_options = {
            'reservation_id': reservation_id,
            'image_ref': image_href,
            'kernel_id': kernel_id or '',
            'ramdisk_id': ramdisk_id or '',
            'power_state': power_state.NOSTATE,
            'vm_state': vm_states.BUILDING,  # note: starts out in the BUILDING state
            'config_drive_id': config_drive_id or '',
            'config_drive': config_drive or '',
            'user_id': context.user_id,
            'project_id': context.project_id,
            'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),  # formatted timestamp string
            'instance_type_id': instance_type['id'],  # the flavor (instance type) id

            'memory_mb': instance_type['memory_mb'],  # VM memory
            'vcpus': instance_type['vcpus'],  # number of vCPUs
            'root_gb': instance_type['root_gb'],  # root disk size
            'ephemeral_gb': instance_type['ephemeral_gb'],
            'display_name': display_name,
            'display_description': display_description,
            'user_data': user_data or '',
            'key_name': key_name,  # keypair name
            'key_data': key_data,  # public key data
            'locked': False,
            'metadata': metadata,
            'access_ip_v4': access_ip_v4,
            'access_ip_v6': access_ip_v6,
            'availability_zone': availability_zone,
            'os_type': os_type,  # operating system type
            'architecture': architecture,
            'vm_mode': vm_mode,  # VM mode (not the VM's state)
            'root_device_name': root_device_name,  # root device name
            'progress': 0,
            'auto_disk_config': auto_disk_config}

        LOG.debug(_("Going to run %s instances...") % num_instances)

        if create_instance_here:
            instance = self.create_db_entry_for_new_instance(
                    context, instance_type, image, base_options,
                    security_group, block_device_mapping)
            # Tells scheduler we created the instance already.
            base_options['uuid'] = instance['uuid']
            # cast is one-way: it returns without waiting for a reply.
            rpc_method = rpc.cast
        else:
            # We need to wait for the scheduler to create the instance
            # DB entries, because the instance *could* be # created in
            # a child zone.
            # My understanding: the instance is not necessarily created in
            # this zone; it could be created in a child zone (there may be
            # many of them), so we have to wait for the creation to complete.
            rpc_method = rpc.call

        # TODO(comstud): We should use rpc.multicall when we can
        # retrieve the full instance dictionary from the scheduler.
        # Otherwise, we could exceed the AMQP max message size limit.
        # This would require the schedulers' schedule_run_instances
        # methods to return an iterator vs a list.
        # Here a message is sent to the scheduler, which then tells
        # nova-compute on the chosen node to create the VM.
        instances = self._schedule_run_instance(
                rpc_method,
                context, base_options,
                instance_type,
                availability_zone, injected_files,
                admin_password, image,
                num_instances, requested_networks,
                block_device_mapping, security_group,
                filter_properties)
        # Creation is done here; return the instance(s).
        if create_instance_here:
            return ([instance], reservation_id)
        return (instances, reservation_id)
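
To make the rpc.cast vs rpc.call difference concrete, here is a minimal toy sketch of the pattern (my own illustration, not nova's actual rpc module): cast fires the message and returns immediately with no result, while call blocks until the remote side replies and returns that reply.

# Toy sketch only -- not nova's real rpc implementation.
def toy_cast(context, topic, msg):
    """Fire-and-forget: enqueue the message and return immediately, no result."""
    print("cast %s -> %s" % (msg["method"], topic))

def toy_call(context, topic, msg):
    """Request/response: block until the consumer replies, then return the reply."""
    print("call %s -> %s" % (msg["method"], topic))
    return [{"uuid": "fake-uuid"}]  # stands in for whatever the scheduler returns

msg = {"method": "run_instance", "args": {}}
toy_cast(None, "scheduler", msg)          # returns None, does not wait
print(toy_call(None, "scheduler", msg))   # waits for and prints the fake reply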

Below is the function that sends the instance-creation message to the scheduler: _schedule_run_instance

    # Called by _create_instance; sends the run_instance message to the scheduler.
    def _schedule_run_instance(self,
            rpc_method,
            context, base_options,
            instance_type,
            availability_zone, injected_files,
            admin_password, image,
            num_instances,
            requested_networks,
            block_device_mapping,
            security_group,
            filter_properties):
        """Send a run_instance request to the schedulers for processing."""

        # pid is not a process id here, it's the project id.....
        pid = context.project_id
        uid = context.user_id

        LOG.debug(_("Sending create to scheduler for %(pid)s/%(uid)s's") %
                locals())

        request_spec = {
            'image': utils.to_primitive(image),
            'instance_properties': base_options,
            'instance_type': instance_type,
            'num_instances': num_instances,
            'block_device_mapping': block_device_mapping,
            'security_group': security_group,
        }

        return rpc_method(context,
                FLAGS.scheduler_topic,
                {"method": "run_instance",
                 "args": {"topic": FLAGS.compute_topic,
                          "request_spec": request_spec,
                          "admin_password": admin_password,
                          "injected_files": injected_files,
                          "requested_networks": requested_networks,
                          "is_first_time": True,
                          "filter_properties": filter_properties}})

These annotations are just my personal understanding. Studying OpenStack lately has felt like wandering in a fog; writing things down while reading 《云计算与OpenStack》 is an experience in itself.

Keep at it!!!
