Troubleshooting Ceph Integration with OpenStack Kilo

Chapter 7: Troubleshooting Ceph Integration with OpenStack
7.1 Tracing an Error from a Log Entry
1)	Cause of the Ceph issue: the error log reported at http://bbs.ceph.org.cn/question/161

2)	Locate the code at around line 3090 of nova/virt/libvirt/driver.py (vim nova/virt/libvirt/driver.py):
************************
    def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type,
                               image_type=None):
        if CONF.libvirt.hw_disk_discard:
            if not self._host.has_min_version(MIN_LIBVIRT_DISCARD_VERSION,
                                              MIN_QEMU_DISCARD_VERSION,
                                              REQ_HYPERVISOR_DISCARD):
                msg = (_('Volume sets discard option, but libvirt %(libvirt)s'
                         ' or later is required, qemu %(qemu)s'
                         ' or later is required.') %
                      {'libvirt': MIN_LIBVIRT_DISCARD_VERSION,
                       'qemu': MIN_QEMU_DISCARD_VERSION})
                raise exception.Invalid(msg)
            else:
                pass
        image = self.image_backend.image(instance,
                                         name,
                                         image_type)
        disk_info = disk_mapping[name]
        return image.libvirt_info(disk_info['bus'],
                                  disk_info['dev'],
                                  disk_info['type'],
                                  self.disk_cachemode,
                                  inst_type['extra_specs'],
                                  self._host.get_version())
************************
After the modification, the code reads:
************************
    def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type,
                               image_type=None):
        image = self.image_backend.image(instance,
                                         name,
                                         image_type)
        disk_info = disk_mapping[name]
        return image.libvirt_info(disk_info['bus'],
                                  disk_info['dev'],
                                  disk_info['type'],
                                  self.disk_cachemode,
                                  inst_type['extra_specs'],
                                  self._host.get_version())
********************************
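Note that the removed version check is only reached when the `hw_disk_discard` option is set in nova.conf. As an alternative to patching the driver, the same code path can be avoided by leaving that option unset. A sketch of the relevant configuration (only the discard-related option is shown; everything else in the `[libvirt]` section is omitted):

```ini
# /etc/nova/nova.conf -- sketch, not a complete configuration
[libvirt]
# Setting this enables the discard feature and triggers the
# libvirt/qemu minimum-version check in _get_guest_disk_config.
# Commenting it out (or leaving it unset) skips that check entirely.
hw_disk_discard = unmap
```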

3)	Try creating an instance again to see whether any other errors are reported:
2015-08-11 01:47:10.456 82044 ERROR nova.virt.libvirt.driver [req-27f7ade9-3142-4ec6-815d-84488c6e0201 - - - - -] Error launching a defined domain with XML: <domain type='kvm'>
  <name>instance-000000d9</name>
  <uuid>4b525800-3e00-4b48-a997-c104f919cde3</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>exta</nova:name>
      <nova:creationTime>2015-08-10 17:47:09</nova:creationTime>
      <nova:flavor name="linux-8-8-50">
        <nova:memory>8192</nova:memory>
        <nova:disk>120</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>50</nova:ephemeral>
        <nova:vcpus>8</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="95a96f0ddcf449239c6682a3c310857e">root</nova:user>
        <nova:project uuid="be27eb2862904a0f9c636c337f66709c">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="59e1c70b-70c8-4c22-9253-fc889f94d891"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' cpuset='0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38'>8</vcpu>
  <cputune>
    <shares>8192</shares>
  </cputune>
    <sysinfo type='smbios'>
      <system>
        <entry name='manufacturer'>Fedora Project</entry>
        <entry name='product'>OpenStack Nova</entry>
        <entry name='version'>2015.1.0-3.el7</entry>
        <entry name='serial'>09c6f9d1-825f-43e2-8774-5ed6705af12b</entry>
        <entry name='uuid'>4b525800-3e00-4b48-a997-c104f919cde3</entry>
      </system>
    </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='8' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='vms/4b525800-3e00-4b48-a997-c104f919cde3_disk'>
        <host name='192.168.103.211' port='6789'/>
        <host name='192.168.103.212' port='6789'/>
        <host name='192.168.103.214' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='vms/4b525800-3e00-4b48-a997-c104f919cde3_disk.local'>
        <host name='192.168.103.211' port='6789'/>
        <host name='192.168.103.212' port='6789'/>
        <host name='192.168.103.214' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='bridge'>
      <mac address='fa:16:3e:19:60:f5'/>
      <source bridge='br100'/>
      <model type='virtio'/>
      <filterref filter='nova-instance-instance-000000d9-fa163e1960f5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/data/nova/instances/4b525800-3e00-4b48-a997-c104f919cde3/console.log'/>
      <target port='0'/>
    </serial>
    <serial type='pty'>
      <target port='1'/>
    </serial>
    <console type='file'>
      <source path='/data/nova/instances/4b525800-3e00-4b48-a997-c104f919cde3/console.log'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>

2015-08-11 01:47:10.457 82044 ERROR nova.compute.manager [req-27f7ade9-3142-4ec6-815d-84488c6e0201 - - - - -] [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Instance failed to spawn
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Traceback (most recent call last):
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2442, in _build_resources
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     yield resources
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     block_device_info=block_device_info)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2354, in spawn
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     block_device_info=block_device_info)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4380, in _create_domain_and_network
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     power_on=power_on)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4311, in _create_domain
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     LOG.error(err)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     six.reraise(self.type_, self.value, self.tb)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4301, in _create_domain
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     domain.createWithFlags(launch_flags)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     rv = execute(f, *args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     six.reraise(c, e, tb)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     rv = meth(*args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 996, in createWithFlags
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] libvirtError: internal error: cannot get access to ACL tech driver 'ebiptables'
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] 
2015-08-11 01:47:10.460 82044 INFO nova.compute.manager [req-210e07f7-cba3-4dec-b2d5-b7be95c2f559 95a96f0ddcf449239c6682a3c310857e be27eb2862904a0f9c636c337f66709c - - -] [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Terminating instance
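The domain XML above shows that each RBD-backed disk carries the Ceph monitor addresses inline. When debugging a launch failure like this, it can be useful to extract those endpoints from the XML and compare them against `ceph.conf`. A minimal, hypothetical helper using only the standard library (the sample fragment is trimmed from the log above; `rbd_endpoints` is not a nova function):

```python
import xml.etree.ElementTree as ET

# One <disk> element taken from the failing domain XML above.
DISK_XML = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='vms/4b525800-3e00-4b48-a997-c104f919cde3_disk'>
    <host name='192.168.103.211' port='6789'/>
    <host name='192.168.103.212' port='6789'/>
    <host name='192.168.103.214' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
"""

def rbd_endpoints(disk_xml):
    """Return (rbd_image_name, [mon 'ip:port', ...]) for one <disk> element."""
    disk = ET.fromstring(disk_xml)
    source = disk.find('source')
    if source is None or source.get('protocol') != 'rbd':
        return None, []
    mons = ['%s:%s' % (h.get('name'), h.get('port'))
            for h in source.findall('host')]
    return source.get('name'), mons

name, mons = rbd_endpoints(DISK_XML)
print(name)   # vms/4b525800-3e00-4b48-a997-c104f919cde3_disk
print(mons)   # the three monitor endpoints on port 6789
```

If the monitors printed here disagree with the `mon_host` list in `/etc/ceph/ceph.conf` on the compute node, the disk definition (not libvirt) is the first thing to fix.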

Problem analysis: set a breakpoint in the code and launch the instance manually.
vim nova/virt/libvirt/driver.py, line 4284:
 ***********************************************
    def _create_domain(self, xml=None, domain=None,
                       instance=None, launch_flags=0, power_on=True):
        """Create a domain.

        Either domain or xml must be passed in. If both are passed, then
        the domain definition is overwritten from the xml.
        """
        import ipdb;ipdb.set_trace()
        err = None
        try:
            if xml:
                err = _LE('Error defining a domain with XML: %s') % xml
                domain = self._conn.defineXML(xml)

            if power_on:
                err = _LE('Error launching a defined domain with XML: %s') \
                          % encodeutils.safe_decode(domain.XMLDesc(0),
                                                    errors='ignore')
                domain.createWithFlags(launch_flags)

            if not utils.is_neutron():
                err = _LE('Error enabling hairpin mode with XML: %s') \
                          % encodeutils.safe_decode(domain.XMLDesc(0),
                                                    errors='ignore')
                self._enable_hairpin(domain.XMLDesc(0))

***********************************************
Stop the nova-compute service and run it under ipdb:
 service openstack-nova-compute stop
 ipdb /usr/bin/nova-compute --config-file=/etc/nova/nova.conf 

Get the instance UUIDs and check the Ceph vms pool to confirm the images were created:

[[email protected] nova]# rbd ls vms 
175dc9db-2409-4b34-b6ca-efc0a1788687_disk
175dc9db-2409-4b34-b6ca-efc0a1788687_disk.local
5220f145-73d1-4831-872f-cfd32b09dd20_disk
5220f145-73d1-4831-872f-cfd32b09dd20_disk.local
847c6dea-a887-4fad-8acd-36f13bc29b57_disk
847c6dea-a887-4fad-8acd-36f13bc29b57_disk.local
d0adb3ea-7c4d-44d5-b4d6-a9a02a3c3468_disk
d0adb3ea-7c4d-44d5-b4d6-a9a02a3c3468_disk.local
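In this deployment every instance should have both a `<uuid>_disk` (root) and a `<uuid>_disk.local` (ephemeral) image in the pool. A small, hypothetical helper that checks this pairing on captured `rbd ls` output (the sample names are taken from the listing above; `incomplete_instances` is not part of any Ceph or nova tool):

```python
# Group `rbd ls vms` output by instance UUID and check that every
# instance has both its root disk and its ephemeral (.local) disk.
RBD_LS_OUTPUT = """\
175dc9db-2409-4b34-b6ca-efc0a1788687_disk
175dc9db-2409-4b34-b6ca-efc0a1788687_disk.local
5220f145-73d1-4831-872f-cfd32b09dd20_disk
5220f145-73d1-4831-872f-cfd32b09dd20_disk.local
"""

def incomplete_instances(rbd_ls_output):
    """Return UUIDs missing either the _disk or the _disk.local image."""
    disks = {}
    for line in rbd_ls_output.splitlines():
        name = line.strip()
        if name.endswith('_disk.local'):
            uuid, kind = name[:-len('_disk.local')], 'local'
        elif name.endswith('_disk'):
            uuid, kind = name[:-len('_disk')], 'root'
        else:
            continue  # ignore images that do not follow the naming scheme
        disks.setdefault(uuid, set()).add(kind)
    return sorted(u for u, kinds in disks.items() if kinds != {'root', 'local'})

print(incomplete_instances(RBD_LS_OUTPUT))  # [] -- every instance is paired
```

An empty result confirms that image creation in the vms pool succeeded and the failure happened later, at domain launch.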

Launch the instance manually: change into /data/nova/instances/UUID/ and run
virsh create libvirt.xml 

virsh create fails with the same error:

2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] libvirtError: internal error: cannot get access to ACL tech driver 'ebiptables'
Based on the error message, remove the nwfilter reference as a workaround: strip the nwfilter from the instance XML and log the removal, leaving the ebiptables functionality to be fixed properly later.
***********************************************
vim nova/virt/libvirt/config.py, line 1196:
1196         if self.filtername is not None:
1197             filter = etree.Element("filterref", filter=self.filtername)
1198             for p in self.filterparams:
1199                 filter.append(etree.Element("parameter",
1200                                             name=p['key'],
1201                                             value=p['value']))
1202             #dev.append(filter)
1203             LOG.info("Add William: Delete nwfilter rule %s" %filter)
***********************************************
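The effect of commenting out `dev.append(filter)` is that the `<filterref>` element is still built and logged, but never attached to the interface definition, so libvirt never consults the ebiptables driver for it. A minimal sketch of the same idea, using the standard library's `ElementTree` instead of nova's lxml-based config classes (function and parameter names here are hypothetical, not nova's):

```python
import xml.etree.ElementTree as ET

def build_interface(mac, bridge, filtername=None, attach_filter=True):
    """Build a libvirt <interface> element; optionally skip the filterref."""
    dev = ET.Element('interface', type='bridge')
    ET.SubElement(dev, 'mac', address=mac)
    ET.SubElement(dev, 'source', bridge=bridge)
    if filtername is not None:
        filterref = ET.Element('filterref', filter=filtername)
        if attach_filter:
            dev.append(filterref)  # original behaviour
        # with attach_filter=False the element is built but dropped,
        # mirroring the commented-out dev.append(filter) in the patch
    return dev

with_filter = build_interface('fa:16:3e:19:60:f5', 'br100',
                              'nova-instance-instance-000000d9-fa163e1960f5')
without = build_interface('fa:16:3e:19:60:f5', 'br100',
                          'nova-instance-instance-000000d9-fa163e1960f5',
                          attach_filter=False)
print(with_filter.find('filterref') is not None)  # True
print(without.find('filterref') is None)          # True
```

Note this disables per-instance packet filtering for the interface, so it is only acceptable as a temporary workaround while the ebiptables access problem is investigated.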

Re-enter ipdb mode and create three instances:

2015-08-11 03:06:39.834 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:06:39.956 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:06:40.064 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:07:23.798 122763 INFO nova.virt.libvirt.config [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x4eb3fa0>
2015-08-11 03:07:23.814 122763 INFO nova.virt.libvirt.firewall [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] Called setup_basic_filtering in nwfilter
2015-08-11 03:07:23.815 122763 INFO nova.virt.libvirt.firewall [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] Ensuring static filters
> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291         import ipdb;ipdb.set_trace()
-> 4292         err = None
   4293         try:

ipdb> c
2015-08-11 03:08:02.499 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Auditing locally available compute resources for node athcontroller103210.sjz.autohome.com.cn
2015-08-11 03:08:02.922 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Total usable vcpus: 40, total allocated vcpus: 8
2015-08-11 03:08:02.923 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Final resource view: name=athcontroller103210.sjz.autohome.com.cn phys_ram=257680MB used_ram=25088MB phys_disk=30137GB used_disk=510GB total_vcpus=40 used_vcpus=8 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x4c0c450>
2015-08-11 03:08:02.939 122763 INFO nova.scheduler.client.report [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:08:02.939 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Compute_service record updated for athcontroller103210.sjz.autohome.com.cn:athcontroller103210.sjz.autohome.com.cn
2015-08-11 03:08:03.071 122763 INFO nova.virt.libvirt.config [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x5204870>
2015-08-11 03:08:03.072 122763 INFO nova.virt.libvirt.firewall [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Called setup_basic_filtering in nwfilter
2015-08-11 03:08:03.073 122763 INFO nova.virt.libvirt.firewall [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Ensuring static filters
2015-08-11 03:08:03.109 122763 INFO nova.virt.libvirt.config [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x5204870>
2015-08-11 03:08:03.110 122763 INFO nova.virt.libvirt.firewall [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Called setup_basic_filtering in nwfilter
2015-08-11 03:08:03.110 122763 INFO nova.virt.libvirt.firewall [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Ensuring static filters
2015-08-11 03:08:03.260 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] VM Started (Lifecycle Event)
2015-08-11 03:08:03.287 122763 INFO nova.virt.libvirt.driver [-] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] Instance spawned successfully.
> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291         import ipdb;ipdb.set_trace()
-> 4292         err = None
   4293         try:

ipdb> c
2015-08-11 03:08:05.152 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] During sync_power_state the instance has a pending task (spawning). Skip.
> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291         import ipdb;ipdb.set_trace()
-> 4292         err = None
   4293         try:

ipdb> c
2015-08-11 03:08:06.597 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] VM Started (Lifecycle Event)
2015-08-11 03:08:06.613 122763 INFO nova.virt.libvirt.driver [-] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Instance spawned successfully.
2015-08-11 03:08:06.670 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] During sync_power_state the instance has a pending task (spawning). Skip.
2015-08-11 03:08:07.438 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] VM Started (Lifecycle Event)
2015-08-11 03:08:07.453 122763 INFO nova.virt.libvirt.driver [-] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Instance spawned successfully.
2015-08-11 03:08:07.529 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] During sync_power_state the instance has a pending task (spawning). Skip.
All three instances are created successfully.
