Help: Instances don't start


Help: Instances don't start

Edier Zapata
Hi,
 I installed a 10-node nova-compute cloud this way:
 Server: all Nova services, including nova-compute, using the script
at: https://github.com/elasticdog/OpenStack-NOVA-Installer-Script/raw/master/nova-install
 Nodes: nova-compute only, using the same script as in step 1.
When I launch an instance (euca-run-instances $emi -k mykey -t m1.tiny) I get this:
root@cluster1:/home/cluster/OpenStack-org# euca-run-instances $emi -k
mykey -t m1.tiny
RESERVATION     r-0hr83xsv      Univalle        default
INSTANCE        i-00000015      ami-1f75034f
scheduling      mykey (Univalle, None)  0               m1.tiny
2011-07-11T15:41:06Z    unknown zone

and some time later:

root@cluster1:/home/cluster/OpenStack-org# euca-describe-instances
RESERVATION     r-0hr83xsv      Univalle        default
INSTANCE        i-00000015      ami-1f75034f    192.168.29.2
192.168.29.2    shutdown        mykey (Univalle, cluster10)     0
         m1.tiny 2011-07-11T15:41:06Z     nova

The cluster10 log says:
cluster@cluster10:~$ tail -5 /var/log/nova/nova-compute.log
2011-07-11 10:59:51,810 INFO nova.compute.manager [-] Found instance
'instance-00000015' in DB but no VM. State=5, so setting state to
shutoff.
2011-07-11 11:00:51,890 INFO nova.compute.manager [-] Found instance
'instance-00000015' in DB but no VM. State=5, so setting state to
shutoff.
2011-07-11 11:01:51,970 INFO nova.compute.manager [-] Found instance
'instance-00000015' in DB but no VM. State=5, so setting state to
shutoff.
2011-07-11 11:02:52,050 INFO nova.compute.manager [-] Found instance
'instance-00000015' in DB but no VM. State=5, so setting state to
shutoff.
2011-07-11 11:03:52,130 INFO nova.compute.manager [-] Found instance
'instance-00000015' in DB but no VM. State=5, so setting state to
shutoff.

Services running:

root@cluster1:/home/cluster/OpenStack-org# nova-manage service list
cluster1   nova-network enabled  :-) 2011-07-11 16:07:40
cluster1   nova-compute enabled  :-) 2011-07-11 16:07:44
cluster1   nova-scheduler enabled  :-) 2011-07-11 16:07:45
cluster10  nova-compute enabled  :-) 2011-07-11 16:07:39

Can anyone help me with this please?

Thanks

--
Edier Alberto Zapata Hernández
Ingeniero de Sistemas
Universidad de Valle


Help: Instances don't start

Shang Wu
Hi Edier,

Can you run the command:

sudo kvm-ok

and see whether VT is enabled in the BIOS? I had a similar issue before, and there may also be related entries in nova-compute.log. Try searching that log for "bios" and see if you find anything there.
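If kvm-ok is not available on the node, a quick manual check (a sketch, assuming an x86 Linux host) is:

```shell
# Count hardware-virtualization flags in the CPU feature list:
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means the CPU, or the
# BIOS setting, does not expose hardware virtualization to the OS.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true

# The kvm kernel modules should also be loaded on the compute node:
lsmod 2>/dev/null | grep kvm || true
```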

Shang Wu



Help: Instances don't start

Edier Zapata
In reply to this post by Edier Zapata
Here is the log of the 2nd compute node (cluster10) when I run the
instance; the 1st compute node is the main server:

cluster@cluster10:~$ tail -f /var/log/nova/nova-compute.log

2011-07-11 13:02:10,388 DEBUG nova.rpc [-] received
{u'_context_request_id': u'VA0-3-5VJSCL9TMLWRSX',
u'_context_read_deleted': False, u'args': {u'instance_id': 22,
u'injected_files': None, u'availability_zone': None},
u'_context_is_admin': True, u'_context_timestamp':
u'2011-07-11T18:02:09Z', u'_context_user': u'osadmin', u'method':
u'run_instance', u'_context_project': u'Univalle',
u'_context_remote_address': u'192.168.28.50'} from (pid=3757) _receive
/usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-07-11 13:02:10,389 DEBUG nova.rpc [-] unpacked context:
{'timestamp': u'2011-07-11T18:02:09Z', 'remote_address':
u'192.168.28.50', 'project': u'Univalle', 'is_admin': True, 'user':
u'osadmin', 'request_id': u'VA0-3-5VJSCL9TMLWRSX', 'read_deleted':
False} from (pid=3757) _unpack_context
/usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-07-11 13:02:10,545 AUDIT nova.compute.manager
[VA0-3-5VJSCL9TMLWRSX osadmin Univalle] instance 22: starting...
2011-07-11 13:02:10,735 DEBUG nova.rpc [-] Making asynchronous call on
network.cluster1 ... from (pid=3757) call
/usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-07-11 13:02:10,735 DEBUG nova.rpc [-] MSG_ID is
e5033ff75a554f58b6bf065bd6e0148f from (pid=3757) call
/usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-07-11 13:02:11,509 DEBUG nova.utils [-] Attempting to grab
semaphore "ensure_bridge" for method "ensure_bridge"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:11,510 DEBUG nova.utils [-] Attempting to grab file
lock "ensure_bridge" for method "ensure_bridge"... from (pid=3757)
inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 13:02:11,511 DEBUG nova.utils [-] Running cmd (subprocess):
ip link show dev br100 from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:11,723 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: starting toXML method from (pid=3757) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
2011-07-11 13:02:11,896 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: finished toXML method from (pid=3757) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:1041
2011-07-11 13:02:11,965 INFO nova [-] called setup_basic_filtering in nwfilter
2011-07-11 13:02:11,965 INFO nova [-] ensuring static filters
2011-07-11 13:02:12,031 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3b77490>
2011-07-11 13:02:12,032 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x3b77510>
2011-07-11 13:02:12,037 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "apply"... from (pid=3757) inner
/usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,038 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "apply"... from (pid=3757) inner
/usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 13:02:12,043 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t filter from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,068 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,095 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t nat from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,119 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=3757) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,170 DEBUG nova.utils [-] Running cmd (subprocess):
mkdir -p /var/lib/nova/instances/instance-00000016/ from (pid=3757)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,187 INFO nova.virt.libvirt_conn [-] instance
instance-00000016: Creating image
2011-07-11 13:02:12,272 DEBUG nova.utils [-] Attempting to grab
semaphore "6332e1f2" for method "call_if_not_exists"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,272 DEBUG nova.utils [-] Running cmd (subprocess):
cp /var/lib/nova/instances/_base/6332e1f2
/var/lib/nova/instances/instance-00000016/kernel from (pid=3757)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 13:02:12,304 DEBUG nova.utils [-] Attempting to grab
semaphore "11412fa5" for method "call_if_not_exists"... from
(pid=3757) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 13:02:12,313 ERROR nova.compute.manager
[VA0-3-5VJSCL9TMLWRSX osadmin Univalle] Instance '22' failed to spawn.
Is virtualization enabled in the BIOS?
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 234, in
run_instance
(nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.compute.manager): TRACE:     return f(*args, **kw)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 616, in
spawn
(nova.compute.manager): TRACE:     self._create_image(instance, xml,
network_info)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 849, in
_create_image
(nova.compute.manager): TRACE:     project=project)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 776, in
_cache_image
(nova.compute.manager): TRACE:     call_if_not_exists(base, fn, *args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/utils.py", line 607, in inner
(nova.compute.manager): TRACE:     retval = f(*args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 774, in
call_if_not_exists
(nova.compute.manager): TRACE:     fn(target=base, *args, **kwargs)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 787, in
_fetch_image
(nova.compute.manager): TRACE:     images.fetch(image_id, target, user, project)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/images.py", line 51, in fetch
(nova.compute.manager): TRACE:     metadata =
image_service.get(elevated, image_id, image_file)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/image/local.py", line 116, in get
(nova.compute.manager): TRACE:     raise exception.NotFound
(nova.compute.manager): TRACE: NotFound: None
(nova.compute.manager): TRACE:
2011-07-11 13:03:02,495 INFO nova.compute.manager [-] Found instance
'instance-00000016' in DB but no VM. State=8, so setting state to
shutoff.
2011-07-11 13:03:02,496 INFO nova.compute.manager [-] DB/VM state
mismatch. Changing state from '8' to '5'
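The NotFound in the traceback above is raised by nova's local image service, i.e. the image files were not found on this node. A quick check (the path is an assumption: nova's --images_path flag commonly points at /var/lib/nova/images; verify against nova.conf on the node):

```shell
# List the node's local image cache; if the directory is absent or
# empty, the image was never transferred to this compute node.
ls /var/lib/nova/images 2>/dev/null || echo "images path missing on this node"
```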


Thanks / Gracias


--
Edier Alberto Zapata Hernández
Ingeniero de Sistemas
Universidad de Valle


Help: Instances don't start

Edier Zapata
In reply to this post by Edier Zapata
Hi again. I just checked the BIOS and VT is enabled. I also rebooted the
node and launched the instance again, and got the same error:

root@cluster1:/home/cluster/OpenStack-org# euca-run-instances $emi -k
mykey -t m1.tiny
RESERVATION     r-n3iuhqxp      Univalle        default
INSTANCE        i-0000001a      ami-1f75034f
scheduling      mykey (Univalle, None)  0         m1.tiny
2011-07-11T21:16:48Z    unknown zone

root@cluster1:/home/cluster/OpenStack-org# euca-describe-instances
RESERVATION     r-n3iuhqxp      Univalle        default
INSTANCE        i-0000001a      ami-1f75034f    192.168.29.4
192.168.29.4    launching       mykey (Univalle, cluster10)        0
            m1.tiny 2011-07-11T21:16:48Z    nova

root@cluster1:/home/cluster/OpenStack-org# euca-describe-instances
RESERVATION     r-n3iuhqxp      Univalle        default
INSTANCE        i-0000001a      ami-1f75034f    192.168.29.4
192.168.29.4    shutdown        mykey (Univalle, cluster10)        0
            m1.tiny 2011-07-11T21:16:48Z    nova

I didn't install anything but Nova. Is Glance required to run
instances on multiple nodes? Do you have any idea what's wrong?

This is the Node's nova-compute.log:

root@cluster10:/home/cluster# tail -f /var/log/nova/nova-compute.log
2011-07-11 16:16:48,874 DEBUG nova.rpc [-] received
{u'_context_request_id': u'VPFPV7A78LIOTSM6PD98',
u'_context_read_deleted': False, u'args': {u'instance_id': 26,
u'injected_files': None, u'availability_zone': None},
u'_context_is_admin': True, u'_context_timestamp':
u'2011-07-11T21:16:48Z', u'_context_user': u'osadmin', u'method':
u'run_instance', u'_context_project': u'Univalle',
u'_context_remote_address': u'192.168.28.50'} from (pid=1629) _receive
/usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-07-11 16:16:48,875 DEBUG nova.rpc [-] unpacked context:
{'timestamp': u'2011-07-11T21:16:48Z', 'remote_address':
u'192.168.28.50', 'project': u'Univalle', 'is_admin': True, 'user':
u'osadmin', 'request_id': u'VPFPV7A78LIOTSM6PD98', 'read_deleted':
False} from (pid=1629) _unpack_context
/usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-07-11 16:16:49,030 AUDIT nova.compute.manager
[VPFPV7A78LIOTSM6PD98 osadmin Univalle] instance 26: starting...
2011-07-11 16:16:49,255 DEBUG nova.rpc [-] Making asynchronous call on
network.cluster1 ... from (pid=1629) call
/usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-07-11 16:16:49,255 DEBUG nova.rpc [-] MSG_ID is
d5605654cd9c4002b35ee1593f66465f from (pid=1629) call
/usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-07-11 16:16:49,962 DEBUG nova.utils [-] Attempting to grab
semaphore "ensure_bridge" for method "ensure_bridge"... from
(pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 16:16:49,963 DEBUG nova.utils [-] Attempting to grab file
lock "ensure_bridge" for method "ensure_bridge"... from (pid=1629)
inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 16:16:49,964 DEBUG nova.utils [-] Running cmd (subprocess):
ip link show dev br100 from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,161 DEBUG nova.virt.libvirt_conn [-] instance
instance-0000001a: starting toXML method from (pid=1629) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
2011-07-11 16:16:50,234 DEBUG nova.virt.libvirt_conn [-] instance
instance-0000001a: finished toXML method from (pid=1629) to_xml
/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:1041
2011-07-11 16:16:50,302 INFO nova [-] called setup_basic_filtering in nwfilter
2011-07-11 16:16:50,302 INFO nova [-] ensuring static filters
2011-07-11 16:16:50,376 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x336c8d0>
2011-07-11 16:16:50,376 INFO nova [-]
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x336c850>
2011-07-11 16:16:50,382 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "apply"... from (pid=1629) inner
/usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 16:16:50,383 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "apply"... from (pid=1629) inner
/usr/lib/pymodules/python2.6/nova/utils.py:599
2011-07-11 16:16:50,388 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t filter from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,412 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,436 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t nat from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,459 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,507 DEBUG nova.utils [-] Running cmd (subprocess):
mkdir -p /var/lib/nova/instances/instance-0000001a/ from (pid=1629)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,522 INFO nova.virt.libvirt_conn [-] instance
instance-0000001a: Creating image
2011-07-11 16:16:50,591 DEBUG nova.utils [-] Attempting to grab
semaphore "6332e1f2" for method "call_if_not_exists"... from
(pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 16:16:50,591 DEBUG nova.utils [-] Running cmd (subprocess):
cp /var/lib/nova/instances/_base/6332e1f2
/var/lib/nova/instances/instance-0000001a/kernel from (pid=1629)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,620 DEBUG nova.utils [-] Attempting to grab
semaphore "11412fa5" for method "call_if_not_exists"... from
(pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 16:16:50,620 DEBUG nova.utils [-] Running cmd (subprocess):
cp /var/lib/nova/instances/_base/11412fa5
/var/lib/nova/instances/instance-0000001a/ramdisk from (pid=1629)
execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,666 DEBUG nova.utils [-] Attempting to grab
semaphore "1f75034f_sm" for method "call_if_not_exists"... from
(pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-07-11 16:16:50,666 DEBUG nova.utils [-] Running cmd (subprocess):
qemu-img create -f qcow2 -o
cluster_size=2M,backing_file=/var/lib/nova/instances/_base/1f75034f_sm
/var/lib/nova/instances/instance-0000001a/disk from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:50,776 INFO nova.virt.libvirt_conn [-] instance
instance-0000001a: injecting key into image 527762255
2011-07-11 16:16:50,776 INFO nova.virt.libvirt_conn [-] instance
instance-0000001a: injecting net into image 527762255
2011-07-11 16:16:50,791 DEBUG nova.utils [-] Running cmd (subprocess):
sudo qemu-nbd -c /dev/nbd15
/var/lib/nova/instances/instance-0000001a/disk from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:51,852 DEBUG nova.utils [-] Running cmd (subprocess):
sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:51,887 DEBUG nova.utils [-] Result was 1 from
(pid=1629) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
2011-07-11 16:16:51,890 DEBUG nova.utils [-] Running cmd (subprocess):
sudo qemu-nbd -d /dev/nbd15 from (pid=1629) execute
/usr/lib/pymodules/python2.6/nova/utils.py:150
2011-07-11 16:16:51,922 WARNING nova.virt.libvirt_conn [-] instance
instance-0000001a: ignoring error injecting data into image 527762255
(Unexpected error while running command.
Command: sudo tune2fs -c 0 -i 0 /dev/nbd15
Exit code: 1
Stdout: 'tune2fs 1.41.11 (14-Mar-2010)\n'
Stderr: "tune2fs: Invalid argument while trying to open
/dev/nbd15\r\nCouldn't find valid filesystem superblock.\n")
2011-07-11 16:17:23,572 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 617, in
spawn
(nova.exception): TRACE:     domain = self._create_new_domain(xml)
(nova.exception): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 1079,
in _create_new_domain
(nova.exception): TRACE:     domain.createWithFlags(launch_flags)
(nova.exception): TRACE:   File
"/usr/lib/python2.6/dist-packages/libvirt.py", line 337, in
createWithFlags
(nova.exception): TRACE:     if ret == -1: raise libvirtError
('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error process exited
while connecting to monitor: char device redirected to /dev/pts/1
(nova.exception): TRACE: qemu: could not load kernel
'/var/lib/nova/instances/instance-0000001a/kernel': Permission denied
(nova.exception): TRACE:
(nova.exception): TRACE:
2011-07-11 16:17:23,600 ERROR nova.compute.manager
[VPFPV7A78LIOTSM6PD98 osadmin Univalle] Instance '26' failed to spawn.
Is virtualization enabled in the BIOS?
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 234, in
run_instance
(nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE:   File
"/usr/lib/pymodules/python2.6/nova/exception.py", line 126, in _wrap
(nova.compute.manager): TRACE:     raise Error(str(e))
(nova.compute.manager): TRACE: Error: internal error process exited
while connecting to monitor: char device redirected to /dev/pts/1
(nova.compute.manager): TRACE: qemu: could not load kernel
'/var/lib/nova/instances/instance-0000001a/kernel': Permission denied
(nova.compute.manager): TRACE:
(nova.compute.manager): TRACE:
2011-07-11 16:17:23,866 INFO nova.compute.manager [-] Found instance
'instance-0000001a' in DB but no VM. State=5, so setting state to
shutoff.
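For reference, a couple of checks that can narrow down a qemu "Permission denied" on the kernel file (the path is taken from the traceback above; libvirt-qemu as the qemu user is an assumption based on Ubuntu defaults):

```shell
# Check that the copied kernel exists and is readable by the user qemu
# runs as (often libvirt-qemu on Ubuntu).
ls -l /var/lib/nova/instances/instance-0000001a/kernel 2>/dev/null || true

# On Ubuntu, AppArmor denials for libvirt/qemu land in the kernel log:
dmesg 2>/dev/null | grep -i -E 'apparmor|denied' | tail || true
```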

Thank you.

--
Edier Alberto Zapata Hernández
Ingeniero de Sistemas
Universidad de Valle


nova-manage: unexpected keyword argument 'timeout' error

Sharif Islam
In reply to this post by Shang Wu

I am in the middle of testing my OpenStack installation. I created a
test project and an admin user. However, when I run this command, I
get the following error:


# nova-manage project zipfile myproject myadmin /root/creds/cred.zip

Possible wrong number of arguments supplied
project zipfile: Exports credentials for project to a zip file
        arguments: project_id user_id [filename='nova.zip]
2011-07-11 17:17:10,444 nova.root: wait() got an unexpected keyword
argument 'timeout'
(nova.root): TRACE: Traceback (most recent call last):
(nova.root): TRACE:   File "/usr/bin/nova-manage", line 694, in <module>
(nova.root): TRACE:     main()
(nova.root): TRACE:   File "/usr/bin/nova-manage", line 686, in main
(nova.root): TRACE:     fn(*argv)
(nova.root): TRACE:   File "/usr/bin/nova-manage", line 422, in zipfile
(nova.root): TRACE:     zip_file = self.manager.get_credentials(user_id,
project_id)
(nova.root): TRACE:   File
"/usr/lib/python2.6/site-packages/nova/auth/manager.py", line 689, in
get_credentials
(nova.root): TRACE:     private_key, signed_cert =
crypto.generate_x509_cert(user.id, pid)
(nova.root): TRACE:   File
"/usr/lib/python2.6/site-packages/nova/crypto.py", line 196, in
generate_x509_cert
(nova.root): TRACE:     utils.execute("openssl genrsa -out %s %s" %
(keyfile, bits))
(nova.root): TRACE:   File
"/usr/lib/python2.6/site-packages/nova/utils.py", line 138, in execute
(nova.root): TRACE:     result = obj.communicate()
(nova.root): TRACE:   File "/usr/lib64/python2.6/subprocess.py", line
725, in communicate
(nova.root): TRACE:     stdout, stderr = self._communicate(input, endtime)
(nova.root): TRACE:   File "/usr/lib64/python2.6/subprocess.py", line
1322, in _communicate
(nova.root): TRACE:     self.wait(timeout=self._remaining_time(endtime))
(nova.root): TRACE: TypeError: wait() got an unexpected keyword argument
'timeout'
(nova.root): TRACE:

I am using RHEL 6.1 and python 2.6.6

# uname -ra
Linux i26 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 2011
x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep openssl
openssl-devel-1.0.0-10.el6_1.4.x86_64
openssl-1.0.0-10.el6_1.4.x86_64
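One thing worth checking (shown here with python3 for portability; the box in this thread runs Python 2.6) is which subprocess module, and which Popen.wait, the interpreter actually picks up. The traceback shows a backported _communicate() calling self.wait(timeout=...), so a wait() that has been replaced by a version without that keyword (eventlet monkey-patching, which Nova uses, is one suspect) would produce exactly this TypeError:

```shell
# Print which subprocess.py is loaded and whether Popen.wait accepts
# a timeout keyword; a mismatch between the two explains the error.
python3 - <<'EOF'
import inspect
import subprocess

print(subprocess.__file__)  # which subprocess.py is actually imported
print('timeout' in inspect.signature(subprocess.Popen.wait).parameters)
EOF
```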

# ps -eaf|grep nova
nova      3562     1  0 16:12 ?        00:00:00 /usr/bin/python
/usr/bin/nova-objectstore --flagfile=/etc/nova/nova.conf
--logfile=/var/log/nova/nova-objectstore.log --pidfile
/var/run/nova/nova-objectstore.pid
glance    3628     1  0 16:12 ?        00:00:00 /usr/bin/python
/usr/bin/glance-api --flagfile=/etc/nova/glance.conf --uid=495 --gid=99
--pidfile=/var/run/glance/glance.pid --logfile=/var/log/glance/glance.log
nova      3655     1  0 16:13 ?        00:00:00 /usr/bin/python
/usr/bin/nova-api --flagfile=/etc/nova/nova.conf
--logfile=/var/log/nova/nova-api.log --pidfile /var/run/nova/nova-api.pid
root      5118  3452  0 17:29 pts/1    00:00:00 grep nova


When I start nova-console, this is what happens:

# nova-console
2011-07-11 17:23:58,157 nova.root: Starting console node (version
2011.1.1-workspace:tarmac-20110224184504-4e19t5nx33b8gpy9)

(Not sure whether I am supposed to get a prompt here or not.) Any idea why I
am getting the "Possible wrong number of arguments supplied" message and the timeout error?

thanks.

--sharif




nova-manage: unexpected keyword argument 'timeout' error

Sharif Islam
FYI, I was able to get the credentials created with Diablo-2; I just
reinstalled everything with the new RPMs
(http://openstackgd.wordpress.com/2011/07/11/diablo-2-milestone-rpms/).

I also saw this post:
http://www.mail-archive.com/openstack at lists.launchpad.net/msg02545.html

but didn't try the patch.

--sharif


On 07/11/2011 05:30 PM, Sharif Islam wrote:

>
> I am in the middle of testing my openstack installation. I created a
> test project and admin user. However, when I run this, I get the
> following error:
>
>
> # nova-manage project zipfile myproject myadmin /root/creds/cred.zip
>
> Possible wrong number of arguments supplied
> project zipfile: Exports credentials for project to a zip file
>         arguments: project_id user_id [filename='nova.zip]
> 2011-07-11 17:17:10,444 nova.root: wait() got an unexpected keyword
> argument 'timeout'
> (nova.root): TRACE: Traceback (most recent call last):
> (nova.root): TRACE:   File "/usr/bin/nova-manage", line 694, in <module>
> (nova.root): TRACE:     main()
> (nova.root): TRACE:   File "/usr/bin/nova-manage", line 686, in main
> (nova.root): TRACE:     fn(*argv)
> (nova.root): TRACE:   File "/usr/bin/nova-manage", line 422, in zipfile
> (nova.root): TRACE:     zip_file = self.manager.get_credentials(user_id,
> project_id)
> (nova.root): TRACE:   File
> "/usr/lib/python2.6/site-packages/nova/auth/manager.py", line 689, in
> get_credentials
> (nova.root): TRACE:     private_key, signed_cert =
> crypto.generate_x509_cert(user.id, pid)
> (nova.root): TRACE:   File
> "/usr/lib/python2.6/site-packages/nova/crypto.py", line 196, in
> generate_x509_cert
> (nova.root): TRACE:     utils.execute("openssl genrsa -out %s %s" %
> (keyfile, bits))
> (nova.root): TRACE:   File
> "/usr/lib/python2.6/site-packages/nova/utils.py", line 138, in execute
> (nova.root): TRACE:     result = obj.communicate()
> (nova.root): TRACE:   File "/usr/lib64/python2.6/subprocess.py", line
> 725, in communicate
> (nova.root): TRACE:     stdout, stderr = self._communicate(input, endtime)
> (nova.root): TRACE:   File "/usr/lib64/python2.6/subprocess.py", line
> 1322, in _communicate
> (nova.root): TRACE:     self.wait(timeout=self._remaining_time(endtime))
> (nova.root): TRACE: TypeError: wait() got an unexpected keyword argument
> 'timeout'
> (nova.root): TRACE:
>
> I am using RHEL 6.1 and python 2.6.6
>
> # uname -ra
> Linux i26 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 2011
> x86_64 x86_64 x86_64 GNU/Linux
> # rpm -qa|grep openssl
> openssl-devel-1.0.0-10.el6_1.4.x86_64
> openssl-1.0.0-10.el6_1.4.x86_64
>
> # ps -eaf|grep nova
> nova      3562     1  0 16:12 ?        00:00:00 /usr/bin/python
> /usr/bin/nova-objectstore --flagfile=/etc/nova/nova.conf
> --logfile=/var/log/nova/nova-objectstore.log --pidfile
> /var/run/nova/nova-objectstore.pid
> glance    3628     1  0 16:12 ?        00:00:00 /usr/bin/python
> /usr/bin/glance-api --flagfile=/etc/nova/glance.conf --uid=495 --gid=99
> --pidfile=/var/run/glance/glance.pid --logfile=/var/log/glance/glance.log
> nova      3655     1  0 16:13 ?        00:00:00 /usr/bin/python
> /usr/bin/nova-api --flagfile=/etc/nova/nova.conf
> --logfile=/var/log/nova/nova-api.log --pidfile /var/run/nova/nova-api.pid
> root      5118  3452  0 17:29 pts/1    00:00:00 grep nova
>
>
> When I start nova-console, this is what happens:
>
> # nova-console
> 2011-07-11 17:23:58,157 nova.root: Starting console node (version
> 2011.1.1-workspace:tarmac-20110224184504-4e19t5nx33b8gpy9)
>
> (not sure if I am supposed to get a prompt here or not). Any idea why I
> am getting "Possible wrong number of arguments supplied" and a timeout error?
>
> thanks.
>
> --sharif
>
>
> _______________________________________________
> Openstack-operators mailing list
> Openstack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
Sharif Islam
Senior Systems Analyst/Programmer
FutureGrid (http://futuregrid.org)
Pervasive Technology Institute, Indiana University Bloomington


Help: Instances don't start

Christian Wittwer
In reply to this post by Edier Zapata
I had the same problems recently. The solution was to copy the
/var/lib/nova/instances/_base folder over to the compute node.
Somehow the images were not transferred from the controller to the compute nodes.

Chris
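[Editor's note: that workaround can be sketched as below. The path and image ids come from the logs in this thread; the remote copy itself is shown only as a comment (rsync over ssh is one assumed transport), and the runnable part uses a scratch directory to demonstrate what gets synced.]

```shell
#!/bin/sh
# Sketch of the workaround: copy the controller's cached base images to a
# compute node.  On a real deployment the destination is remote, e.g.:
#   rsync -a /var/lib/nova/instances/_base/ cluster10:/var/lib/nova/instances/_base/
# /tmp/nova-base-demo and the image ids below are stand-ins from this thread.
set -e
SRC=/tmp/nova-base-demo/controller/_base
DEST_PARENT=/tmp/nova-base-demo/compute
mkdir -p "$SRC" "$DEST_PARENT"
echo kernel  > "$SRC/6332e1f2"    # cached kernel id seen in the log
echo ramdisk > "$SRC/11412fa5"    # cached ramdisk id seen in the log
cp -a "$SRC" "$DEST_PARENT/"      # local stand-in for the remote rsync
ls "$DEST_PARENT/_base"
```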

2011/7/11 Edier Zapata <edalzap at gmail.com>:

> Hi again, I just checked the BIOS and VT is enabled. Btw, I rebooted the
> node and launched the instance again, and got the same error:
>
> root at cluster1:/home/cluster/OpenStack-org# euca-run-instances $emi -k
> mykey -t m1.tiny
> RESERVATION     r-n3iuhqxp      Univalle        default
> INSTANCE        i-0000001a      ami-1f75034f
> scheduling      mykey (Univalle, None)  0               m1.tiny
> 2011-07-11T21:16:48Z    unknown zone
>
> root at cluster1:/home/cluster/OpenStack-org# euca-describe-instances
> RESERVATION     r-n3iuhqxp      Univalle        default
> INSTANCE        i-0000001a      ami-1f75034f    192.168.29.4
> 192.168.29.4    launching       mykey (Univalle, cluster10)     0
>          m1.tiny 2011-07-11T21:16:48Z    nova
>
> root at cluster1:/home/cluster/OpenStack-org# euca-describe-instances
> RESERVATION     r-n3iuhqxp      Univalle        default
> INSTANCE        i-0000001a      ami-1f75034f    192.168.29.4
> 192.168.29.4    shutdown        mykey (Univalle, cluster10)     0
>          m1.tiny 2011-07-11T21:16:48Z    nova
>
> I didn't install anything but Nova. Is Glance required to run
> instances on multiple nodes? Do you have any idea what's wrong?
>
> This is the Node's nova-compute.log:
>
> root at cluster10:/home/cluster# tail -f /var/log/nova/nova-compute.log
> 2011-07-11 16:16:48,874 DEBUG nova.rpc [-] received
> {u'_context_request_id': u'VPFPV7A78LIOTSM6PD98',
> u'_context_read_deleted': False, u'args': {u'instance_id': 26,
> u'injected_files': None, u'availability_zone': None},
> u'_context_is_admin': True, u'_context_timestamp':
> u'2011-07-11T21:16:48Z', u'_context_user': u'osadmin', u'method':
> u'run_instance', u'_context_project': u'Univalle',
> u'_context_remote_address': u'192.168.28.50'} from (pid=1629) _receive
> /usr/lib/pymodules/python2.6/nova/rpc.py:167
> 2011-07-11 16:16:48,875 DEBUG nova.rpc [-] unpacked context:
> {'timestamp': u'2011-07-11T21:16:48Z', 'remote_address':
> u'192.168.28.50', 'project': u'Univalle', 'is_admin': True, 'user':
> u'osadmin', 'request_id': u'VPFPV7A78LIOTSM6PD98', 'read_deleted':
> False} from (pid=1629) _unpack_context
> /usr/lib/pymodules/python2.6/nova/rpc.py:331
> 2011-07-11 16:16:49,030 AUDIT nova.compute.manager
> [VPFPV7A78LIOTSM6PD98 osadmin Univalle] instance 26: starting...
> 2011-07-11 16:16:49,255 DEBUG nova.rpc [-] Making asynchronous call on
> network.cluster1 ... from (pid=1629) call
> /usr/lib/pymodules/python2.6/nova/rpc.py:350
> 2011-07-11 16:16:49,255 DEBUG nova.rpc [-] MSG_ID is
> d5605654cd9c4002b35ee1593f66465f from (pid=1629) call
> /usr/lib/pymodules/python2.6/nova/rpc.py:353
> 2011-07-11 16:16:49,962 DEBUG nova.utils [-] Attempting to grab
> semaphore "ensure_bridge" for method "ensure_bridge"... from
> (pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
> 2011-07-11 16:16:49,963 DEBUG nova.utils [-] Attempting to grab file
> lock "ensure_bridge" for method "ensure_bridge"... from (pid=1629)
> inner /usr/lib/pymodules/python2.6/nova/utils.py:599
> 2011-07-11 16:16:49,964 DEBUG nova.utils [-] Running cmd (subprocess):
> ip link show dev br100 from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,161 DEBUG nova.virt.libvirt_conn [-] instance
> instance-0000001a: starting toXML method from (pid=1629) to_xml
> /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
> 2011-07-11 16:16:50,234 DEBUG nova.virt.libvirt_conn [-] instance
> instance-0000001a: finished toXML method from (pid=1629) to_xml
> /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:1041
> 2011-07-11 16:16:50,302 INFO nova [-] called setup_basic_filtering in nwfilter
> 2011-07-11 16:16:50,302 INFO nova [-] ensuring static filters
> 2011-07-11 16:16:50,376 INFO nova [-]
> <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
> 0x336c8d0>
> 2011-07-11 16:16:50,376 INFO nova [-]
> <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
> 0x336c850>
> 2011-07-11 16:16:50,382 DEBUG nova.utils [-] Attempting to grab
> semaphore "iptables" for method "apply"... from (pid=1629) inner
> /usr/lib/pymodules/python2.6/nova/utils.py:594
> 2011-07-11 16:16:50,383 DEBUG nova.utils [-] Attempting to grab file
> lock "iptables" for method "apply"... from (pid=1629) inner
> /usr/lib/pymodules/python2.6/nova/utils.py:599
> 2011-07-11 16:16:50,388 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t filter from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,412 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,436 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t nat from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,459 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,507 DEBUG nova.utils [-] Running cmd (subprocess):
> mkdir -p /var/lib/nova/instances/instance-0000001a/ from (pid=1629)
> execute /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,522 INFO nova.virt.libvirt_conn [-] instance
> instance-0000001a: Creating image
> 2011-07-11 16:16:50,591 DEBUG nova.utils [-] Attempting to grab
> semaphore "6332e1f2" for method "call_if_not_exists"... from
> (pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
> 2011-07-11 16:16:50,591 DEBUG nova.utils [-] Running cmd (subprocess):
> cp /var/lib/nova/instances/_base/6332e1f2
> /var/lib/nova/instances/instance-0000001a/kernel from (pid=1629)
> execute /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,620 DEBUG nova.utils [-] Attempting to grab
> semaphore "11412fa5" for method "call_if_not_exists"... from
> (pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
> 2011-07-11 16:16:50,620 DEBUG nova.utils [-] Running cmd (subprocess):
> cp /var/lib/nova/instances/_base/11412fa5
> /var/lib/nova/instances/instance-0000001a/ramdisk from (pid=1629)
> execute /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,666 DEBUG nova.utils [-] Attempting to grab
> semaphore "1f75034f_sm" for method "call_if_not_exists"... from
> (pid=1629) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
> 2011-07-11 16:16:50,666 DEBUG nova.utils [-] Running cmd (subprocess):
> qemu-img create -f qcow2 -o
> cluster_size=2M,backing_file=/var/lib/nova/instances/_base/1f75034f_sm
> /var/lib/nova/instances/instance-0000001a/disk from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:50,776 INFO nova.virt.libvirt_conn [-] instance
> instance-0000001a: injecting key into image 527762255
> 2011-07-11 16:16:50,776 INFO nova.virt.libvirt_conn [-] instance
> instance-0000001a: injecting net into image 527762255
> 2011-07-11 16:16:50,791 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo qemu-nbd -c /dev/nbd15
> /var/lib/nova/instances/instance-0000001a/disk from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:51,852 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:51,887 DEBUG nova.utils [-] Result was 1 from
> (pid=1629) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
> 2011-07-11 16:16:51,890 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo qemu-nbd -d /dev/nbd15 from (pid=1629) execute
> /usr/lib/pymodules/python2.6/nova/utils.py:150
> 2011-07-11 16:16:51,922 WARNING nova.virt.libvirt_conn [-] instance
> instance-0000001a: ignoring error injecting data into image 527762255
> (Unexpected error while running command.
> Command: sudo tune2fs -c 0 -i 0 /dev/nbd15
> Exit code: 1
> Stdout: 'tune2fs 1.41.11 (14-Mar-2010)\n'
> Stderr: "tune2fs: Invalid argument while trying to open
> /dev/nbd15\r\nCouldn't find valid filesystem superblock.\n")
> 2011-07-11 16:17:23,572 ERROR nova.exception [-] Uncaught exception
> (nova.exception): TRACE: Traceback (most recent call last):
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
> (nova.exception): TRACE:     return f(*args, **kw)
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 617, in
> spawn
> (nova.exception): TRACE:     domain = self._create_new_domain(xml)
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 1079,
> in _create_new_domain
> (nova.exception): TRACE:     domain.createWithFlags(launch_flags)
> (nova.exception): TRACE:   File
> "/usr/lib/python2.6/dist-packages/libvirt.py", line 337, in
> createWithFlags
> (nova.exception): TRACE:     if ret == -1: raise libvirtError
> ('virDomainCreateWithFlags() failed', dom=self)
> (nova.exception): TRACE: libvirtError: internal error process exited
> while connecting to monitor: char device redirected to /dev/pts/1
> (nova.exception): TRACE: qemu: could not load kernel
> '/var/lib/nova/instances/instance-0000001a/kernel': Permission denied
> (nova.exception): TRACE:
> (nova.exception): TRACE:
> 2011-07-11 16:17:23,600 ERROR nova.compute.manager
> [VPFPV7A78LIOTSM6PD98 osadmin Univalle] Instance '26' failed to spawn.
> Is virtualization enabled in the BIOS?
> (nova.compute.manager): TRACE: Traceback (most recent call last):
> (nova.compute.manager): TRACE:   File
> "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 234, in
> run_instance
> (nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
> (nova.compute.manager): TRACE:   File
> "/usr/lib/pymodules/python2.6/nova/exception.py", line 126, in _wrap
> (nova.compute.manager): TRACE:     raise Error(str(e))
> (nova.compute.manager): TRACE: Error: internal error process exited
> while connecting to monitor: char device redirected to /dev/pts/1
> (nova.compute.manager): TRACE: qemu: could not load kernel
> '/var/lib/nova/instances/instance-0000001a/kernel': Permission denied
> (nova.compute.manager): TRACE:
> (nova.compute.manager): TRACE:
> 2011-07-11 16:17:23,866 INFO nova.compute.manager [-] Found instance
> 'instance-0000001a' in DB but no VM. State=5, so setting state to
> shutoff.
>
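[Editor's note: for the "could not load kernel ...: Permission denied" failure, one common culprit (an assumption here; an AppArmor/SELinux policy denial is another) is that qemu drops to an unprivileged user that cannot read the files copied into the instance directory. A minimal sketch of the permission difference, using a scratch directory in place of /var/lib/nova/instances:]

```shell
#!/bin/sh
# qemu runs as an unprivileged user, so the kernel copied into the instance
# directory must be readable by it.  /tmp/nova-perm-demo stands in for
# /var/lib/nova/instances; instance-0000001a is the id from the log above.
set -e
INSTDIR=/tmp/nova-perm-demo/instance-0000001a
mkdir -p "$INSTDIR"
: > "$INSTDIR/kernel"
chmod 600 "$INSTDIR/kernel"       # owner-only: qemu's user would get EACCES
stat -c %a "$INSTDIR/kernel"      # prints 600
chmod 644 "$INSTDIR/kernel"       # world-readable: qemu can load it
stat -c %a "$INSTDIR/kernel"      # prints 644
```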
> Thank you.
>
> --
> Edier Alberto Zapata Hernández
> Ingeniero de Sistemas
> Universidad de Valle
>