Controller hostname - some questions

Controller hostname - some questions

Leandro Reox
Hi all,

I've been working on controller/network node HA for a while now, but at this
point I'm having an issue that maybe someone has faced before.
When I switch the controller to a spare one, the compute nodes keep
looking for "network.$oldcontrollerhostname". Is there a place where the
controller's hostname is stored, maybe a field in the database? The
instances get stuck in "networking" status.

The entry in nova-compute.log on the compute node that is trying to spawn
the instance is clear:

DEBUG nova.rpc [-] Making asynchronous call on network.controller1 ... from
(pid=4440) call /usr/lib/pymodules/python2.6/nova/rpc.py:350

Where "controller1" is the OLD controller/network node
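For context, the queue name in that log line is just the service topic joined with the host recorded for the network service, which is why the compute nodes keep addressing the old controller. A minimal Python sketch of that naming scheme (the helper name here is hypothetical, not nova's actual API):

```python
# Sketch of how the per-host RPC queue name in the log line is formed:
# the service topic plus the host name. Compute nodes keep calling
# "network.controller1" until the stored host changes.

def per_host_topic(topic, host):
    """Build the per-host message queue name, e.g. 'network.controller1'."""
    return '%s.%s' % (topic, host)

print(per_host_topic('network', 'controller1'))  # network.controller1
print(per_host_topic('network', 'controller2'))  # network.controller2
```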

Any clues ?

Regards


[Openstack] Controller hostname - some questions

Vishvananda Ishaya
Generally the easiest fix is to give the new machine the same hostname.

You can also update the references to the host in the db:

update networks set host='newhostname' where host='oldhostname';
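A sketch of what that UPDATE does, run against a throwaway in-memory SQLite table (nova actually uses MySQL, and the real networks table has many more columns; the hostnames are illustrative):

```python
import sqlite3

# Throwaway table reduced to the one column involved in the fix.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, host TEXT)")
conn.execute("INSERT INTO networks (host) VALUES ('controller1')")

# The update from the message, with the values quoted as string literals:
conn.execute("UPDATE networks SET host = 'controller2' "
             "WHERE host = 'controller1'")

hosts = [row[0] for row in conn.execute("SELECT host FROM networks")]
print(hosts)  # ['controller2']
```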

Vish



[Openstack] Controller hostname - some questions

Leandro Reox
Yes, I already tried that and it works, but I was thinking of a scheme where
all nodes have the network and scheduler services installed and I can switch
them via a custom heartbeat resource. Everything worked except that. Maybe I
can update the controller hostname on switchover.

Would using a VIP-like DNS entry and pointing nova.conf at it change that
behavior, or will the controller still be resolved via its hostname?

I guess not.

regards


[Openstack] Controller hostname - some questions

Vishvananda Ishaya
You can also start the services on the new host with --host=&lt;hostname&gt;; they will basically pretend to have the hostname you specify.
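For example, in the nova flag file (the hostname value is illustrative; this is the 2011-era flagfile syntax):

```
# /etc/nova/nova.conf on the standby node: report the old controller's
# hostname so compute nodes keep reaching the "network.controller1" queue.
--host=controller1
```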

Vish
