[OCTAVIA][QUEENS][KOLLA] - network/subnet not found.

classic Classic list List threaded Threaded
24 messages Options
12
Reply | Threaded
Open this post in threaded view
|

[OCTAVIA][QUEENS][KOLLA] - network/subnet not found.

Gaël THEROND
Hi guys,

I'm back to business with Octavia after a long time but I'm facing an issue that seems a little bit tricky.

When trying to create a LB using either APIs (cURL/postman) calls or openstack-client the request finish with an error such as:

`Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400)`

If I put my client or the Octavia api in DEBUG mode, I found out neutron to correctly sending back to him a RESP BODY with the requested network/subnet in it.

Here is the stacktrace that I get from both, Openstack client and the Octavia API logs:

```
POST call to https://api-emea-west-az1.cloud.inkdrop.sh:9876/v2.0/lbaas/loadbalancers used request id req-2f929192-4e60-491b-b65d-3a7bef43e978
Request returned failure status: 400
Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
Traceback (most recent call last):
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper
    response = func(*args, **kwargs)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create
    response = self.create(url, **params)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create
    ret = self._request(method, url, session=session, **params)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request
    return session.request(url, method, **kwargs)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request
    raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run
    return super(Command, self).run(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run
    column_names, data = self.take_action(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action
    json=body)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper
    request_id=e.request_id)
octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
clean_up CreateLoadBalancer: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
Traceback (most recent call last):
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper
    response = func(*args, **kwargs)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create
    response = self.create(url, **params)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create
    ret = self._request(method, url, session=session, **params)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request
    return session.request(url, method, **kwargs)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request
    raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 135, in run
    ret_val = super(OpenStackShell, self).run(argv)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 281, in run
    result = self.run_subcommand(remainder)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 175, in run_subcommand
    ret_value = super(OpenStackShell, self).run_subcommand(argv)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run
    return super(Command, self).run(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run
    column_names, data = self.take_action(parsed_args)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action
    json=body)
  File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper
    request_id=e.request_id)
octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)

END return value: 1
```
I'm using the following openstack clients and libraries:

```
keystoneauth1==3.11.0
kolla-ansible==6.1.0
openstacksdk==0.17.2
os-client-config==1.31.2
os-service-types==1.3.0
osc-lib==1.11.1
oslo.config==6.5.1
oslo.context==2.21.0
oslo.i18n==3.22.1
oslo.log==3.40.1
oslo.serialization==2.28.1
oslo.utils==3.37.0
python-cinderclient==4.1.0
python-dateutil==2.7.3
python-glanceclient==2.12.1
python-keystoneclient==3.17.0
python-neutronclient==6.10.0
python-novaclient==11.0.0
python-octaviaclient==1.6.0
python-openstackclient==3.16.1
```
If on the same virtualenv I'm doing:

`openstack --os-cloud ic-emea-west-az0 --os-region ic-emea-west-az1 network list`

I correctly get my requested network/subnet id.

I'm using Kolla to deploy octavia and get the exact same issue with all the kolla 6.0.0 to 6.1.0 serie.

If anyone have an idea, I'm all in ^^



_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [OCTAVIA][QUEENS][KOLLA] - network/subnet not found.

Michael Johnson
Hi there.

I'm not sure what is happening there and I don't use kolla, so I need
to ask a few more questions.

Is that network ID being used for the VIP or the lb-mgmt-net?

Any chance you can provide a debug log paste from the API process for
this request?

Basically it is saying that network ID is invalid for the user making
the request.
This can happen if the user token being used doesn't have access to
that network, as an example.

It could also be a permissions issue with the service auth account
being used for the Octavia API, but this is unlikely.

Michael

On Thu, Oct 18, 2018 at 8:51 AM Gaël THEROND <[hidden email]> wrote:

>
> Hi guys,
>
> I'm back to business with Octavia after a long time but I'm facing an issue that seems a little bit tricky.
>
> When trying to create a LB using either APIs (cURL/postman) calls or openstack-client the request finish with an error such as:
>
> `Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400)`
>
> If I put my client or the Octavia api in DEBUG mode, I found out neutron to correctly sending back to him a RESP BODY with the requested network/subnet in it.
>
> Here is the stacktrace that I get from both, Openstack client and the Octavia API logs:
>
> ```
> POST call to https://api-emea-west-az1.cloud.inkdrop.sh:9876/v2.0/lbaas/loadbalancers used request id req-2f929192-4e60-491b-b65d-3a7bef43e978
> Request returned failure status: 400
> Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
> Traceback (most recent call last):
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper
>     response = func(*args, **kwargs)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create
>     response = self.create(url, **params)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create
>     ret = self._request(method, url, session=session, **params)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request
>     return session.request(url, method, **kwargs)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request
>     raise exceptions.from_response(resp, method, url)
> keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand
>     result = cmd.run(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run
>     return super(Command, self).run(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run
>     column_names, data = self.take_action(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action
>     json=body)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper
>     request_id=e.request_id)
> octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
> clean_up CreateLoadBalancer: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
> Traceback (most recent call last):
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 29, in wrapper
>     response = func(*args, **kwargs)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 92, in load_balancer_create
>     response = self.create(url, **params)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 164, in create
>     ret = self._request(method, url, session=session, **params)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/api/api.py", line 141, in _request
>     return session.request(url, method, **kwargs)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/keystoneauth1/session.py", line 869, in request
>     raise exceptions.from_response(resp, method, url)
> keystoneauth1.exceptions.http.BadRequest: Bad Request (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 135, in run
>     ret_val = super(OpenStackShell, self).run(argv)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 281, in run
>     result = self.run_subcommand(remainder)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/shell.py", line 175, in run_subcommand
>     ret_value = super(OpenStackShell, self).run_subcommand(argv)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/app.py", line 402, in run_subcommand
>     result = cmd.run(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/osc_lib/command/command.py", line 41, in run
>     return super(Command, self).run(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/cliff/display.py", line 116, in run
>     column_names, data = self.take_action(parsed_args)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/osc/v2/load_balancer.py", line 121, in take_action
>     json=body)
>   File "/home/flint/.virtualenvs/ic-emea-az1/lib/python3.4/site-packages/octaviaclient/api/v2/octavia.py", line 38, in wrapper
>     request_id=e.request_id)
> octaviaclient.api.v2.octavia.OctaviaClientException: Network c0d40dfd-123e-4a3c-92de-eb7b57178dd3 not found. (HTTP 400) (Request-ID: req-2f929192-4e60-491b-b65d-3a7bef43e978)
>
> END return value: 1
> ```
> I'm using the following openstack clients and libraries:
>
> ```
> keystoneauth1==3.11.0
> kolla-ansible==6.1.0
> openstacksdk==0.17.2
> os-client-config==1.31.2
> os-service-types==1.3.0
> osc-lib==1.11.1
> oslo.config==6.5.1
> oslo.context==2.21.0
> oslo.i18n==3.22.1
> oslo.log==3.40.1
> oslo.serialization==2.28.1
> oslo.utils==3.37.0
> python-cinderclient==4.1.0
> python-dateutil==2.7.3
> python-glanceclient==2.12.1
> python-keystoneclient==3.17.0
> python-neutronclient==6.10.0
> python-novaclient==11.0.0
> python-octaviaclient==1.6.0
> python-openstackclient==3.16.1
> ```
> If on the same virtualenv I'm doing:
>
> `openstack --os-cloud ic-emea-west-az0 --os-region ic-emea-west-az1 network list`
>
> I correctly get my requested network/subnet id.
>
> I'm using Kolla to deploy octavia and get the exact same issue with all the kolla 6.0.0 to 6.1.0 serie.
>
> If anyone have an idea, I'm all in ^^
>
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

[octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
Hi,

We did test Octavia with Pike (DVR deployment) and everything was
working right our of the box. We changed our underlay network to a
Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
to have that much cables in a rack.

Octavia is not working right now as the lb-mgmt-net does not exist on
the compute nodes nor does a br-ex.

The control nodes running

octavia_worker
octavia_housekeeping
octavia_health_manager
octavia_api

and as far as I understood octavia_worker, octavia_housekeeping and
octavia_health_manager have to talk to the amphora instances. But the
control nodes are spread over three different leafs. So each control
node in a different L2 domain.

So the question is how to deploy a lb-mgmt-net network in our setup?

- Compute nodes have no "stretched" L2 domain
- Control nodes, compute nodes and network nodes are in L3 networks like
api, storage, ...
- Only network nodes are connected to a L2 domain (with a separated NIC)
providing the "public" network


All the best,
Florian

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Erik McCormick
On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
<[hidden email]> wrote:

>
> Hi,
>
> We did test Octavia with Pike (DVR deployment) and everything was
> working right our of the box. We changed our underlay network to a
> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
> to have that much cables in a rack.
>
> Octavia is not working right now as the lb-mgmt-net does not exist on
> the compute nodes nor does a br-ex.
>
> The control nodes running
>
> octavia_worker
> octavia_housekeeping
> octavia_health_manager
> octavia_api
>
> and as far as I understood octavia_worker, octavia_housekeeping and
> octavia_health_manager have to talk to the amphora instances. But the
> control nodes are spread over three different leafs. So each control
> node in a different L2 domain.
>
> So the question is how to deploy a lb-mgmt-net network in our setup?
>
> - Compute nodes have no "stretched" L2 domain
> - Control nodes, compute nodes and network nodes are in L3 networks like
> api, storage, ...
> - Only network nodes are connected to a L2 domain (with a separated NIC)
> providing the "public" network
>
You'll need to add a new bridge to your compute nodes and create a
provider network associated with that bridge. In my setup this is
simply a flat network tied to a tagged interface. In your case it
probably makes more sense to make a new VNI and create a vxlan
provider network. The routing in your switches should handle the rest.

-Erik
>
> All the best,
> Florian
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
Am 10/23/18 um 3:20 PM schrieb Erik McCormick:

> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> <[hidden email]> wrote:
>>
>> Hi,
>>
>> We did test Octavia with Pike (DVR deployment) and everything was
>> working right our of the box. We changed our underlay network to a
>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
>> to have that much cables in a rack.
>>
>> Octavia is not working right now as the lb-mgmt-net does not exist on
>> the compute nodes nor does a br-ex.
>>
>> The control nodes running
>>
>> octavia_worker
>> octavia_housekeeping
>> octavia_health_manager
>> octavia_api
>>
>> and as far as I understood octavia_worker, octavia_housekeeping and
>> octavia_health_manager have to talk to the amphora instances. But the
>> control nodes are spread over three different leafs. So each control
>> node in a different L2 domain.
>>
>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>
>> - Compute nodes have no "stretched" L2 domain
>> - Control nodes, compute nodes and network nodes are in L3 networks like
>> api, storage, ...
>> - Only network nodes are connected to a L2 domain (with a separated NIC)
>> providing the "public" network
>>
> You'll need to add a new bridge to your compute nodes and create a
> provider network associated with that bridge. In my setup this is
> simply a flat network tied to a tagged interface. In your case it
> probably makes more sense to make a new VNI and create a vxlan
> provider network. The routing in your switches should handle the rest.
Ok that's what I try right now. But I don't get how to setup something
like a VxLAN provider Network. I thought only vlan and flat is supported
as provider network? I guess it is not possible to use the tunnel
interface that is used for tenant networks?
So I have to create a separated VxLAN on the control and compute nodes like:

# ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
dev vlan3535 ttl 5
# ip addr add 172.16.1.11/20 dev vxoctavia
# ip link set vxoctavia up

and use it like a flat provider network, true?



>
> -Erik
>>
>> All the best,
>> Florian
>> _______________________________________________
>> OpenStack-operators mailing list
>> [hidden email]
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
In reply to this post by Florian Engelmann
Is there any kolla-ansible Octavia guide?

Am 10/23/18 um 1:52 PM schrieb Florian Engelmann:

> Hi,
>
> We did test Octavia with Pike (DVR deployment) and everything was
> working right our of the box. We changed our underlay network to a
> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
> to have that much cables in a rack.
>
> Octavia is not working right now as the lb-mgmt-net does not exist on
> the compute nodes nor does a br-ex.
>
> The control nodes running
>
> octavia_worker
> octavia_housekeeping
> octavia_health_manager
> octavia_api
>
> and as far as I understood octavia_worker, octavia_housekeeping and
> octavia_health_manager have to talk to the amphora instances. But the
> control nodes are spread over three different leafs. So each control
> node in a different L2 domain.
>
> So the question is how to deploy a lb-mgmt-net network in our setup?
>
> - Compute nodes have no "stretched" L2 domain
> - Control nodes, compute nodes and network nodes are in L3 networks like
> api, storage, ...
> - Only network nodes are connected to a L2 domain (with a separated NIC)
> providing the "public" network
>
>
> All the best,
> Florian
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Erik McCormick
In reply to this post by Florian Engelmann
So in your other email you said asked if there was a guide for
deploying it with Kolla ansible...

Oh boy. No there's not. I don't know if you've seen my recent mails on
Octavia, but I am going through this deployment process with
kolla-ansible right now and it is lacking in a few areas.

If you plan to use different CA certificates for client and server in
Octavia, you'll need to add that into the playbook. Presently it only
copies over ca_01.pem, cacert.key, and client.pem and uses them for
everything. I was completely unable to make it work with only one CA
as I got some SSL errors. It passes gate though, so I aasume it must
work? I dunno.

Networking comments and a really messy kolla-ansible / octavia how-to below...

On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
<[hidden email]> wrote:

>
> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > <[hidden email]> wrote:
> >>
> >> Hi,
> >>
> >> We did test Octavia with Pike (DVR deployment) and everything was
> >> working right our of the box. We changed our underlay network to a
> >> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
> >> to have that much cables in a rack.
> >>
> >> Octavia is not working right now as the lb-mgmt-net does not exist on
> >> the compute nodes nor does a br-ex.
> >>
> >> The control nodes running
> >>
> >> octavia_worker
> >> octavia_housekeeping
> >> octavia_health_manager
> >> octavia_api
> >>
> >> and as far as I understood octavia_worker, octavia_housekeeping and
> >> octavia_health_manager have to talk to the amphora instances. But the
> >> control nodes are spread over three different leafs. So each control
> >> node in a different L2 domain.
> >>
> >> So the question is how to deploy a lb-mgmt-net network in our setup?
> >>
> >> - Compute nodes have no "stretched" L2 domain
> >> - Control nodes, compute nodes and network nodes are in L3 networks like
> >> api, storage, ...
> >> - Only network nodes are connected to a L2 domain (with a separated NIC)
> >> providing the "public" network
> >>
> > You'll need to add a new bridge to your compute nodes and create a
> > provider network associated with that bridge. In my setup this is
> > simply a flat network tied to a tagged interface. In your case it
> > probably makes more sense to make a new VNI and create a vxlan
> > provider network. The routing in your switches should handle the rest.
>
> Ok that's what I try right now. But I don't get how to setup something
> like a VxLAN provider Network. I thought only vlan and flat is supported
> as provider network? I guess it is not possible to use the tunnel
> interface that is used for tenant networks?
> So I have to create a separated VxLAN on the control and compute nodes like:
>
> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> dev vlan3535 ttl 5
> # ip addr add 172.16.1.11/20 dev vxoctavia
> # ip link set vxoctavia up
>
> and use it like a flat provider network, true?
>
This is a fine way of doing things, but it's only half the battle.
You'll need to add a bridge on the compute nodes and bind it to that
new interface. Something like this if you're using openvswitch:

docker exec openvswitch_db
/usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia

Also you'll want to remove the IP address from that interface as it's
going to be a bridge. Think of it like your public (br-ex) interface
on your network nodes.

From there you'll need to update the bridge mappings via kolla
overrides. This would usually be in /etc/kolla/config/neutron. Create
a subdirectory for your compute inventory group and create an
ml2_conf.ini there. So you'd end up with something like:

[root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
[ml2_type_flat]
flat_networks = mgmt-net

[ovs]
bridge_mappings = mgmt-net:br-mgmt

run kolla-ansible --tags neutron reconfigure to push out the new
configs. Note that there is a bug where the neutron containers may not
restart after the change, so you'll probably need to do a 'docker
container restart neutron_openvswitch_agent' on each compute node.

At this point, you'll need to create the provider network in the admin
project like:

openstack network create --provider-network-type flat
--provider-physical-network mgmt-net lb-mgmt-net

And then create a normal subnet attached to this network with some
largeish address scope. I wouldn't use 172.16.0.0/16 because docker
uses that by default. I'm not sure if it matters since the network
traffic will be isolated on a bridge, but it makes me paranoid so I
avoided it.

For your controllers, I think you can just let everything function off
your api interface since you're routing in your spines. Set up a
gateway somewhere from that lb-mgmt network and save yourself the
complication of adding an interface to your controllers. If you choose
to use a separate interface on your controllers, you'll need to make
sure this patch is in your kolla-ansible install or cherry pick it.

https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59

I don't think that's been backported at all, so unless you're running
off master you'll need to go get it.

From here on out, the regular Octavia instruction should serve you.
Create a flavor, Create a security group, and capture their UUIDs
along with the UUID of the provider network you made. Override them in
globals.yml with:

octavia_amp_boot_network_list: <uuid>
octavia_amp_secgroup_list: <uuid>
octavia_amp_flavor_id: <uuid>

This is all from my scattered notes and bad memory. Hopefully it makes
sense. Corrections welcome.

-Erik



>
>
> >
> > -Erik
> >>
> >> All the best,
> >> Florian
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> [hidden email]
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Michael Johnson
I am still catching up on e-mail from the weekend.

There are a lot of different options for how to implement the
lb-mgmt-network for the controller<->amphora communication. I can't
talk to what options Kolla provides, but I can talk to how Octavia
works.

One thing to note on the lb-mgmt-net issue, if you can setup routes
such that the controllers can reach the IP addresses used for the
lb-mgmt-net, and that the amphora can reach the controllers, Octavia
can run with a routed lb-mgmt-net setup. There is no L2 requirement
between the controllers and the amphora instances.

Michael

On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
<[hidden email]> wrote:

>
> So in your other email you said asked if there was a guide for
> deploying it with Kolla ansible...
>
> Oh boy. No there's not. I don't know if you've seen my recent mails on
> Octavia, but I am going through this deployment process with
> kolla-ansible right now and it is lacking in a few areas.
>
> If you plan to use different CA certificates for client and server in
> Octavia, you'll need to add that into the playbook. Presently it only
> copies over ca_01.pem, cacert.key, and client.pem and uses them for
> everything. I was completely unable to make it work with only one CA
> as I got some SSL errors. It passes gate though, so I aasume it must
> work? I dunno.
>
> Networking comments and a really messy kolla-ansible / octavia how-to below...
>
> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> <[hidden email]> wrote:
> >
> > Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> > > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > > <[hidden email]> wrote:
> > >>
> > >> Hi,
> > >>
> > >> We did test Octavia with Pike (DVR deployment) and everything was
> > >> working right our of the box. We changed our underlay network to a
> > >> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
> > >> to have that much cables in a rack.
> > >>
> > >> Octavia is not working right now as the lb-mgmt-net does not exist on
> > >> the compute nodes nor does a br-ex.
> > >>
> > >> The control nodes running
> > >>
> > >> octavia_worker
> > >> octavia_housekeeping
> > >> octavia_health_manager
> > >> octavia_api
> > >>
> > >> and as far as I understood octavia_worker, octavia_housekeeping and
> > >> octavia_health_manager have to talk to the amphora instances. But the
> > >> control nodes are spread over three different leafs. So each control
> > >> node in a different L2 domain.
> > >>
> > >> So the question is how to deploy a lb-mgmt-net network in our setup?
> > >>
> > >> - Compute nodes have no "stretched" L2 domain
> > >> - Control nodes, compute nodes and network nodes are in L3 networks like
> > >> api, storage, ...
> > >> - Only network nodes are connected to a L2 domain (with a separated NIC)
> > >> providing the "public" network
> > >>
> > > You'll need to add a new bridge to your compute nodes and create a
> > > provider network associated with that bridge. In my setup this is
> > > simply a flat network tied to a tagged interface. In your case it
> > > probably makes more sense to make a new VNI and create a vxlan
> > > provider network. The routing in your switches should handle the rest.
> >
> > Ok that's what I try right now. But I don't get how to setup something
> > like a VxLAN provider Network. I thought only vlan and flat is supported
> > as provider network? I guess it is not possible to use the tunnel
> > interface that is used for tenant networks?
> > So I have to create a separated VxLAN on the control and compute nodes like:
> >
> > # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> > dev vlan3535 ttl 5
> > # ip addr add 172.16.1.11/20 dev vxoctavia
> > # ip link set vxoctavia up
> >
> > and use it like a flat provider network, true?
> >
> This is a fine way of doing things, but it's only half the battle.
> You'll need to add a bridge on the compute nodes and bind it to that
> new interface. Something like this if you're using openvswitch:
>
> docker exec openvswitch_db
> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>
> Also you'll want to remove the IP address from that interface as it's
> going to be a bridge. Think of it like your public (br-ex) interface
> on your network nodes.
>
> From there you'll need to update the bridge mappings via kolla
> overrides. This would usually be in /etc/kolla/config/neutron. Create
> a subdirectory for your compute inventory group and create an
> ml2_conf.ini there. So you'd end up with something like:
>
> [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> [ml2_type_flat]
> flat_networks = mgmt-net
>
> [ovs]
> bridge_mappings = mgmt-net:br-mgmt
>
> run kolla-ansible --tags neutron reconfigure to push out the new
> configs. Note that there is a bug where the neutron containers may not
> restart after the change, so you'll probably need to do a 'docker
> container restart neutron_openvswitch_agent' on each compute node.
>
> At this point, you'll need to create the provider network in the admin
> project like:
>
> openstack network create --provider-network-type flat
> --provider-physical-network mgmt-net lb-mgmt-net
>
> And then create a normal subnet attached to this network with some
> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> uses that by default. I'm not sure if it matters since the network
> traffic will be isolated on a bridge, but it makes me paranoid so I
> avoided it.
>
> For your controllers, I think you can just let everything function off
> your api interface since you're routing in your spines. Set up a
> gateway somewhere from that lb-mgmt network and save yourself the
> complication of adding an interface to your controllers. If you choose
> to use a separate interface on your controllers, you'll need to make
> sure this patch is in your kolla-ansible install or cherry pick it.
>
> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>
> I don't think that's been backported at all, so unless you're running
> off master you'll need to go get it.
>
> From here on out, the regular Octavia instruction should serve you.
> Create a flavor, Create a security group, and capture their UUIDs
> along with the UUID of the provider network you made. Override them in
> globals.yml with:
>
> octavia_amp_boot_network_list: <uuid>
> octavia_amp_secgroup_list: <uuid>
> octavia_amp_flavor_id: <uuid>
>
> This is all from my scattered notes and bad memory. Hopefully it makes
> sense. Corrections welcome.
>
> -Erik
>
>
>
> >
> >
> > >
> > > -Erik
> > >>
> > >> All the best,
> > >> Florian
> > >> _______________________________________________
> > >> OpenStack-operators mailing list
> > >> [hidden email]
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Gaël THEROND
For the records, I’m actually working on a fairly large overhaul of our Openstack services deployement using Kolla-Ansible. We’re leveraging kolla-ansible to as smoothly as possible migrate all ou legacy architecture to a shiny new using exactly the same topology as described upper (Using cumulus/calico etc).

One of the new services that we try to provide with such method is Octavia.

As I too faced some trouble I find them not that hard to solve either by reading carefully the current APIs ref, guides available and source code or by asking for help right here.

People responding to octavia’s questions are IMHO blazing fast and really clear and add great details about internals mechanisms which is really appreciated.

As I’ve almost finish our own deployment I had noted almost all pitfalls that I faced and which part of the documentation that was missing.

I’ll finish my deployment and test and redact a clean (and I hope as complet as possible) documentation as I feel it’s something really needed.

On a side note regarding CA and SSL I had an issue that I solved by correctly rebuilding my amphora. Another tip and trick here is to use Barbican when possible as it really help a lot.

I hope it can help anyone else willing to use Octavia as I truly think this service is a huge addition to Openstack and its gaining more and more momentum since the Pike/Queens releases.

Le mar. 23 oct. 2018 à 19:49, Michael Johnson <[hidden email]> a écrit :
I am still catching up on e-mail from the weekend.

There are a lot of different options for how to implement the
lb-mgmt-network for the controller<->amphora communication. I can't
talk to what options Kolla provides, but I can talk to how Octavia
works.

One thing to note on the lb-mgmt-net issue, if you can setup routes
such that the controllers can reach the IP addresses used for the
lb-mgmt-net, and that the amphora can reach the controllers, Octavia
can run with a routed lb-mgmt-net setup. There is no L2 requirement
between the controllers and the amphora instances.

Michael

On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
<[hidden email]> wrote:
>
> So in your other email you said asked if there was a guide for
> deploying it with Kolla ansible...
>
> Oh boy. No there's not. I don't know if you've seen my recent mails on
> Octavia, but I am going through this deployment process with
> kolla-ansible right now and it is lacking in a few areas.
>
> If you plan to use different CA certificates for client and server in
> Octavia, you'll need to add that into the playbook. Presently it only
> copies over ca_01.pem, cacert.key, and client.pem and uses them for
> everything. I was completely unable to make it work with only one CA
> as I got some SSL errors. It passes gate though, so I aasume it must
> work? I dunno.
>
> Networking comments and a really messy kolla-ansible / octavia how-to below...
>
> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> <[hidden email]> wrote:
> >
> > Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> > > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > > <[hidden email]> wrote:
> > >>
> > >> Hi,
> > >>
> > >> We did test Octavia with Pike (DVR deployment) and everything was
> > >> working right our of the box. We changed our underlay network to a
> > >> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
> > >> to have that much cables in a rack.
> > >>
> > >> Octavia is not working right now as the lb-mgmt-net does not exist on
> > >> the compute nodes nor does a br-ex.
> > >>
> > >> The control nodes running
> > >>
> > >> octavia_worker
> > >> octavia_housekeeping
> > >> octavia_health_manager
> > >> octavia_api
> > >>
> > >> and as far as I understood octavia_worker, octavia_housekeeping and
> > >> octavia_health_manager have to talk to the amphora instances. But the
> > >> control nodes are spread over three different leafs. So each control
> > >> node in a different L2 domain.
> > >>
> > >> So the question is how to deploy a lb-mgmt-net network in our setup?
> > >>
> > >> - Compute nodes have no "stretched" L2 domain
> > >> - Control nodes, compute nodes and network nodes are in L3 networks like
> > >> api, storage, ...
> > >> - Only network nodes are connected to a L2 domain (with a separated NIC)
> > >> providing the "public" network
> > >>
> > > You'll need to add a new bridge to your compute nodes and create a
> > > provider network associated with that bridge. In my setup this is
> > > simply a flat network tied to a tagged interface. In your case it
> > > probably makes more sense to make a new VNI and create a vxlan
> > > provider network. The routing in your switches should handle the rest.
> >
> > Ok that's what I try right now. But I don't get how to setup something
> > like a VxLAN provider Network. I thought only vlan and flat is supported
> > as provider network? I guess it is not possible to use the tunnel
> > interface that is used for tenant networks?
> > So I have to create a separated VxLAN on the control and compute nodes like:
> >
> > # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> > dev vlan3535 ttl 5
> > # ip addr add 172.16.1.11/20 dev vxoctavia
> > # ip link set vxoctavia up
> >
> > and use it like a flat provider network, true?
> >
> This is a fine way of doing things, but it's only half the battle.
> You'll need to add a bridge on the compute nodes and bind it to that
> new interface. Something like this if you're using openvswitch:
>
> docker exec openvswitch_db
> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>
> Also you'll want to remove the IP address from that interface as it's
> going to be a bridge. Think of it like your public (br-ex) interface
> on your network nodes.
>
> From there you'll need to update the bridge mappings via kolla
> overrides. This would usually be in /etc/kolla/config/neutron. Create
> a subdirectory for your compute inventory group and create an
> ml2_conf.ini there. So you'd end up with something like:
>
> [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> [ml2_type_flat]
> flat_networks = mgmt-net
>
> [ovs]
> bridge_mappings = mgmt-net:br-mgmt
>
> run kolla-ansible --tags neutron reconfigure to push out the new
> configs. Note that there is a bug where the neutron containers may not
> restart after the change, so you'll probably need to do a 'docker
> container restart neutron_openvswitch_agent' on each compute node.
>
> At this point, you'll need to create the provider network in the admin
> project like:
>
> openstack network create --provider-network-type flat
> --provider-physical-network mgmt-net lb-mgmt-net
>
> And then create a normal subnet attached to this network with some
> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> uses that by default. I'm not sure if it matters since the network
> traffic will be isolated on a bridge, but it makes me paranoid so I
> avoided it.
>
> For your controllers, I think you can just let everything function off
> your api interface since you're routing in your spines. Set up a
> gateway somewhere from that lb-mgmt network and save yourself the
> complication of adding an interface to your controllers. If you choose
> to use a separate interface on your controllers, you'll need to make
> sure this patch is in your kolla-ansible install or cherry pick it.
>
> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>
> I don't think that's been backported at all, so unless you're running
> off master you'll need to go get it.
>
> From here on out, the regular Octavia instruction should serve you.
> Create a flavor, Create a security group, and capture their UUIDs
> along with the UUID of the provider network you made. Override them in
> globals.yml with:
>
> octavia_amp_boot_network_list: <uuid>
> octavia_amp_secgroup_list: <uuid>
> octavia_amp_flavor_id: <uuid>
>
> This is all from my scattered notes and bad memory. Hopefully it makes
> sense. Corrections welcome.
>
> -Erik
>
>
>
> >
> >
> > >
> > > -Erik
> > >>
> > >> All the best,
> > >> Florian
> > >> _______________________________________________
> > >> OpenStack-operators mailing list
> > >> [hidden email]
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
In reply to this post by Erik McCormick
Ohoh - thank you for your empathy :)
And those great details about how to setup this mgmt network.
I will try to do so this afternoon but solving that routing "puzzle"
(virtual network to control nodes) I will need our network guys to help
me out...

But I will need to tell all Amphorae a static route to the gateway that
is routing to the control nodes?


Am 10/23/18 um 6:57 PM schrieb Erik McCormick:

> So in your other email you said asked if there was a guide for
> deploying it with Kolla ansible...
>
> Oh boy. No there's not. I don't know if you've seen my recent mails on
> Octavia, but I am going through this deployment process with
> kolla-ansible right now and it is lacking in a few areas.
>
> If you plan to use different CA certificates for client and server in
> Octavia, you'll need to add that into the playbook. Presently it only
> copies over ca_01.pem, cacert.key, and client.pem and uses them for
> everything. I was completely unable to make it work with only one CA
> as I got some SSL errors. It passes gate though, so I aasume it must
> work? I dunno.
>
> Networking comments and a really messy kolla-ansible / octavia how-to below...
>
> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> <[hidden email]> wrote:
>>
>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>> <[hidden email]> wrote:
>>>>
>>>> Hi,
>>>>
>>>> We did test Octavia with Pike (DVR deployment) and everything was
>>>> working right our of the box. We changed our underlay network to a
>>>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
>>>> to have that much cables in a rack.
>>>>
>>>> Octavia is not working right now as the lb-mgmt-net does not exist on
>>>> the compute nodes nor does a br-ex.
>>>>
>>>> The control nodes running
>>>>
>>>> octavia_worker
>>>> octavia_housekeeping
>>>> octavia_health_manager
>>>> octavia_api
>>>>
Amphorae-VMs, z.b.

lb-mgmt-net 172.16.0.0/16 default GW

>>>> and as far as I understood octavia_worker, octavia_housekeeping and
>>>> octavia_health_manager have to talk to the amphora instances. But the
>>>> control nodes are spread over three different leafs. So each control
>>>> node in a different L2 domain.
>>>>
>>>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>>>
>>>> - Compute nodes have no "stretched" L2 domain
>>>> - Control nodes, compute nodes and network nodes are in L3 networks like
>>>> api, storage, ...
>>>> - Only network nodes are connected to a L2 domain (with a separated NIC)
>>>> providing the "public" network
>>>>
>>> You'll need to add a new bridge to your compute nodes and create a
>>> provider network associated with that bridge. In my setup this is
>>> simply a flat network tied to a tagged interface. In your case it
>>> probably makes more sense to make a new VNI and create a vxlan
>>> provider network. The routing in your switches should handle the rest.
>>
>> Ok that's what I try right now. But I don't get how to setup something
>> like a VxLAN provider Network. I thought only vlan and flat is supported
>> as provider network? I guess it is not possible to use the tunnel
>> interface that is used for tenant networks?
>> So I have to create a separated VxLAN on the control and compute nodes like:
>>
>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
>> dev vlan3535 ttl 5
>> # ip addr add 172.16.1.11/20 dev vxoctavia
>> # ip link set vxoctavia up
>>
>> and use it like a flat provider network, true?
>>
> This is a fine way of doing things, but it's only half the battle.
> You'll need to add a bridge on the compute nodes and bind it to that
> new interface. Something like this if you're using openvswitch:
>
> docker exec openvswitch_db
> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>
> Also you'll want to remove the IP address from that interface as it's
> going to be a bridge. Think of it like your public (br-ex) interface
> on your network nodes.
>
>  From there you'll need to update the bridge mappings via kolla
> overrides. This would usually be in /etc/kolla/config/neutron. Create
> a subdirectory for your compute inventory group and create an
> ml2_conf.ini there. So you'd end up with something like:
>
> [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> [ml2_type_flat]
> flat_networks = mgmt-net
>
> [ovs]
> bridge_mappings = mgmt-net:br-mgmt
>
> run kolla-ansible --tags neutron reconfigure to push out the new
> configs. Note that there is a bug where the neutron containers may not
> restart after the change, so you'll probably need to do a 'docker
> container restart neutron_openvswitch_agent' on each compute node.
>
> At this point, you'll need to create the provider network in the admin
> project like:
>
> openstack network create --provider-network-type flat
> --provider-physical-network mgmt-net lb-mgmt-net
>
> And then create a normal subnet attached to this network with some
> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> uses that by default. I'm not sure if it matters since the network
> traffic will be isolated on a bridge, but it makes me paranoid so I
> avoided it.
>
> For your controllers, I think you can just let everything function off
> your api interface since you're routing in your spines. Set up a
> gateway somewhere from that lb-mgmt network and save yourself the
> complication of adding an interface to your controllers. If you choose
> to use a separate interface on your controllers, you'll need to make
> sure this patch is in your kolla-ansible install or cherry pick it.
>
> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>
> I don't think that's been backported at all, so unless you're running
> off master you'll need to go get it.
>
>  From here on out, the regular Octavia instruction should serve you.
> Create a flavor, Create a security group, and capture their UUIDs
> along with the UUID of the provider network you made. Override them in
> globals.yml with:
>
> octavia_amp_boot_network_list: <uuid>
> octavia_amp_secgroup_list: <uuid>
> octavia_amp_flavor_id: <uuid>
>
> This is all from my scattered notes and bad memory. Hopefully it makes
> sense. Corrections welcome.
>
> -Erik
>
>
>
>>
>>
>>>
>>> -Erik
>>>>
>>>> All the best,
>>>> Florian
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> [hidden email]
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
In reply to this post by Michael Johnson
Hi Michael,

yes I definitely would prefer to build a routed setup. Would it be an
option for you to provide some rough step by step "how-to" with
openvswitch in a non-DVR setup?

All the best,
Flo


Am 10/23/18 um 7:48 PM schrieb Michael Johnson:

> I am still catching up on e-mail from the weekend.
>
> There are a lot of different options for how to implement the
> lb-mgmt-network for the controller<->amphora communication. I can't
> talk to what options Kolla provides, but I can talk to how Octavia
> works.
>
> One thing to note on the lb-mgmt-net issue, if you can setup routes
> such that the controllers can reach the IP addresses used for the
> lb-mgmt-net, and that the amphora can reach the controllers, Octavia
> can run with a routed lb-mgmt-net setup. There is no L2 requirement
> between the controllers and the amphora instances.
>
> Michael
>
> On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
> <[hidden email]> wrote:
>>
>> So in your other email you said asked if there was a guide for
>> deploying it with Kolla ansible...
>>
>> Oh boy. No there's not. I don't know if you've seen my recent mails on
>> Octavia, but I am going through this deployment process with
>> kolla-ansible right now and it is lacking in a few areas.
>>
>> If you plan to use different CA certificates for client and server in
>> Octavia, you'll need to add that into the playbook. Presently it only
>> copies over ca_01.pem, cacert.key, and client.pem and uses them for
>> everything. I was completely unable to make it work with only one CA
>> as I got some SSL errors. It passes gate though, so I aasume it must
>> work? I dunno.
>>
>> Networking comments and a really messy kolla-ansible / octavia how-to below...
>>
>> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>> <[hidden email]> wrote:
>>>
>>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>>> <[hidden email]> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> We did test Octavia with Pike (DVR deployment) and everything was
>>>>> working right our of the box. We changed our underlay network to a
>>>>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
>>>>> to have that much cables in a rack.
>>>>>
>>>>> Octavia is not working right now as the lb-mgmt-net does not exist on
>>>>> the compute nodes nor does a br-ex.
>>>>>
>>>>> The control nodes running
>>>>>
>>>>> octavia_worker
>>>>> octavia_housekeeping
>>>>> octavia_health_manager
>>>>> octavia_api
>>>>>
>>>>> and as far as I understood octavia_worker, octavia_housekeeping and
>>>>> octavia_health_manager have to talk to the amphora instances. But the
>>>>> control nodes are spread over three different leafs. So each control
>>>>> node in a different L2 domain.
>>>>>
>>>>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>>>>
>>>>> - Compute nodes have no "stretched" L2 domain
>>>>> - Control nodes, compute nodes and network nodes are in L3 networks like
>>>>> api, storage, ...
>>>>> - Only network nodes are connected to a L2 domain (with a separated NIC)
>>>>> providing the "public" network
>>>>>
>>>> You'll need to add a new bridge to your compute nodes and create a
>>>> provider network associated with that bridge. In my setup this is
>>>> simply a flat network tied to a tagged interface. In your case it
>>>> probably makes more sense to make a new VNI and create a vxlan
>>>> provider network. The routing in your switches should handle the rest.
>>>
>>> Ok that's what I try right now. But I don't get how to setup something
>>> like a VxLAN provider Network. I thought only vlan and flat is supported
>>> as provider network? I guess it is not possible to use the tunnel
>>> interface that is used for tenant networks?
>>> So I have to create a separated VxLAN on the control and compute nodes like:
>>>
>>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
>>> dev vlan3535 ttl 5
>>> # ip addr add 172.16.1.11/20 dev vxoctavia
>>> # ip link set vxoctavia up
>>>
>>> and use it like a flat provider network, true?
>>>
>> This is a fine way of doing things, but it's only half the battle.
>> You'll need to add a bridge on the compute nodes and bind it to that
>> new interface. Something like this if you're using openvswitch:
>>
>> docker exec openvswitch_db
>> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>>
>> Also you'll want to remove the IP address from that interface as it's
>> going to be a bridge. Think of it like your public (br-ex) interface
>> on your network nodes.
>>
>>  From there you'll need to update the bridge mappings via kolla
>> overrides. This would usually be in /etc/kolla/config/neutron. Create
>> a subdirectory for your compute inventory group and create an
>> ml2_conf.ini there. So you'd end up with something like:
>>
>> [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
>> [ml2_type_flat]
>> flat_networks = mgmt-net
>>
>> [ovs]
>> bridge_mappings = mgmt-net:br-mgmt
>>
>> run kolla-ansible --tags neutron reconfigure to push out the new
>> configs. Note that there is a bug where the neutron containers may not
>> restart after the change, so you'll probably need to do a 'docker
>> container restart neutron_openvswitch_agent' on each compute node.
>>
>> At this point, you'll need to create the provider network in the admin
>> project like:
>>
>> openstack network create --provider-network-type flat
>> --provider-physical-network mgmt-net lb-mgmt-net
>>
>> And then create a normal subnet attached to this network with some
>> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
>> uses that by default. I'm not sure if it matters since the network
>> traffic will be isolated on a bridge, but it makes me paranoid so I
>> avoided it.
>>
>> For your controllers, I think you can just let everything function off
>> your api interface since you're routing in your spines. Set up a
>> gateway somewhere from that lb-mgmt network and save yourself the
>> complication of adding an interface to your controllers. If you choose
>> to use a separate interface on your controllers, you'll need to make
>> sure this patch is in your kolla-ansible install or cherry pick it.
>>
>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>>
>> I don't think that's been backported at all, so unless you're running
>> off master you'll need to go get it.
>>
>>  From here on out, the regular Octavia instruction should serve you.
>> Create a flavor, Create a security group, and capture their UUIDs
>> along with the UUID of the provider network you made. Override them in
>> globals.yml with:
>>
>> octavia_amp_boot_network_list: <uuid>
>> octavia_amp_secgroup_list: <uuid>
>> octavia_amp_flavor_id: <uuid>
>>
>> This is all from my scattered notes and bad memory. Hopefully it makes
>> sense. Corrections welcome.
>>
>> -Erik
>>
>>
>>
>>>
>>>
>>>>
>>>> -Erik
>>>>>
>>>>> All the best,
>>>>> Florian
>>>>> _______________________________________________
>>>>> OpenStack-operators mailing list
>>>>> [hidden email]
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> [hidden email]
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
small update:

I am still stuck at the point "how to route the L2 lb-mgmt-net to my
different physical L3 networks?".

As we wanna distribute our control nodes over diffrent leafs, each with
its own L2 domain we will have to route to all of those leafs.

Should we enable the compute nodes to route to those controler networks?
And how?


Am 10/24/18 um 9:52 AM schrieb Florian Engelmann:

> Hi Michael,
>
> yes I definitely would prefer to build a routed setup. Would it be an
> option for you to provide some rough step by step "how-to" with
> openvswitch in a non-DVR setup?
>
> All the best,
> Flo
>
>
> Am 10/23/18 um 7:48 PM schrieb Michael Johnson:
>> I am still catching up on e-mail from the weekend.
>>
>> There are a lot of different options for how to implement the
>> lb-mgmt-network for the controller<->amphora communication. I can't
>> talk to what options Kolla provides, but I can talk to how Octavia
>> works.
>>
>> One thing to note on the lb-mgmt-net issue, if you can setup routes
>> such that the controllers can reach the IP addresses used for the
>> lb-mgmt-net, and that the amphora can reach the controllers, Octavia
>> can run with a routed lb-mgmt-net setup. There is no L2 requirement
>> between the controllers and the amphora instances.
>>
>> Michael
>>
>> On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
>> <[hidden email]> wrote:
>>>
>>> So in your other email you said asked if there was a guide for
>>> deploying it with Kolla ansible...
>>>
>>> Oh boy. No there's not. I don't know if you've seen my recent mails on
>>> Octavia, but I am going through this deployment process with
>>> kolla-ansible right now and it is lacking in a few areas.
>>>
>>> If you plan to use different CA certificates for client and server in
>>> Octavia, you'll need to add that into the playbook. Presently it only
>>> copies over ca_01.pem, cacert.key, and client.pem and uses them for
>>> everything. I was completely unable to make it work with only one CA
>>> as I got some SSL errors. It passes gate though, so I aasume it must
>>> work? I dunno.
>>>
>>> Networking comments and a really messy kolla-ansible / octavia how-to
>>> below...
>>>
>>> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>>> <[hidden email]> wrote:
>>>>
>>>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>>>> <[hidden email]> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We did test Octavia with Pike (DVR deployment) and everything was
>>>>>> working right our of the box. We changed our underlay network to a
>>>>>> Layer3 spine-leaf network now and did not deploy DVR as we don't
>>>>>> wanted
>>>>>> to have that much cables in a rack.
>>>>>>
>>>>>> Octavia is not working right now as the lb-mgmt-net does not exist on
>>>>>> the compute nodes nor does a br-ex.
>>>>>>
>>>>>> The control nodes running
>>>>>>
>>>>>> octavia_worker
>>>>>> octavia_housekeeping
>>>>>> octavia_health_manager
>>>>>> octavia_api
>>>>>>
>>>>>> and as far as I understood octavia_worker, octavia_housekeeping and
>>>>>> octavia_health_manager have to talk to the amphora instances. But the
>>>>>> control nodes are spread over three different leafs. So each control
>>>>>> node in a different L2 domain.
>>>>>>
>>>>>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>>>>>
>>>>>> - Compute nodes have no "stretched" L2 domain
>>>>>> - Control nodes, compute nodes and network nodes are in L3
>>>>>> networks like
>>>>>> api, storage, ...
>>>>>> - Only network nodes are connected to a L2 domain (with a
>>>>>> separated NIC)
>>>>>> providing the "public" network
>>>>>>
>>>>> You'll need to add a new bridge to your compute nodes and create a
>>>>> provider network associated with that bridge. In my setup this is
>>>>> simply a flat network tied to a tagged interface. In your case it
>>>>> probably makes more sense to make a new VNI and create a vxlan
>>>>> provider network. The routing in your switches should handle the rest.
>>>>
>>>> Ok that's what I try right now. But I don't get how to setup something
>>>> like a VxLAN provider Network. I thought only vlan and flat is
>>>> supported
>>>> as provider network? I guess it is not possible to use the tunnel
>>>> interface that is used for tenant networks?
>>>> So I have to create a separated VxLAN on the control and compute
>>>> nodes like:
>>>>
>>>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
>>>> dev vlan3535 ttl 5
>>>> # ip addr add 172.16.1.11/20 dev vxoctavia
>>>> # ip link set vxoctavia up
>>>>
>>>> and use it like a flat provider network, true?
>>>>
>>> This is a fine way of doing things, but it's only half the battle.
>>> You'll need to add a bridge on the compute nodes and bind it to that
>>> new interface. Something like this if you're using openvswitch:
>>>
>>> docker exec openvswitch_db
>>> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>>>
>>> Also you'll want to remove the IP address from that interface as it's
>>> going to be a bridge. Think of it like your public (br-ex) interface
>>> on your network nodes.
>>>
>>>  From there you'll need to update the bridge mappings via kolla
>>> overrides. This would usually be in /etc/kolla/config/neutron. Create
>>> a subdirectory for your compute inventory group and create an
>>> ml2_conf.ini there. So you'd end up with something like:
>>>
>>> [root@kolla-deploy ~]# cat
>>> /etc/kolla/config/neutron/compute/ml2_conf.ini
>>> [ml2_type_flat]
>>> flat_networks = mgmt-net
>>>
>>> [ovs]
>>> bridge_mappings = mgmt-net:br-mgmt
>>>
>>> run kolla-ansible --tags neutron reconfigure to push out the new
>>> configs. Note that there is a bug where the neutron containers may not
>>> restart after the change, so you'll probably need to do a 'docker
>>> container restart neutron_openvswitch_agent' on each compute node.
>>>
>>> At this point, you'll need to create the provider network in the admin
>>> project like:
>>>
>>> openstack network create --provider-network-type flat
>>> --provider-physical-network mgmt-net lb-mgmt-net
>>>
>>> And then create a normal subnet attached to this network with some
>>> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
>>> uses that by default. I'm not sure if it matters since the network
>>> traffic will be isolated on a bridge, but it makes me paranoid so I
>>> avoided it.
>>>
>>> For your controllers, I think you can just let everything function off
>>> your api interface since you're routing in your spines. Set up a
>>> gateway somewhere from that lb-mgmt network and save yourself the
>>> complication of adding an interface to your controllers. If you choose
>>> to use a separate interface on your controllers, you'll need to make
>>> sure this patch is in your kolla-ansible install or cherry pick it.
>>>
>>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 
>>>
>>>
>>> I don't think that's been backported at all, so unless you're running
>>> off master you'll need to go get it.
>>>
>>>  From here on out, the regular Octavia instruction should serve you.
>>> Create a flavor, Create a security group, and capture their UUIDs
>>> along with the UUID of the provider network you made. Override them in
>>> globals.yml with:
>>>
>>> octavia_amp_boot_network_list: <uuid>
>>> octavia_amp_secgroup_list: <uuid>
>>> octavia_amp_flavor_id: <uuid>
>>>
>>> This is all from my scattered notes and bad memory. Hopefully it makes
>>> sense. Corrections welcome.
>>>
>>> -Erik
>>>
>>>
>>>
>>>>
>>>>
>>>>>
>>>>> -Erik
>>>>>>
>>>>>> All the best,
>>>>>> Florian
>>>>>> _______________________________________________
>>>>>> OpenStack-operators mailing list
>>>>>> [hidden email]
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
>>>>>>
>>>
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> [hidden email]
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Erik McCormick
In reply to this post by Florian Engelmann


On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann <[hidden email]> wrote:
Ohoh - thank you for your empathy :)
And those great details about how to setup this mgmt network.
I will try to do so this afternoon but solving that routing "puzzle"
(virtual network to control nodes) I will need our network guys to help
me out...

But I will need to tell all Amphorae a static route to the gateway that
is routing to the control nodes?

Just set the default gateway when you create the neutron subnet. No need for excess static routes. The route on the other connection won't interfere with it as it lives in a namespace.



Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
> So in your other email you said asked if there was a guide for
> deploying it with Kolla ansible...
>
> Oh boy. No there's not. I don't know if you've seen my recent mails on
> Octavia, but I am going through this deployment process with
> kolla-ansible right now and it is lacking in a few areas.
>
> If you plan to use different CA certificates for client and server in
> Octavia, you'll need to add that into the playbook. Presently it only
> copies over ca_01.pem, cacert.key, and client.pem and uses them for
> everything. I was completely unable to make it work with only one CA
> as I got some SSL errors. It passes gate though, so I aasume it must
> work? I dunno.
>
> Networking comments and a really messy kolla-ansible / octavia how-to below...
>
> On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> <[hidden email]> wrote:
>>
>> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>> <[hidden email]> wrote:
>>>>
>>>> Hi,
>>>>
>>>> We did test Octavia with Pike (DVR deployment) and everything was
>>>> working right our of the box. We changed our underlay network to a
>>>> Layer3 spine-leaf network now and did not deploy DVR as we don't wanted
>>>> to have that much cables in a rack.
>>>>
>>>> Octavia is not working right now as the lb-mgmt-net does not exist on
>>>> the compute nodes nor does a br-ex.
>>>>
>>>> The control nodes running
>>>>
>>>> octavia_worker
>>>> octavia_housekeeping
>>>> octavia_health_manager
>>>> octavia_api
>>>>
Amphorae-VMs, z.b.

lb-mgmt-net 172.16.0.0/16 default GW
>>>> and as far as I understood octavia_worker, octavia_housekeeping and
>>>> octavia_health_manager have to talk to the amphora instances. But the
>>>> control nodes are spread over three different leafs. So each control
>>>> node in a different L2 domain.
>>>>
>>>> So the question is how to deploy a lb-mgmt-net network in our setup?
>>>>
>>>> - Compute nodes have no "stretched" L2 domain
>>>> - Control nodes, compute nodes and network nodes are in L3 networks like
>>>> api, storage, ...
>>>> - Only network nodes are connected to a L2 domain (with a separated NIC)
>>>> providing the "public" network
>>>>
>>> You'll need to add a new bridge to your compute nodes and create a
>>> provider network associated with that bridge. In my setup this is
>>> simply a flat network tied to a tagged interface. In your case it
>>> probably makes more sense to make a new VNI and create a vxlan
>>> provider network. The routing in your switches should handle the rest.
>>
>> Ok that's what I try right now. But I don't get how to setup something
>> like a VxLAN provider Network. I thought only vlan and flat is supported
>> as provider network? I guess it is not possible to use the tunnel
>> interface that is used for tenant networks?
>> So I have to create a separated VxLAN on the control and compute nodes like:
>>
>> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
>> dev vlan3535 ttl 5
>> # ip addr add 172.16.1.11/20 dev vxoctavia
>> # ip link set vxoctavia up
>>
>> and use it like a flat provider network, true?
>>
> This is a fine way of doing things, but it's only half the battle.
> You'll need to add a bridge on the compute nodes and bind it to that
> new interface. Something like this if you're using openvswitch:
>
> docker exec openvswitch_db
> /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>
> Also you'll want to remove the IP address from that interface as it's
> going to be a bridge. Think of it like your public (br-ex) interface
> on your network nodes.
>
>  From there you'll need to update the bridge mappings via kolla
> overrides. This would usually be in /etc/kolla/config/neutron. Create
> a subdirectory for your compute inventory group and create an
> ml2_conf.ini there. So you'd end up with something like:
>
> [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> [ml2_type_flat]
> flat_networks = mgmt-net
>
> [ovs]
> bridge_mappings = mgmt-net:br-mgmt
>
> run kolla-ansible --tags neutron reconfigure to push out the new
> configs. Note that there is a bug where the neutron containers may not
> restart after the change, so you'll probably need to do a 'docker
> container restart neutron_openvswitch_agent' on each compute node.
>
> At this point, you'll need to create the provider network in the admin
> project like:
>
> openstack network create --provider-network-type flat
> --provider-physical-network mgmt-net lb-mgmt-net
>
> And then create a normal subnet attached to this network with some
> largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> uses that by default. I'm not sure if it matters since the network
> traffic will be isolated on a bridge, but it makes me paranoid so I
> avoided it.
>
> For your controllers, I think you can just let everything function off
> your api interface since you're routing in your spines. Set up a
> gateway somewhere from that lb-mgmt network and save yourself the
> complication of adding an interface to your controllers. If you choose
> to use a separate interface on your controllers, you'll need to make
> sure this patch is in your kolla-ansible install or cherry pick it.
>
> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>
> I don't think that's been backported at all, so unless you're running
> off master you'll need to go get it.
>
>  From here on out, the regular Octavia instruction should serve you.
> Create a flavor, Create a security group, and capture their UUIDs
> along with the UUID of the provider network you made. Override them in
> globals.yml with:
>
> octavia_amp_boot_network_list: <uuid>
> octavia_amp_secgroup_list: <uuid>
> octavia_amp_flavor_id: <uuid>
>
> This is all from my scattered notes and bad memory. Hopefully it makes
> sense. Corrections welcome.
>
> -Erik
>
>
>
>>
>>
>>>
>>> -Erik
>>>>
>>>> All the best,
>>>> Florian
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> [hidden email]
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
Am 10/24/18 um 2:08 PM schrieb Erik McCormick:

>
>
> On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> <[hidden email] <mailto:[hidden email]>>
> wrote:
>
>     Ohoh - thank you for your empathy :)
>     And those great details about how to setup this mgmt network.
>     I will try to do so this afternoon but solving that routing "puzzle"
>     (virtual network to control nodes) I will need our network guys to help
>     me out...
>
>     But I will need to tell all Amphorae a static route to the gateway that
>     is routing to the control nodes?
>
>
> Just set the default gateway when you create the neutron subnet. No need
> for excess static routes. The route on the other connection won't
> interfere with it as it lives in a namespace.

My compute nodes have no br-ex and there is no L2 domain spread over all
compute nodes. As far as I understood lb-mgmt-net is a provider network
and has to be flat or VLAN and will need a "physical" gateway (as there
is no virtual router).
So the question - is it possible to get octavia up and running without a
br-ex (L2 domain spread over all compute nodes) on the compute nodes?


>
>
>
>     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>      > So in your other email you said asked if there was a guide for
>      > deploying it with Kolla ansible...
>      >
>      > Oh boy. No there's not. I don't know if you've seen my recent
>     mails on
>      > Octavia, but I am going through this deployment process with
>      > kolla-ansible right now and it is lacking in a few areas.
>      >
>      > If you plan to use different CA certificates for client and server in
>      > Octavia, you'll need to add that into the playbook. Presently it only
>      > copies over ca_01.pem, cacert.key, and client.pem and uses them for
>      > everything. I was completely unable to make it work with only one CA
>      > as I got some SSL errors. It passes gate though, so I aasume it must
>      > work? I dunno.
>      >
>      > Networking comments and a really messy kolla-ansible / octavia
>     how-to below...
>      >
>      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>      > <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>
>      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>      >>> <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>>>
>      >>>> Hi,
>      >>>>
>      >>>> We did test Octavia with Pike (DVR deployment) and everything was
>      >>>> working right our of the box. We changed our underlay network to a
>      >>>> Layer3 spine-leaf network now and did not deploy DVR as we
>     don't wanted
>      >>>> to have that much cables in a rack.
>      >>>>
>      >>>> Octavia is not working right now as the lb-mgmt-net does not
>     exist on
>      >>>> the compute nodes nor does a br-ex.
>      >>>>
>      >>>> The control nodes running
>      >>>>
>      >>>> octavia_worker
>      >>>> octavia_housekeeping
>      >>>> octavia_health_manager
>      >>>> octavia_api
>      >>>>
>     Amphorae-VMs, z.b.
>
>     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16> default GW
>      >>>> and as far as I understood octavia_worker,
>     octavia_housekeeping and
>      >>>> octavia_health_manager have to talk to the amphora instances.
>     But the
>      >>>> control nodes are spread over three different leafs. So each
>     control
>      >>>> node in a different L2 domain.
>      >>>>
>      >>>> So the question is how to deploy a lb-mgmt-net network in our
>     setup?
>      >>>>
>      >>>> - Compute nodes have no "stretched" L2 domain
>      >>>> - Control nodes, compute nodes and network nodes are in L3
>     networks like
>      >>>> api, storage, ...
>      >>>> - Only network nodes are connected to a L2 domain (with a
>     separated NIC)
>      >>>> providing the "public" network
>      >>>>
>      >>> You'll need to add a new bridge to your compute nodes and create a
>      >>> provider network associated with that bridge. In my setup this is
>      >>> simply a flat network tied to a tagged interface. In your case it
>      >>> probably makes more sense to make a new VNI and create a vxlan
>      >>> provider network. The routing in your switches should handle
>     the rest.
>      >>
>      >> Ok that's what I try right now. But I don't get how to setup
>     something
>      >> like a VxLAN provider Network. I thought only vlan and flat is
>     supported
>      >> as provider network? I guess it is not possible to use the tunnel
>      >> interface that is used for tenant networks?
>      >> So I have to create a separated VxLAN on the control and compute
>     nodes like:
>      >>
>      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group
>     239.1.1.1
>      >> dev vlan3535 ttl 5
>      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20> dev vxoctavia
>      >> # ip link set vxoctavia up
>      >>
>      >> and use it like a flat provider network, true?
>      >>
>      > This is a fine way of doing things, but it's only half the battle.
>      > You'll need to add a bridge on the compute nodes and bind it to that
>      > new interface. Something like this if you're using openvswitch:
>      >
>      > docker exec openvswitch_db
>      > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>      >
>      > Also you'll want to remove the IP address from that interface as it's
>      > going to be a bridge. Think of it like your public (br-ex) interface
>      > on your network nodes.
>      >
>      >  From there you'll need to update the bridge mappings via kolla
>      > overrides. This would usually be in /etc/kolla/config/neutron. Create
>      > a subdirectory for your compute inventory group and create an
>      > ml2_conf.ini there. So you'd end up with something like:
>      >
>      > [root@kolla-deploy ~]# cat
>     /etc/kolla/config/neutron/compute/ml2_conf.ini
>      > [ml2_type_flat]
>      > flat_networks = mgmt-net
>      >
>      > [ovs]
>      > bridge_mappings = mgmt-net:br-mgmt
>      >
>      > run kolla-ansible --tags neutron reconfigure to push out the new
>      > configs. Note that there is a bug where the neutron containers
>     may not
>      > restart after the change, so you'll probably need to do a 'docker
>      > container restart neutron_openvswitch_agent' on each compute node.
>      >
>      > At this point, you'll need to create the provider network in the
>     admin
>      > project like:
>      >
>      > openstack network create --provider-network-type flat
>      > --provider-physical-network mgmt-net lb-mgmt-net
>      >
>      > And then create a normal subnet attached to this network with some
>      > largeish address scope. I wouldn't use 172.16.0.0/16
>     <http://172.16.0.0/16> because docker
>      > uses that by default. I'm not sure if it matters since the network
>      > traffic will be isolated on a bridge, but it makes me paranoid so I
>      > avoided it.
>      >
>      > For your controllers, I think you can just let everything
>     function off
>      > your api interface since you're routing in your spines. Set up a
>      > gateway somewhere from that lb-mgmt network and save yourself the
>      > complication of adding an interface to your controllers. If you
>     choose
>      > to use a separate interface on your controllers, you'll need to make
>      > sure this patch is in your kolla-ansible install or cherry pick it.
>      >
>      >
>     https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>      >
>      > I don't think that's been backported at all, so unless you're running
>      > off master you'll need to go get it.
>      >
>      >  From here on out, the regular Octavia instruction should serve you.
>      > Create a flavor, Create a security group, and capture their UUIDs
>      > along with the UUID of the provider network you made. Override
>     them in
>      > globals.yml with:
>      >
>      > octavia_amp_boot_network_list: <uuid>
>      > octavia_amp_secgroup_list: <uuid>
>      > octavia_amp_flavor_id: <uuid>
>      >
>      > This is all from my scattered notes and bad memory. Hopefully it
>     makes
>      > sense. Corrections welcome.
>      >
>      > -Erik
>      >
>      >
>      >
>      >>
>      >>
>      >>>
>      >>> -Erik
>      >>>>
>      >>>> All the best,
>      >>>> Florian
>      >>>> _______________________________________________
>      >>>> OpenStack-operators mailing list
>      >>>> [hidden email]
>     <mailto:[hidden email]>
>      >>>>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>     --
>
>     EveryWare AG
>     Florian Engelmann
>     Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
>
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: mailto:[hidden email]
>     <mailto:[hidden email]>
>     web: http://www.everyware.ch
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Erik McCormick


On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <[hidden email]> wrote:
Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>
>
> On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> <[hidden email] <mailto:[hidden email]>>
> wrote:
>
>     Ohoh - thank you for your empathy :)
>     And those great details about how to setup this mgmt network.
>     I will try to do so this afternoon but solving that routing "puzzle"
>     (virtual network to control nodes) I will need our network guys to help
>     me out...
>
>     But I will need to tell all Amphorae a static route to the gateway that
>     is routing to the control nodes?
>
>
> Just set the default gateway when you create the neutron subnet. No need
> for excess static routes. The route on the other connection won't
> interfere with it as it lives in a namespace.


My compute nodes have no br-ex and there is no L2 domain spread over all
compute nodes. As far as I understood lb-mgmt-net is a provider network
and has to be flat or VLAN and will need a "physical" gateway (as there
is no virtual router).
So the question - is it possible to get octavia up and running without a
br-ex (L2 domain spread over all compute nodes) on the compute nodes?

Sorry, I only meant it was *like* br-ex on your network nodes. You don't need that on your computes.

The router here would be whatever does routing in your physical network. Setting the gateway in the neutron subnet simply adds that to the DHCP information sent to the amphorae.

This does bring up another thingI forgot  though. You'll probably want to add the management network / bridge to your network nodes or wherever you run the DHCP agents. When you create the subnet, be sure to leave some space in the address scope for the physical devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that for the lb-mgmt network. It's a single network with a single subnet. Perhaps you could limit load balancers to an AZ in a single rack? Seems not very HA friendly.


>
>
>
>     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>      > So in your other email you said asked if there was a guide for
>      > deploying it with Kolla ansible...
>      >
>      > Oh boy. No there's not. I don't know if you've seen my recent
>     mails on
>      > Octavia, but I am going through this deployment process with
>      > kolla-ansible right now and it is lacking in a few areas.
>      >
>      > If you plan to use different CA certificates for client and server in
>      > Octavia, you'll need to add that into the playbook. Presently it only
>      > copies over ca_01.pem, cacert.key, and client.pem and uses them for
>      > everything. I was completely unable to make it work with only one CA
>      > as I got some SSL errors. It passes gate though, so I aasume it must
>      > work? I dunno.
>      >
>      > Networking comments and a really messy kolla-ansible / octavia
>     how-to below...
>      >
>      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>      > <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>
>      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>      >>> <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>>>
>      >>>> Hi,
>      >>>>
>      >>>> We did test Octavia with Pike (DVR deployment) and everything was
>      >>>> working right our of the box. We changed our underlay network to a
>      >>>> Layer3 spine-leaf network now and did not deploy DVR as we
>     don't wanted
>      >>>> to have that much cables in a rack.
>      >>>>
>      >>>> Octavia is not working right now as the lb-mgmt-net does not
>     exist on
>      >>>> the compute nodes nor does a br-ex.
>      >>>>
>      >>>> The control nodes running
>      >>>>
>      >>>> octavia_worker
>      >>>> octavia_housekeeping
>      >>>> octavia_health_manager
>      >>>> octavia_api
>      >>>>
>     Amphorae-VMs, z.b.
>
>     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16> default GW
>      >>>> and as far as I understood octavia_worker,
>     octavia_housekeeping and
>      >>>> octavia_health_manager have to talk to the amphora instances.
>     But the
>      >>>> control nodes are spread over three different leafs. So each
>     control
>      >>>> node in a different L2 domain.
>      >>>>
>      >>>> So the question is how to deploy a lb-mgmt-net network in our
>     setup?
>      >>>>
>      >>>> - Compute nodes have no "stretched" L2 domain
>      >>>> - Control nodes, compute nodes and network nodes are in L3
>     networks like
>      >>>> api, storage, ...
>      >>>> - Only network nodes are connected to a L2 domain (with a
>     separated NIC)
>      >>>> providing the "public" network
>      >>>>
>      >>> You'll need to add a new bridge to your compute nodes and create a
>      >>> provider network associated with that bridge. In my setup this is
>      >>> simply a flat network tied to a tagged interface. In your case it
>      >>> probably makes more sense to make a new VNI and create a vxlan
>      >>> provider network. The routing in your switches should handle
>     the rest.
>      >>
>      >> Ok that's what I try right now. But I don't get how to setup
>     something
>      >> like a VxLAN provider Network. I thought only vlan and flat is
>     supported
>      >> as provider network? I guess it is not possible to use the tunnel
>      >> interface that is used for tenant networks?
>      >> So I have to create a separated VxLAN on the control and compute
>     nodes like:
>      >>
>      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group
>     239.1.1.1
>      >> dev vlan3535 ttl 5
>      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20> dev vxoctavia
>      >> # ip link set vxoctavia up
>      >>
>      >> and use it like a flat provider network, true?
>      >>
>      > This is a fine way of doing things, but it's only half the battle.
>      > You'll need to add a bridge on the compute nodes and bind it to that
>      > new interface. Something like this if you're using openvswitch:
>      >
>      > docker exec openvswitch_db
>      > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>      >
>      > Also you'll want to remove the IP address from that interface as it's
>      > going to be a bridge. Think of it like your public (br-ex) interface
>      > on your network nodes.
>      >
>      >  From there you'll need to update the bridge mappings via kolla
>      > overrides. This would usually be in /etc/kolla/config/neutron. Create
>      > a subdirectory for your compute inventory group and create an
>      > ml2_conf.ini there. So you'd end up with something like:
>      >
>      > [root@kolla-deploy ~]# cat
>     /etc/kolla/config/neutron/compute/ml2_conf.ini
>      > [ml2_type_flat]
>      > flat_networks = mgmt-net
>      >
>      > [ovs]
>      > bridge_mappings = mgmt-net:br-mgmt
>      >
>      > run kolla-ansible --tags neutron reconfigure to push out the new
>      > configs. Note that there is a bug where the neutron containers
>     may not
>      > restart after the change, so you'll probably need to do a 'docker
>      > container restart neutron_openvswitch_agent' on each compute node.
>      >
>      > At this point, you'll need to create the provider network in the
>     admin
>      > project like:
>      >
>      > openstack network create --provider-network-type flat
>      > --provider-physical-network mgmt-net lb-mgmt-net
>      >
>      > And then create a normal subnet attached to this network with some
>      > largeish address scope. I wouldn't use 172.16.0.0/16
>     <http://172.16.0.0/16> because docker
>      > uses that by default. I'm not sure if it matters since the network
>      > traffic will be isolated on a bridge, but it makes me paranoid so I
>      > avoided it.
>      >
>      > For your controllers, I think you can just let everything
>     function off
>      > your api interface since you're routing in your spines. Set up a
>      > gateway somewhere from that lb-mgmt network and save yourself the
>      > complication of adding an interface to your controllers. If you
>     choose
>      > to use a separate interface on your controllers, you'll need to make
>      > sure this patch is in your kolla-ansible install or cherry pick it.
>      >
>      >
>     https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>      >
>      > I don't think that's been backported at all, so unless you're running
>      > off master you'll need to go get it.
>      >
>      >  From here on out, the regular Octavia instruction should serve you.
>      > Create a flavor, Create a security group, and capture their UUIDs
>      > along with the UUID of the provider network you made. Override
>     them in
>      > globals.yml with:
>      >
>      > octavia_amp_boot_network_list: <uuid>
>      > octavia_amp_secgroup_list: <uuid>
>      > octavia_amp_flavor_id: <uuid>
>      >
>      > This is all from my scattered notes and bad memory. Hopefully it
>     makes
>      > sense. Corrections welcome.
>      >
>      > -Erik
>      >
>      >
>      >
>      >>
>      >>
>      >>>
>      >>> -Erik
>      >>>>
>      >>>> All the best,
>      >>>> Florian
>      >>>> _______________________________________________
>      >>>> OpenStack-operators mailing list
>      >>>> [hidden email]
>     <mailto:[hidden email]>
>      >>>>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>     --
>
>     EveryWare AG
>     Florian Engelmann
>     Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
>
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: mailto:[hidden email]
>     <mailto:[hidden email]>
>     web: http://www.everyware.ch
>

--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann

On the network nodes we've got a dedicated interface to deploy VLANs (like the provider network for internet access). What about creating another VLAN on the network nodes, give that bridge a IP which is part of the subnet of lb-mgmt-net and start the octavia worker, healthmanager and controller on the network nodes binding to that IP?



From: Erik McCormick <[hidden email]>
Sent: Wednesday, October 24, 2018 6:18 PM
To: Engelmann Florian
Cc: openstack-operators
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
 


On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <[hidden email]> wrote:
Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>
>
> On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> <[hidden email] <mailto:[hidden email]>>
> wrote:
>
>     Ohoh - thank you for your empathy :)
>     And those great details about how to setup this mgmt network.
>     I will try to do so this afternoon but solving that routing "puzzle"
>     (virtual network to control nodes) I will need our network guys to help
>     me out...
>
>     But I will need to tell all Amphorae a static route to the gateway that
>     is routing to the control nodes?
>
>
> Just set the default gateway when you create the neutron subnet. No need
> for excess static routes. The route on the other connection won't
> interfere with it as it lives in a namespace.


My compute nodes have no br-ex and there is no L2 domain spread over all
compute nodes. As far as I understood lb-mgmt-net is a provider network
and has to be flat or VLAN and will need a "physical" gateway (as there
is no virtual router).
So the question - is it possible to get octavia up and running without a
br-ex (L2 domain spread over all compute nodes) on the compute nodes?

Sorry, I only meant it was *like* br-ex on your network nodes. You don't need that on your computes.

The router here would be whatever does routing in your physical network. Setting the gateway in the neutron subnet simply adds that to the DHCP information sent to the amphorae.

This does bring up another thingI forgot  though. You'll probably want to add the management network / bridge to your network nodes or wherever you run the DHCP agents. When you create the subnet, be sure to leave some space in the address scope for the physical devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that for the lb-mgmt network. It's a single network with a single subnet. Perhaps you could limit load balancers to an AZ in a single rack? Seems not very HA friendly.


>
>
>
>     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>      > So in your other email you said asked if there was a guide for
>      > deploying it with Kolla ansible...
>      >
>      > Oh boy. No there's not. I don't know if you've seen my recent
>     mails on
>      > Octavia, but I am going through this deployment process with
>      > kolla-ansible right now and it is lacking in a few areas.
>      >
>      > If you plan to use different CA certificates for client and server in
>      > Octavia, you'll need to add that into the playbook. Presently it only
>      > copies over ca_01.pem, cacert.key, and client.pem and uses them for
>      > everything. I was completely unable to make it work with only one CA
>      > as I got some SSL errors. It passes gate though, so I aasume it must
>      > work? I dunno.
>      >
>      > Networking comments and a really messy kolla-ansible / octavia
>     how-to below...
>      >
>      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>      > <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>
>      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>      >>> <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>>>
>      >>>> Hi,
>      >>>>
>      >>>> We did test Octavia with Pike (DVR deployment) and everything was
>      >>>> working right our of the box. We changed our underlay network to a
>      >>>> Layer3 spine-leaf network now and did not deploy DVR as we
>     don't wanted
>      >>>> to have that much cables in a rack.
>      >>>>
>      >>>> Octavia is not working right now as the lb-mgmt-net does not
>     exist on
>      >>>> the compute nodes nor does a br-ex.
>      >>>>
>      >>>> The control nodes running
>      >>>>
>      >>>> octavia_worker
>      >>>> octavia_housekeeping
>      >>>> octavia_health_manager
>      >>>> octavia_api
>      >>>>
>     Amphorae-VMs, z.b.
>
>     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16> default GW
>      >>>> and as far as I understood octavia_worker,
>     octavia_housekeeping and
>      >>>> octavia_health_manager have to talk to the amphora instances.
>     But the
>      >>>> control nodes are spread over three different leafs. So each
>     control
>      >>>> node in a different L2 domain.
>      >>>>
>      >>>> So the question is how to deploy a lb-mgmt-net network in our
>     setup?
>      >>>>
>      >>>> - Compute nodes have no "stretched" L2 domain
>      >>>> - Control nodes, compute nodes and network nodes are in L3
>     networks like
>      >>>> api, storage, ...
>      >>>> - Only network nodes are connected to a L2 domain (with a
>     separated NIC)
>      >>>> providing the "public" network
>      >>>>
>      >>> You'll need to add a new bridge to your compute nodes and create a
>      >>> provider network associated with that bridge. In my setup this is
>      >>> simply a flat network tied to a tagged interface. In your case it
>      >>> probably makes more sense to make a new VNI and create a vxlan
>      >>> provider network. The routing in your switches should handle
>     the rest.
>      >>
>      >> Ok that's what I try right now. But I don't get how to setup
>     something
>      >> like a VxLAN provider Network. I thought only vlan and flat is
>     supported
>      >> as provider network? I guess it is not possible to use the tunnel
>      >> interface that is used for tenant networks?
>      >> So I have to create a separated VxLAN on the control and compute
>     nodes like:
>      >>
>      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group
>     239.1.1.1
>      >> dev vlan3535 ttl 5
>      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20> dev vxoctavia
>      >> # ip link set vxoctavia up
>      >>
>      >> and use it like a flat provider network, true?
>      >>
>      > This is a fine way of doing things, but it's only half the battle.
>      > You'll need to add a bridge on the compute nodes and bind it to that
>      > new interface. Something like this if you're using openvswitch:
>      >
>      > docker exec openvswitch_db
>      > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>      >
>      > Also you'll want to remove the IP address from that interface as it's
>      > going to be a bridge. Think of it like your public (br-ex) interface
>      > on your network nodes.
>      >
>      >  From there you'll need to update the bridge mappings via kolla
>      > overrides. This would usually be in /etc/kolla/config/neutron. Create
>      > a subdirectory for your compute inventory group and create an
>      > ml2_conf.ini there. So you'd end up with something like:
>      >
>      > [root@kolla-deploy ~]# cat
>     /etc/kolla/config/neutron/compute/ml2_conf.ini
>      > [ml2_type_flat]
>      > flat_networks = mgmt-net
>      >
>      > [ovs]
>      > bridge_mappings = mgmt-net:br-mgmt
>      >
>      > run kolla-ansible --tags neutron reconfigure to push out the new
>      > configs. Note that there is a bug where the neutron containers
>     may not
>      > restart after the change, so you'll probably need to do a 'docker
>      > container restart neutron_openvswitch_agent' on each compute node.
>      >
>      > At this point, you'll need to create the provider network in the
>     admin
>      > project like:
>      >
>      > openstack network create --provider-network-type flat
>      > --provider-physical-network mgmt-net lb-mgmt-net
>      >
>      > And then create a normal subnet attached to this network with some
>      > largeish address scope. I wouldn't use 172.16.0.0/16
>     <http://172.16.0.0/16> because docker
>      > uses that by default. I'm not sure if it matters since the network
>      > traffic will be isolated on a bridge, but it makes me paranoid so I
>      > avoided it.
>      >
>      > For your controllers, I think you can just let everything
>     function off
>      > your api interface since you're routing in your spines. Set up a
>      > gateway somewhere from that lb-mgmt network and save yourself the
>      > complication of adding an interface to your controllers. If you
>     choose
>      > to use a separate interface on your controllers, you'll need to make
>      > sure this patch is in your kolla-ansible install or cherry pick it.
>      >
>      >
>     https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>      >
>      > I don't think that's been backported at all, so unless you're running
>      > off master you'll need to go get it.
>      >
>      >  From here on out, the regular Octavia instruction should serve you.
>      > Create a flavor, Create a security group, and capture their UUIDs
>      > along with the UUID of the provider network you made. Override
>     them in
>      > globals.yml with:
>      >
>      > octavia_amp_boot_network_list: <uuid>
>      > octavia_amp_secgroup_list: <uuid>
>      > octavia_amp_flavor_id: <uuid>
>      >
>      > This is all from my scattered notes and bad memory. Hopefully it
>     makes
>      > sense. Corrections welcome.
>      >
>      > -Erik
>      >
>      >
>      >
>      >>
>      >>
>      >>>
>      >>> -Erik
>      >>>>
>      >>>> All the best,
>      >>>> Florian
>      >>>> _______________________________________________
>      >>>> OpenStack-operators mailing list
>      >>>> [hidden email]
>     <mailto:[hidden email]>
>      >>>>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>     --
>
>     EveryWare AG
>     Florian Engelmann
>     Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
>
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: mailto:[hidden email]
>     <mailto:[hidden email]>
>     web: http://www.everyware.ch
>

--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Erik McCormick


On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian <[hidden email]> wrote:

On the network nodes we've got a dedicated interface to deploy VLANs (like the provider network for internet access). What about creating another VLAN on the network nodes, give that bridge a IP which is part of the subnet of lb-mgmt-net and start the octavia worker, healthmanager and controller on the network nodes binding to that IP?

The problem with that is you can't out an IP in the vlan interface and also use it as an OVS bridge, so the Octavia processes would have nothing to bind to.



From: Erik McCormick <[hidden email]>
Sent: Wednesday, October 24, 2018 6:18 PM
To: Engelmann Florian
Cc: openstack-operators
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
 


On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <[hidden email]> wrote:
Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>
>
> On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> <[hidden email] <mailto:[hidden email]>>
> wrote:
>
>     Ohoh - thank you for your empathy :)
>     And those great details about how to setup this mgmt network.
>     I will try to do so this afternoon but solving that routing "puzzle"
>     (virtual network to control nodes) I will need our network guys to help
>     me out...
>
>     But I will need to tell all Amphorae a static route to the gateway that
>     is routing to the control nodes?
>
>
> Just set the default gateway when you create the neutron subnet. No need
> for excess static routes. The route on the other connection won't
> interfere with it as it lives in a namespace.


My compute nodes have no br-ex and there is no L2 domain spread over all
compute nodes. As far as I understood lb-mgmt-net is a provider network
and has to be flat or VLAN and will need a "physical" gateway (as there
is no virtual router).
So the question - is it possible to get octavia up and running without a
br-ex (L2 domain spread over all compute nodes) on the compute nodes?

Sorry, I only meant it was *like* br-ex on your network nodes. You don't need that on your computes.

The router here would be whatever does routing in your physical network. Setting the gateway in the neutron subnet simply adds that to the DHCP information sent to the amphorae.

This does bring up another thingI forgot  though. You'll probably want to add the management network / bridge to your network nodes or wherever you run the DHCP agents. When you create the subnet, be sure to leave some space in the address scope for the physical devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that for the lb-mgmt network. It's a single network with a single subnet. Perhaps you could limit load balancers to an AZ in a single rack? Seems not very HA friendly.


>
>
>
>     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>      > So in your other email you said asked if there was a guide for
>      > deploying it with Kolla ansible...
>      >
>      > Oh boy. No there's not. I don't know if you've seen my recent
>     mails on
>      > Octavia, but I am going through this deployment process with
>      > kolla-ansible right now and it is lacking in a few areas.
>      >
>      > If you plan to use different CA certificates for client and server in
>      > Octavia, you'll need to add that into the playbook. Presently it only
>      > copies over ca_01.pem, cacert.key, and client.pem and uses them for
>      > everything. I was completely unable to make it work with only one CA
>      > as I got some SSL errors. It passes gate though, so I aasume it must
>      > work? I dunno.
>      >
>      > Networking comments and a really messy kolla-ansible / octavia
>     how-to below...
>      >
>      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>      > <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>
>      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>      >>> <[hidden email]
>     <mailto:[hidden email]>> wrote:
>      >>>>
>      >>>> Hi,
>      >>>>
>      >>>> We did test Octavia with Pike (DVR deployment) and everything was
>      >>>> working right our of the box. We changed our underlay network to a
>      >>>> Layer3 spine-leaf network now and did not deploy DVR as we
>     don't wanted
>      >>>> to have that much cables in a rack.
>      >>>>
>      >>>> Octavia is not working right now as the lb-mgmt-net does not
>     exist on
>      >>>> the compute nodes nor does a br-ex.
>      >>>>
>      >>>> The control nodes running
>      >>>>
>      >>>> octavia_worker
>      >>>> octavia_housekeeping
>      >>>> octavia_health_manager
>      >>>> octavia_api
>      >>>>
>     Amphorae-VMs, z.b.
>
>     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16> default GW
>      >>>> and as far as I understood octavia_worker,
>     octavia_housekeeping and
>      >>>> octavia_health_manager have to talk to the amphora instances.
>     But the
>      >>>> control nodes are spread over three different leafs. So each
>     control
>      >>>> node in a different L2 domain.
>      >>>>
>      >>>> So the question is how to deploy a lb-mgmt-net network in our
>     setup?
>      >>>>
>      >>>> - Compute nodes have no "stretched" L2 domain
>      >>>> - Control nodes, compute nodes and network nodes are in L3
>     networks like
>      >>>> api, storage, ...
>      >>>> - Only network nodes are connected to a L2 domain (with a
>     separated NIC)
>      >>>> providing the "public" network
>      >>>>
>      >>> You'll need to add a new bridge to your compute nodes and create a
>      >>> provider network associated with that bridge. In my setup this is
>      >>> simply a flat network tied to a tagged interface. In your case it
>      >>> probably makes more sense to make a new VNI and create a vxlan
>      >>> provider network. The routing in your switches should handle
>     the rest.
>      >>
>      >> Ok that's what I try right now. But I don't get how to setup
>     something
>      >> like a VxLAN provider Network. I thought only vlan and flat is
>     supported
>      >> as provider network? I guess it is not possible to use the tunnel
>      >> interface that is used for tenant networks?
>      >> So I have to create a separated VxLAN on the control and compute
>     nodes like:
>      >>
>      >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group
>     239.1.1.1
>      >> dev vlan3535 ttl 5
>      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20> dev vxoctavia
>      >> # ip link set vxoctavia up
>      >>
>      >> and use it like a flat provider network, true?
>      >>
>      > This is a fine way of doing things, but it's only half the battle.
>      > You'll need to add a bridge on the compute nodes and bind it to that
>      > new interface. Something like this if you're using openvswitch:
>      >
>      > docker exec openvswitch_db
>      > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>      >
>      > Also you'll want to remove the IP address from that interface as it's
>      > going to be a bridge. Think of it like your public (br-ex) interface
>      > on your network nodes.
>      >
>      >  From there you'll need to update the bridge mappings via kolla
>      > overrides. This would usually be in /etc/kolla/config/neutron. Create
>      > a subdirectory for your compute inventory group and create an
>      > ml2_conf.ini there. So you'd end up with something like:
>      >
>      > [root@kolla-deploy ~]# cat
>     /etc/kolla/config/neutron/compute/ml2_conf.ini
>      > [ml2_type_flat]
>      > flat_networks = mgmt-net
>      >
>      > [ovs]
>      > bridge_mappings = mgmt-net:br-mgmt
>      >
>      > run kolla-ansible --tags neutron reconfigure to push out the new
>      > configs. Note that there is a bug where the neutron containers
>     may not
>      > restart after the change, so you'll probably need to do a 'docker
>      > container restart neutron_openvswitch_agent' on each compute node.
>      >
>      > At this point, you'll need to create the provider network in the
>     admin
>      > project like:
>      >
>      > openstack network create --provider-network-type flat
>      > --provider-physical-network mgmt-net lb-mgmt-net
>      >
>      > And then create a normal subnet attached to this network with some
>      > largeish address scope. I wouldn't use 172.16.0.0/16
>     <http://172.16.0.0/16> because docker
>      > uses that by default. I'm not sure if it matters since the network
>      > traffic will be isolated on a bridge, but it makes me paranoid so I
>      > avoided it.
>      >
>      > For your controllers, I think you can just let everything
>     function off
>      > your api interface since you're routing in your spines. Set up a
>      > gateway somewhere from that lb-mgmt network and save yourself the
>      > complication of adding an interface to your controllers. If you
>     choose
>      > to use a separate interface on your controllers, you'll need to make
>      > sure this patch is in your kolla-ansible install or cherry pick it.
>      >
>      >
>     https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>      >
>      > I don't think that's been backported at all, so unless you're running
>      > off master you'll need to go get it.
>      >
>      >  From here on out, the regular Octavia instruction should serve you.
>      > Create a flavor, Create a security group, and capture their UUIDs
>      > along with the UUID of the provider network you made. Override
>     them in
>      > globals.yml with:
>      >
>      > octavia_amp_boot_network_list: <uuid>
>      > octavia_amp_secgroup_list: <uuid>
>      > octavia_amp_flavor_id: <uuid>
>      >
>      > This is all from my scattered notes and bad memory. Hopefully it
>     makes
>      > sense. Corrections welcome.
>      >
>      > -Erik
>      >
>      >
>      >
>      >>
>      >>
>      >>>
>      >>> -Erik
>      >>>>
>      >>>> All the best,
>      >>>> Florian
>      >>>> _______________________________________________
>      >>>> OpenStack-operators mailing list
>      >>>> [hidden email]
>     <mailto:[hidden email]>
>      >>>>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>     --
>
>     EveryWare AG
>     Florian Engelmann
>     Systems Engineer
>     Zurlindenstrasse 52a
>     CH-8003 Zürich
>
>     tel: +41 44 466 60 00
>     fax: +41 44 466 60 10
>     mail: mailto:[hidden email]
>     <mailto:[hidden email]>
>     web: http://www.everyware.ch
>

--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the
network nodes) is not able to route any traffic to my control nodes in
the spine-leaf layer3 backend network.

And right now there is no br-ex at all nor any "streched" L2 domain
connecting all compute nodes.


So the only solution I can think of right now is to create an overlay
VxLAN in the spine-leaf backend network, connect all compute and control
nodes to this overlay L2 network, create a OVS bridge connected to that
network on the compute nodes and allow the Amphorae to get an IPin this
network as well.
Not to forget about DHCP... so the network nodes will need this bridge
as well.

Am 10/24/18 um 10:01 PM schrieb Erik McCormick:

>
>
> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
> <[hidden email] <mailto:[hidden email]>>
> wrote:
>
>     On the network nodes we've got a dedicated interface to deploy VLANs
>     (like the provider network for internet access). What about creating
>     another VLAN on the network nodes, give that bridge a IP which is
>     part of the subnet of lb-mgmt-net and start the octavia worker,
>     healthmanager and controller on the network nodes binding to that IP?
>
> The problem with that is you can't out an IP in the vlan interface and
> also use it as an OVS bridge, so the Octavia processes would have
> nothing to bind to.
>
>
>     ------------------------------------------------------------------------
>     *From:* Erik McCormick <[hidden email]
>     <mailto:[hidden email]>>
>     *Sent:* Wednesday, October 24, 2018 6:18 PM
>     *To:* Engelmann Florian
>     *Cc:* openstack-operators
>     *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>     VxLAN without DVR
>
>
>     On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>     <[hidden email]
>     <mailto:[hidden email]>> wrote:
>
>         Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>          >
>          >
>          > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>          > <[hidden email]
>         <mailto:[hidden email]>
>         <mailto:[hidden email]
>         <mailto:[hidden email]>>>
>          > wrote:
>          >
>          >     Ohoh - thank you for your empathy :)
>          >     And those great details about how to setup this mgmt network.
>          >     I will try to do so this afternoon but solving that
>         routing "puzzle"
>          >     (virtual network to control nodes) I will need our
>         network guys to help
>          >     me out...
>          >
>          >     But I will need to tell all Amphorae a static route to
>         the gateway that
>          >     is routing to the control nodes?
>          >
>          >
>          > Just set the default gateway when you create the neutron
>         subnet. No need
>          > for excess static routes. The route on the other connection
>         won't
>          > interfere with it as it lives in a namespace.
>
>
>         My compute nodes have no br-ex and there is no L2 domain spread
>         over all
>         compute nodes. As far as I understood lb-mgmt-net is a provider
>         network
>         and has to be flat or VLAN and will need a "physical" gateway
>         (as there
>         is no virtual router).
>         So the question - is it possible to get octavia up and running
>         without a
>         br-ex (L2 domain spread over all compute nodes) on the compute
>         nodes?
>
>
>     Sorry, I only meant it was *like* br-ex on your network nodes. You
>     don't need that on your computes.
>
>     The router here would be whatever does routing in your physical
>     network. Setting the gateway in the neutron subnet simply adds that
>     to the DHCP information sent to the amphorae.
>
>     This does bring up another thingI forgot  though. You'll probably
>     want to add the management network / bridge to your network nodes or
>     wherever you run the DHCP agents. When you create the subnet, be
>     sure to leave some space in the address scope for the physical
>     devices with static IPs.
>
>     As for multiple L2 domains, I can't think of a way to go about that
>     for the lb-mgmt network. It's a single network with a single subnet.
>     Perhaps you could limit load balancers to an AZ in a single rack?
>     Seems not very HA friendly.
>
>
>
>          >
>          >
>          >
>          >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>          >      > So in your other email you said asked if there was a
>         guide for
>          >      > deploying it with Kolla ansible...
>          >      >
>          >      > Oh boy. No there's not. I don't know if you've seen my
>         recent
>          >     mails on
>          >      > Octavia, but I am going through this deployment
>         process with
>          >      > kolla-ansible right now and it is lacking in a few areas.
>          >      >
>          >      > If you plan to use different CA certificates for
>         client and server in
>          >      > Octavia, you'll need to add that into the playbook.
>         Presently it only
>          >      > copies over ca_01.pem, cacert.key, and client.pem and
>         uses them for
>          >      > everything. I was completely unable to make it work
>         with only one CA
>          >      > as I got some SSL errors. It passes gate though, so I
>         aasume it must
>          >      > work? I dunno.
>          >      >
>          >      > Networking comments and a really messy kolla-ansible /
>         octavia
>          >     how-to below...
>          >      >
>          >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>          >      > <[hidden email]
>         <mailto:[hidden email]>
>          >     <mailto:[hidden email]
>         <mailto:[hidden email]>>> wrote:
>          >      >>
>          >      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>          >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>          >      >>> <[hidden email]
>         <mailto:[hidden email]>
>          >     <mailto:[hidden email]
>         <mailto:[hidden email]>>> wrote:
>          >      >>>>
>          >      >>>> Hi,
>          >      >>>>
>          >      >>>> We did test Octavia with Pike (DVR deployment) and
>         everything was
>          >      >>>> working right our of the box. We changed our
>         underlay network to a
>          >      >>>> Layer3 spine-leaf network now and did not deploy
>         DVR as we
>          >     don't wanted
>          >      >>>> to have that much cables in a rack.
>          >      >>>>
>          >      >>>> Octavia is not working right now as the lb-mgmt-net
>         does not
>          >     exist on
>          >      >>>> the compute nodes nor does a br-ex.
>          >      >>>>
>          >      >>>> The control nodes running
>          >      >>>>
>          >      >>>> octavia_worker
>          >      >>>> octavia_housekeeping
>          >      >>>> octavia_health_manager
>          >      >>>> octavia_api
>          >      >>>>
>          >     Amphorae-VMs, z.b.
>          >
>          >     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16>
>         <http://172.16.0.0/16> default GW
>          >      >>>> and as far as I understood octavia_worker,
>          >     octavia_housekeeping and
>          >      >>>> octavia_health_manager have to talk to the amphora
>         instances.
>          >     But the
>          >      >>>> control nodes are spread over three different
>         leafs. So each
>          >     control
>          >      >>>> node in a different L2 domain.
>          >      >>>>
>          >      >>>> So the question is how to deploy a lb-mgmt-net
>         network in our
>          >     setup?
>          >      >>>>
>          >      >>>> - Compute nodes have no "stretched" L2 domain
>          >      >>>> - Control nodes, compute nodes and network nodes
>         are in L3
>          >     networks like
>          >      >>>> api, storage, ...
>          >      >>>> - Only network nodes are connected to a L2 domain
>         (with a
>          >     separated NIC)
>          >      >>>> providing the "public" network
>          >      >>>>
>          >      >>> You'll need to add a new bridge to your compute
>         nodes and create a
>          >      >>> provider network associated with that bridge. In my
>         setup this is
>          >      >>> simply a flat network tied to a tagged interface. In
>         your case it
>          >      >>> probably makes more sense to make a new VNI and
>         create a vxlan
>          >      >>> provider network. The routing in your switches
>         should handle
>          >     the rest.
>          >      >>
>          >      >> Ok that's what I try right now. But I don't get how
>         to setup
>          >     something
>          >      >> like a VxLAN provider Network. I thought only vlan
>         and flat is
>          >     supported
>          >      >> as provider network? I guess it is not possible to
>         use the tunnel
>          >      >> interface that is used for tenant networks?
>          >      >> So I have to create a separated VxLAN on the control
>         and compute
>          >     nodes like:
>          >      >>
>          >      >> # ip link add vxoctavia type vxlan id 42 dstport 4790
>         group
>          >     239.1.1.1
>          >      >> dev vlan3535 ttl 5
>          >      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20>
>         <http://172.16.1.11/20> dev vxoctavia
>          >      >> # ip link set vxoctavia up
>          >      >>
>          >      >> and use it like a flat provider network, true?
>          >      >>
>          >      > This is a fine way of doing things, but it's only half
>         the battle.
>          >      > You'll need to add a bridge on the compute nodes and
>         bind it to that
>          >      > new interface. Something like this if you're using
>         openvswitch:
>          >      >
>          >      > docker exec openvswitch_db
>          >      > /usr/local/bin/kolla_ensure_openvswitch_configured
>         br-mgmt vxoctavia
>          >      >
>          >      > Also you'll want to remove the IP address from that
>         interface as it's
>          >      > going to be a bridge. Think of it like your public
>         (br-ex) interface
>          >      > on your network nodes.
>          >      >
>          >      >  From there you'll need to update the bridge mappings
>         via kolla
>          >      > overrides. This would usually be in
>         /etc/kolla/config/neutron. Create
>          >      > a subdirectory for your compute inventory group and
>         create an
>          >      > ml2_conf.ini there. So you'd end up with something like:
>          >      >
>          >      > [root@kolla-deploy ~]# cat
>          >     /etc/kolla/config/neutron/compute/ml2_conf.ini
>          >      > [ml2_type_flat]
>          >      > flat_networks = mgmt-net
>          >      >
>          >      > [ovs]
>          >      > bridge_mappings = mgmt-net:br-mgmt
>          >      >
>          >      > run kolla-ansible --tags neutron reconfigure to push
>         out the new
>          >      > configs. Note that there is a bug where the neutron
>         containers
>          >     may not
>          >      > restart after the change, so you'll probably need to
>         do a 'docker
>          >      > container restart neutron_openvswitch_agent' on each
>         compute node.
>          >      >
>          >      > At this point, you'll need to create the provider
>         network in the
>          >     admin
>          >      > project like:
>          >      >
>          >      > openstack network create --provider-network-type flat
>          >      > --provider-physical-network mgmt-net lb-mgmt-net
>          >      >
>          >      > And then create a normal subnet attached to this
>         network with some
>          >      > largeish address scope. I wouldn't use 172.16.0.0/16
>         <http://172.16.0.0/16>
>          >     <http://172.16.0.0/16> because docker
>          >      > uses that by default. I'm not sure if it matters since
>         the network
>          >      > traffic will be isolated on a bridge, but it makes me
>         paranoid so I
>          >      > avoided it.
>          >      >
>          >      > For your controllers, I think you can just let everything
>          >     function off
>          >      > your api interface since you're routing in your
>         spines. Set up a
>          >      > gateway somewhere from that lb-mgmt network and save
>         yourself the
>          >      > complication of adding an interface to your
>         controllers. If you
>          >     choose
>          >      > to use a separate interface on your controllers,
>         you'll need to make
>          >      > sure this patch is in your kolla-ansible install or
>         cherry pick it.
>          >      >
>          >      >
>          >
>         https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
>          >      >
>          >      > I don't think that's been backported at all, so unless
>         you're running
>          >      > off master you'll need to go get it.
>          >      >
>          >      >  From here on out, the regular Octavia instruction
>         should serve you.
>          >      > Create a flavor, Create a security group, and capture
>         their UUIDs
>          >      > along with the UUID of the provider network you made.
>         Override
>          >     them in
>          >      > globals.yml with:
>          >      >
>          >      > octavia_amp_boot_network_list: <uuid>
>          >      > octavia_amp_secgroup_list: <uuid>
>          >      > octavia_amp_flavor_id: <uuid>
>          >      >
>          >      > This is all from my scattered notes and bad memory.
>         Hopefully it
>          >     makes
>          >      > sense. Corrections welcome.
>          >      >
>          >      > -Erik
>          >      >
>          >      >
>          >      >
>          >      >>
>          >      >>
>          >      >>>
>          >      >>> -Erik
>          >      >>>>
>          >      >>>> All the best,
>          >      >>>> Florian
>          >      >>>> _______________________________________________
>          >      >>>> OpenStack-operators mailing list
>          >      >>>> [hidden email]
>         <mailto:[hidden email]>
>          >     <mailto:[hidden email]
>         <mailto:[hidden email]>>
>          >      >>>>
>          >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>          >
>          >     --
>          >
>          >     EveryWare AG
>          >     Florian Engelmann
>          >     Systems Engineer
>          >     Zurlindenstrasse 52a
>          >     CH-8003 Zürich
>          >
>          >     tel: +41 44 466 60 00
>          >     fax: +41 44 466 60 10
>          >     mail: mailto:[hidden email]
>         <mailto:[hidden email]>
>          >     <mailto:[hidden email]
>         <mailto:[hidden email]>>
>          >     web: http://www.everyware.ch
>          >
>
>         --
>
>         EveryWare AG
>         Florian Engelmann
>         Systems Engineer
>         Zurlindenstrasse 52a
>         CH-8003 Zürich
>
>         tel: +41 44 466 60 00
>         fax: +41 44 466 60 10
>         mail: mailto:[hidden email]
>         <mailto:[hidden email]>
>         web: http://www.everyware.ch
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
this VxLAN? How to do something like that?

Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:

> Hmm - so right now I can't see any routed option because:
>
> The gateway connected to the VLAN provider networks (bond1 on the
> network nodes) is not able to route any traffic to my control nodes in
> the spine-leaf layer3 backend network.
>
> And right now there is no br-ex at all nor any "streched" L2 domain
> connecting all compute nodes.
>
>
> So the only solution I can think of right now is to create an overlay
> VxLAN in the spine-leaf backend network, connect all compute and control
> nodes to this overlay L2 network, create a OVS bridge connected to that
> network on the compute nodes and allow the Amphorae to get an IPin this
> network as well.
> Not to forget about DHCP... so the network nodes will need this bridge
> as well.
>
> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>
>>
>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
>> <[hidden email]
>> <mailto:[hidden email]>> wrote:
>>
>>     On the network nodes we've got a dedicated interface to deploy VLANs
>>     (like the provider network for internet access). What about creating
>>     another VLAN on the network nodes, give that bridge a IP which is
>>     part of the subnet of lb-mgmt-net and start the octavia worker,
>>     healthmanager and controller on the network nodes binding to that IP?
>>
>> The problem with that is you can't out an IP in the vlan interface and
>> also use it as an OVS bridge, so the Octavia processes would have
>> nothing to bind to.
>>
>>
>>    
>> ------------------------------------------------------------------------
>>     *From:* Erik McCormick <[hidden email]
>>     <mailto:[hidden email]>>
>>     *Sent:* Wednesday, October 24, 2018 6:18 PM
>>     *To:* Engelmann Florian
>>     *Cc:* openstack-operators
>>     *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>     VxLAN without DVR
>>
>>
>>     On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>     <[hidden email]
>>     <mailto:[hidden email]>> wrote:
>>
>>         Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>>          >
>>          >
>>          > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>          > <[hidden email]
>>         <mailto:[hidden email]>
>>         <mailto:[hidden email]
>>         <mailto:[hidden email]>>>
>>          > wrote:
>>          >
>>          >     Ohoh - thank you for your empathy :)
>>          >     And those great details about how to setup this mgmt
>> network.
>>          >     I will try to do so this afternoon but solving that
>>         routing "puzzle"
>>          >     (virtual network to control nodes) I will need our
>>         network guys to help
>>          >     me out...
>>          >
>>          >     But I will need to tell all Amphorae a static route to
>>         the gateway that
>>          >     is routing to the control nodes?
>>          >
>>          >
>>          > Just set the default gateway when you create the neutron
>>         subnet. No need
>>          > for excess static routes. The route on the other connection
>>         won't
>>          > interfere with it as it lives in a namespace.
>>
>>
>>         My compute nodes have no br-ex and there is no L2 domain spread
>>         over all
>>         compute nodes. As far as I understood lb-mgmt-net is a provider
>>         network
>>         and has to be flat or VLAN and will need a "physical" gateway
>>         (as there
>>         is no virtual router).
>>         So the question - is it possible to get octavia up and running
>>         without a
>>         br-ex (L2 domain spread over all compute nodes) on the compute
>>         nodes?
>>
>>
>>     Sorry, I only meant it was *like* br-ex on your network nodes. You
>>     don't need that on your computes.
>>
>>     The router here would be whatever does routing in your physical
>>     network. Setting the gateway in the neutron subnet simply adds that
>>     to the DHCP information sent to the amphorae.
>>
>>     This does bring up another thingI forgot  though. You'll probably
>>     want to add the management network / bridge to your network nodes or
>>     wherever you run the DHCP agents. When you create the subnet, be
>>     sure to leave some space in the address scope for the physical
>>     devices with static IPs.
>>
>>     As for multiple L2 domains, I can't think of a way to go about that
>>     for the lb-mgmt network. It's a single network with a single subnet.
>>     Perhaps you could limit load balancers to an AZ in a single rack?
>>     Seems not very HA friendly.
>>
>>
>>
>>          >
>>          >
>>          >
>>          >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>>          >      > So in your other email you said asked if there was a
>>         guide for
>>          >      > deploying it with Kolla ansible...
>>          >      >
>>          >      > Oh boy. No there's not. I don't know if you've seen my
>>         recent
>>          >     mails on
>>          >      > Octavia, but I am going through this deployment
>>         process with
>>          >      > kolla-ansible right now and it is lacking in a few
>> areas.
>>          >      >
>>          >      > If you plan to use different CA certificates for
>>         client and server in
>>          >      > Octavia, you'll need to add that into the playbook.
>>         Presently it only
>>          >      > copies over ca_01.pem, cacert.key, and client.pem and
>>         uses them for
>>          >      > everything. I was completely unable to make it work
>>         with only one CA
>>          >      > as I got some SSL errors. It passes gate though, so I
>>         aasume it must
>>          >      > work? I dunno.
>>          >      >
>>          >      > Networking comments and a really messy kolla-ansible /
>>         octavia
>>          >     how-to below...
>>          >      >
>>          >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>>          >      > <[hidden email]
>>         <mailto:[hidden email]>
>>          >     <mailto:[hidden email]
>>         <mailto:[hidden email]>>> wrote:
>>          >      >>
>>          >      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>          >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>          >      >>> <[hidden email]
>>         <mailto:[hidden email]>
>>          >     <mailto:[hidden email]
>>         <mailto:[hidden email]>>> wrote:
>>          >      >>>>
>>          >      >>>> Hi,
>>          >      >>>>
>>          >      >>>> We did test Octavia with Pike (DVR deployment) and
>>         everything was
>>          >      >>>> working right our of the box. We changed our
>>         underlay network to a
>>          >      >>>> Layer3 spine-leaf network now and did not deploy
>>         DVR as we
>>          >     don't wanted
>>          >      >>>> to have that much cables in a rack.
>>          >      >>>>
>>          >      >>>> Octavia is not working right now as the lb-mgmt-net
>>         does not
>>          >     exist on
>>          >      >>>> the compute nodes nor does a br-ex.
>>          >      >>>>
>>          >      >>>> The control nodes running
>>          >      >>>>
>>          >      >>>> octavia_worker
>>          >      >>>> octavia_housekeeping
>>          >      >>>> octavia_health_manager
>>          >      >>>> octavia_api
>>          >      >>>>
>>          >     Amphorae-VMs, z.b.
>>          >
>>          >     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16>
>>         <http://172.16.0.0/16> default GW
>>          >      >>>> and as far as I understood octavia_worker,
>>          >     octavia_housekeeping and
>>          >      >>>> octavia_health_manager have to talk to the amphora
>>         instances.
>>          >     But the
>>          >      >>>> control nodes are spread over three different
>>         leafs. So each
>>          >     control
>>          >      >>>> node in a different L2 domain.
>>          >      >>>>
>>          >      >>>> So the question is how to deploy a lb-mgmt-net
>>         network in our
>>          >     setup?
>>          >      >>>>
>>          >      >>>> - Compute nodes have no "stretched" L2 domain
>>          >      >>>> - Control nodes, compute nodes and network nodes
>>         are in L3
>>          >     networks like
>>          >      >>>> api, storage, ...
>>          >      >>>> - Only network nodes are connected to a L2 domain
>>         (with a
>>          >     separated NIC)
>>          >      >>>> providing the "public" network
>>          >      >>>>
>>          >      >>> You'll need to add a new bridge to your compute
>>         nodes and create a
>>          >      >>> provider network associated with that bridge. In my
>>         setup this is
>>          >      >>> simply a flat network tied to a tagged interface. In
>>         your case it
>>          >      >>> probably makes more sense to make a new VNI and
>>         create a vxlan
>>          >      >>> provider network. The routing in your switches
>>         should handle
>>          >     the rest.
>>          >      >>
>>          >      >> Ok that's what I try right now. But I don't get how
>>         to setup
>>          >     something
>>          >      >> like a VxLAN provider Network. I thought only vlan
>>         and flat is
>>          >     supported
>>          >      >> as provider network? I guess it is not possible to
>>         use the tunnel
>>          >      >> interface that is used for tenant networks?
>>          >      >> So I have to create a separated VxLAN on the control
>>         and compute
>>          >     nodes like:
>>          >      >>
>>          >      >> # ip link add vxoctavia type vxlan id 42 dstport 4790
>>         group
>>          >     239.1.1.1
>>          >      >> dev vlan3535 ttl 5
>>          >      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20>
>>         <http://172.16.1.11/20> dev vxoctavia
>>          >      >> # ip link set vxoctavia up
>>          >      >>
>>          >      >> and use it like a flat provider network, true?
>>          >      >>
>>          >      > This is a fine way of doing things, but it's only half
>>         the battle.
>>          >      > You'll need to add a bridge on the compute nodes and
>>         bind it to that
>>          >      > new interface. Something like this if you're using
>>         openvswitch:
>>          >      >
>>          >      > docker exec openvswitch_db
>>          >      > /usr/local/bin/kolla_ensure_openvswitch_configured
>>         br-mgmt vxoctavia
>>          >      >
>>          >      > Also you'll want to remove the IP address from that
>>         interface as it's
>>          >      > going to be a bridge. Think of it like your public
>>         (br-ex) interface
>>          >      > on your network nodes.
>>          >      >
>>          >      >  From there you'll need to update the bridge mappings
>>         via kolla
>>          >      > overrides. This would usually be in
>>         /etc/kolla/config/neutron. Create
>>          >      > a subdirectory for your compute inventory group and
>>         create an
>>          >      > ml2_conf.ini there. So you'd end up with something
>> like:
>>          >      >
>>          >      > [root@kolla-deploy ~]# cat
>>          >     /etc/kolla/config/neutron/compute/ml2_conf.ini
>>          >      > [ml2_type_flat]
>>          >      > flat_networks = mgmt-net
>>          >      >
>>          >      > [ovs]
>>          >      > bridge_mappings = mgmt-net:br-mgmt
>>          >      >
>>          >      > run kolla-ansible --tags neutron reconfigure to push
>>         out the new
>>          >      > configs. Note that there is a bug where the neutron
>>         containers
>>          >     may not
>>          >      > restart after the change, so you'll probably need to
>>         do a 'docker
>>          >      > container restart neutron_openvswitch_agent' on each
>>         compute node.
>>          >      >
>>          >      > At this point, you'll need to create the provider
>>         network in the
>>          >     admin
>>          >      > project like:
>>          >      >
>>          >      > openstack network create --provider-network-type flat
>>          >      > --provider-physical-network mgmt-net lb-mgmt-net
>>          >      >
>>          >      > And then create a normal subnet attached to this
>>         network with some
>>          >      > largeish address scope. I wouldn't use 172.16.0.0/16
>>         <http://172.16.0.0/16>
>>          >     <http://172.16.0.0/16> because docker
>>          >      > uses that by default. I'm not sure if it matters since
>>         the network
>>          >      > traffic will be isolated on a bridge, but it makes me
>>         paranoid so I
>>          >      > avoided it.
>>          >      >
>>          >      > For your controllers, I think you can just let
>> everything
>>          >     function off
>>          >      > your api interface since you're routing in your
>>         spines. Set up a
>>          >      > gateway somewhere from that lb-mgmt network and save
>>         yourself the
>>          >      > complication of adding an interface to your
>>         controllers. If you
>>          >     choose
>>          >      > to use a separate interface on your controllers,
>>         you'll need to make
>>          >      > sure this patch is in your kolla-ansible install or
>>         cherry pick it.
>>          >      >
>>          >      >
>>          >
>>        
>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 
>>
>>          >      >
>>          >      > I don't think that's been backported at all, so unless
>>         you're running
>>          >      > off master you'll need to go get it.
>>          >      >
>>          >      >  From here on out, the regular Octavia instruction
>>         should serve you.
>>          >      > Create a flavor, Create a security group, and capture
>>         their UUIDs
>>          >      > along with the UUID of the provider network you made.
>>         Override
>>          >     them in
>>          >      > globals.yml with:
>>          >      >
>>          >      > octavia_amp_boot_network_list: <uuid>
>>          >      > octavia_amp_secgroup_list: <uuid>
>>          >      > octavia_amp_flavor_id: <uuid>
>>          >      >
>>          >      > This is all from my scattered notes and bad memory.
>>         Hopefully it
>>          >     makes
>>          >      > sense. Corrections welcome.
>>          >      >
>>          >      > -Erik
>>          >      >
>>          >      >
>>          >      >
>>          >      >>
>>          >      >>
>>          >      >>>
>>          >      >>> -Erik
>>          >      >>>>
>>          >      >>>> All the best,
>>          >      >>>> Florian
>>          >      >>>> _______________________________________________
>>          >      >>>> OpenStack-operators mailing list
>>          >      >>>> [hidden email]
>>         <mailto:[hidden email]>
>>          >     <mailto:[hidden email]
>>         <mailto:[hidden email]>>
>>          >      >>>>
>>          >
>>        
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>          >
>>          >     --
>>          >
>>          >     EveryWare AG
>>          >     Florian Engelmann
>>          >     Systems Engineer
>>          >     Zurlindenstrasse 52a
>>          >     CH-8003 Zürich
>>          >
>>          >     tel: +41 44 466 60 00
>>          >     fax: +41 44 466 60 10
>>          >     mail: mailto:[hidden email]
>>         <mailto:[hidden email]>
>>          >     <mailto:[hidden email]
>>         <mailto:[hidden email]>>
>>          >     web: http://www.everyware.ch
>>          >
>>
>>         --
>>         EveryWare AG
>>         Florian Engelmann
>>         Systems Engineer
>>         Zurlindenstrasse 52a
>>         CH-8003 Zürich
>>
>>         tel: +41 44 466 60 00
>>         fax: +41 44 466 60 10
>>         mail: mailto:[hidden email]
>>         <mailto:[hidden email]>
>>         web: http://www.everyware.ch
>>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann
It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes nor is OVS.

Is it a option to deploy those Octavia services needing this conenction
to the compute or network nodes and use o-hm0?

Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:

> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
> this VxLAN? How to do something like that?
>
> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>> Hmm - so right now I can't see any routed option because:
>>
>> The gateway connected to the VLAN provider networks (bond1 on the
>> network nodes) is not able to route any traffic to my control nodes in
>> the spine-leaf layer3 backend network.
>>
>> And right now there is no br-ex at all nor any "streched" L2 domain
>> connecting all compute nodes.
>>
>>
>> So the only solution I can think of right now is to create an overlay
>> VxLAN in the spine-leaf backend network, connect all compute and
>> control nodes to this overlay L2 network, create a OVS bridge
>> connected to that network on the compute nodes and allow the Amphorae
>> to get an IPin this network as well.
>> Not to forget about DHCP... so the network nodes will need this bridge
>> as well.
>>
>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>>
>>>
>>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
>>> <[hidden email]
>>> <mailto:[hidden email]>> wrote:
>>>
>>>     On the network nodes we've got a dedicated interface to deploy VLANs
>>>     (like the provider network for internet access). What about creating
>>>     another VLAN on the network nodes, give that bridge a IP which is
>>>     part of the subnet of lb-mgmt-net and start the octavia worker,
>>>     healthmanager and controller on the network nodes binding to that
>>> IP?
>>>
>>> The problem with that is you can't out an IP in the vlan interface
>>> and also use it as an OVS bridge, so the Octavia processes would have
>>> nothing to bind to.
>>>
>>>
>>> ------------------------------------------------------------------------
>>>     *From:* Erik McCormick <[hidden email]
>>>     <mailto:[hidden email]>>
>>>     *Sent:* Wednesday, October 24, 2018 6:18 PM
>>>     *To:* Engelmann Florian
>>>     *Cc:* openstack-operators
>>>     *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>>     VxLAN without DVR
>>>
>>>
>>>     On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>>     <[hidden email]
>>>     <mailto:[hidden email]>> wrote:
>>>
>>>         Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>>>          >
>>>          >
>>>          > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>>          > <[hidden email]
>>>         <mailto:[hidden email]>
>>>         <mailto:[hidden email]
>>>         <mailto:[hidden email]>>>
>>>          > wrote:
>>>          >
>>>          >     Ohoh - thank you for your empathy :)
>>>          >     And those great details about how to setup this mgmt
>>> network.
>>>          >     I will try to do so this afternoon but solving that
>>>         routing "puzzle"
>>>          >     (virtual network to control nodes) I will need our
>>>         network guys to help
>>>          >     me out...
>>>          >
>>>          >     But I will need to tell all Amphorae a static route to
>>>         the gateway that
>>>          >     is routing to the control nodes?
>>>          >
>>>          >
>>>          > Just set the default gateway when you create the neutron
>>>         subnet. No need
>>>          > for excess static routes. The route on the other connection
>>>         won't
>>>          > interfere with it as it lives in a namespace.
>>>
>>>
>>>         My compute nodes have no br-ex and there is no L2 domain spread
>>>         over all
>>>         compute nodes. As far as I understood lb-mgmt-net is a provider
>>>         network
>>>         and has to be flat or VLAN and will need a "physical" gateway
>>>         (as there
>>>         is no virtual router).
>>>         So the question - is it possible to get octavia up and running
>>>         without a
>>>         br-ex (L2 domain spread over all compute nodes) on the compute
>>>         nodes?
>>>
>>>
>>>     Sorry, I only meant it was *like* br-ex on your network nodes. You
>>>     don't need that on your computes.
>>>
>>>     The router here would be whatever does routing in your physical
>>>     network. Setting the gateway in the neutron subnet simply adds that
>>>     to the DHCP information sent to the amphorae.
>>>
>>>     This does bring up another thingI forgot  though. You'll probably
>>>     want to add the management network / bridge to your network nodes or
>>>     wherever you run the DHCP agents. When you create the subnet, be
>>>     sure to leave some space in the address scope for the physical
>>>     devices with static IPs.
>>>
>>>     As for multiple L2 domains, I can't think of a way to go about that
>>>     for the lb-mgmt network. It's a single network with a single subnet.
>>>     Perhaps you could limit load balancers to an AZ in a single rack?
>>>     Seems not very HA friendly.
>>>
>>>
>>>
>>>          >
>>>          >
>>>          >
>>>          >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>>>          >      > So in your other email you said asked if there was a
>>>         guide for
>>>          >      > deploying it with Kolla ansible...
>>>          >      >
>>>          >      > Oh boy. No there's not. I don't know if you've seen my
>>>         recent
>>>          >     mails on
>>>          >      > Octavia, but I am going through this deployment
>>>         process with
>>>          >      > kolla-ansible right now and it is lacking in a few
>>> areas.
>>>          >      >
>>>          >      > If you plan to use different CA certificates for
>>>         client and server in
>>>          >      > Octavia, you'll need to add that into the playbook.
>>>         Presently it only
>>>          >      > copies over ca_01.pem, cacert.key, and client.pem and
>>>         uses them for
>>>          >      > everything. I was completely unable to make it work
>>>         with only one CA
>>>          >      > as I got some SSL errors. It passes gate though, so I
>>>         aasume it must
>>>          >      > work? I dunno.
>>>          >      >
>>>          >      > Networking comments and a really messy kolla-ansible /
>>>         octavia
>>>          >     how-to below...
>>>          >      >
>>>          >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>>>          >      > <[hidden email]
>>>         <mailto:[hidden email]>
>>>          >     <mailto:[hidden email]
>>>         <mailto:[hidden email]>>> wrote:
>>>          >      >>
>>>          >      >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
>>>          >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>>          >      >>> <[hidden email]
>>>         <mailto:[hidden email]>
>>>          >     <mailto:[hidden email]
>>>         <mailto:[hidden email]>>> wrote:
>>>          >      >>>>
>>>          >      >>>> Hi,
>>>          >      >>>>
>>>          >      >>>> We did test Octavia with Pike (DVR deployment) and
>>>         everything was
>>>          >      >>>> working right our of the box. We changed our
>>>         underlay network to a
>>>          >      >>>> Layer3 spine-leaf network now and did not deploy
>>>         DVR as we
>>>          >     don't wanted
>>>          >      >>>> to have that much cables in a rack.
>>>          >      >>>>
>>>          >      >>>> Octavia is not working right now as the lb-mgmt-net
>>>         does not
>>>          >     exist on
>>>          >      >>>> the compute nodes nor does a br-ex.
>>>          >      >>>>
>>>          >      >>>> The control nodes running
>>>          >      >>>>
>>>          >      >>>> octavia_worker
>>>          >      >>>> octavia_housekeeping
>>>          >      >>>> octavia_health_manager
>>>          >      >>>> octavia_api
>>>          >      >>>>
>>>          >     Amphorae-VMs, z.b.
>>>          >
>>>          >     lb-mgmt-net 172.16.0.0/16 <http://172.16.0.0/16>
>>>         <http://172.16.0.0/16> default GW
>>>          >      >>>> and as far as I understood octavia_worker,
>>>          >     octavia_housekeeping and
>>>          >      >>>> octavia_health_manager have to talk to the amphora
>>>         instances.
>>>          >     But the
>>>          >      >>>> control nodes are spread over three different
>>>         leafs. So each
>>>          >     control
>>>          >      >>>> node in a different L2 domain.
>>>          >      >>>>
>>>          >      >>>> So the question is how to deploy a lb-mgmt-net
>>>         network in our
>>>          >     setup?
>>>          >      >>>>
>>>          >      >>>> - Compute nodes have no "stretched" L2 domain
>>>          >      >>>> - Control nodes, compute nodes and network nodes
>>>         are in L3
>>>          >     networks like
>>>          >      >>>> api, storage, ...
>>>          >      >>>> - Only network nodes are connected to a L2 domain
>>>         (with a
>>>          >     separated NIC)
>>>          >      >>>> providing the "public" network
>>>          >      >>>>
>>>          >      >>> You'll need to add a new bridge to your compute
>>>         nodes and create a
>>>          >      >>> provider network associated with that bridge. In my
>>>         setup this is
>>>          >      >>> simply a flat network tied to a tagged interface. In
>>>         your case it
>>>          >      >>> probably makes more sense to make a new VNI and
>>>         create a vxlan
>>>          >      >>> provider network. The routing in your switches
>>>         should handle
>>>          >     the rest.
>>>          >      >>
>>>          >      >> Ok that's what I try right now. But I don't get how
>>>         to setup
>>>          >     something
>>>          >      >> like a VxLAN provider Network. I thought only vlan
>>>         and flat is
>>>          >     supported
>>>          >      >> as provider network? I guess it is not possible to
>>>         use the tunnel
>>>          >      >> interface that is used for tenant networks?
>>>          >      >> So I have to create a separated VxLAN on the control
>>>         and compute
>>>          >     nodes like:
>>>          >      >>
>>>          >      >> # ip link add vxoctavia type vxlan id 42 dstport 4790
>>>         group
>>>          >     239.1.1.1
>>>          >      >> dev vlan3535 ttl 5
>>>          >      >> # ip addr add 172.16.1.11/20 <http://172.16.1.11/20>
>>>         <http://172.16.1.11/20> dev vxoctavia
>>>          >      >> # ip link set vxoctavia up
>>>          >      >>
>>>          >      >> and use it like a flat provider network, true?
>>>          >      >>
>>>          >      > This is a fine way of doing things, but it's only half
>>>         the battle.
>>>          >      > You'll need to add a bridge on the compute nodes and
>>>         bind it to that
>>>          >      > new interface. Something like this if you're using
>>>         openvswitch:
>>>          >      >
>>>          >      > docker exec openvswitch_db
>>>          >      > /usr/local/bin/kolla_ensure_openvswitch_configured
>>>         br-mgmt vxoctavia
>>>          >      >
>>>          >      > Also you'll want to remove the IP address from that
>>>         interface as it's
>>>          >      > going to be a bridge. Think of it like your public
>>>         (br-ex) interface
>>>          >      > on your network nodes.
>>>          >      >
>>>          >      >  From there you'll need to update the bridge mappings
>>>         via kolla
>>>          >      > overrides. This would usually be in
>>>         /etc/kolla/config/neutron. Create
>>>          >      > a subdirectory for your compute inventory group and
>>>         create an
>>>          >      > ml2_conf.ini there. So you'd end up with something
>>> like:
>>>          >      >
>>>          >      > [root@kolla-deploy ~]# cat
>>>          >     /etc/kolla/config/neutron/compute/ml2_conf.ini
>>>          >      > [ml2_type_flat]
>>>          >      > flat_networks = mgmt-net
>>>          >      >
>>>          >      > [ovs]
>>>          >      > bridge_mappings = mgmt-net:br-mgmt
>>>          >      >
>>>          >      > run kolla-ansible --tags neutron reconfigure to push
>>>         out the new
>>>          >      > configs. Note that there is a bug where the neutron
>>>         containers
>>>          >     may not
>>>          >      > restart after the change, so you'll probably need to
>>>         do a 'docker
>>>          >      > container restart neutron_openvswitch_agent' on each
>>>         compute node.
>>>          >      >
>>>          >      > At this point, you'll need to create the provider
>>>         network in the
>>>          >     admin
>>>          >      > project like:
>>>          >      >
>>>          >      > openstack network create --provider-network-type flat
>>>          >      > --provider-physical-network mgmt-net lb-mgmt-net
>>>          >      >
>>>          >      > And then create a normal subnet attached to this
>>>         network with some
>>>          >      > largeish address scope. I wouldn't use 172.16.0.0/16
>>>         <http://172.16.0.0/16>
>>>          >     <http://172.16.0.0/16> because docker
>>>          >      > uses that by default. I'm not sure if it matters since
>>>         the network
>>>          >      > traffic will be isolated on a bridge, but it makes me
>>>         paranoid so I
>>>          >      > avoided it.
>>>          >      >
>>>          >      > For your controllers, I think you can just let
>>> everything
>>>          >     function off
>>>          >      > your api interface since you're routing in your
>>>         spines. Set up a
>>>          >      > gateway somewhere from that lb-mgmt network and save
>>>         yourself the
>>>          >      > complication of adding an interface to your
>>>         controllers. If you
>>>          >     choose
>>>          >      > to use a separate interface on your controllers,
>>>         you'll need to make
>>>          >      > sure this patch is in your kolla-ansible install or
>>>         cherry pick it.
>>>          >      >
>>>          >      >
>>>          >
>>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 
>>>
>>>          >      >
>>>          >      > I don't think that's been backported at all, so unless
>>>         you're running
>>>          >      > off master you'll need to go get it.
>>>          >      >
>>>          >      >  From here on out, the regular Octavia instruction
>>>         should serve you.
>>>          >      > Create a flavor, Create a security group, and capture
>>>         their UUIDs
>>>          >      > along with the UUID of the provider network you made.
>>>         Override
>>>          >     them in
>>>          >      > globals.yml with:
>>>          >      >
>>>          >      > octavia_amp_boot_network_list: <uuid>
>>>          >      > octavia_amp_secgroup_list: <uuid>
>>>          >      > octavia_amp_flavor_id: <uuid>
>>>          >      >
>>>          >      > This is all from my scattered notes and bad memory.
>>>         Hopefully it
>>>          >     makes
>>>          >      > sense. Corrections welcome.
>>>          >      >
>>>          >      > -Erik
>>>          >      >
>>>          >      >
>>>          >      >
>>>          >      >>
>>>          >      >>
>>>          >      >>>
>>>          >      >>> -Erik
>>>          >      >>>>
>>>          >      >>>> All the best,
>>>          >      >>>> Florian
>>>          >      >>>> _______________________________________________
>>>          >      >>>> OpenStack-operators mailing list
>>>          >      >>>> [hidden email]
>>>         <mailto:[hidden email]>
>>>          >     <mailto:[hidden email]
>>>         <mailto:[hidden email]>>
>>>          >      >>>>
>>>          >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>          >
>>>          >     --
>>>          >
>>>          >     EveryWare AG
>>>          >     Florian Engelmann
>>>          >     Systems Engineer
>>>          >     Zurlindenstrasse 52a
>>>          >     CH-8003 Zürich
>>>          >
>>>          >     tel: +41 44 466 60 00
>>>          >     fax: +41 44 466 60 10
>>>          >     mail: mailto:[hidden email]
>>>         <mailto:[hidden email]>
>>>          >     <mailto:[hidden email]
>>>         <mailto:[hidden email]>>
>>>          >     web: http://www.everyware.ch
>>>          >
>>>
>>>         --
>>>         EveryWare AG
>>>         Florian Engelmann
>>>         Systems Engineer
>>>         Zurlindenstrasse 52a
>>>         CH-8003 Zürich
>>>
>>>         tel: +41 44 466 60 00
>>>         fax: +41 44 466 60 10
>>>         mail: mailto:[hidden email]
>>>         <mailto:[hidden email]>
>>>         web: http://www.everyware.ch
>>>
>>
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> [hidden email]
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> [hidden email]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:[hidden email]
web: http://www.everyware.ch

_______________________________________________
OpenStack-operators mailing list
[hidden email]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

smime.p7s (6K) Download Attachment
12