Keystone handling http requests synchronously


Keystone handling http requests synchronously

Kanade, Rohan
Hi,

I was trying to create 200 users with the keystone client. All the users are unique and are created on separate threads, which are all started at the same time.

Keystone is handling each request synchronously, i.e. user 1 is created, then user 2 is created, and so on.

Shouldn't Keystone be running a greenthread for each request and creating these users asynchronously? That is, start creating user 1 and, while that request is being handled, start creating user 2 ... user n.

I have attached the keystone service logs for reference:
http://paste.openstack.org/show/34216/
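
For reference, the test looks roughly like the following sketch (not the exact script; it assumes python-keystoneclient's v2.0 API, and the endpoint, credentials and naming are illustrative):

    # Rough sketch of the test: create 200 users concurrently from separate
    # threads. Endpoint, credentials and naming below are placeholders.
    import threading
    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin', password='secret',
                             tenant_name='admin',
                             auth_url='http://127.0.0.1:35357/v2.0')

    def create_user(i):
        keystone.users.create(name='user-%03d' % i,
                              password='secret-%03d' % i,
                              email='user-%03d@example.com' % i)

    threads = [threading.Thread(target=create_user, args=(i,))
               for i in range(200)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()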


Re: Keystone handling http requests synchronously

Palanisamy, Anand
Hi Rohan,

What is your keystone backend? LDAP?

Thanks
Anand
(408)601-7148


Re: Keystone handling http requests synchronously

Kanade, Rohan
In reply to this post by Kanade, Rohan
My keystone backend is SQL (MySQL).


Re: Keystone handling http requests synchronously

Jay Pipes
In reply to this post by Kanade, Rohan
Unfortunately, Keystone's WSGI server is only a single process, with a
greenthread pool. Unlike Glance, Nova, Cinder, and Swift, which all use
multi-process, greenthread-pool-per-process WSGI servers[1], Keystone
does it differently[2].

There was a patchset[3] that added multiprocess support to Keystone, but
due to objections from termie and others about it not being necessary,
it died on the vine. Termie even noted that Keystone "was designed to be
run as multiple instances and load balanced over and [he felt] that
should be the preferred scaling point".

Because the mysql client connection is C-based, calls to it will be
blocking operations on greenthreads within a single process, meaning
even if multiple greenthreads are spawned for those 200 incoming
requests, they will be processed synchronously.

The solution is for Keystone to implement the same multi-processed WSGI
worker stuff that is in the other OpenStack projects. Or, diverge from
the deployment solution of Nova, Glance, Cinder, and Swift, and manually
run multiple instances of keystone, as Termie suggests.

Best,
-jay

[1] All pretty much derived from the original Swift code, with some Oslo
improvements around config
[2] Compare
https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
with
https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py
[3] https://review.openstack.org/#/c/7017/
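
For illustration, the multi-process, greenthread-pool-per-process pattern described above looks roughly like this sketch (not the actual Nova/Glance/Swift code; the app, port and worker count are placeholders):

    # Sketch of a multi-process, greenthread-pool-per-process WSGI server:
    # the parent opens the listening socket, forks N workers, and each worker
    # serves requests from its own eventlet greenthread pool.
    import os
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    sock = eventlet.listen(('0.0.0.0', 5000))
    workers = 4

    for _ in range(workers):
        if os.fork() == 0:                     # child process
            pool = eventlet.GreenPool(1000)    # greenthread pool per process
            wsgi.server(sock, app, custom_pool=pool)
            os._exit(0)

    for _ in range(workers):                   # parent waits for the workers
        os.wait()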


Re: Keystone handling http requests synchronously

Joshua Harlow
Or I think you can run keystone in wsgi+apache easily, thus getting you multiprocess support via the Apache worker processes.

Sent from my really tiny device...


Re: Keystone handling http requests synchronously

Joshua Harlow

See: https://github.com/openstack/keystone/tree/master/httpd

For example...

This lets Apache handle the multiprocessing, instead of how nova, glance ... have basically recreated the same mechanism that Apache has had for years.
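
A WSGI entry point for that can be as small as the following sketch (the config path, pipeline name and Apache directives are illustrative; the real scripts live in the httpd/ directory linked above):

    # Hypothetical mod_wsgi entry point: each Apache worker process loads this
    # file and serves the WSGI app it exposes. Paths and the pipeline name are
    # illustrative, not necessarily Keystone's actual layout.
    #
    # Apache side, roughly (directives illustrative):
    #   WSGIDaemonProcess keystone processes=4 threads=1
    #   WSGIScriptAlias /keystone /var/www/keystone/main.wsgi
    from paste import deploy

    application = deploy.loadapp('config:/etc/keystone/keystone.conf',
                                 name='main')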

Sent from my really tiny device...


Re: Keystone handling http requests synchronously

Jay Pipes
Sure, you could do that, of course. Just like you could use gunicorn or
some other web server. Just like you could deploy any of the other
OpenStack services that way.

It would just be nice if one could configure Keystone in the same manner
that all the other OpenStack services are configured.

-jay


Re: Keystone handling http requests synchronously

Kanade, Rohan
In reply to this post by Kanade, Rohan
>Because the mysql client connection is C-based, calls to it will be
>blocking operations on greenthreads within a single process, meaning
>even if multiple greenthreads are spawned for those 200 incoming
>requests, they will be processed synchronously.

Hi Jay, can you please specify which methods in the sqlalchemy library block the greenthreads in a single process?
If I call get_session multiple times during an HTTP API request (say create_user), does it block all greenthreads on each get_session[1]? Or do the session.commit, session.begin, or session.rollback methods block all greenthreads?

[1] https://github.com/openstack/keystone/blob/master/keystone/common/sql/core.py#L217


Re: Keystone handling http requests synchronously

Jay Pipes

It's actually not the SQLAlchemy library at all... it's the fact that
all greenthreads in the same process share a single linked external C
library and eventlet cannot monkey-patch the blocking calls in the mysql
library in order to make them non-blocking. See here for more details:

http://docs.openstack.org/developer/nova/devref/threading.html
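
To make that concrete, here is a small sketch (not Keystone code): with eventlet's monkey patching, a pure-Python driver such as PyMySQL does its socket I/O through the patched socket module and so yields to other greenthreads while a query is in flight, whereas MySQL-python does its I/O inside C and blocks the whole process. The connection parameters and query below are placeholders.

    # Illustrative only: pure-Python DB I/O cooperates with eventlet,
    # C-level DB I/O does not.
    import eventlet
    eventlet.monkey_patch()   # patches socket, so pure-Python I/O can yield

    import pymysql            # pure-Python driver; uses the patched socket

    def do_query(i):
        conn = pymysql.connect(host='127.0.0.1', user='keystone',
                               password='secret', database='keystone')
        try:
            with conn.cursor() as cur:
                cur.execute('SELECT SLEEP(1)')  # greenthread yields here
        finally:
            conn.close()

    pool = eventlet.GreenPool()
    for i in range(10):
        pool.spawn_n(do_query, i)
    pool.waitall()  # roughly 1s total here; ~10s if the driver blocks in C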

Best,
-jay


Re: [keystone] Keystone handling http requests synchronously

David Kranz
In reply to this post by Jay Pipes
Related to this, I measured that the rate at which keystone (running on
a real fairly hefty server) can handle the requests coming from the
auth_token middleware (no pki tokens) is about 16/s. That seems pretty
low to me. Is there some other keystone performance problem here, or is
that not surprising?

  -David


Re: [keystone] Keystone handling http requests synchronously

Chmouel Boudjnah
This seems pretty low; do you have memcaching enabled?


Re: [keystone] Keystone handling http requests synchronously

David Kranz
This is without memcache in auth_token. I was trying to find a way past
https://bugs.launchpad.net/keystone/+bug/1020127
which I think I now have. I would appreciate it if you could validate
my comment at the end of that ticket. Here, I just thought that the keystone
throughput was very low; I know that swift should not be hitting it so
hard. If you were referring to using memcache in the keystone server
itself, I didn't know you could do that.

  -David




Re: [keystone] Keystone handling http requests synchronously

Adam Young
On 03/26/2013 01:34 PM, David Kranz wrote:
> If you were referring to using memcache in the keystone server itself,
> I didn't know you could do that.
You can use memcached as an alternate token back end, but I have no
reason to think it would perform any better than SQL. It was broken
until fairly recently, too, so I suspect it is not used much in the wild.



Re: [keystone] Keystone handling http requests synchronously

Mike Wilson
Actually, Bluehost is using it in production. We couldn't get past a couple thousand nodes without it because of the number of requests the quantum network driver produces (5 per periodic interval per compute node). It does have some problems if one tenant builds up a large list of tokens, but other than that it has been great for us. I think our deployment is somewhere around 15,000 nodes right now and it is still holding up strong. It is MUCH more performant than a plain SQL backend.



Fwd: [keystone] Keystone handling http requests synchronously

Chmouel Boudjnah
In reply to this post by Adam Young
FYI


---------- Forwarded message ----------
From: Adam Young <[hidden email]>
Date: Thu, Mar 28, 2013 at 10:04 PM
Subject: Re: [openstack-dev] [keystone] Keystone handling http
requests synchronously
To: [hidden email]


On 03/26/2013 01:34 PM, David Kranz wrote:
>
> This is without memcache in auth_token. I was trying to find a way past https://bugs.launchpad.net/keystone/+bug/1020127
> which I think I now have. I  would appreciate it if you could validate my comment at the end of that ticket. Here, I just thought that the keystone
> throughput was very low. I know that swift should not be hitting it so hard. If you were referring to using memcache in the keystone server itself then

You can use memcached as an alternate token  back end, but I have no
reason to thin it would perform any better than SQL.  It was broken
until fairly recently, too, so I suspect it is not used much in the
wild.



> I didn't know you could do that.
>
>  -David
>
>
>
> On 3/26/2013 12:33 PM, Chmouel Boudjnah wrote:
>>
>> this seems to be pretty low, do you have memcaching enabled?
>>
>> On Tue, Mar 26, 2013 at 4:20 PM, David Kranz <[hidden email]> wrote:
>>>
>>> Related to this, I measured that the rate at which keystone (running on a
>>> real fairly hefty server) can handle the requests coming from the auth_token
>>> middleware (no pki tokens) is about 16/s. That seems pretty low to me. Is
>>> there some other keystone performance problem here, or is that not
>>> surprising?
>>>
>>>   -David
>>>
>>>
>>> On 3/24/2013 9:11 PM, Jay Pipes wrote:
>>>>
>>>> Sure, you could do that, of course. Just like you could use gunicorn or
>>>> some other web server. Just like you could deploy any of the other
>>>> OpenStack services that way.
>>>>
>>>> It would just be nice if one could configure Keystone in the same manner
>>>> that all the other OpenStack services are configured.
>>>>
>>>> -jay
>>>>
>>>> On 03/23/2013 01:19 PM, Joshua Harlow wrote:
>>>>> See: https://github.com/openstack/keystone/tree/master/httpd
>>>>>
>>>>> For example...
>>>>>
>>>>> This lets apache do the multiprocess instead of how nova, glance ...
>>>>> have basically recreated the same mechanism that apache has had for
>>>>> years.
>>>>>
>>>>> Sent from my really tiny device...
>>>>>
>>>>> On Mar 23, 2013, at 10:14 AM, "Joshua Harlow" <[hidden email]> wrote:
>>>>>
>>>>>> Or I think u can run keystone in wsgi+apache easily, thus getting u the
>>>>>> multiprocess support via apache worker processes.
>>>>>>
>>>>>> Sent from my really tiny device....
>>>>>>
>>>>>> On Mar 22, 2013, at 10:47 AM, "Jay Pipes" <[hidden email]> wrote:
>>>>>>
>>>>>>> Unfortunately, Keystone's WSGI server is only a single process, with a
>>>>>>> greenthread pool. Unlike Glance, Nova, Cinder, and Swift, which all use
>>>>>>> multi-process, greenthread-pool-per-process WSGI servers[1], Keystone
>>>>>>> does it differently[2].
>>>>>>>
>>>>>>> There was a patchset[3] that added multiprocess support to Keystone, but
>>>>>>> due to objections from termie and others about it not being necessary,
>>>>>>> it died on the vine. Termie even noted that Keystone "was designed to be
>>>>>>> run as multiple instances and load balanced over and [he felt] that
>>>>>>> should be the preferred scaling point".
>>>>>>>
>>>>>>> Because the mysql client connection is C-based, calls to it will be
>>>>>>> blocking operations on greenthreads within a single process, meaning
>>>>>>> even if multiple greenthreads are spawned for those 200 incoming
>>>>>>> requests, they will be processed synchronously.
>>>>>>>
>>>>>>> The solution is for Keystone to implement the same multi-processed WSGI
>>>>>>> worker stuff that is in the other OpenStack projects. Or, diverge from
>>>>>>> the deployment solution of Nova, Glance, Cinder, and Swift, and manually
>>>>>>> run multiple instances of keystone, as Termie suggests.
>>>>>>>
>>>>>>> Best,
>>>>>>> -jay
>>>>>>>
>>>>>>> [1] All pretty much derived from the original Swift code, with some Oslo
>>>>>>> improvements around config
>>>>>>> [2] Compare
>>>>>>> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py
>>>>>>> with
>>>>>>> https://github.com/openstack/keystone/blob/master/keystone/common/wsgi.py
>>>>>>> [3] https://review.openstack.org/#/c/7017/
>>>>>>>
>>>>>>> On 03/21/2013 07:45 AM, Kanade, Rohan wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I was trying to create 200 users using the keystone client. All the
>>>>>>>> users are unique and are created on separate threads which are started
>>>>>>>> at the same time.
>>>>>>>>
>>>>>>>> keystone is handling each request synchronously, i.e. user 1 is
>>>>>>>> created, then user 2 is created ...
>>>>>>>>
>>>>>>>> Shouldnt keystone be running a greenthread for each request and try to
>>>>>>>> create these users asynchronously?
>>>>>>>> like start creating user 1, while handling that request, start creating
>>>>>>>> user 2 or user n...
>>>>>>>>
>>>>>>>> I have attached the keystone service logs for further assistance.
>>>>>>>> http://paste.openstack.org/show/34216/
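The blocking behaviour Jay describes is easy to reproduce outside Keystone.
A minimal sketch, assuming eventlet is installed, with time.sleep() standing
in for a blocking C-level MySQL driver call and eventlet.sleep() for a
cooperative (monkeypatched) one:

    # Minimal sketch: why a blocking C call serializes eventlet greenthreads.
    # time.sleep() is not monkeypatched here, so it blocks the whole process,
    # just as a C database driver call would.
    import time
    import eventlet


    def blocking_work(i):
        time.sleep(0.5)      # never yields to the hub: blocks every greenthread
        print("blocking request %d done" % i)


    def cooperative_work(i):
        eventlet.sleep(0.5)  # yields to the hub: other greenthreads keep running
        print("cooperative request %d done" % i)


    def run(func, label):
        pool = eventlet.GreenPool(200)
        start = time.time()
        for i in range(10):
            pool.spawn(func, i)
        pool.waitall()
        print("%s: %.1fs total" % (label, time.time() - start))


    run(blocking_work, "blocking")        # ~5s: requests finish one after another
    run(cooperative_work, "cooperative")  # ~0.5s: requests overlap

With the blocking version the ten tasks finish strictly one after another;
with the cooperative version they overlap. A C database driver never yields
to eventlet's hub, so a single-process Keystone ends up serving such requests
serially -- hence the multi-process worker and Apache/mod_wsgi suggestions
quoted above.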

Re: [keystone] Keystone handling http requests synchronously

Jay Pipes
In reply to this post by Adam Young
On 03/28/2013 05:04 PM, Adam Young wrote:

> On 03/26/2013 01:34 PM, David Kranz wrote:
>> This is without memcache in auth_token. I was trying to find a way
>> past https://bugs.launchpad.net/keystone/+bug/1020127
>> which I think I now have. I  would appreciate it if you could validate
>> my comment at the end of that ticket. Here, I just thought that the
>> keystone
>> throughput was very low. I know that swift should not be hitting it so
>> hard. If you were referring to using memcache in the keystone server
>> itself then
> You can use memcached as an alternate token back end, but I have no
> reason to think it would perform any better than SQL. It was broken
> until fairly recently, too, so I suspect it is not used much in the wild.

We use the memcached token backend in production. Performance is
substantially better for us because we use synchronous replication for
the identity tables between availability zones and regions. Replicating
the token table, which was inserting records at around 200K+ a day in a
simple test environment, was slow. The insert/update/delete rate of the
user, tenant, role, credential, and user_tenant_membership tables was
orders of magnitude less than the token table, and replicating
cross-country to multiple availability zones for those tables had
perfectly acceptable performance. The tradeoff is that tokens are not
cross-AZ, but AFAICT for our users that's not a big deal.

Note that we had to turn off PKI because the memcached token backend is
broken with PKI in Folsom (the PKI token is not hashed prior to memcache
key generation, resulting in keys too big to use in memcache -- fixed in
Grizzly). We're looking forward to deploying Grizzly shortly and will
likely enable PKI, which should result in even better performance since
the number of round trips to Keystone is reduced.
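The key-size problem is mechanical: memcached rejects keys longer than 250
bytes, and a CMS/PKI token can run to several kilobytes. The shape of the
Grizzly-era fix is simply to hash oversized tokens before using them as
cache keys -- a minimal sketch, not the actual Keystone/auth_token code:

    # Minimal sketch of the "hash before caching" fix: memcached caps keys
    # at 250 bytes, and a PKI token blob is far larger, so the token is
    # reduced to a fixed-length digest before being used as a cache key.
    # Illustrative only; the real middleware differs.
    import hashlib

    MEMCACHE_MAX_KEY = 250

    def cache_key_for(token_id):
        if len(token_id) > MEMCACHE_MAX_KEY:
            return hashlib.md5(token_id.encode("utf-8")).hexdigest()
        return token_id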

We also use the templated catalog backend in production because the
various clients have no idea how to work with multiple availability
zones in the same region and just pick the first endpoint returned for
their service type.

May not be suitable in all cases, but this works well for us.

Best,
-jay


Re: [keystone] Keystone handling http requests synchronously

Dolph Mathews

On Fri, Mar 29, 2013 at 8:19 AM, Jay Pipes <[hidden email]> wrote:
> We use the memcached token backend in production. Performance is
> substantially better for us because we use synchronous replication for
> the identity tables between availability zones and regions. Replicating
> the token table, which was inserting records at around 200K+ a day in a
> simple test environment, was slow. The insert/update/delete rate of the
> user, tenant, role, credential, and user_tenant_membership tables was
> orders of magnitude less than the token table, and replicating
> cross-country to multiple availability zones for those tables had
> perfectly acceptable performance. The tradeoff is that tokens are not
> cross-AZ, but AFAICT for our users that's not a big deal.
>
> Note that we had to turn off PKI because the memcached token backend is
> broken with PKI in Folsom (the PKI token is not hashed prior to memcache
> key generation, resulting in keys too big to use in memcache -- fixed in
> Grizzly). We're looking forward to deploying Grizzly shortly and will
> likely enable PKI, which should result in even better performance since
> the number of round trips to Keystone is reduced.

P.S. Really looking forward to hearing about the results of your switch to
PKI, given the size & architecture of your deployment.

> We also use the templated catalog backend in production because the
> various clients have no idea how to work with multiple availability
> zones in the same region and just pick the first endpoint returned for
> their service type.
>
> May not be suitable in all cases, but this works well for us.
>
> Best,
> -jay
