[uWSGI] Sporadic sqlalchemy errors when using Pylons app with multiple workers
Roberto De Ioris
roberto at unbit.it
Wed May 25 22:04:28 CEST 2011
> We are trying to migrate our Pylons app from Paster to uWSGI, but we are
> running into some issues when trying to run our app under uWSGI with
> more than one worker.
> Our current set up is to have 22 separate instances of "paster serve"
> running on each app server, each on its own port. There are a variable
> number of app servers, and HAProxy balances among all of the "paster
> serve" instances. The app works perfectly when keeping this architecture,
> but substituting uWSGI (with 1 worker per instance) for Paster.
> The current plan is to try and move to a setup where a single uWSGI
> master process manages 20-odd workers on each app server and HAProxy
> only sees each app server as a whole. Unfortunately, when running our
> app in uWSGI with more than one worker, we get sporadic, but frequent,
> sqlalchemy-related exceptions when testing under load. Following is an
> example of one of the more common errors we get.
> Error - <class 'sqlalchemy.exc.OperationalError'>:
> (OperationalError) server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> It would seem that our app, or sqlalchemy, is making an assumption that
> is no longer true when running as multiple workers in uWSGI. What are the
> usual culprits in this kind of situation? Does anyone have any thoughts
> on what to investigate?
> We are on Pylons 0.9.6.2 and SQLAlchemy 0.5.3.
> uWSGI mailing list
> uWSGI at lists.unbit.it
SQLAlchemy does not cooperate well (it depends on the adapter) in a
multiprocess environment where connections are shared between multiple
processes.
You have 4 ways to follow:
1) open a new SQLAlchemy connection after each fork
uwsgi.post_fork_hook = open_connection
2) use the uWSGI 0.9.8 --lazy option
With the --lazy option each worker loads the app independently and shares
only the uwsgi socket. You do not need to change your code, but you will
(potentially) consume more memory (though certainly less than in a
pure-python environment like paster)
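A minimal sketch of what such a setup might look like as an ini file (the socket address, worker count, and paste path are placeholders to adapt; option names as in uWSGI of that era):

```ini
; one master managing 20 workers, each loading the app on its own
[uwsgi]
socket = 127.0.0.1:3031
master = true
workers = 20
; lazy: each worker loads the application independently after fork,
; so no connections are inherited from the master
lazy = true
paste = config:/path/to/production.ini
```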
3) move to threads
Depending on your app you could go this way, but I do not recommend this
approach if you have never tested the app under threads.
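If you do test the threaded route, the key requirement is one connection (or session) per thread. A stdlib-only sketch of that pattern, with threading.local and sqlite3 standing in for SQLAlchemy's scoped_session (all names here are illustrative):

```python
import sqlite3
import threading

# Each thread lazily opens its own connection instead of sharing one;
# this is the same isolation scoped_session gives SQLAlchemy sessions.
local = threading.local()

def get_connection():
    if not hasattr(local, "conn"):
        local.conn = sqlite3.connect(":memory:")
    return local.conn

results = []

def worker():
    conn = get_connection()  # private to this thread
    results.append(conn.execute("SELECT 1").fetchone()[0])

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Four threads, four independent connections, no shared connection state to corrupt.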
4) Use independent uWSGI instances and automatically load-balance them.
This is the same thing you did with haproxy + paster, but with
autoconfiguration. I do not recommend this way, even if it is fun to set up.
Roberto De Ioris