[uWSGI] frustrating low performances

Roberto De Ioris roberto at unbit.it
Wed Jul 7 15:08:57 CEST 2010


> Hi,
> 	I was impressed by the speed reached by uWSGI so I decided to use it also
> on a recent Django project. Surprisingly, I cannot manage to keep the
> server up for more than 30 seconds and I would like to investigate this.
>
> Premise: I am not blaming uWSGI, I am blaming myself as I am the one who
> configured it.
>
> My application is structured as follows (from the backend to the frontend):
>
> 	* postgresql backend with 100 connections max
> 	* pgbouncer connection pooler with 100 connections and a pool of 50
> 	* django, which exposes a WSGI application
> 	* uWSGI with -p 4 (I have 4 CPUs and 4G of RAM)
> 		/usr/local/bin/uwsgi -w myapp.wsgi -s 127.0.0.1:9000 -p 4 -i -L -H
> /home/myapp/
> 	* Cherokee serving / via one uWSGI source (actually, those 4 processes)
>
> For each HTTP request coming in my application does the following:
>
> 	0) SELECT FROM table WHERE url = <url passed via GET>
> 	1) if no results are found:
> 		- launch an instance of Python Mechanize to surf to <url passed via GET>
> 		- keep track of any possible redirection
> 	2) INSERT INTO ... the output of 1)
>
> So, depending on 1), serving one request could take some time as Django
> has to launch Mechanize and wait for the remote HTTP server to respond.
> For this reason I set Cherokee with a timeout of 30 seconds.
>
> I set 1h cache life in Cherokee and I also leverage memcached as a Django
> caching backend.
>
> Unfortunately, after a few requests the server starts responding 500 and
> I have to killall uwsgi to get it working again.
>
> What could it be?
>
> How would you proceed for investigating the cause of all this?
>

I bet all 4 of your uWSGI processes are still processing past requests
when you start receiving errors from Cherokee.

The best way to check this is to attach strace to the worker processes:

(strace -p <pid>)

and see what the workers are doing. (Pay attention to the uWSGI logs too.)


By the way, for your particular situation (potentially long-running
requests, but not so long that you would pass them to the spooler) you can
have a look at the "grunt" mode of the upcoming uWSGI 0.9.6.

Practically, when a request may take a long time, you can "detach" your
worker: it will continue to work in the background without stealing
resources (read: workers) from the uWSGI pool.

The system is very simple: run uWSGI (the current mercurial code) with the
--grunt option:

./uwsgi -s XXX -w XXX --grunt

Now in your request code, if you need to manage a long request:


grunt = uwsgi.grunt()

if grunt is None:
    # back in the original worker: the grunt has been detached
    print("worker %d detached" % uwsgi.worker_id())
else:
    # in the detached "grunt" process: run the long job here
    do_long_running_work()  # your long code


So grunt() works very much like fork(): if it returns None you are back in
the worker, otherwise you are in the "grunt" (detached) process.
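The fork() analogy can be sketched with plain os.fork() outside of uWSGI
(note that the return convention is inverted with respect to grunt() as
described above: fork() returns 0 in the child and the child's pid in the
parent, while grunt() returns None in the worker):

```python
import os

# Illustration of the fork()-style "which process am I?" convention.
# uwsgi.grunt() itself is only available inside a worker started
# with --grunt; here plain os.fork() stands in for it.
pid = os.fork()
if pid == 0:
    # child process: plays the role of the detached "grunt",
    # free to run the long job without blocking the parent
    os._exit(0)
else:
    # parent process: plays the role of the uWSGI worker, which
    # reaps the child and goes back to serving requests
    os.waitpid(pid, 0)
    print("parent %d continues" % os.getpid())
```

The key point in both cases is that one call site produces two running
processes, and the return value tells each process which role it plays.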

In the tests directory of the mercurial repository you will find an
example (grunter.py)

Do not abuse grunt processes: each one uses memory, so avoid spawning a
lot of them!

-- 
Roberto De Ioris
http://unbit.it

