[uWSGI] Frustratingly low performance

Federico Maggi federico.maggi at gmail.com
Wed Jul 7 14:43:12 CEST 2010


Hi,
	I was impressed by the speed uWSGI can reach, so I decided to also use it on a recent Django project. Surprisingly, I cannot keep the server up for more than 30 seconds, and I would like to investigate this.

Premise: I am not blaming uWSGI; I am blaming myself, as I am the one who configured it.

My application is structured as follows (from the backend to the frontend):

	* postgresql backend with 100 connections max
	* pgbouncer connection pooler with 100 connections and a pool of 50
	* Django, which exposes a WSGI application (a sketch of the myapp.wsgi module is below, after this list)
	* uWSGI with -p 4 (I have 4 CPUs and 4G of RAM)
		/usr/local/bin/uwsgi -w myapp.wsgi -s 127.0.0.1:9000 -p 4 -i -L -H /home/myapp/
	* Cherokee serving / via one uWSGI source (actually, those 4 processes)
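
For completeness, myapp.wsgi is just the usual Django WSGI handler module; roughly like this (the settings module name is an assumption, not necessarily what I really have):

	# myapp/wsgi.py -- minimal sketch of the module loaded via -w myapp.wsgi
	import os
	os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

	import django.core.handlers.wsgi
	# uWSGI looks for a callable named "application" by default
	application = django.core.handlers.wsgi.WSGIHandler()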

For each HTTP request coming in, my application does the following (a rough sketch of the view follows the steps):

	0) SELECT FROM table WHERE url = <url passed via GET>
	1) if no results are found:
		- launch an instance of Python Mechanize to surf to <url passed via GET>
		- keep track of any possible redirection
	2) INSERT INTO ... the output of 1)
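
In code, the view is shaped roughly like this (the Lookup model, its fields and the view name are placeholders, not my real code):

	# rough sketch of the view; Lookup and its fields are hypothetical names
	import mechanize
	from django.http import HttpResponse
	from myapp.models import Lookup

	def resolve(request):
	    url = request.GET.get('url')
	    try:
	        # 0) SELECT FROM table WHERE url = <url passed via GET>
	        entry = Lookup.objects.get(url=url)
	    except Lookup.DoesNotExist:
	        # 1) surf to the URL with Mechanize, keeping track of redirections
	        br = mechanize.Browser()
	        br.set_handle_robots(False)
	        response = br.open(url)        # no explicit timeout: the worker waits here
	        final_url = response.geturl()  # URL reached after any redirection
	        # 2) INSERT INTO ... the output of 1)
	        entry = Lookup.objects.create(url=url, final_url=final_url)
	    return HttpResponse(entry.final_url)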

So, depending on 1), serving a request can take some time, as Django has to launch Mechanize and wait for the remote HTTP server to respond. For this reason I configured Cherokee with a 30-second timeout.

I set a 1h cache lifetime in Cherokee, and I also leverage memcached as a Django caching backend.
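
The caching side looks roughly like this (host, port and the do_lookup helper are placeholders; this is the pre-1.3 single-backend style setting):

	# settings.py -- memcached as the Django cache backend (host/port assumed)
	CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

	# and in the view, roughly:
	from django.core.cache import cache

	result = cache.get(url)
	if result is None:
	    result = do_lookup(url)          # hypothetical helper wrapping steps 0)-2)
	    cache.set(url, result, 60 * 60)  # 1 hour, matching the Cherokee cache life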

Unfortunately, after a few requests the server starts responding with 500 errors, and I have to killall uwsgi to get it working again.

What could it be?

How would you proceed to investigate the cause of all this?

Thanks in advance. Ciao,
-- Federico


