[uWSGI] Versions and benchmarks and other problems

Roberto De Ioris roberto at unbit.it
Tue Jul 6 17:02:43 CEST 2010


On 06 Jul 2010, at 16:36, Paul van der Linden wrote:

> Hi,
> 
> I've found several benchmarks on uwsgi and it looked very promising. I 
> wanted a less heavyweight server than apache, because it would only 
> serve wsgi applications.
> I'm a little bit disappointed by its performance and stability, but 
> maybe I'm using the wrong version; if I understand it correctly, 
> 0.9.5.4 should be a stable release too.
> 
> The configuration:
> - dynamic apps
> - 4 processes
> - unix socket
> - master process
> - logging disabled
> - behind nginx
> - nginx: gzip compression, https, sendfile
> The application:
> - Does a simple check on the post-data
> - Sends an empty body with the X-Accel-Redirect header, so nginx can do 
> the rest.
> 
> In my own benchmarks (I was testing my application's and my VPS's 
> performance), some things caught my attention:
> - Most of the time it only uses 3 of the 4 processes
> - I also tested with apache (mod_wsgi, embedded mode). The differences: 
> apache uses almost double the memory per process, but reaches a throughput 
> of 95 req/s, while uwsgi was only serving 44 req/s
> - But worst of all: after all requests were finished (1000 with 
> concurrency 16-32 with ab), uwsgi was still using almost all of my cpu!
> 

Hi Paul, before benchmarking such different solutions, you should
clearly understand the different techniques involved, or you will obtain practically useless data.

For example, mod_wsgi in embedded mode will use all the available apache processes/workers, while
uWSGI will continue to use only the number you have set. So you are comparing an environment with hundreds of workers available
against another with only 4 workers :)
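
To make the comparison meaningful, pin both sides to the same number of workers.
A rough sketch (the uWSGI short options are the ones documented for the 0.9.x
series, the Apache directives are for the prefork MPM, and "myapp" is just a
placeholder module name; double-check everything against your versions):

    # uWSGI side: 4 workers on a unix socket, master process enabled
    uwsgi -s /tmp/uwsgi.sock -M -p 4 -w myapp

    # Apache prefork side: cap the worker pool to a comparable size
    StartServers      4
    MinSpareServers   4
    MaxSpareServers   4
    MaxClients        4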

Honestly I do not know whether mod_wsgi in embedded mode is slower than uWSGI; I stopped doing this kind of test after 0.9.2 :P

The second problem is probably caused by nginx, which has a bigger (really bigger) throughput than uWSGI (and apache, obviously).

If you attach strace to the uWSGI processes (just after ab ends), you can send the output to the list, and we can eventually find a bug or a confirmation of the
overload caused by nginx.
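
Something along these lines should do (standard strace options; the worker PIDs
are printed in the uWSGI startup log or visible with ps):

    # attach to a busy worker right after ab finishes;
    # -f follows children, -tt adds timestamps, -o writes the trace to a file
    strace -f -tt -o uwsgi-worker.trace -p <worker pid>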

Then post your python code and the full command line, so that other users can reproduce your setup and run tests or make suggestions.
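
For reference, a minimal sketch of the kind of app you describe could look like
this (is_valid() and the /protected/ location are placeholders, not your actual
code):

    def application(environ, start_response):
        # read the POST body and run the simple check on it
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        if not is_valid(body):  # placeholder for your check
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return ['denied']
        # empty body; nginx serves the real content via X-Accel-Redirect
        start_response('200 OK', [('X-Accel-Redirect', '/protected/file')])
        return ['']

Together with the exact ab invocation (e.g. ab -n 1000 -c 32 against the nginx
https port), that gives everyone the same starting point.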

Other Notes:

- Dynamic apps are loaded on demand, so you will lose a huge amount of time (multiplied by the number of processes) waiting for them to be ready. Use
a static app and single interpreter mode for useful benchmarks. 
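
  A sketch of what that looks like on the command line (-w/--module and
  -i/--single-interpreter as documented; adjust the module name to yours):

    # preload the wsgi module at startup, in a single interpreter
    uwsgi -s /tmp/uwsgi.sock -M -p 4 -w myapp -i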

- Always run the first benchmark with logging enabled; most of the time uWSGI will warn you about overload. 
Once you reach an error-free point, you can re-run with logging disabled or redirected to another server via UDP.
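
  One way to set that up, assuming the -d/--daemonize option accepts both a
  log file and a udp address as described in the docs (please verify the
  option against 0.9.5.4):

    # first runs: keep the log, e.g. written to a file
    uwsgi -s /tmp/uwsgi.sock -M -p 4 -w myapp -d /var/log/uwsgi.log

    # later runs: ship the log to another host over UDP
    # (listen on that host with e.g. "nc -u -l 1717")
    uwsgi -s /tmp/uwsgi.sock -M -p 4 -w myapp -d 192.168.0.10:1717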

--
Roberto De Ioris
http://unbit.it
JID: roberto at jabber.unbit.it
