Friday, January 13, 2012

Nosetests and memory consumption

At work my main project is a Pylons website for internal consumption. I do most of the coding, requirements gathering, bug fixing, database design - you name it, I do it.
Last night I was about to commit a set of changes, and I always run my tests before doing so. There are lots of tests, and it takes about 10 minutes for all of them to run.
Of course, after 9 minutes or so nosetests ended with a failure - and it was 1 AM.

These are the errors:
MemoryError
Logged from file base.py, line 1388
Traceback (most recent call last):
  File "c:\Python27\Lib\logging\__init__.py", line 859, in emit
    stream.write(fs % msg)
  File "c:\Python27\Lib\StringIO.py", line 221, in write
    self.buflist.append(s)


So it's memory related. I develop on Windows 7 (yes), so I started Task Manager, and indeed the nosetests process fails after reaching about 1.8 GB.

There are two interesting facts. First, nosetests quickly climbs to 1.3 GB within the first 10 seconds of running, before any of my tests are run. I think this is nose collecting test metadata - it seems like a lot of memory, but that is not the problem. Second, as the tests run, the process gobbles more and more RAM.

The problem is that by default nose captures the stdout stream in memory: if you have verbose logging (say SQLAlchemy's echo=True), the captured output can get very big. Basically, I had found the threshold at which (at least on Windows) nose runs out of memory.
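To see why this adds up, here is a minimal sketch of the capture idea: stdout is swapped for an in-memory buffer while a test runs, so every echoed SQL statement stays in RAM for the duration. This is an illustration, not nose's actual plugin code; run_with_capture and chatty_test are hypothetical names.

```python
# Sketch of stdout capture (assumption: nose's capture plugin works
# roughly like this, replacing sys.stdout with a StringIO-like buffer).
import io
import sys

def run_with_capture(test_fn):
    """Run test_fn with stdout redirected to an in-memory buffer."""
    buf = io.StringIO()
    old_stdout = sys.stdout
    sys.stdout = buf
    try:
        test_fn()
    finally:
        sys.stdout = old_stdout
    return buf.getvalue()

def chatty_test():
    # Stand-in for a test that triggers lots of SQL echo output.
    for i in range(1000):
        print("SELECT * FROM users WHERE id = %d" % i)

captured = run_with_capture(chatty_test)
print("captured %d bytes in memory" % len(captured))
```

Multiply that buffer by hundreds of tests with echo=True and you can see how the process creeps toward the 32-bit limit.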

The solution is to run with the -s option (which disables output capture) and pipe stdout to a file. If failure logs are needed, run with --stop so that the bottom of the log has the relevant info. Still, there seems to be nothing out there on nose memory consumption. I would really like to know if anybody else has run into this.

So, do this:

nosetests -s --stop > stdout.log 2> error.log
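If you run the suite often, the same flags can live in a config file that nose reads (a sketch, assuming a setup.cfg in the project root; the long-option names correspond to -s and --stop):

[nosetests]
nocapture=1
stop=1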
