Prerequisites:
1) A modern Python 2.x (x > 4)
2) easy_install (comes with setuptools)
3) virtualenv
Virtualenv creates a Python sandbox with its own interpreter and site-packages, independent of the system Python libraries. Used this way, the sandbox cannot be broken by system changes.
You can have multiple sandboxes to manage applications that have different dependencies (different versions included).
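For example, two applications pinned to different framework versions can live side by side, each in its own sandbox (the names and versions below are made up for illustration):
bash> virtualenv app1
bash> virtualenv app2
bash> app1/bin/pip install pyramid==1.2
bash> app2/bin/pip install pyramid==1.0
Note that you can call a sandbox's own pip directly, without activating it first.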
Let's first make sure that we have a sane system python:
bash> which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
bash> python --version
Python 2.7.2
bash> which easy_install
/usr/bin/easy_install
I like to keep all my sandboxes together, in a "sandboxen" dir:
bash> mkdir sandboxen
bash> cd sandboxen
bash> virtualenv pyramid
New python executable in pyramid/bin/python
Installing setuptools............done.
Installing pip...............done.
At this point you need to activate the virtual environment. This is done by sourcing the "activate" script in ~/sandboxen/pyramid/bin. Do not simply execute this script - it must be sourced so that it can modify the current shell's environment:
bash> source ~/sandboxen/pyramid/bin/activate
(pyramid) bash>
The prompt now includes the sandbox name to remind you of the special behavior. Use the "deactivate" command to go back to the normal shell. Activate works by prepending the sandbox's bin directory to your PATH (and setting the VIRTUAL_ENV variable), so "python" and friends resolve to the sandbox copies.
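You can verify this yourself (the /Users/you prefix below is illustrative):
(pyramid) bash> which python
/Users/you/sandboxen/pyramid/bin/python
(pyramid) bash> deactivate
bash> which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python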
Pyramid Install
At this point installing pyramid from pypi should be a matter of running
easy_install pyramid==1.2
(at the time of writing 1.3 is still in alpha, so we'll stick with the released version). This will download Pyramid and all of its dependencies, and install them into the sandbox. The standard dependencies include a few zope packages, Mako, Chameleon, Paste/PasteScript (which provides the paster command), and several others.
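A quick sanity check that the install landed in the sandbox (my own addition, not strictly necessary):
(pyramid) bash> python -c "import pkg_resources; print pkg_resources.get_distribution('pyramid').version"
1.2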
From this point on we will rely on paster commands to create an application and run it locally in dev mode. If you are familiar with Rails, paster is the rake equivalent.
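For instance, scaffolding and serving a starter application looks roughly like this (assuming the pyramid_starter template that ships with Pyramid 1.2; the project name is made up):
(pyramid) bash> paster create -t pyramid_starter MyApp
(pyramid) bash> cd MyApp
(pyramid) bash> python setup.py develop
(pyramid) bash> paster serve development.ini --reload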
A logical next step is the excellent "Pyramid for Humans" tutorial.
Monday, February 13, 2012
Friday, January 13, 2012
Nosetests and memory consumption
At work my main project is a Pylons website for internal use. I do most of the coding, requirements gathering, bug fixing, database design - you name it, I do it.
Last night I was about to commit a set of changes, and I always run my tests before committing. There are lots of tests, and a full run takes 10 minutes or so.
Of course, after 9 minutes or so nosetests ends with a failure - and it's 1 AM.
These are the errors:
MemoryError
Logged from file base.py, line 1388
Traceback (most recent call last):
File "c:\Python27\Lib\logging\__init__.py", line 859, in emit
stream.write(fs % msg)
File "c:\Python27\Lib\StringIO.py", line 221, in write
self.buflist.append(s)
So it's memory related. I develop on Windows 7 (yes), so I start Task Manager, and indeed the nosetests process fails after reaching 1.8 GB or so.
There are two interesting facts. First, nosetests quickly climbs to 1.3 GB within the first 10 seconds of the run, before any of my tests execute. I think this is nose collecting test metadata - it seems like a lot of memory, but it is not the problem. Second, as the tests run, the process gobbles up more and more RAM.
The problem is that by default nose captures stdout into an in-memory buffer: if your tests produce a lot of logging (say, with sqlalchemy.echo = true), the captured output can get very big. Basically I had found the threshold at which (at least on Windows) nose runs out of memory.
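The effect is easy to reproduce in isolation. Here is a minimal sketch of the capture behavior (my own illustration, not nose's actual code; Python 2):

# Mimic nose's default behavior: swap sys.stdout for an in-memory buffer.
import sys
from StringIO import StringIO

buf = StringIO()
sys.stdout = buf  # everything printed (including echoed SQL) now piles up in RAM
for i in xrange(1000000):
    print "SELECT * FROM users WHERE id = %d" % i
sys.stdout = sys.__stdout__
print "captured %.1f MB" % (len(buf.getvalue()) / 1e6)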
The solution is to run with the -s option (which disables output capture) and pipe stdout to a file. If failure logs are needed, also run with --stop so that the bottom of the log has the relevant info. Still, there seems to be nothing out there on nose memory consumption. I would really like to know if anybody else has run into this.
So, do this:
nosetests -s --stop > stdout.log 2> error.log
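Another option (my workaround sketch, not something from the nose docs) is to keep capture on but stop the noise at the source: turn echo off and configure the sqlalchemy.engine logger - the one echo=True uses - to write to a file instead of stdout:

# Send SQL logging to a file so nose's capture buffer stays small (Python 2).
import logging

handler = logging.FileHandler("sql.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

sa_logger = logging.getLogger("sqlalchemy.engine")
sa_logger.setLevel(logging.INFO)   # INFO is the level echo=True would log at
sa_logger.addHandler(handler)
sa_logger.propagate = False        # keep it off the root logger / stdout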