I have developed a wxpython application for molecular graphics. One
thing that puzzles me is that the application burns CPU time even when
it isn't doing anything. I did a profile and found...
ncalls tottime percall cumtime percall filename:lineno(function)
1 234.370 234.370 234.400 234.400 wx.py:88(MainLoop)
1 0.390 0.390 0.390 0.390 glcanvas.py:112(__init__)
1 0.040 0.040 0.070 0.070 windows.py:486(Show)
39 0.040 0.001 0.040 0.001 windows.py:814(Append)
5 0.030 0.006 0.030 0.006 glcanvas.py:96(SetCurrent)
During this time the CPU was maxed out running the application even
though it was idling. Is there anything I can do so that mainloop
doesn't burn so much CPU time when the application is idling?
R.
Muller, Richard wrote:
During this time the CPU was maxed out running the application even
though it was idling. Is there anything I can do so that mainloop
doesn't burn so much CPU time when the application is idling?
The mainloop doesn't burn CPU time while idle and the profile results
don't reflect reality. The body of the mainloop looks something like
this:
while not Quit:
    msg = GetNextMessage()
    DispatchMessage(msg)
The GetNextMessage function sleeps until the next message arrives from
the underlying GUI system and uses no CPU time while it waits. The
profiler still sees the call as taking time, though, because it measures
from the moment the call is made until it returns, however many seconds
later that is.
As another example, the profiler will attribute 5 seconds to this line:
time.sleep(5)
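A quick way to see this for yourself is the following minimal, self-contained sketch (not from the original post) using cProfile; the function does nothing but sleep, yet the profile charges it the full two seconds of wall-clock time:

import cProfile
import time

def idle_wait():
    # Blocks for two seconds; uses essentially no CPU while sleeping.
    time.sleep(2)

# cProfile measures wall-clock time spent inside each call, so idle_wait
# shows roughly 2 seconds of cumulative time even though the CPU was idle.
cProfile.run('idle_wait()')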
Things get more complicated because a lot of the code is native
C/C++ code, and there are timers in addition to events.
Also the profiler only profiles the main thread (or more accurately
the thread it is invoked in).
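To illustrate that point, here is another made-up sketch: if a worker thread does the actual work, a profile started in the main thread never records it.

import cProfile
import threading

def background_work():
    # Burns CPU in a worker thread; a profiler started in the main
    # thread records none of this.
    total = 0
    for i in range(5000000):
        total += i

def main():
    worker = threading.Thread(target=background_work)
    worker.start()
    worker.join()   # the main thread just waits here

# Only functions executed in the thread that runs cProfile.run() appear
# in the report; background_work is absent even though it did the work.
cProfile.run('main()')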
You will need to do some more digging to find out what is actually
going on.
Roger
As mentioned by Roger, the profiler isn't giving you CPU-usage, but rather time-spent-in-the-function, which for the mainloop is the time spent in the application. If you were looking at a blocking-socket application it would appear that the app spent huge amounts of time in the socket.recv method, for instance. However, you also mention that your CPU was running at 100% while the application was running.
Assuming you knew this some way *other* than from the profile results, it's likely that your OpenGL code was taken from a tutorial. Tutorials are often written to trigger a redraw at the end of each drawing cycle, because that makes it easier for the beginning programmer to know when to redraw (they are always redrawing). For more complex scenarios where you want to avoid extra redraws and processor load, you need to track changes from all of the network, GUI/user, AI, and animation mechanisms and issue a redraw only when one of them requires it. Scenegraph engines tend to take care of tracking when to redraw for you; with a scenegraph it boils down to redraw-if-the-graph-has-changed, which is one of the reasons I normally use them instead of raw OpenGL (where you wind up tracking all of the various things that could require a redraw separately).
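For the wxPython case, a rough sketch of the redraw-on-change pattern might look like the following (class and method names such as MoleculeCanvas and on_scene_changed are invented for illustration, the drawing itself is elided, and the GLContext handling differs slightly between wxPython versions):

import wx
from wx import glcanvas

class MoleculeCanvas(glcanvas.GLCanvas):
    def __init__(self, parent):
        glcanvas.GLCanvas.__init__(self, parent, -1)
        self.context = glcanvas.GLContext(self)
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        dc = wx.PaintDC(self)          # required in an EVT_PAINT handler
        self.SetCurrent(self.context)
        self.draw_scene()
        self.SwapBuffers()
        # Crucially, no self.Refresh() here: calling it unconditionally
        # would queue another paint event right away and keep the CPU
        # pegged redrawing an unchanged scene.

    def draw_scene(self):
        # Placeholder for the actual OpenGL rendering of the molecule.
        pass

    def on_scene_changed(self):
        # Call this from whatever modifies the model (user input, a
        # timer-driven animation step, new data, ...); it queues exactly
        # one repaint instead of a continuous redraw loop.
        self.Refresh(False)

Whether this applies to your app depends on how your paint handler is structured, but if it currently ends with an unconditional Refresh(), that is the first thing to look at.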
HTH,
Mike
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com