How to perform calculations on a wx.grid with threading and multiprocessing?

Hi,

In my wxpython GUI application, I have a huge wx.grid object (1000 rows, 4000 columns = 4,000,000 cells), which contains values in each cell.

I will do calculations on this wx.grid (say, sum the values in all cells).

I also want the GUI to stay responsive during the grid calculation.
Since there are 4,000,000 cells, the calculation takes too long. I have 8 cores on my PC’s processor, so why not take advantage of them?

Therefore I decided to use threading and multiprocessing at the same time (I don’t know if it is possible or the best idea):

1- When the user presses the “CALCULATE” button, a new thread (the “grid calculation thread”) will do the calculation, which will keep the GUI from becoming unresponsive.
2- The grid calculation thread will use Python’s multiprocessing module to create 2 processes. Each process will get 2,000,000 cells and do the calculation (sum).
3- The grid calculation thread will then get the results from these 2 processes, sum them, and update the GUI (for example, write the result to a StaticText).

Is this possible? Or is there a better way to do this?
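For reference, steps 2 and 3 can be sketched like this, leaving out the GUI and the managing thread (names here are hypothetical; `multiprocessing.dummy.Pool` is a thread-backed pool with the same API as `multiprocessing.Pool`, so the same sketch can be tried with threads or real processes):

```python
from multiprocessing.dummy import Pool  # thread-backed; swap in multiprocessing.Pool for processes

def sum_chunk(chunk):
    # the per-worker calculation (step 2): sum one slice of the cell values
    return sum(chunk)

def parallel_sum(values, num_workers=2):
    # split the cells into num_workers chunks
    size = len(values) // num_workers
    chunks = [values[i * size:(i + 1) * size] for i in range(num_workers - 1)]
    chunks.append(values[(num_workers - 1) * size:])  # last chunk takes the remainder
    # run the chunks in parallel and combine the partial results (step 3)
    with Pool(num_workers) as pool:
        partials = pool.map(sum_chunk, chunks)
    return sum(partials)
```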

I don't know enough to answer your question about threads and processes, so
I'll defer to someone else on the list, but would like to ask you: what's
the user experience rationale for a display with four million values?

···

On Sat, Jul 19, 2014 at 5:14 PM, steve <oslocourse@gmail.com> wrote:


It doesn’t display all the cells at once. There are scrollbars; at most 3000 cells are displayed to the user. The user has to navigate through the cells with the scrollbars.
Actually, the grid is a work schedule for a number of employees. Each column is a date, and each row is an employee’s work schedule.

···

On Sunday, July 20, 2014 2:08:35 AM UTC+3, Che M wrote:


What's the purpose of the grid for the user? I mean, are users going to
want to browse through it, or do they just really want to find specific
information about Employee X on day y or over a range of days?

···

On Sat, Jul 19, 2014 at 7:15 PM, steve <oslocourse@gmail.com> wrote:


--
You received this message because you are subscribed to the Google Groups
"wxPython-users" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to wxpython-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


I do share C M's doubts about the usability of this, but:

It is certainly possible to do this multiprocessing-thing,
but I'd like to ask, why does it take "too long"?

I tried on my machine:

from timeit import timeit

def sumup():
    s = 0
    for x in c:
        s += x

c = []
for x in xrange(0, 4000000):
    c.append(123.456)

timeit(sumup, number=1)

0.3921653333151198

That's 0.4 seconds for adding 4 million floats in a list.

With a dictionary:

c = {}
for x in xrange(0, 1000):
    c[x] = {}
    for y in xrange(0, 4000):
        c[x][y] = 123.456

def sumup():
    s = 0
    for x in c.iterkeys():
        for y in c[x].itervalues():
            s += y

timeit(sumup, number=1)

0.38582007965885623

Why "too long" on your end?
I'd try to improve the calculation time before building complex thread-multiprocess setups.

... let me guess... Grids require values to be strings, right?
Do you convert string-to-float for every value when you do this calculation?
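If so, the conversion cost is easy to measure on its own (illustrative numbers only; the absolute times depend on the machine):

```python
from timeit import timeit

n = 1000000
strings = ["123.456"] * n   # grid-style string cells
floats = [123.456] * n      # the same values stored as floats

def sum_strings():
    # convert every cell from string to float before adding
    return sum(float(v) for v in strings)

def sum_floats():
    return sum(floats)

t_str = timeit(sum_strings, number=1)  # typically several times slower
t_flt = timeit(sum_floats, number=1)
```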

Michael

···

On Sat, 19 Jul 2014 23:14:02 +0200, steve <oslocourse@gmail.com> wrote:

Yes, the cells contain strings, not integers. The calculation is not just summing cell values. There are look-ups, conversions, additions, etc. Therefore the calculation takes a long time.

···

On Sunday, July 20, 2014 2:56:06 AM UTC+3, Michael Ross wrote:


As Michael said - it is worth looking at how your calculations are
done and pre-calculating as much as possible:

···

On 20/07/14 06:04, steve wrote:


  • Rather than storing your 4 million values as strings, store them as values and only convert to strings the 3000 (maximum) on display - you may be doing this already, but probably as floats.
  • Assuming that the look-up is the individual worker's rate(s) of pay, the calculation is something like the below, with suggested changes:

  1. For each worker, look up the rates they are entitled to get in the week/month - this can be done at data-entry time and possibly recalculated in the event of backdated pay awards; people will generally not mind if something like this requires a longer offline calculation. Note that when you select a cell on a given row you can do this lookup before the data is entered; this should slow cell selection by a period in milliseconds.
  2. Check how many hours they have worked so far in the week/month - this can be accumulated at entry time, possibly with a forward calculation for that one worker and that one week/month - this will slow the reaction to pressing the enter key/changing the selection by microseconds.
  3. Check if there are enhanced rates applicable to the day &/or location - do this at cell-selection time, again correcting when new data is entered.
  4. Calculate the wages earned in the pay period from the above - calculate as the data is entered; this slows enter-key response by microseconds.
  5. Calculate the deductions for tax, etc. - calculate for the payment period when enter is pressed, correcting if data is changed and recalculating in the event of backdated changes - a rare event and again an offline one.
  6. Calculate the final payment for the payment period as the data is entered; as well as storing the hours worked for the day, store:
     1. the rates for the pay period,
     2. hours worked at each rate,
     3. money earned,
     4. deductions made, and
     5. final payment.

  • The above changes will increase your data size fractionally but will give you a fast, responsive system.
  • As an aside, it is normally not recommended to use floats to store things like money, due to a number of problems with the IEEE floating-point representation. Use either the decimal library or store your data as scaled integers, e.g. 1 = 1¢ or some such; you will find that the results are much more accurate and your system is likely to be much faster.

Hope that is some help.

Gadget/Steve
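The floating-point issue with money is easy to demonstrate; a small sketch contrasting the three storage choices (toy amounts):

```python
from decimal import Decimal

# float: repeated additions drift away from the exact decimal result
total_float = sum(0.10 for _ in range(1000))
print(total_float == 100.0)          # False on IEEE-754 doubles

# decimal: exact decimal arithmetic, at some speed cost
total_dec = sum(Decimal("0.10") for _ in range(1000))
print(total_dec)                     # Decimal('100.00')

# scaled integers: 1 == 1 cent, exact and fast
total_cents = sum(10 for _ in range(1000))
print(total_cents)                   # 10000 cents == $100.00
```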

Yes, the cells contain strings, not integers. The calculation is not made
up of summing cell values.

That's sort-of what you wrote, though.

There are look-ups, conversions, additions etc.
Therefore the calculation takes long time.

Can you cache some (a lot) of the lookups or conversions you mention?
and only recalculate part of the whole dataset?

As for multiprocessing:
I don't see the need for an extra thread to manage this,
just have a wx.Timer pull your results from the processes.

Something like:

from multiprocessing import Process, Queue
from Queue import Empty  # Python 2; in Python 3: from queue import Empty

def calculating_function(input, output):
    run = True
    while run:
        data = input.get()  # this waits until it can get something from the queue
        result = ...(data)  # your actual calculation goes here
        output.put(result)

if __name__ == '__main__':

    num_procs = 2
    input = Queue()
    output = Queue()

    for x in xrange(0, num_procs):
        p = Process(target=calculating_function, args=(input, output))
        p.daemon = True
        p.start()

def OnCalculate(self, event):
    input.put(dataset[0])
    input.put(dataset[1])

you can also do something like
     input.put( ( 'sumup', dataset[0] ) )
so your processes can do different tasks.

and somewhere in your UI ( wx.Timer called function )

try:
     result=output.get_nowait()
     put_in_grid(result)
except Empty:
     pass

Caveat:
Starting your processes will import/re-run your main application.
Take care of what you put inside and outside of __name__ == '__main__'.

···

On Sun, 20 Jul 2014 07:04:33 +0200, steve <oslocourse@gmail.com> wrote:


use multiprocessing not threading, threads use the same core or at least prefer the same core.

You can pretty much take any threading.Thread example and replace it with multiprocessing objects.

Since the processes are separate, you couldn't simply pass a callback function to the multiprocessing worker; rather, you'd need to save the callback in a wrapper class (which still lives in your GUI process), then use multiprocessing.Event objects to synchronize state between the processes.

Something like this:

import multiprocessing
import wx

class StartProcessing:
    def __init__(self, arg, arg2, callbackFunc):
        self.callbackFunc = callbackFunc
        self.doneEvent = multiprocessing.Event()
        self.results = multiprocessing.Queue()
        self.myProc = multiprocessing.Process(target=doProcessing,
                                              args=(arg, arg2, self.doneEvent, self.results))
        self.myProc.start()

    def checkDone(self, event):
        timeOutSeconds = 0.1
        if self.doneEvent.wait(timeout=timeOutSeconds):
            self.callbackFunc(self.results.get())

def doProcessing(arg1, arg2, doneEvent, resultQueue):
    # some code that takes a while
    resultQueue.put(someDataToReturn)
    doneEvent.set()

class MyFrame(wx.Frame):
    def OnButtonClick(self, event):
        self.longTimeProcess = StartProcessing(self.gridData, otherArg, self.doneProcessing)
        t = wx.Timer(self, -1)
        self.Bind(wx.EVT_TIMER, self.longTimeProcess.checkDone, t)
        t.Start(500)  # 500 milliseconds

    def doneProcessing(self, results):
        # do something with the returned value
        pass

I agree with other posts-- work on making your calculation faster,
rather than adding the complexity of multi-processing.

This looks like a problem made for numpy+pandas, for instance.
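A rough sketch of that idea (assuming the grid's backing data can live in a numpy array rather than a dict of strings; sizes taken from the thread):

```python
import numpy as np

# 1000 employees x 4000 dates, with a stand-in value in every cell
data = np.full((1000, 4000), 123.456)

total = data.sum()               # one vectorized pass over all 4,000,000 cells
per_employee = data.sum(axis=1)  # one sum per row, e.g. totals per employee
```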

-Chris

···

On Jul 20, 2014, at 12:57 PM, Nathan McCorkle <nmz787@gmail.com> wrote:


Nathan McCorkle wrote:

use multiprocessing not threading, threads use the same core or at least prefer the same core.

That's simply not true, and I would hope it was obvious. There wouldn't
be any point to multithreading if threads were sharing the same core.
Remember that processes are not executed: a process is just a container
that holds memory and threads. It is threads that get scheduled into
CPUs. The difference you're talking about is between several threads of
one process, and several threads in different processes.

In Python, there are several good reasons to choose multiprocessing
instead of threading, but those reasons revolve around the global
interpreter lock, not around core use.

···

--
Tim Roberts, timr@probo.com
Providenza & Boekelheide, Inc.

That was exactly what I was talking about. My belief was that you get around GIL threading contention by spawning a new interpreter (with its own GIL), which, if your OS is smart, it will likely put on a separate core so you are doing simultaneous processing. I said ‘threads use the same core or at least prefer the same core’, but yes, I meant the GIL (though I didn’t think that would make sense to the O.P. or the general public).

See “Multiprocessing vs Threading Python” on Stack Overflow, which references a GREAT ~hour-long video on the GIL, threads, and how performance can really suffer in computationally intensive scenarios (i.e. processing that doesn’t require I/O):

http://blip.tv/pycon-us-videos-2009-2010-2011/pycon-2010-understanding-the-python-gil-82-3273690

···

On Monday, July 21, 2014 10:12:04 AM UTC-7, Tim Roberts wrote:


Hi, I read all the posts.
I agree with you that the calculation style on the grid might be amended to reduce calculation time, but let’s assume it is not possible for now.

Which one should I choose, threading or multiprocessing?

There are 2 points on this issue:

  • The GUI must stay responsive during the calculation on the grid
  • The calculation time must be reduced

At the moment, I use threading in my application. The user presses the “CALCULATE” button, a new thread starts and does the grid calculation in the background while the GUI stays responsive. But the calculation takes approximately 60 seconds.
I have 4 cores on my CPU. Hence I think I can use the multiprocessing module to reduce the calculation time.

BUT, can I use solely threading instead to reduce calculation time?

The grid uses PyGridTableBase, so the grid data comes from a python dictionary on demand. The calculation takes place on this dictionary. (self.data)

I think of 2 options:

Option 1- When user presses “CALCULATE” button, a new thread starts. This new thread creates 2 multiprocessing.Process workers. The self.data dictionary is divided into 2 such as:

data_1 = dict(self.data.items()[len(self.data)/2:])
data_2 = dict(self.data.items()[:len(self.data)/2])

Put these 2 sub-dictionaries in a multiprocessing.Queue, let each worker process one of the sub-dictionaries and put the result in another Queue (result queue). Then combine the result and use it to update the GUI.

Option 2- When user presses “CALCULATE” button, a new thread starts. This new thread creates 2 other threads. The self.data dictionary is divided into 2, as above. Let each thread process one of the sub-dictionaries, then combine the result and use it to update the GUI.

The question is: Which of the options above will help me reduce the calculation time? Or are these two options impossible to achieve?

Best regards

···

On Monday, July 21, 2014 8:12:04 PM UTC+3, Tim Roberts wrote:


We can’t tell. Really, the only way you can figure this out is to try it. The answer depends on how much data you’re sending and how your computations work. It is more expensive to send data across processes than it is to send data to another thread. The Global Interpreter Lock isn’t as much of a bottleneck, but whether that impacts you or not depends on the processing.

···


-- Tim Roberts, Providenza & Boekelheide, Inc.

timr@probo.com

As one of the Stack Overflow links I posted says, if you write some code for threading.Thread, you can swap it relatively easily for multiprocessing.Process… so you can try both.
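The swap looks roughly like this (toy workload; on Windows/macOS the process part additionally needs the usual `if __name__ == '__main__':` guard):

```python
import threading
import queue
import multiprocessing

def worker(data, out):
    # the same worker function runs unchanged under both models
    out.put(sum(data))

# thread version: shared memory, but CPU-bound work contends on the GIL
tq = queue.Queue()
t = threading.Thread(target=worker, args=([1, 2, 3], tq))
t.start()
thread_result = tq.get()
t.join()

# process version: same shape, but each worker gets its own interpreter (and GIL)
pq = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=([1, 2, 3], pq))
p.start()
process_result = pq.get()  # drain the queue before joining the process
p.join()
```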

···

On Wednesday, July 23, 2014 1:58:00 PM UTC-7, steve wrote:
