displaying realtime video from a capture device to wx frame

Hello to all,
I posted some time ago to the ctypes mailing list that I am working on implementing a fingerprint capture library in Python, based on the LScanEssentials toolkit from Crossmatch Technologies.
Basically I want to display what the fingerprint device is 'seeing' in some kind of Windows window.
I have already ported the whole set of headers to ctypes via the conversion tools h2xml and xml2py.
Everything is working great: taking an image correctly generates the bitmap I expect, and initialization, device selection, etc. all work fine.
However, I am not getting the realtime 'capturing image' (the live preview).
The API says you should create a window, then pass its handle and a 'display zone' to the SDK so it can draw into it.
However, I do not know how to do that.
I created a wxPython window and used GetHandle to pass it. The SDK does not report any error, but nothing shows up on screen at all.
I don't know if some kind of event handling is needed to have it preview the image. I also read about the OpenCV HighGUI component as a way to implement a live preview on a more 'generic' window.
Does someone have an idea of what could be wrong in my code or reasoning?
FYI, here is the acquisition workflow from the SDK:


Workflow to Acquire Images
The following shows a typical sequence of steps to perform to acquire images.

  1. Call LSCAN_Main_GetDeviceCount() to obtain number of detected devices.
  2. Call LSCAN_Main_GetDeviceInfo() to determine device index to use.
  3. Call LSCAN_Main_Initialize() to initialize device and get device handle for image acquisition.
  4. Call appropriate callback registration functions to retrieve required notifications.
  5. Call LSCAN_Visualization_SetWindow() to specify visualization area.
  6. Call LSCAN_Visualization_SetMode() to define visualization behavior.
  7. Call LSCAN_Capture_SetMode() to select correct acquisition type.
  8. Call LSCAN_Capture_Start() to start image acquisition.
  9. Result image acquisition is triggered automatically (Auto Capture) and/or manually by LSCAN_Capture_TakeResultImage()
  10. Result image is passed as ImageData to callback LSCAN_CallbackResultImage().
  11. Call LSCAN_Main_Release() when finished using the device.

For subsequent acquisitions please note the following:

If the device is still initialized then preconditional steps 1 to 4 must not be called again
Steps 5 and 6 are optional (e.g. to change visualization behavior)
Step 7 is only required when modifying acquisition image type
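In ctypes, the workflow above boils down to something like the following sketch. To be clear about what is guessed here: only the function names come from the SDK documentation; the assumption that 0 is the success code, the argument lists, and the `lib` handle are mine.

```python
import ctypes

def check(result, name):
    """Raise if an LSCAN call reports failure.
    Assumption: 0 is the success code -- verify against the SDK headers."""
    if result != 0:
        raise RuntimeError('%s failed with code %d' % (name, result))
    return result

def acquire(lib, hwnd):
    # `lib` would be something like ctypes.WinDLL(...) for the SDK DLL
    count = ctypes.c_int()
    check(lib.LSCAN_Main_GetDeviceCount(ctypes.byref(count)),
          'LSCAN_Main_GetDeviceCount')                       # step 1
    device = ctypes.c_void_p()
    check(lib.LSCAN_Main_Initialize(0, ctypes.byref(device)),
          'LSCAN_Main_Initialize')                           # step 3 (index 0 assumed)
    # steps 4-8: register callbacks, LSCAN_Visualization_SetWindow(hwnd),
    # SetMode, Capture_SetMode, Capture_Start -- argument lists omitted
    return device
```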


And as for the SetWindow part, this is what I am doing in my code:


def createDisplay(self, obj=None):
    if not obj:
        # we must create some basic standard window
        pass
    else:
        print 'createDisplay got an object passed in'
        # we have been passed a window; try to get its HWND handle
        self.display_handle = obj.GetHandle()
        print 'display handle is ' + str(self.display_handle)

def startVisualization(self, winobj=None):
    print 'startVisualization'
    if not self.display_handle:
        self.createDisplay(winobj)
        draw_area = lse.RECT()
        draw_area.left = 10
        draw_area.top = 10
        draw_area.right = 510
        draw_area.bottom = 510
        res_visu = LSCAN_Visualization_SetWindow(self.device_handle,
                                                 lse.HWND(self.display_handle),
                                                 draw_area)
        print 'LSCAN_Visualization_SetWindow: ' + str(res_visu)
        res_mode = LSCAN_Visualization_SetMode(self.device_handle,
                                               lse.LSCAN_VIS_ALWAYS,
                                               lse.LSCAN_OPTION_VIS_FULL_IMAGE)
        print 'LSCAN_Visualization_SetMode: ' + str(res_mode)

    res_capmode = LSCAN_Capture_SetMode(self.device_handle,
                                        lse.LSCAN_ROLL_SINGLE_FINGER,
                                        lse.LSCAN_IMAGE_RESOLUTION_500,
                                        lse.LSCAN_IMAGE_INHERIT_LINE_ORDER,
                                        lse.LSCAN_OPTION_AUTO_OVERRIDE,
                                        None, None, None, None)
    print 'LSCAN_Capture_SetMode: ' + str(res_capmode)

    ov = lse.HANDLE()
    addov = LSCAN_Visualization_AddOverlayText(self.device_handle, "blabla",
                                               10, 10, lse.COLORREF(255),
                                               "Arial", 10, False, ov)
    print 'addoverlay: ' + str(addov)
    print 'ov: ' + str(ov.value)
    showov = LSCAN_Visualization_ShowOverlay(self.device_handle, ov, True)
    print 'showov: ' + str(showov)
    bgcolor = LSCAN_Visualization_SetBackgroundColor(self.device_handle,
                                                     lse.COLORREF(123123))

    # register the callback functions
    preview_context = c_void_p()
    preview_callback = LSCAN_CallbackPreviewImage(self.callbackPreviewImage)
    regcbpi = LSCAN_Capture_RegisterCallbackPreviewImage(self.device_handle,
                                                         preview_callback,
                                                         preview_context)
    print 'regcbpi: ' + str(regcbpi)

    result_callback = LSCAN_CallbackResultImage(self.callbackResultImage)
    regcbri = LSCAN_Capture_RegisterCallbackResultImage(self.device_handle,
                                                        result_callback,
                                                        preview_context)
    print 'regcbri: ' + str(regcbri)

    res_startcap = LSCAN_Capture_Start(self.device_handle, 1)
    #LSCAN_Controls_DisplayShowCaptureProgressScreen()
    print 'startcap: ' + str(res_startcap)

    res_tpi = LSCAN_Capture_TakeResultImage(self.device_handle)
    print 'res_tpi: ' + str(res_tpi)

Here winobj is a frame of the wxPython app:


import App1
app = App1.BoaApp(0)
c.startVisualization(app.main)
app.MainLoop()

as defined here:


#!/usr/bin/env python
#Boa:App:BoaApp

import wx

import Frame2

modules = {'Frame2': [1, 'Main frame of Application', 'none://Frame2.py']}

class BoaApp(wx.App):
    def OnInit(self):
        self.main = Frame2.create(None)
        self.main.Show()
        self.SetTopWindow(self.main)
        return True

def main():
    application = BoaApp(0)
    application.MainLoop()

if __name__ == '__main__':
    main()


If someone has any thoughts I would really appreciate it!
Thank you for reading!

Patricio



Patricio Stegmann wrote:

Hello to all,
I posted some time ago to the ctypes mailing list that I am working on implementing a fingerprint capture library on python, based on the LScanEssentials toolkit from Crossmatch Technologies.
Basically I want to display what the fingerprint device is 'seeing' in some kind of windows window.
I already ported the whole headers to ctypes python via the conversion tools h2xml and xml2py.
Everything is working great, for example the taking of the image generates correctly the bitmap I expect, the initialization, device selection, etc... is working great.
However I am not getting the realtime 'capturing image'.
The API says you should create a window, then pass it's handle and 'display zone' to the SDK so he can draw to it.
However I do not know how to do that.
I created some wxpython window and used GetHandle to pass it.

I know nothing about LSCAN, so this is all just guessing: GetHandle should give you the right value to pass to the library. Are you sure that the rectangle you specify is within the visible area of that window? Does it work better if you do your startVisualization after the MainLoop has started?

Another approach that may work better than asking the library to display on your window is to do the image acquisition and convert the data to a wx.Bitmap and then display that bitmap in an EVT_PAINT handler yourself.
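That second approach can be sketched as follows, assuming the preview data really is 8-bit grayscale (an assumption; check the SDK's image format). wx.BitmapFromBuffer expects packed RGB bytes, so the grayscale value is simply tripled per pixel:

```python
def gray_to_rgb(gray_bytes):
    """Expand 8-bit grayscale pixels into the packed RGB byte layout
    that wx.BitmapFromBuffer(width, height, data) expects."""
    out = bytearray()
    for px in gray_bytes:
        out.extend((px, px, px))
    return bytes(out)

# Sketch of the EVT_PAINT side (wx code, not run here):
#
#   def OnPaint(self, event):
#       dc = wx.PaintDC(self)
#       bmp = wx.BitmapFromBuffer(width, height, gray_to_rgb(image_data))
#       dc.DrawBitmap(bmp, 0, 0)
```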


--
Robin Dunn
Software Craftsman
http://wxPython.org Java give you jitters? Relax with wxPython!

Robin,
thanks for your answer!
The SDK doesn't offer a way of handing me the live preview image; it only lets me give it the handle of the window and the write zone as coordinates within it.
Is there a generic way to have an external app or code draw on a wxPython window and be able to see what it has drawn? I mean, is there a need to manually trigger an on-paint event or similar to have it updated?
I will try to make it happen after the MainLoop, but if I remember correctly that did not do the job. Is there something wrong with the widget I created? I mean, if external code is going to write to the widget, do I have to create a 'drawable' widget like a DC or something else?
Please, I need help! I tried creating a window with the OpenCV HighGUI submodule and it worked for half a second, then stopped updating the frame, so I guess the overall approach in my code is correct. It just won't write to my wx window widget! Help much wanted!
Patricio


Date: Mon, 16 Feb 2009 11:31:12 -0800
From: robin@alldunn.com
To: wxpython-users@lists.wxwidgets.org
Subject: Re: [wxpython-users] displaying realtime video from a capture device to wx frame


wxpython-users mailing list
wxpython-users@lists.wxwidgets.org
http://lists.wxwidgets.org/mailman/listinfo/wxpython-users



Patricio Stegmann <kpoman@hotmail.com> writes:

The SDK doesnt allow a way of being passed the live preview image,
it only allows to give the handle of the window and the write zone
as coordinates on it.
Is there a generic way to have an external app or code draw on a
wxpython window and be able to see what it has drawn ? I mean, is
there a need to manually trigger onpaint event or whatever to have
it updated ?
I will try to make it happen after the mainloop but if I remember
correctly that didnt do the job. Is there something wrong with the
widget I created ? I mean, if an external code is going to write to
the widget, do I have to create a 'drawable' widget like a DC or
other ?
Please I need help ! I did a try creating a window with opencv
HighGUI submodule and it worked for half a second then stops
updating the frame, so I guess the way I am pointing and doing my
code is correct. It just wont write to my wxwindow widget ! Help
muchly wanted !

Just passing the GetHandle() value from a window should be all that is
necessary. I use it with MPlayer's -wid option, for example, to embed
the output of MPlayer in a wxPython window without any problems.

Of course, you're dependent on the external library then using that
handle, but there shouldn't be anything else necessary on the wxPython
side. You don't have to trigger special events, or worry about
repainting or whatever (in fact I'm not positive you can paint over
the window contents once the other library has access).

With respect to the OpenCV highgui window, the highgui module in
OpenCV runs its own event loop whenever the cvWaitKey method is
called, so if you don't call that the window won't be updated by
OpenCV, but it isn't clear from the above if you mean that you were
using OpenCV (e.g., cvShowImage) to display the image or pointing your
other library at the OpenCV window.

Is it possible that your SDK also requires some sort of event loop
"pump" operation when used as part of another UI application? Perhaps
you need to call in the SDK periodically when it isn't in control of
the main event loop? Or, as Robin also suggested, is the display
region you supply to the SDK valid for the window whose handle you are
supplying?
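That last question can be checked mechanically. A sketch, assuming lse.RECT has the standard Win32 RECT layout (in wxPython the client size would come from the window's GetClientSize()):

```python
import ctypes

class RECT(ctypes.Structure):
    # Standard Win32 RECT layout (assumed to match lse.RECT)
    _fields_ = [('left', ctypes.c_long), ('top', ctypes.c_long),
                ('right', ctypes.c_long), ('bottom', ctypes.c_long)]

def rect_fits(rect, client_w, client_h):
    """True if the draw area lies entirely inside a client area of the
    given size (client_w x client_h pixels)."""
    return (0 <= rect.left < rect.right <= client_w and
            0 <= rect.top < rect.bottom <= client_h)
```

Note that the 10..510 draw area used earlier needs a client area of at least 510x510 pixels, which is larger than many default frames.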

BTW, you mentioned not having access to the actual image to do your
own display, but of the steps you listed:

4. Call appropriate callback registration functions to retrieve required notifications.

(...)

9. Result image acquisition is triggered automatically (Auto Capture) and/or manually by LSCAN_Capture_TakeResultImage()
10. Result image is passed as ImageData to callback LSCAN_CallbackResultImage().

sure sounds like you receive the image data in the callback set up in
step 4, if you then wanted to process it into a bitmap for display.

-- David

David,
Thanks for your answer.
The module I am trying to build is a standard module using ctypes for driving the DLL.
Because the SDK requires widgets to draw to, I used a 'generic', 'empty' OpenCV window and passed its handle to the SDK, which then updates the window and shows me the live display of the scanner.
This works great; however, HighGUI is obviously too limited for everything else I need, so it is not a usable solution.
I then tried the wxPython way, but I don't know how to structure this. Should the module be initialized from a wx app and then live on it? Should the module load a wx app as a target window for its drawing?
The problem is specifically the MainLoop and the things related to wxPython.
How did you do your MPlayer app? Is it a wx app that loads everything and handles the MPlayer calls etc. by itself?
From the SDK documentation it is ambiguous, but basically there are two separate things:

  • the live preview of the finger rolling
  • the image capture launched explicitly, which captures the finger and fires a callback when capture is done

I have the capture part working, which generates a bitmap file. The tricky part, however, is the live preview, which is heavily accelerated / optimized for live visualization (using a FireWire interface that delivers a 500 DPI 10x10-inch image in almost realtime, driven by the SDK writing directly to the screen).

Please tell me how to attack this problem, and whether I need to create a wx app and do my SDK initialization etc. from within it.

Thanks all,


To: wxpython-users@lists.wxwidgets.org
From: db3l.net@gmail.com
Date: Tue, 17 Feb 2009 16:03:43 -0500
Subject: [wxpython-users] Re: displaying realtime video from a capture device to wx frame




Patricio Stegmann <kpoman@hotmail.com> writes:

The module I am trying to build is a standard module using ctypes
for the dll driving stuff. Because the SDK requires widgets to draw
to, I used an opencv 'generic' and 'empty' window, and pass it's
handle to the SDK, which then updates the window and shows me the
live display of the scanner. This works great, however, obviously,
highgui is limited in everything I need, so it is not a useful
solution.

I don't know how you've got stuff set up, but note that if you didn't
structure your application to call cvWaitKey periodically (or block in
a call to it) then technically your window isn't working properly,
although if the underlying SDK keeps redrawing it and you never try to
place another window over it, you'd likely not notice. But you still
need cvWaitKey with highgui to properly service the event loop for its
windows, and if you have your own event loop (such as if you use
highgui windows from within a wxPython application), you just have to
arrange to periodically call cvWaitKey - either that or perform all
your highgui work in a separate thread that can block on cvWaitKey.
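A generic way to arrange that periodic call can be sketched without OpenCV or wx at all; in a real wxPython app a wx.Timer whose handler calls cvWaitKey(1) would be the natural fit, so this threading version is just a self-contained stand-in:

```python
import threading

class PeriodicPump(object):
    """Call `pump` every `interval` seconds until stop() is called.
    A generic stand-in for servicing an external library's event needs
    (e.g. calling cvWaitKey(1), or a hypothetical SDK pump function)
    from inside a host GUI application."""
    def __init__(self, pump, interval):
        self.pump = pump
        self.interval = interval
        self._timer = None
        self._stopped = False

    def _tick(self):
        if self._stopped:
            return
        self.pump()
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._tick()

    def stop(self):
        self._stopped = True
        if self._timer is not None:
            self._timer.cancel()
```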

I then tried the WxPython way. However I dont know how to do this
thing. Will the module be initialized from a wx app and then live on
it ? Will the module load a wxapp as a target window for its drawing
? The problem specifically is the MainLoop and these things
relative to wxpython.

Yes, you need to create the windows through wxPython, which implies that
you have a wx.App around. In most cases it would be created as part of
the main thread of the application, and the wxPython MainLoop becomes
the primary event loop for the application.

So, in highgui you made a cvNamedWindow. In wxPython, you'd build as
minimal an app as you need, and somewhere along the line create a
window to use for the output. Most likely you could just use a plain
wxWindow, although a wxPanel would work fine too. The full
flexibility of window management, layouts/sizers, etc.. would be
available in terms of how you wanted to present that window as a
component of your application to the end user.

Now, in highgui, you called cvGetWindowHandle on your named window to
get HWND. In wxPython, you would call GetHandle() on your window
object reference.

Then, you hand that window handle off to your SDK, regardless of what
environment created it.

In theory, if previously you just blocked waiting for the SDK to keep the
window up to date you can do the same thing with wxPython, but you'd have
the same issue where the application itself was not really going to update
properly unless you service the event loop.

So, whereas with highgui you need to call cvWaitKey periodically, with
wxPython you just need to return to the main event loop. Depending on
the internals of the SDK's display system, it should either keep
updating or you may need to arrange to call some sort of "pump" method
inside the SDK to do the updates, I can't say.

                       How did you do your mplayer app ? Is it a
wxapp that loads the stuff and handles mplayer call etc... by itself?

I have a small custom control that can be created (just like any other
wxPython control). It's actually just a subclass of wx.Window since all
I need is the blank client area.

When the control is instantiated, it uses a wrapper class I have for
mplayer that in turn spins off an mplayer process, supplying it the
window id for the client area of my control (obtained via GetHandle()).

I run mplayer in slave mode, which essentially means it can be
controlled via a pipe, accepting commands on stdin and generating
output to stdout. My mplayer wrapper uses threads internally to
process output from the mplayer child process, and to generate
appropriate commands over the pipe to mplayer's stdin based on
requests (play, pause, seek, etc...)

Now, since mplayer is by its nature a separate executable that is
executing in its own process, it has its own message loop and I don't
need to do anything special to let it keep updating the window as it
plays, so I can just let the main wxPython event loop perform
normally. That means that my wxPython application responds normally
to all window activities while the mplayer control window continues to
update.
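For reference, the slave-mode setup described here comes down to a command line like the following sketch. The -slave, -quiet and -wid flags are real MPlayer options; the executable path, file name, and the wrapper details around the pipe are simplified placeholders:

```python
def mplayer_slave_cmd(mplayer_exe, media_path, window_id):
    """Command line to embed MPlayer's video output in an existing window.
    -slave reads commands (pause, seek, quit, ...) from stdin; -wid tells
    MPlayer to draw into the given native window handle."""
    return [mplayer_exe, '-slave', '-quiet', '-wid', str(window_id), media_path]

# Sketch of the wrapper side (window_id from e.g. panel.GetHandle()):
#
#   proc = subprocess.Popen(mplayer_slave_cmd('mplayer', 'clip.avi', window_id),
#                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
#   proc.stdin.write(b'pause\n'); proc.stdin.flush()
```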

As it turns out, mplayer also checks the size of the window it was
pointed at during its operation, so if I resize it on the wxPython
side then the next time mplayer updates a frame it adjusts its size as
well, but that's not necessarily something you can expect all code
that supports attaching to an existing window to do.

From the SDK documentation, it is ambigous, but basically you have to separate things:

- the live preview of the finger rolling
- the image capture specifically launched, which will capture the
finger and give a callback when capture is done

Ah ok, I may have misunderstood that the image capture callback was only for
a completely processed result and not the intermediate.

I have the capture stuff working which generates a bitmap
file. However the tricky part is the live preview which is something
extremely accelerated / optimized for live visualization (using a
firewire interface which give almost in realtime a 500DPI 10x10
inches image, and is then driven by sdk and direct writing to
screen).

In the end, any system that is going to do that has to play by the
Win32 rules. That means they have to associate with an existing
window or create their own window that is created as a child of an
existing window. They may create a GL context or use DirectX/3D to
obtain hardware acceleration, but they still have to tie into the
regular window system.

Please tell me how to attack this problem, if I need to create a wx
app and do my sdk initialization etc.. from within it.

I think the answer is yes, though the fact that you're asking that makes
me wonder if I'm missing something.

If you need a GUI that is more fully featured than the relatively
simplistic highgui, and want to use wxPython for it, then yes,
obviously you need to play by wxPython's rules. That implies a wx.App
object and event loop.

There's certainly no reason why your SDK ought not be able to
cooperate with such an application - after all in the end, both
highgui and wx are just passing the HWND to a standard Win32 window to
the SDK.

But because your SDK is in the same process, either it's going to have
to have some internal threading of its own to maintain a message queue
(something reasonably common in such circumstances), or it has to
provide a way for an external user of the SDK to interface it to an
existing event loop. Without having more specific details about the
SDK I can't judge any further how it works.

Although as I mentioned initially, if today you are basically just
calling into the SDK and blocking while waiting for it to generate a
result, you can get away with that if the only thing you care about
updating is the preview - but the rest of your application (whether
highgui or wxPython based) will not be responsive while the SDK has
control. For highgui, that impact may be minimal (it's not like you
care if mouse clicks are processed or anything since basic
functionality of a highgui window is so low to start with), but for
wxPython it'll block all the rest of the normal GUI behavior.

If perhaps there's some way to get further details about the SDK (is
the reference online somewhere), I could offer something further.

-- David