Scrolling text at high resolutions for a teleprompter

I am trying to make a teleprompter application that’s more flexible than the one we currently use at our public access TV station. In a teleprompter there are two displays of scrolling text: one for the person controlling it and another for the person reading. Often the display of the person reading must be mirrored in the x or y axis. The biggest challenge is that, other than mirroring, the controller must be able to see exactly what is on the reader’s display. While most widget toolkits have efficient ways of creating scrolled text, it can be difficult to prove that the wrapping points for text are identical when scaling occurs.

In my naive approach, I’m trying to short-circuit the difficulty of ensuring wrap points by grabbing a bitmap image of everything painted to the reader’s window and blitting a scaled copy back on the controller’s window. While this works really well in my test windows that are 300x300 pixels, when I scale up the reader’s window to the monitor’s full resolution of 1920x1080, I’m unable to animate smoothly. I’m using a very naive painting technique right now:

    def paint(self, event):
        # wordwrap comes from wx.lib.wordwrap
        dc = wx.ClientDC(self)
        dc.SetBackground(self.background_brush)
        dc.Clear()
        dc.SetTextForeground(self.text_color)
        dc.SetFont(self.font)
        text = wordwrap(self.script, self.GetSize()[0], dc)
        dc.DrawText(text, 0, self.y_scroll)

My initial thought was that because the script was so long, drawing tons of text that isn't even seen was wasteful. However, if I make the text only a few lines long, I have the same delays. It seems to me that the painting time depends more on the number of pixels than on the actual drawing being done.

I've been thinking of less naive ways to approach this. One was to create a big bitmap containing all of the text once, then just blit the currently viewed portion to the screen. However, for a high resolution, this bitmap rapidly gets too large to store in memory. Since that won't work, I could create bitmaps for the current screenful of text and the next and previous screenfuls, and blit those in, only creating a new bitmap when a screenful of text has exited the viewable region. Alternatively, I could make each line of text a bitmap, creating them when necessary.
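For a rough sense of why the single big bitmap becomes a problem, here is a back-of-the-envelope estimate (the line count and per-line pixel height are illustrative guesses, not measurements from the app):

```python
# Memory needed for one full-script bitmap at 32 bits per pixel.
# 200 lines and 120 px per line are illustrative guesses for large
# teleprompter text; real numbers depend on font and script length.
width_px = 1920
lines = 200
line_height_px = 120

height_px = lines * line_height_px        # 24,000 px tall
bytes_needed = width_px * height_px * 4   # RGBA, 4 bytes per pixel

print(bytes_needed // (1024 * 1024))      # about 175 MiB in one allocation
```

A machine may well have that much RAM, but a single contiguous allocation of that size (and far larger for longer scripts) is exactly where toolkit or driver bitmap creation tends to fail.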

I am trying to keep whatever technique I use fairly generic, since I'd like to add other graphics to the screen like highlighted regions and countdown timers. I was thinking that switching to OpenGL is somewhat a nuclear option, but that would at least let hardware do positioning and scaling of images. Is there a better way to figure out which parts of paint are bogging down the framerate? My goal is to achieve 60 fps or better, because any jitter puts strain on the reader.

I'm open to any ideas, and I wanted to make sure I wasn't reinventing the wheel, or missing something in the drawing API specifically to make this sort of high resolution drawing task faster. The [github repository](https://github.com/superlou/flexcue) should work fine on Linux, though I am getting some flickering on Windows.

Thanks,
Louis

How much text do you have? My inclination would be to draw all of
the text to a long bitmap in memory, then just blit the current
section to both screens. Also, it looks like you are clearing the
screen every time. Instead of doing the drawing in a paint handler,
you might consider doing the drawing in your timer handler by
scrolling the visible screen by a few scans, and then only blitting
the new part. Screen-to-screen blits are WAY faster than drawing
from scratch.


-- Tim Roberts, Providenza & Boekelheide, Inc.

timr@probo.com

    How much text do you have? My inclination would be to draw all of the
    text to a long bitmap in memory, then just blit the current section to
    both screens.

Unfortunately, it’s a couple hundred lines of text typically, so at a width of 1920 pixels, the bitmap was too large to fit in memory. I received an error trying to create it.

    Also, it looks like you are clearing the screen every time. Instead of
    doing the drawing in a paint handler, you might consider doing the
    drawing in your timer handler by scrolling the visible screen by a few
    scans, and then only blitting the new part. Screen-to-screen blits are
    WAY faster than drawing from scratch.

How would you recommend doing that if you can’t draw the entire text bitmap to memory? I was initially looking at making a bunch of “clipped” bitmaps of more reasonably sized chunks, so I only have to create new bitmaps when I start running out of already-made ones. However, making the cutoff points is a little tricky, so I wanted to see if there was more low-hanging fruit before I tried that. Is there a trick to profiling wxPython drawing to pin down exactly where time is being spent? Something more tactical than just wrapping the whole app in a cProfile.run?

These are just big block letters, a couple of inches tall, aren’t
they? You could draw the text into a much smaller bitmap and
stretchblt it onto the screen. With a smooth blit, I don’t think
you’d ever notice.

Well, as long as you know the current position of each line, you can
scroll the display up and just DrawText the new bottom line.

Not that I’m aware of. You could use “import time” to grab time
stamps here and there, but that’s a little hit and miss. You’re
likely to miss something.
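A slightly more systematic version of the timestamp idea is a decorator that wraps just the handlers under suspicion and prints per-call times; this is roughly the technique used for the benchmarks later in the thread, though the decorator itself is a generic sketch, not code from the app:

```python
import functools
import time

def timed(func):
    """Print how long each call to func takes, in milliseconds."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        print(f"{func.__name__}: {elapsed_ms:.2f} ms")
        return result
    return wrapper

# Usage: decorate the suspect handler, e.g.
#   @timed
#   def paint(self, event): ...
@timed
def slow_sum(n):
    return sum(range(n))
```

Because it wraps one function at a time, this is more tactical than cProfile, but it shares the caveat raised later in the thread: it only measures how long the call took to return, not how long the rendering actually took.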


I wonder if it would help to put each line of text in its own bitmap.

You could draw each line to a “buffer” of bitmaps offscreen, maybe in a separate thread or even process.

Then the paint routine would be only blitting the line-images to the screen.

You could combine this with Tim’s suggestion of drawing smaller than you need and scaling up.

By the way, it is often the case that drawing time scales with number of pixels pushed, rather than complexity of what is being drawn. Though I’m still surprised the simple way is too slow on modern hardware.

-CHB


    I wonder if it would help to put each line of text in its own bitmap.
    You could draw each line to a “buffer” of bitmaps offscreen, maybe in a
    separate thread or even process. Then the paint routine would be only
    blitting the line-images to the screen.

That sounds like a good strategy, especially as I believe everything’s currently done in a single thread. It seemed overkill to use a multicore PC in order to draw scrolling text efficiently, but maybe as the screen resolution increases, it’s necessary.

    You could combine this with Tim’s suggestion of drawing smaller than
    you need and scaling up.

I tried the smaller screen (bitmap with scaling) technique, and it had mixed results. Since it’s a smaller bitmap originally, I get double the savings, since the reader’s display and the copy to the controller’s display are both using smaller bitmaps. However, losing the vertical resolution is making it difficult to get a smooth scroll, and jumpiness causes eyestrain in the reader. Plus, the text does lose some crispness with normal Image scaling, and when I do “high quality” scaling, the extra overhead negates the benefit of using the smaller screen. Is there a more efficient way to scale than taking the bitmap, converting to image, scaling, then converting back to bitmap?

    By the way, it is often the case that drawing time scales with number
    of pixels pushed, rather than complexity of what is being drawn. Though
    I’m still surprised the simple way is too slow on modern hardware.

I was too. I’m hoping to get a chance tomorrow to try making each line of text a full-resolution bitmap, then blitting in the ones that are currently on-screen. This seems like it will have the advantage of making the text painting faster (since most of the time, only one line of text is being painted when it first shows up) and turn all the actual paints into just blits. The downside is that generating the controller’s display from the high resolution reader’s display is more expensive.

I don’t want to beat a dead horse over this, but I’m trying to find the most naive/generic solution so that it is easier for people to extend past just scrolling text to other information on the display.

Making the line bitmaps (without making ALL the bitmaps) is turning out to be tricky, though hopefully will have a prototype tonight. Alternatively, is there a way to animate the scrollbars of a TextCtrl or RichTextCtrl? They seem to always move smoothly, which suggests there’s already some optimization at work there.

    Making the line bitmaps (without making ALL the bitmaps) is turning out
    to be tricky, though hopefully will have a prototype tonight.

A really ugly prototype is at https://github.com/superlou/flexcue. Whenever the prompter window is resized or the script is changed, I’m rebuilding a bitmap of each line of script. Then, I’m selecting the bitmaps that should be on-screen each frame, and blitting them to the frame. This seems to just about keep up, with my stopwatch showing almost the same performance at 1920x1080 as 320x240.
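The per-frame selection step can stay cheap if it is just interval arithmetic on the scroll offset. A minimal sketch, assuming all line bitmaps share one pixel height and that y_scroll is the document-space pixel shown at the top of the screen (both assumptions; the prototype’s actual bookkeeping may differ):

```python
def visible_line_range(y_scroll, line_height, screen_height, n_lines):
    """Indices [first, last) of the line bitmaps that intersect the
    visible region; everything outside can be skipped entirely."""
    first = max(0, y_scroll // line_height)
    # Ceiling division so a partially visible bottom line is included.
    last = min(n_lines, -(-(y_scroll + screen_height) // line_height))
    return first, last
```

With 200 lines of 120 px on a 1080 px screen, only 9 or 10 bitmaps are ever blitted per frame, regardless of script length.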

Unfortunately, that’s only if I skip my naive technique of cloning the prompter screen to the controller window using the following:

    def get_bitmap(self):
        dc = wx.ClientDC(self)
        size = dc.Size
        bmp = wx.Bitmap(size.width, size.height)
        memDC = wx.MemoryDC()
        memDC.SelectObject(bmp)
        memDC.Blit(0, 0, size.width, size.height, dc, 0, 0)
        memDC.SelectObject(wx.NullBitmap)
        return bmp

    def scale_bitmap(bitmap, size):
        width, height = size
        image = bitmap.ConvertToImage()
        image = image.Scale(width, height)
        result = wx.Bitmap(image)
        return result

Is there a more efficient way to get a copy of a high resolution display and scale it down? Since my bottleneck now seems to be copying/scaling instead of DrawText, I don’t think the same bitmap technique will work.

Thinking outside of the box, I would suggest not using Bitmaps yourself;
instead render your text as SVG (which will be size independent and
will always scale smoothly) - you can include graphics, etc., in an SVG
format. Note that you can scale SVG with the transform="scale(2, 3)"
attribute, while transform="scale(-2, 3)" will mirror the X axis as well
as scaling by 2. There is a wxSVG library that looks to be both active
and well maintained at http://wxsvg.sourceforge.net/ but I haven't
tested it, or you could possibly use cairo or webkit methods.

svgutils (on PyPI) is also worth a look.


--
Steve (Gadget) Barnes
Any opinions in this message are my personal opinions and do not reflect
those of my employer.

This is an interesting problem; I’m sorry I don’t have more time to investigate, but a couple more comments:


    That sounds like a good strategy, especially as I believe everything’s
    currently done in a single thread. It seemed overkill to use a
    multicore PC in order to draw scrolling text efficiently,

and wx is not thread safe, so I’m not sure if you can draw in separate threads anyway – might be worth a try.

    Is there a more efficient way to scale than taking the bitmap,
    converting to image, scaling, then converting back to bitmap?

I think you can scale it directly in a Blit call. Or use DrawBitmap, setting the scale factor with

DC.SetUserScale(self, x, y)
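For the StretchBlit route, the only per-frame math is the destination rectangle; the rest is a single DC call. A hedged sketch of an aspect-preserving fit (fit_rect is a hypothetical helper, and the wx calls in the comment only indicate where it would plug in):

```python
def fit_rect(src_w, src_h, dst_w, dst_h):
    """Largest centered rectangle with the source aspect ratio that
    fits the destination; returns (x, y, w, h) for the blit target."""
    scale = min(dst_w / src_w, dst_h / src_h)
    w = round(src_w * scale)
    h = round(src_h * scale)
    return (dst_w - w) // 2, (dst_h - h) // 2, w, h

# In a wx paint handler this would feed something like:
#   x, y, w, h = fit_rect(1920, 1080, client_w, client_h)
#   dc.StretchBlit(x, y, w, h, buffer_dc, 0, 0, 1920, 1080)
```

This avoids the Bitmap-to-Image-to-Bitmap round trip entirely, which is what the later StretchBlit timings in the thread bear out.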

-CHB

    I don’t want to beat a dead horse over this, but I’m trying to find
    the most naive/generic solution so that it is easier for people to
    extend past just scrolling text to other information on the display.

Then the bitmap-based approach makes sense – it’ll work for anything.

I’ve got to say – I’m still confused that you can’t do this the naive way – drawing a screen full of large text should be pretty fast…

Maybe I’ll take a look at your code at some point…

BTW, I have a collection of little demos of various things you can do with wxPython here:

https://github.com/PythonCHB/wxPythonDemos

I haven’t been maintaining them in a while, but most are probably still helpful. And it would be great if you’d like to contribute this to that repo when you get it working.

-CHB

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker@noaa.gov

It is highly unlikely that a generic SVG renderer will be faster than the
built-in text rendering, which is already nicely scalable -- but apparently
too slow :-(

On the other hand, maybe another rendering engine, such as Cairo (or py_gd)
would be faster, or, if not faster, at least multi-threadable.

-CHB

* py_gd is my wrapper around libgd, which is an old but venerable and fast
graphics renderer.

A real advantage of py_gd is that it's set up to use 8-bit mode, which is
less pretty but inherently faster than full RGBA -- that is, drawing time
ends up being pretty much proportional to how many bytes you are changing,
rather than the complexity of the drawing, so drawing a complex polygon
into an 8bit buffer is about 4 times as fast as drawing the same thing into
a 32 bit buffer. (I tested this with multiple drawing engines, too -- AGG,
Skia, GD, PIL)

I'm not sure if this applies to text rendering, though.

Also -- while GD has decent text support, py_gd is pretty limited, I
haven't had the need for it yet. It does do basics though, which may be
enough, at least as a proof of concept.

-CHB


So, I finally got a chance to do some more benchmarking. In the repo at https://github.com/superlou/flexcue, there are three prompter types:

  • Naive - Simple DrawText command with an offset y-position to animate the scroll
  • Small Bitmap - Similar to naive, but drawn to a fixed size bitmap smaller than full resolution (1920x1080), but large enough to minimize distortion. However, the distortion is visible.
  • Line Bitmap - At the beginning of scrolling, all lines of text are drawn to individual bitmaps. When animating, only the bitmaps that fall into the visible region are drawn. This is a pretty gross implementation, but seems to test the basic concept.
I used a simple Python decorator to get timing information for the paint function call (and the update_animation+paint call for small_bitmap, since there’s a separate step) for the 320x240 test window and at full 1920x1080, in milliseconds:

Resolution   Naive   Small Bitmap   Line Bitmap
320x240      21      16 + 2         1
1920x1080    16      16 + 16        10

So, none of these are bad enough (except for small bitmap) to prevent running at 60 frames per second. So, something else must be the bottleneck. The next big thing moving bits around is the get_bitmap function for the prompter that’s used to screenshot the window’s contents. It’s then resized and painted into the controller’s screen. Since the controller may be viewing the prompter screen as a thumbnail or as full screen, it needs to be able to handle all kinds of up and down scaling. Re-enabling get_bitmap and using the timing decorator, it turns out:

Resolution   Naive   Small Bitmap   Line Bitmap
320x240      2       2              2
1920x1080    30      17             21

So, it turns out that after switching to the line bitmap technique, I still need a way to fetch and scale the controller’s window more efficiently. I guess I could do a similar approach by creating yet another set of line bitmaps and blitting them into the controller’s screen, but I really liked the guarantee of having the two displays match since one was a copy of the other.

Louis Simons wrote:

    So, none of these are bad enough (except for small bitmap) to prevent
    running at 60 frames per second. So, something else must be the
    bottleneck. The next big thing moving bits around is the get_bitmap
    function for the prompter that's used to screenshot the window's
    contents. It's then resized and painted into the controller's screen.
    Since the controller may be viewing the prompter screen as a
    thumbnail or as full screen, it needs to be able to handle all kinds
    of up and down scaling.

Fetching an image from a graphics card is ALWAYS going to be a sucky
operation. Going in that direction just isn't a priority, so that's not
where the chip makers focus their optimization efforts. Plus, you're
moving a massive amount of data: 1920x1080x4 is 8.3 megabytes, and 30 ms
means you're copying about 275 MB/s.

You really need to find a way to have a master image in memory, then
update both displays from that master image.
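One way to picture that master-image flow, with a list of pixel rows standing in for the real bitmap (in wxPython the master would be a wx.Bitmap selected into a wx.MemoryDC, with each display Blit/StretchBlit-ing from it; the mirroring helper is purely illustrative):

```python
def mirror_x(frame):
    """Reader's view: the master frame flipped left-to-right.
    Both displays derive from one in-memory master, so they can
    never disagree on content or wrap points."""
    return [row[::-1] for row in frame]

# Tiny 2x3 "frame" of pixel values as a stand-in for the master bitmap.
master = [[1, 2, 3],
          [4, 5, 6]]

controller_view = master          # blitted to the controller as-is
reader_view = mirror_x(master)    # blitted to the reader mirrored in x
```

The point of the pattern is the direction of data flow: everything is drawn once into RAM and pushed out to both screens, and nothing is ever read back from the graphics card.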


Double buffering may help here -- do your drawing to a buffer for the main
screen. blit that to the screen and use it directly to make your smaller
version.

blitting the buffer to the screen has always been fast in my experience --
though I can't say I've ever done a full-screen app at high res.

Also -- if you draw the main screen and smaller screen the same way, rather
than trying to copy and shrink the main one -- maybe you can even
multi-thread or multi-process it...

One more thought on timing -- draw calls are not necessarily blocking. This
may be only for old X11 systems, but at least some calls:

DC.DrawSomething()

return long before the content is actually done rendering. That is, the
Draw* call sends a message to the X server to draw something, then it
returns, but the X server could then take who knows how long to actually
draw it.

So it's very hard to time. Back in the day, Windows did block on drawing
calls -- who knows if it still does.

-CHB



    Double buffering may help here – do your drawing to a buffer for the
    main screen, blit that to the screen, and use it directly to make your
    smaller version.

That’s a huge improvement. With a new prompter where I draw to a buffer bitmap (https://github.com/superlou/flexcue/blob/master/flexcue/prompter_line_bitmap_buffered.py), then blit it to the screen, or simply return it via get_bitmap, the get_bitmap call is much improved:

Resolution   Naive   Small Bitmap   Line Bitmap   Line Bitmap Buffered
320x240      2       2              2             0.1
1920x1080    30      17             21            0.1

Now, my bottleneck (and the last big time use I can think of) is the resizing operation of the 1920x1080 buffer bitmap to fit on the prompter monitor (https://github.com/superlou/flexcue/blob/master/flexcue/prompter_monitor.py):

Resolution   Prompter Monitor
320x240      1
1920x1080    10

So, it still takes 10 ms to draw using text line bitmaps, then another 10 ms to scale it down and paint to a thumbnail. It will probably take even longer once I’m drawing the monitor full-screen. Maybe it’s just simply too much data to push. It’s almost a worst case scenario since the entire screen has to be updated each frame. Do side-scrollers have a similar problem?

    Also – if you draw the main screen and smaller screen the same way,
    rather than trying to copy and shrink the main one – maybe you can
    even multi-thread or multi-process it…

I agree. In the back of my head, the best case scenario is that my “prompter” and “prompter monitor” are just the same class with a different size. That way any efficiency improvements on one carry over to the other, and it seems like resizing a full 1920x1080 image is prohibitive. Unfortunately, when the screen size is different between the prompter and monitor, it is not trivial to guarantee that the text wrapping points are identical. One way to handle that is to just give the monitor the pre-wrapped text from the prompter. Maybe it will be a pixel or so off in width, but no big deal. However, just the wrapped text string isn’t enough: the font needs to be scaled perfectly so the same amount of text is visible vertically.

I would have to make the monitor slave off the line bitmaps from the prompter. Whenever the prompter is resized (almost never), it regenerates its bitmaps of each line of text. It then gives the monitor these bitmaps and the monitor resizes them for its own resolution. The monitor may resize occasionally (when the operator changes it from fullscreen to thumbnail and back), but a little hiccup there is tolerable so long as the two displays stay in sync. Maybe the slaved monitor can use the same positioning information from the prompter to cut down on the number of times the logic for selecting which bitmaps to display is called.

Once these two are separated, I don’t think it’d be impossible to split the monitor and prompter into separate processes, though it feels like I’m pretty far down a rabbit hole. Also, since the original prompter’s paint and the monitor’s resize operation are taking similar amounts of time, splitting them into two threads or processes might be my only option for significant improvements.

    One more thought on timing – draw calls are not necessarily blocking.
    This may be only for old X11 systems, but at least some calls:

    DC.DrawSomething()

    return long before the content is actually done rendering. That is,
    the Draw* call sends a message to the X server to draw something, then
    it returns, but the X server could then take who knows how long to
    actually draw it.

That makes a lot of sense. Is there a more appropriate way to time it? I assume it will only get more difficult to profile if I move to multiprocessing.

    Now, my bottleneck (and the last big time use I can think of) is the
    resizing operation of the 1920x1080 buffer bitmap to fit on the
    prompter monitor (https://github.com/superlou/flexcue/blob/master/flexcue/prompter_monitor.py#L19):

Sorry to double post. Rather than using Image to resize the prompter display to fit the monitor, the overall paint routine on the monitor is more than twice as fast with StretchBlit:

Resolution   Image#Scale   ClientDC#StretchBlit
320x240      1             1
1920x1080    10            4

So, now I’m getting nearly fast enough, though at 1920x1080, the screen still appears to stutter even though there should be enough time for all of the drawing. I may not be double-buffering properly. Is there a way to explicitly make sure paints are aligned with vsync that’s not platform dependent?

    Is there a way to explicitly make sure paints are aligned with vsync
    that's not platform dependent?

I _think_ the best way is to put your blitting of the buffer to the screen
in a Paint handler, and then when you want to do the blit, call .Update()
.Refresh() (maybe in the other order?)

In theory, the system will then issue the Paint event at the "right" time.
At least on OS-X.

-CHB


Thinking about it, you only need to move both screens up by the scroll
amount, e.g. 10 pixels on the high res screen: draw the new bottom 10
pixels for the high res screen, at which point you can start the blit to
the high res screen; then scale only the redrawn portion of the in-memory
bitmap, move your low res bitmap up by the height of the scaled update
area, add the new information to it, and then blit.

I would also consider always having 2 low res images in RAM, one for
"full screen" and one for "thumbnail" - this should make switching stutter
free.
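Modeled the same way, with a list of scanline rows standing in for the bitmap, the incremental scroll only renders the newly exposed rows (the names and the row-based model are illustrative, not code from the app):

```python
def scroll_up(visible, top_row, n, render_row):
    """Scroll the on-screen rows up by n: keep what is still visible,
    and render only the n new bottom rows via render_row(document_row)."""
    kept = visible[n:]
    new_top = top_row + n
    fresh = [render_row(new_top + len(kept) + i) for i in range(n)]
    return kept + fresh, new_top

# A 5-row "screen" over a document where row i renders as f"row{i}".
screen, top = [f"row{i}" for i in range(5)], 0
screen, top = scroll_up(screen, top, 2, lambda i: f"row{i}")
# only two rows were freshly rendered; the rest were reused
```

At a scroll speed of 10 pixels per frame on a 1080-row screen, this redraws under 1% of the pixels a full repaint would, which is the whole appeal of the scroll-and-patch approach.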

···

On 04/03/2017 01:36, Louis Simons wrote:

        Fetching an image from a graphics card is ALWAYS going to be a sucky
        operation.

        You really need to find a way to have a master image in memory, then
        update both displays from that master image.

    Double buffering may help here -- do your drawing to a buffer for
    the main screen. blit that to the screen and use it directly to make
    your smaller version.

That's a huge improvement. With a new prompter where I draw to a buffer
bitmap
(flexcue/flexcue/prompter_line_bitmap_buffered.py at master · superlou/flexcue · GitHub),
then blit it to the screen, or simply return it via get_bitmap, the
get_bitmap call is much improved:

Resolution Naive Small Bitmap Line Bitmap Line Bitmap Buffered
------------------------------------------------------------------------
320x240 2 2 2 0.1
------------------------------------------------------------------------
1920x1080 30 17 21 0.1

Now, my bottleneck (and the last big time use I can think of) is the
resizing operation of the 1920x1080 buffer bitmap to fit on the prompter
monitor
(flexcue/flexcue/prompter_monitor.py at master · superlou/flexcue · GitHub):

Resolution Prompter Monitor
-----------------------------
320x240 1
-----------------------------
1920x1080 10

So, it still takes 10 ms to draw using text line bitmaps, then another
10 ms to scale it down and paint to a thumbnail. It will probably take
even longer once I'm drawing the monitor full-screen. Maybe it's just
simply too much data to push. It's almost a worst case scenario since
the entire screen has to be updated each frame. Do side-scrollers have
a similar problem?

    Also -- if you draw the main screen and smaller screen the same way,
    rather than trying to copy and shrink the main one -- maybe you can
    even multi-thread or multi-process it...

I agree. In the back of my head, the best case scenario is that my
"prompter" and "prompter monitor" are just the same class with a
different size. That way any efficiency improvements on one carry over
to the other, and it seems like image resizing a full 1920x1080 is
prohibitive. Unfortunately, when the screen size is different between
the prompter and monitor, it is not trivial to guarantee that the text
wrapping points are identical. One way to handle that is to just give
the monitor the pre-wrapped text from the prompter. Maybe it will be a
pixel or so off in width, but no big deal. However, just the wrapped
text string isn't enough: the font needs to be scaled perfectly so the
same amount of text is visible vertically.

I would have to make the monitor slave off the line bitmaps from the
prompter. Whenever the prompter is resized (almost never), it
regenerates its bitmaps of each line of text. It then gives the
monitor these bitmaps, and the monitor resizes them for its own
resolution. The monitor may resize occasionally (when the operator
changes it from full-screen to thumbnail and back), but a little hiccup
there is tolerable so long as the two displays stay in sync. Maybe the
slaved monitor can use the same positioning information from the
prompter to cut down on the number of times the bitmap-selection logic
runs.
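The resize the slaved monitor applies to the prompter's line bitmaps reduces to a single scale factor; a sketch (function name hypothetical):

```python
def monitor_scale(prompter_size, monitor_size):
    """Factor mapping prompter line bitmaps onto the monitor.  Scaling
    width and height by the same width-derived factor preserves both
    the wrap points and the number of lines visible vertically."""
    prompter_w, _ = prompter_size
    monitor_w, _ = monitor_size
    return monitor_w / prompter_w

scale = monitor_scale((1920, 1080), (480, 270))  # 0.25
# A 1920x60 line bitmap from the prompter becomes 480x15 on the monitor.
```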

Once these two are separated, I don't think it'd be impossible to split
the monitor and prompter into separate processes, though it feels like
I'm pretty far down a rabbit hole. Also, since the original prompter's
paint and the monitor's resize operation are taking similar amounts of
time, splitting them into two threads or processes might be my only
option for significant improvements.

    One more thought on timing -- draw calls are not necessarily
    blocking. This may be only for old X11 systems, but at least some calls:

    DC.DrawSomething()

    return long before the content is actually done rendering. That is,
    the Draw* call sends a message to the X server to draw something,
    then returns, but the X server could then take who knows how long
    to actually draw it.

That makes a lot of sense. Is there a more appropriate way to time it?
I assume it will only get more difficult to profile if I move to
multiprocessing.


--
Steve (Gadget) Barnes

I think the best way is to put your blitting of the buffer to the screen in a paint handler, and then, when you want to do the blit, call .Update() and .Refresh() (maybe in the other order?).
In theory, the system will then issue the paint event at the “right” time. At least on OS X.

So, I tried a couple of varieties of Update and Refresh. It’s a little funny: the scrolling is noticeably smoother when I call self.Refresh() followed by self.Update(). If I switch the order, or only call Refresh(), it’s more jerky. My understanding was that Refresh causes the repaint event, so I’m not sure how the Update that follows improves things. On the other hand, when calling Refresh+Update, the rest of the application (controller’s text entry control) is sluggish, so maybe Update is doing something that gives the paint command priority?

I have noticed, though, that the timer callback can have some pretty large jitter (which is reasonable for a non-realtime framework). I can’t find any reference to changing the priority of the timer. Is there any way to replace the timer with something more deterministic?
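One workaround that sidesteps timer priority entirely: derive the scroll offset from the wall clock on every tick, so jitter affects when a frame lands but position error never accumulates. A sketch (class name hypothetical):

```python
import time

class TimeBasedScroller:
    """Compute scroll position from elapsed time rather than counting
    timer ticks, so timer jitter doesn't accumulate into drift."""

    def __init__(self, pixels_per_second):
        self.speed = pixels_per_second
        self.start = time.monotonic()

    def scroll_y(self, now=None):
        # A late tick draws the text where it should be *now*,
        # not one fixed increment behind.
        if now is None:
            now = time.monotonic()
        return int((now - self.start) * self.speed)
```

Each wx.Timer tick would then just repaint at scroller.scroll_y(); a delayed tick produces a slightly bigger jump rather than a lag that compounds.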

Thinking about it, you only need to move both screens up by the scroll
amount, e.g. 10 pixels on the high-res screen, and draw the new bottom
10 pixels for the high-res screen. At this point you can start the blit
to the high-res screen, then scale the redrawn portion of the in-memory
bitmap only, move your low-res bitmap up by the height of the scaled
update area, add the new information to it, and then blit.

That makes a lot of sense, though I’ll have to do some pencil-and-paper logic to work this out. My only concern is that since I’m eventually going to be overlaying static content on the scrolling text, I’ll need to hold another buffer of just the scrolling content before blitting the static content on top. That’s not hard, but my understanding is that although blitting is cheaper than most drawing commands, it’s still pushing a lot of pixels around. I’ll see if I can test it out, though.
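The bookkeeping for that incremental update is small; for the vertical case it might look like this (hypothetical names, regions as (y, height) pairs within the buffer):

```python
def scroll_regions(buffer_h, scroll_px):
    """Regions for a one-step scroll of a buffer_h-tall bitmap:
    (src_y, dst_y, copy_h) for the block shifted up in place, and
    (strip_y, strip_h) for the freshly drawn strip at the bottom."""
    copy_h = buffer_h - scroll_px
    return (scroll_px, 0, copy_h), (copy_h, scroll_px)

# A 1080-tall buffer scrolling 10 px: rows 10..1079 shift up to 0..1069,
# then only the bottom 10 rows need actual text drawing.
```

Only the bottom strip is redrawn (and, per the suggestion above, only that strip needs rescaling for the low-res copy).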

I would also consider always having 2 low res images in RAM, one for
“full screen” and one for “thumbnail” this should make switching stutter
free.

In my latest master (superlou/flexcue on GitHub), I added the logic to toggle between “thumbnail” (really, just split screen for now) and full screen. There’s a small hiccup at the transition, but it should happen pretty rarely in practice, so I’ll hold off on that optimization for now.