AutoSizeColumn creates a renderer for each and every cell to compute renderer->GetBestWidth, which may involve creating a DC to measure the text. Not surprising that it’s slow.
Now, you could reimplement the relevant parts of wxGrid::AutoSizeColOrRow in Python, skipping cell values that you deem unlikely to be the widest. It basically boils down to creating the GridCellRenderer and then calling renderer.GetBestWidth. But I have a better idea:
Prior to filling the grid, sort your cell values according to likely width. Insert the top 1% into the grid. Run AutoSizeColumn just like before. Then replace the 1% with your full dataset.
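The pre-sort step above can be sketched in plain Python. This is a minimal illustration, assuming string cell values and using string length as a cheap proxy for rendered width; the helper name `select_widest` is hypothetical, not part of the wxGrid API.

```python
def select_widest(column_values, fraction=0.01):
    """Return the top `fraction` of values most likely to be the widest.

    String length is only a proxy for pixel width; with a proportional
    font the true widest cell is almost always among the longest strings.
    """
    ranked = sorted(column_values, key=len, reverse=True)
    top_n = max(1, int(len(ranked) * fraction))  # keep at least one value
    return ranked[:top_n]

values = ["a", "bb", "a rather long cell value", "ccc"]
print(select_widest(values, fraction=0.25))  # ['a rather long cell value']
```

You would insert only these values into the grid, call AutoSizeColumn, then swap in the full dataset.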
Very good idea. The sort execution time is negligible, and so is the AutoSizeColumns call.
I do the sort directly on the data, outside the grid and GridTable, and then simply run AutoSizeColumns using the labels and the longest cell of each column (like a single-row table). That works perfectly and costs less than 0.4 s of wall time.
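The single-row variant can be sketched like this. It is a simplified illustration, assuming the data is a list of string tuples and again using string length as a width proxy; `widest_row` is a hypothetical helper, and only the resulting one-row dataset (plus the column labels) would be fed to the grid before AutoSizeColumns.

```python
def widest_row(rows):
    """For each column, pick the longest cell, forming a one-row table.

    zip(*rows) transposes the row-major data into columns, so each
    max() call scans one column of the original dataset.
    """
    return [max(col, key=len) for col in zip(*rows)]

rows = [("id", "name"), ("1", "Alice"), ("2", "a much longer name")]
print(widest_row(rows))  # ['id', 'a much longer name']
```

AutoSizeColumns then only has to measure one cell per column instead of the whole table, which is why the cost drops to fractions of a second.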