It seems that the different subsets you can color are smoothed one by one in the 3D view. As a result, two parts that fit together exactly end up with a kind of groove between them, making them look like a bad fit. The fit would look better without the smoothing.
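The groove effect can be reproduced with a toy model. The sketch below is purely illustrative: it assumes simple Laplacian smoothing (the thread doesn't say what the viewer actually uses) and uses a 2D height profile as a stand-in for a mesh surface. Smoothing the two halves independently freezes the shared seam vertex (it's an endpoint of each part) while its neighbors sink, creating a crease at the boundary that joint smoothing avoids:

```python
import math

def laplacian_smooth(ys, iterations=10, lam=0.5):
    """Smooth an open polyline's heights; endpoints stay fixed."""
    ys = list(ys)
    for _ in range(iterations):
        ys = [ys[0]] + [
            (1 - lam) * ys[i] + lam * 0.5 * (ys[i - 1] + ys[i + 1])
            for i in range(1, len(ys) - 1)
        ] + [ys[-1]]
    return ys

# Heights of points along an arc; the "seam" between two parts is the middle.
n = 9
ys = [math.sin(math.pi * k / (n - 1)) for k in range(n)]
seam = n // 2

# Smoothing the two parts independently: the seam vertex is a fixed
# endpoint of each part, so it stays put while its neighbors sink.
left = laplacian_smooth(ys[:seam + 1])
right = laplacian_smooth(ys[seam:])
independent = left[:-1] + right

# Smoothing everything jointly: the seam vertex relaxes along with the rest.
joint = laplacian_smooth(ys)

def crease(ys, i):
    """How far point i sticks out above the average of its neighbors."""
    return ys[i] - 0.5 * (ys[i - 1] + ys[i + 1])

print(crease(independent, seam), crease(joint, seam))
```

The crease at the seam is several times larger when the parts are smoothed separately, which is exactly the "groove between parts that actually fit" being described.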
Welcome to our world! Fun note, actually: we do just the sort of remeshing you're describing for the global overview. We've gotten the overhead down to less than a minute per update, which is great for the global view, but nowhere near fast enough to do the individual meshes in realtime.
Thank you for your answer
Good eye! The two systems are separate. Each individual volume that you guys work on has its individual meshes stored separately, fully simplified. The overview keeps unsimplified data, which is simplified each time we remesh. The reason we can get away with this is that the overview is sparse while the individual volumes are dense. That is to say, in an individual volume users can click on absolutely any segment, and we have to be able to display the mesh for that segment quickly. We only update the overview with the data that has been determined by you guys (and a little magic from us) to be correct. That substantially restricts the amount of data that needs to be meshed (probably well under 1%), which is how we can keep these overview meshes practical and update them in near realtime.
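The two-tier strategy described above can be sketched in a few lines. Everything here is hypothetical (the class names, the stub meshes, the segment counts are all made up for illustration, not the project's actual code or API); the point is just the contrast between precomputing every mesh in a dense volume versus remeshing only the confirmed subset for the sparse overview:

```python
class DenseVolume:
    """An individual volume: every segment's simplified mesh is
    precomputed and stored, so any segment a player clicks on can be
    displayed immediately."""
    def __init__(self, segment_ids):
        # Pretend each mesh was precomputed offline; store a stub string.
        self.meshes = {sid: f"simplified-mesh-{sid}" for sid in segment_ids}

    def mesh_for(self, segment_id):
        return self.meshes[segment_id]  # O(1) lookup, no meshing at view time


class Overview:
    """The global overview: keeps unsimplified data and remeshes only
    the segments that have been confirmed as correct."""
    def __init__(self):
        self.confirmed = set()
        self.remesh_calls = 0

    def confirm(self, segment_id):
        self.confirmed.add(segment_id)

    def update(self):
        # Remesh just the confirmed set -- a tiny fraction of all segments.
        for _sid in self.confirmed:
            self.remesh_calls += 1  # stand-in for the real (expensive) meshing


# Toy numbers: 100,000 segments in the volume, 500 confirmed so far.
volume = DenseVolume(range(100_000))
overview = Overview()
for sid in range(500):  # only confirmed work reaches the overview
    overview.confirm(sid)
overview.update()
```

With these made-up numbers the overview remeshes 0.5% of the segments, which is why a per-update cost that would be hopeless for the dense case stays affordable for the overview.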
I think the smoothing in the 3D view makes it quite hard to tell whether the edge of a branch is jagged or not.
Wow, doesn’t sound like inexperience to me. To be honest, the biggest impediment to what you’re describing is development resources. I’d imagine that what you just described would take someone far more experienced with OpenGL than I am several months to get right. And that’s assuming it’s possible/practical at all (I couldn’t say one way or the other at first glance).