Remove the smoothing algorithm in the 3D view to increase readability

It seems that the different subsets you can color are smoothed one by one in the 3D view. This means that two parts that fit exactly will have some kind of groove between them, making them look like they are a bad fit. The fit would look better without the smoothing.


There would be no groove if there were no smoothing, or if the meshes of all the subsets were merged into a single mesh before applying the smoothing.
However, the latter solution would be more computationally intensive, because the mesh would have to be recomputed at each coloring/uncoloring.
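Just to make the idea concrete, here is a minimal sketch of the merge-then-smooth approach. The flat mesh format and the uniform Laplacian pass are assumptions on my part, not necessarily what the site actually uses:

```js
// Assumed mesh format: { positions: [[x,y,z], ...], triangles: [[i,j,k], ...] }

// Merge two meshes, welding coincident vertices (within eps) so the seam
// becomes interior geometry before any smoothing runs.
function mergeMeshes(a, b, eps = 1e-6) {
  const positions = a.positions.map(p => p.slice());
  const remap = b.positions.map(p => {
    // Naive O(n^2) weld; a spatial hash would be used for real data.
    let idx = positions.findIndex(q =>
      Math.abs(q[0] - p[0]) < eps &&
      Math.abs(q[1] - p[1]) < eps &&
      Math.abs(q[2] - p[2]) < eps);
    if (idx === -1) { idx = positions.length; positions.push(p.slice()); }
    return idx;
  });
  const triangles = a.triangles.concat(b.triangles.map(t => t.map(i => remap[i])));
  return { positions, triangles };
}

// One uniform Laplacian pass: nudge each vertex toward the average of its
// neighbors. Welded seam vertices now have neighbors on both sides of the
// join, so the smoothing no longer carves a groove along it. (Shared edges
// are counted once per triangle, which only reweights the average.)
function laplacianSmooth(mesh, lambda = 0.5) {
  const n = mesh.positions.length;
  const sums = Array.from({ length: n }, () => [0, 0, 0]);
  const counts = new Array(n).fill(0);
  for (const [i, j, k] of mesh.triangles) {
    for (const [u, v] of [[i, j], [j, k], [k, i]]) {
      for (let d = 0; d < 3; d++) {
        sums[u][d] += mesh.positions[v][d];
        sums[v][d] += mesh.positions[u][d];
      }
      counts[u]++; counts[v]++;
    }
  }
  mesh.positions = mesh.positions.map((p, i) =>
    counts[i] === 0 ? p : p.map((x, d) => x + lambda * (sums[i][d] / counts[i] - x)));
  return mesh;
}
```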

Welcome to our world! Actually, fun note: we do just the sort of remeshing you are describing for the global overview. We’ve gotten the overhead down to less than a minute per update, which is great for the global view, but nowhere near fast enough for the individual meshes in real time.


In terms of eliminating the smoothing, the issue is that the unsimplified meshes are huge! Without simplification, we’d be roughly tripling the data that we send over the wire (maybe more, I don’t have the exact numbers in front of me right now). That would slow down the whole system, especially the responsiveness of the client, and take up a lot more space on the servers.

Unfortunately I think you may have to learn to live with this little ugliness, unless we can come up with some sort of major breakthrough in how we do our meshes. Great observation though.

Thank you for your answer :slight_smile:

I understand it is not a very important issue ^^ I just thought it would be something easy to do, but apparently not.
Just out of curiosity then: you wrote it would take more space on the servers. But don't you store the unsimplified data on the server anyway, in order to do the remeshing on the global overview?

Good eye! The two systems are separate. Each individual volume that you guys work on has its individual meshes stored separately, fully simplified. The overview has unsimplified data which is simplified each time we remesh. The reason we can get away with this is that the overview is sparse while the individual volumes are dense. That is to say, in an individual volume users can click on absolutely any segment, and we have to be able to display the mesh for that segment quickly. We only update the overview with the data that has been determined by you guys (and a little magic from us) to be correct. That substantially restricts the amount of data which needs to be meshed (probably way less than 1%), which is how we can keep these overview meshes practical and update them in near real time.
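In pseudocode terms, the overview update loop looks something like this (the names are purely illustrative, not our actual code):

```js
// Illustrative only: chunks touched by newly confirmed segments get
// remeshed from the unsimplified source data, so the job stays small
// even as the full dataset grows.
function updateOverview(overview, newlyConfirmedSegments) {
  const dirtyChunks = new Set();
  for (const seg of newlyConfirmedSegments) {
    for (const chunkId of seg.chunkIds) dirtyChunks.add(chunkId);
  }
  for (const chunkId of dirtyChunks) {
    overview.remeshChunk(chunkId); // simplify-on-demand, as described above
  }
}
```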


I love these types of discussions! If anyone else has technical questions about how we make this happen, feel free to ask. I sometimes wonder at the fact that we’ve made this possible, given the challenges involved!
Forgive my inexperience, but I don't understand why this is so computationally intensive. Given two meshes, A and B, the goal is to join them into a single mesh, removing the boundaries between them. The hard part is to identify which vertices form those boundary surfaces. But this is basically just a proximity test, which can be accomplished very efficiently with collision detection methods. The intersection of their bounding boxes would fully contain the region of contact between the two meshes, while immediately eliminating most other vertices of both. A precomputed octree or BSP would further reduce the number of vertices to test. Then measure the distance between each vertex of mesh A and the nearest faces of mesh B, along their normals. When the smallest such distance is below a certain heuristic threshold, that vertex would be flagged as being on the boundary surface. Repeat the same process for the vertices of B and the faces of A. The polygon count looks pretty low, so I can't imagine there would be much more than a few dozen vertices on the boundary surfaces, and a few hundred within the search space. Game engines do this sort of thing at 60 FPS.
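Here's a rough sketch of that flagging step, just to show what I mean. The flat mesh format is an assumption on my part, the octree/BSP stage is omitted, and point-to-triangle distance is approximated by point-to-plane distance:

```js
// Assumed mesh format: { positions: [[x,y,z], ...], triangles: [[i,j,k], ...] }

function aabb(points) {
  const lo = [Infinity, Infinity, Infinity];
  const hi = [-Infinity, -Infinity, -Infinity];
  for (const p of points) for (let d = 0; d < 3; d++) {
    lo[d] = Math.min(lo[d], p[d]);
    hi[d] = Math.max(hi[d], p[d]);
  }
  return { lo, hi };
}

// Is point p inside box, padded outward by pad on every side?
function inBox(p, box, pad) {
  for (let d = 0; d < 3; d++)
    if (p[d] < box.lo[d] - pad || p[d] > box.hi[d] + pad) return false;
  return true;
}

const sub = (a, b) => a.map((x, i) => x - b[i]);
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Flag vertices of meshA lying within `threshold` of some face plane of
// meshB. The bounding-box test prunes most vertices up front; an octree
// over meshB's triangles would prune the inner loop the same way.
function boundaryVertices(meshA, meshB, threshold) {
  const region = aabb(meshB.positions);
  const flagged = new Set();
  meshA.positions.forEach((p, vi) => {
    if (!inBox(p, region, threshold)) return; // cheap spatial prune
    for (const [i, j, k] of meshB.triangles) {
      const a = meshB.positions[i];
      const n = cross(sub(meshB.positions[j], a), sub(meshB.positions[k], a));
      const len = Math.hypot(n[0], n[1], n[2]);
      if (len === 0) continue; // skip degenerate triangles
      const dist = Math.abs(dot(sub(p, a), n)) / len; // point-to-plane
      if (dist < threshold) { flagged.add(vi); break; }
    }
  });
  return flagged; // run again with A and B swapped for the other side
}
```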

Alternatively, since you've got the raw volumetric data, it might be simpler to use 2-D methods, slice by slice, to build up a list of voxels where the two regions meet. Slice-wise bounding boxes would constrain the search area, and quadtrees would greatly reduce the number of individual voxels to test. Or perhaps simple edge-following algorithms would be even faster. Then map this back to 3-D, ignoring any vertices outside those boundary voxels. Octrees would again be useful here.
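As a sketch of the slice-wise idea (the labels[z][y][x] layout is just an assumption, and the quadtree/bounding-box pruning is left out to show only the core test):

```js
// Given labeled volumetric data, record voxels where region labels a and b
// touch within each slice. This is the brute-force core that the slice-wise
// bounding boxes and quadtrees would accelerate.
function contactVoxels(labels, a, b) {
  const hits = [];
  const depth = labels.length;
  const height = labels[0].length;
  const width = labels[0][0].length;
  for (let z = 0; z < depth; z++)
    for (let y = 0; y < height; y++)
      for (let x = 0; x < width; x++) {
        if (labels[z][y][x] !== a) continue;
        // 4-neighborhood within the slice; z neighbors could be added too.
        for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height &&
              labels[z][ny][nx] === b) {
            hits.push([x, y, z]);
            break;
          }
        }
      }
  return hits; // map back to 3-D: keep only vertices inside these voxels
}
```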

Then once you have this list of vertices which form the surfaces dividing the two meshes, discard those boundary vertices, then identify the "outer rim" of each opening as the set of vertices which previously shared an edge with a discarded boundary vertex. Match these up to the closest ones on the other side, then form new faces between them. Voilà, union.
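Something like this, roughly (same assumed mesh format as above; actually triangulating between the matched rims is glossed over in the final comment):

```js
// Find the "rim": vertices that shared an edge with a dropped boundary
// vertex but were not dropped themselves.
function rimVertices(mesh, dropped) {
  const rim = new Set();
  for (const [i, j, k] of mesh.triangles) {
    for (const [u, v] of [[i, j], [j, k], [k, i]]) {
      if (dropped.has(u) && !dropped.has(v)) rim.add(v);
      if (dropped.has(v) && !dropped.has(u)) rim.add(u);
    }
  }
  return [...rim];
}

// Pair each rim vertex of A with its nearest rim vertex of B (brute force;
// an octree would speed this up too).
function matchRims(meshA, rimA, meshB, rimB) {
  return rimA.map(ai => {
    let best = null, bestDist = Infinity;
    for (const bi of rimB) {
      const d = Math.hypot(
        ...meshA.positions[ai].map((x, k) => x - meshB.positions[bi][k]));
      if (d < bestDist) { bestDist = d; best = bi; }
    }
    return [ai, best];
  });
}

// To finish the union: walk adjacent pairs around each rim loop, emit two
// triangles per quad to bridge the gap, then rebuild the index buffer
// without the dropped vertices. Ordering the loops consistently is the
// fiddly part this sketch skips.
```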

Easier said than done, perhaps. I've never implemented anything remotely similar to this, and I've had very little experience working with 3-D in general. So I don't mean to be presumptuous; I'm sure there must be a good reason why it can't be done as simply as I described. I'm just brainstorming...

I think the smoothing in the 3D view is quite confusing when it comes to recognizing whether the edge of a branch is jagged or not.

Wow, doesn’t sound like inexperience to me. To be honest, the biggest impediment to what you are describing is development resources. I’d imagine that what you just described would take someone far more experienced with OpenGL than I am several months to get right. And that’s assuming that what you describe is possible/practical (I couldn’t say one way or the other at first glance).


Remember that all of this is being done in JavaScript and WebGL, not C++ and OpenGL. We also want it to run quickly on as many computers as possible. While this may be something that games do at 60 FPS, it might not be so fast on low-end machines. Currently the whole pipeline displays simple static meshes with one color and simple shading: no vertex or fragment shaders whatsoever.

Thanks for the brainstorming though.  We always love thinking outside the box!