Semantic Segmentation block list not working as expected

  • Issue category: Semantic Segmentation
  • Device type & OS version: Android Google Pixel 7
  • Issue environment: On Device
  • ARDK version: 3.13 & 3.14
  • Unity version: 2022.3.53

Bug reproduction steps:

  • Set up meshing as described in the documentation.
  • Set up semantic segmentation as described in the documentation.
  • Add the ‘LightshipMeshingExtension’ component to the meshing game object.
  • Add ‘ground’ to the block list of ‘LightshipMeshingExtension’.
  • Optionally, add a semantic querying script for debugging, as described here: How to Query Semantics and Highlight Semantic Channels | Niantic Spatial Platform (for my purposes I simplified it to just print the selected semantic channel to the console).
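For context, step 4 can in principle also be done from code rather than the Inspector. This is only an illustrative sketch: the property names below are assumptions, not the verified ARDK 3.x API (in my project I configure the block list in the Inspector; check the current `LightshipMeshingExtension` reference for the real fields):

```csharp
using System.Collections.Generic;
using UnityEngine;
using Niantic.Lightship.AR.Meshing; // assumed namespace for LightshipMeshingExtension

public class BlockListSetup : MonoBehaviour
{
    void Start()
    {
        var meshing = GetComponent<LightshipMeshingExtension>();

        // NOTE: hypothetical property names, shown only to make the setup
        // concrete -- consult the ARDK 3.x API reference for the real ones.
        meshing.IsMeshFilteringEnabled = true;
        meshing.IsFilteringBlockListEnabled = true;
        meshing.BlockList = new List<string> { "ground" };
    }
}
```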

Bug:
Multiple patches of ground get a generated mesh, even though those parts should have been filtered out. Debugging with the semantic querying script shows that they are in fact tagged with the ‘ground’ semantic channel, so this is not a case of the floor being misrecognised. These parts of the generated mesh are never cleaned up either, so you’re stuck with an incorrectly generated mesh.
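The simplified debug script I mentioned is essentially this: every frame, query the semantic channels predicted under the screen centre and log them. A minimal sketch, assuming ARDK 3.x’s `ARSemanticSegmentationManager.GetChannelNamesAt` (verify the exact signature against the current API reference):

```csharp
using UnityEngine;
using Niantic.Lightship.AR.Semantics; // assumed namespace for ARSemanticSegmentationManager

public class SemanticDebugLogger : MonoBehaviour
{
    [SerializeField] private ARSemanticSegmentationManager _semantics;

    void Update()
    {
        // Query the semantic channels at the screen centre and print them.
        var channels = _semantics.GetChannelNamesAt(Screen.width / 2, Screen.height / 2);
        if (channels != null && channels.Count > 0)
            Debug.Log("Semantic channels: " + string.Join(", ", channels));
    }
}
```

Pointing the screen centre at the incorrectly meshed patches is how I confirmed they are tagged ‘ground’.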

Description:
All I want to achieve is to mesh everything in an indoor environment except the floor. Using any combination of ‘artificial_ground’ and ‘natural_ground’, or all three ground types, leads to the same result. I have also tried the allow list approach, but that doesn’t work at all in an indoor environment: 95% of the indoor scene is meshed without any semantic channel assigned to it, including walls, tables, chairs, trash bins, computers and more. The occasional ‘experimental_loungeable’ is found on couches, but that’s about it for semantic channels I could allow-list.

Does anyone have a suggestion for how to work around this issue? Or is this a bug we can expect to be fixed sometime soon?

Best regards
Julian

Hi Julian,

Thank you for taking the time to submit this detailed report about your experience with the Semantic Segmentation block list. Have you tested the semantics-related samples on your device, and do they function as expected? Could you also share a minimal reproduction project so we can investigate further with your particular setup?

I look forward to hearing from you.

Kind regards,
Maverick L.

Sorry for the late reply. We are currently quite busy at work with some deadlines, but I will get to creating the minimal reproduction project for you as soon as possible.

In the interim, I have another similar but not directly related question that you might have an answer for. Another ‘problem’ we’re experiencing is that the mesh is offset by quite a lot compared to the real position of the objects it is trying to mesh. We noticed this with a few different measurements:

  • Comparing the distance between the meshed floor and the plane-detected floor often showed gaps of up to two real-world meters (on iOS devices without LiDAR, that is; on Android devices it’s better, but still usually around 1 meter).
  • Raycasting from the camera to the mesh to measure the distance also matches the offset we observed against plane detection.
  • If we mesh something like a desk from the front and then walk around to its side, the scanned virtual mesh of the desk can sit quite a long way behind the real desk.
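The raycast measurement can be sketched roughly as follows. This assumes the generated mesh carries a collider (e.g. a MeshCollider on the meshing output prefab) so a physics raycast can hit it; the comparison against the plane-detection distance is done separately with an AR raycast against detected planes:

```csharp
using UnityEngine;

public class MeshOffsetProbe : MonoBehaviour
{
    void Update()
    {
        // Cast a ray from the camera centre straight ahead into the scene.
        var cam = Camera.main;
        var ray = new Ray(cam.transform.position, cam.transform.forward);

        // Distance to the generated mesh (requires a collider on the mesh,
        // e.g. a MeshCollider added by the meshing prefab).
        if (Physics.Raycast(ray, out RaycastHit meshHit))
        {
            Debug.Log($"Distance to generated mesh: {meshHit.distance:F2} m");
            // Compare this value against the distance reported by plane
            // detection at the same screen point to estimate the mesh offset.
        }
    }
}
```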

Is this offset expected behaviour with your way of meshing, or is it another bug we’re experiencing?

Best regards
Julian

Hi Julian,

I sincerely apologize for the delay in response.

Meshing becomes more accurate the more data it has to work with. Try moving around the object(s) you would like to mesh and see if the mesh improves.

Let me know how that goes!

Kind regards,

Maverick L.