Getting semantics at certain depth ranges

Hi folks!

I’m working on a photobooth-type application and would like to achieve the following:

  • Segment out the people standing in frame (that’s easy with the segmentation manager set to the “person” layer)
  • Ignore the people standing too far away to be relevant for the picture

Is there a way to limit the segmentation manager so it only returns pixels that are in the correct segmentation layer AND within a certain distance?

The alternative would be to get the segmentation layer info and check the distance afterwards, but I found this too costly performance-wise; instead I’d like to simply send less data to the “segmentator”.

If anyone has ideas on this, I’d love to hear them.

Hi Merijn, it doesn’t look like the semantic segmentation tools have that capability by themselves. I’m checking with the team to see if there are more tricks for modifying the segmentation buffer, and I’ll update you with anything I can find.

-Bill

A quick, naive workaround would be to sample the semantics and, for pixels in the desired semantic channel, sample the depth distance every few pixels. For performance this could maybe be multithreaded using jobs 🤔 (something like the sketch below).
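
To make the idea concrete, here’s a minimal sketch using Unity’s Job System. The buffer layouts, the person bit, and the 2.5 m cut-off are all assumptions: I’m assuming you’ve already copied the per-pixel semantic bitmask and the depth values (in metres) into NativeArrays, however your SDK exposes them.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Marks pixels that are both in the "person" channel AND closer than
// MaxDistance, sampling every Stride-th pixel to keep the cost down.
[BurstCompile]
struct PersonWithinRangeJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<uint>  Semantics;  // per-pixel channel bitmask (assumed layout)
    [ReadOnly] public NativeArray<float> Depth;      // per-pixel depth in metres (assumed layout)
    public uint  PersonBit;                          // bit of the "person" channel
    public float MaxDistance;                        // e.g. 2.5f metres
    public int   Stride;                             // sample every Nth pixel

    [WriteOnly] public NativeArray<byte> Mask;       // 1 = person close enough, 0 = ignore

    public void Execute(int index)
    {
        int pixel = index * Stride;
        if (pixel >= Semantics.Length) return;

        bool isPerson = (Semantics[pixel] & PersonBit) != 0;
        bool isClose  = Depth[pixel] <= MaxDistance;
        Mask[index] = (byte)(isPerson && isClose ? 1 : 0);
    }
}
```

You’d schedule it with something like `new PersonWithinRangeJob { ... }.Schedule(mask.Length, 64).Complete();` and read the mask back afterwards.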

I would be very interested in this! I’m having a similar problem, where I want to display objects in the “sky” semantic layer but also show the same objects with depth buffering.

Reading through the documentation again, there could be a way to achieve this. Using channel suppression in the depth manager, I could suppress all layers except the one I’m trying to single out. From there we can filter out the depth values above and below a given value, e.g. with a mask like the sketch below.
I’ll let you know if I manage to get this working.
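
If the suppression route works, the remaining step could look something like this. It’s just a sketch under the assumption that, after channel suppression, you can read the depth buffer back as a NativeArray<float> in metres; the helper name and distance values are made up.

```csharp
using Unity.Collections;

static class DepthRangeFilter
{
    // Thresholds a (post-suppression) depth buffer into a binary mask:
    // 1 = pixel within [minDistance, maxDistance], 0 = too close or too far.
    public static NativeArray<byte> MaskByDepthRange(
        NativeArray<float> suppressedDepth,  // depth buffer after channel suppression, in metres
        float minDistance,                   // e.g. 0.3f, drop noise right at the lens
        float maxDistance,                   // e.g. 2.5f, drop people standing too far away
        Allocator allocator)
    {
        var mask = new NativeArray<byte>(suppressedDepth.Length, allocator);
        for (int i = 0; i < suppressedDepth.Length; i++)
        {
            float d = suppressedDepth[i];
            mask[i] = (byte)(d >= minDistance && d <= maxDistance ? 1 : 0);
        }
        return mask;
    }
}
```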

I would appreciate that!