Include the following details (edit as applicable):
Issue category: Semantic Segmentation / Multiplayer / Real-time Mapping-Depth / ARDK Documentation / Unity Example Package / Sample App aka ‘AR Voyage’ / ARDK Virtual Studio Tools / Developer Tools / Networking / VPS
Device type & OS version: Android / iOS / Other Ex. iPhone 8+ on iOS 13
Host machine & OS version: Mac / Windows / Linux / Other Ex. Mac on Big Sur x.x
Issue Environment: Unity Remote / Unity Mock / On Device / Dev Portal
ARDK version: 2.1 and 2.4
I use the semantic segmentation feature to display far-away objects in the sky layer. This works fine, but I want to turn off the segmentation for specific game objects.
Has anyone implemented such a feature, or does anyone know whether it is possible or whether other solutions exist?
I think this could be achieved by doing the following:
- have a base camera with depth occlusion for objects close by or on the ground
- have an overlay camera that renders objects on the horizon or in the sky, and add the segmentation manager to this one
- finally, stack the overlay camera on the base camera (or the reverse, depending on what you want rendered on top)
This way you can use depth for objects in the user's vicinity and segmentation for large-scale objects. The far-away objects will still be occluded by objects close by, just at lower quality (segmented rather than depth-tested). I haven't tested this myself yet, but I think it can be done.
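For anyone reading along, the blend described above can be sketched as per-pixel pseudo-logic outside of Unity. All names and values here are made up for illustration; none of this is ARDK API:

```python
# Conceptual per-pixel compositing: depth occlusion for nearby virtual
# objects, a semantic "sky" mask for distant ones. Names and the calling
# convention are illustrative only, not ARDK API.

def composite_pixel(camera_color, real_depth,
                    near_color=None, near_depth=None,
                    far_color=None, sky_mask=False):
    """Return the final color for one screen pixel."""
    # Far objects (horizon/sky layer): drawn only where the segmentation
    # channel reports sky, so real-world geometry occludes them.
    color = far_color if (far_color is not None and sky_mask) else camera_color
    # Near objects: standard depth test against the real-world depth estimate.
    if near_color is not None and near_depth < real_depth:
        color = near_color
    return color
```

This is the "lower quality" trade-off from the post: the far object is cut out by a binary mask instead of a true depth comparison, while the near object still wins wherever its depth beats the estimated real-world depth.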
I would recommend looking into the solution Merijn provided and let us know how it goes. If it doesn’t work for you we can look into your question further.
Thanks for helping the community Merijn!
The problem I am facing is slightly different, as I need to turn segmentation off for certain game objects.
Another approach might be to invert the segmentation mask that gets returned, e.g. all pixels where sky is not present. Does anybody have experience with this problem?
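As a sketch of what inverting the mask would mean per pixel (the packed-bit layout here is an assumption for illustration, not the actual ARDK buffer format):

```python
# Sketch of inverting a semantic mask so something renders everywhere
# EXCEPT the sky. The bit index and packed layout are hypothetical,
# chosen only to illustrate the idea.

SKY_BIT = 1 << 0  # hypothetical bit for the "sky" channel

def invert_sky_mask(semantic_pixels):
    """Turn per-pixel channel bits into a 0/1 'not sky' mask."""
    return [0 if (p & SKY_BIT) else 1 for p in semantic_pixels]
```

The resulting mask would let a material show only over non-sky pixels, which is the inverse of the usual sky-layer effect.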
Sorry for the delay. In this case you would like to turn off segmentation for certain objects in the sky, correct? And these objects would be mixed in with other objects that need to keep the channel on, correct?
Would it also be possible to get a description of more or less what you’re attempting to achieve as an end result to get a better understanding?
I have objects that show the location of events. These objects are far away, which is why I decided to use semantic segmentation to display them in the sky layer. This works fine 95% of the time, but sometimes events are near the user and/or the user is at an elevated position, where occluding objects such as houses no longer block the field of vision.
In such a case, where it is essential to show the object not only in the sky but also in front of buildings/hills etc., I need to stop applying the sky segmentation and either add additional channels (e.g. building, ground, …) or turn it off completely, but only for these few objects and not all of them.
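One way to think about that per-object channel selection is as a bit mask each object carries (bit indices hypothetical, purely for illustration):

```python
# Sketch: per-object channel selection. Distant event markers keep only
# the sky channel; the near/elevated case adds building and ground
# channels so the marker can also appear in front of them. Bit indices
# are hypothetical, not the real ARDK channel layout.

SKY, BUILDING, GROUND = 1 << 0, 1 << 1, 1 << 2

def visible(pixel_bits, object_channels):
    """Object is drawn at this pixel if any of its channels is present."""
    return bool(pixel_bits & object_channels)

far_marker = SKY                       # normal case: sky only
near_marker = SKY | BUILDING | GROUND  # near/elevated case: wider set
```

"Turning it off completely" for an object would then just mean skipping the mask test for that object entirely rather than widening its channel set.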
So far I have tried Merijn's approach, creating two cameras with different segmentation managers and separate layers for the objects that should render near or far. Sadly this doesn't work due to the depth settings of the two cameras: each inevitably receives the output of the other and applies its segmentation to it. I have tried setting their depths to be equal and parenting them.
If anyone has experience with such a problem and is willing to share their solution, I would be very grateful!
I was wondering if you could send some screenshots of what you're currently seeing on your device, for a more accurate visual of what is happening vs. what should be happening. I'm still looking into this issue. If you've made any progress, please include any additional details so that I don't recommend things you've already tried.