How to spawn giant objects in the sky/on the horizon with semantic segmentation?

I’m trying to recreate an effect I’ve seen in a few different ARDK showcases. For instance, this image is in the docs, but how to achieve the effect isn’t explained:

It seems like the character is placed a far distance away while the sky channel is suppressed; I get that part. But since plane detection doesn’t work over such a huge area, how does the app gain the context to know where the “ground” is to place the character on?


Another example, from the One Piece demo showcase video. Once again, I know you just suppress the “sky” channel. Since no ground plane is involved in this example, do you just spawn the objects X distance away, use depth occlusion for the foreground buildings, and suppress the sky channel in the background?

Lastly, what about this example? Is it using the “building” semantic channel to mask out the object, or is it just regular occlusion/meshing?

Sorry for the large chunk of questions, but while I can easily make my “sky” channel turn white, it’s quite confusing what the “best practices” are for making large-scale objects respond to the environment.

Hello Kang-An,

You are spot on. All of the examples used semantic segmentation, focusing on the building channel to occlude the 3D models, and pushed back the sky channel. Far-off scenes without a distinct ground (such as the robot example) likely used pre-pathed animation to create the desired effect, while still implementing the aforementioned features. They could also have been instantiated through a physical location-based marker or image to coordinate the user’s direction and distance relative to the AR experience (a minimal placement sketch follows the list below). Since AR is still early in its application and is being refined every day, best practices are constantly being created and changed. But as usual, the general industry standards still apply:

  • Keep your software up to date with the newest version.
  • Read the documentation to understand what the features are, as well as how they work.
  • Communicate and collaborate with other developers, and ask questions.
    • Community
    • Discord
    • Game Jams
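
For the placement part, here is a minimal sketch of the “spawn it far away” approach (all names here are hypothetical, and it assumes sky suppression and building occlusion are already configured on the AR camera through the ARDK managers):

```csharp
using UnityEngine;

// Hypothetical illustration: spawns a large prefab at a fixed distance
// along the camera's horizontal forward direction, rather than relying
// on plane detection. Sky suppression and building occlusion are assumed
// to be handled by the ARDK managers configured on the AR camera.
public class FarObjectSpawner : MonoBehaviour
{
    [SerializeField] private GameObject _giantPrefab;   // the large-scale model
    [SerializeField] private float _spawnDistance = 150f; // metres from the camera
    [SerializeField] private Camera _arCamera;          // the camera driven by ARDK

    public void Spawn()
    {
        // Flatten the camera's forward vector so the object sits on the
        // horizon instead of tilting up or down with the device.
        Vector3 forward = _arCamera.transform.forward;
        forward.y = 0f;
        forward.Normalize();

        Vector3 position = _arCamera.transform.position + forward * _spawnDistance;
        position.y = 0f; // assume the session origin is roughly at ground level

        Instantiate(_giantPrefab, position, Quaternion.LookRotation(-forward));
    }
}
```

Attach it to any scene object, assign the prefab and AR camera, and call Spawn() once tracking is stable.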

~Erik

Hi @Erik_Brown, thanks for the reply. I’m aware of how to suppress the sky channel (i.e. using the ARSemanticSegmentationManager and adding “Depth Suppression Channels”), but is there any similar ARDK functionality for using the building channel to occlude the 3D models, or do I need to implement my own shader solution?

EDIT: I found the semantic masking example here: Semantic Masking – Niantic Lightship

It works when I use the default Unity renderer, but not with URP. The “mask” texture is just black under URP, whereas in a different project on the built-in renderer the mask renders properly… almost as if CopyToAlignedTextureARGB32() isn’t working properly in URP? Not sure if I’m missing some other setup.
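
For context, the relevant part of my setup looks roughly like this (a sketch from memory rather than the sample verbatim; the manager access path and the “_SemanticMask” property name are assumptions):

```csharp
using Niantic.ARDK.Extensions;
using UnityEngine;

// Rough sketch of my setup (names from memory, check the sample code):
// every frame, copy the chosen semantic channel into a screen-aligned
// ARGB32 texture and hand it to the masking material.
public class SemanticMaskUpdater : MonoBehaviour
{
    [SerializeField] private ARSemanticSegmentationManager _semanticManager;
    [SerializeField] private Material _maskMaterial; // material using the masking shader
    [SerializeField] private int _channelIndex;      // e.g. the index of "building" or "sky"

    private Texture2D _maskTexture;

    private void Update()
    {
        if (_semanticManager == null || _maskMaterial == null)
            return;

        // This is the call that produces a black texture under URP in-editor:
        _semanticManager.SemanticBufferProcessor
            .CopyToAlignedTextureARGB32(ref _maskTexture, _channelIndex);

        // "_SemanticMask" stands in for whatever texture property the
        // masking shader actually samples.
        _maskMaterial.SetTexture("_SemanticMask", _maskTexture);
    }
}
```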

@Erik_Brown

Just to update: I’ve discovered that this problem only happens with URP in the editor. Once I build to device, everything works as expected. So to summarise, for this Semantic Masking sample:

  • URP, editor mock scene: not working
  • URP, device: working
  • Built-in renderer, editor mock scene: working
  • Built-in renderer, device: untested, but most likely working

I’ve noticed that while I previously mentioned it works on device, it actually only works on Android devices, not on iOS. The object just disappears when I enable “sky” segmentation on iOS.

EDIT: I’ve managed to get it working in mock mode in the editor (the layer mask was set wrongly on my ARDK Replacement Renderer object). But it still doesn’t work on iOS…

So to summarise: with URP, the exact same build works on Android and in editor mock mode, but it still doesn’t work on iOS…

I believe there’s something wrong with this approach of using a custom shader that writes to the segmentation canvas, as done here: Semantic Masking – Niantic Lightship. Strangely, it just doesn’t work with URP on iOS, and I’m still trying to figure out why after 1+ days…

Hello Kang-An,

Currently, the semantic segmentation example scene only works out of the box with the built-in render pipeline. It should still be possible with URP, although it requires a custom render pass to be set up and scripted. We are currently exploring a resolution to this issue.
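
Until then, a generic URP sketch of such a pass could look like the following. This is not ARDK sample code: the masking material and the pass event are placeholders you would adapt to the sample.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Generic URP sketch: a renderer feature that runs a masking material
// over the camera image after opaques. Add it to the URP Renderer asset
// and assign the material the Semantic Masking sample uses.
public class SemanticMaskFeature : ScriptableRendererFeature
{
    private class MaskPass : ScriptableRenderPass
    {
        private readonly Material _material;

        public MaskPass(Material material)
        {
            _material = material;
            renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (_material == null)
                return;

            CommandBuffer cmd = CommandBufferPool.Get("SemanticMask");

            RenderTargetIdentifier color = renderingData.cameraData.renderer.cameraColorTarget;
            var descriptor = renderingData.cameraData.cameraTargetDescriptor;
            descriptor.depthBufferBits = 0;

            // Blit through a temporary RT, since reading and writing the
            // same target in a single Blit is undefined on some platforms.
            int tempId = Shader.PropertyToID("_TempMaskRT");
            cmd.GetTemporaryRT(tempId, descriptor);
            cmd.Blit(color, tempId);
            cmd.Blit(tempId, color, _material);
            cmd.ReleaseTemporaryRT(tempId);

            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public Material maskMaterial;
    private MaskPass _pass;

    public override void Create()
    {
        _pass = new MaskPass(maskMaterial);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(_pass);
    }
}
```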

~Erik

Hey Erik, thanks for the response. Funnily enough, after more than a week of trial and error, I’ve noticed that enabling the post-processing checkbox on the main AR camera actually causes the shader to work properly in URP. There’s probably a better way to do this with render passes, but this works for now, haha.
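
For anyone who wants to apply the same workaround from code, this should be equivalent to ticking the checkbox (assuming URP; UniversalAdditionalCameraData is the component behind that inspector toggle):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Equivalent of ticking "Post Processing" on the AR camera in the
// inspector, applied at startup instead.
[RequireComponent(typeof(Camera))]
public class EnableCameraPostProcessing : MonoBehaviour
{
    private void Awake()
    {
        var cameraData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
        cameraData.renderPostProcessing = true;
    }
}
```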

Any update on this?

I was having the same issue and realised that, somehow, the sky_Camera under SegmentationCameras in the template had its culling mask set to “ARDK_Gameboard”. Switching this to “Everything” allowed the scene to work with the Exterior Mock scene in the editor under URP. I’m not sure whether I changed that myself or not.
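
In script form, the same fix looks like this (a hypothetical helper, in case anyone wants to guard against the misconfiguration at runtime):

```csharp
using UnityEngine;

// Hypothetical safeguard: force a segmentation camera's culling mask
// to "Everything" (~0 = all layers), matching the inspector fix above.
public static class SegmentationCameraFix
{
    public static void RenderEverything(Camera segmentationCamera)
    {
        segmentationCamera.cullingMask = ~0;
    }
}
```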

Hi,
Here is a tutorial; it’s in Spanish, but if you follow along you can replicate the effect.