Seeking Solutions for 3D Asset Display Issues Using Unity and VPS

Include the following details (edit as applicable):

  • Issue category: VPS / Unity
  • Device type & OS version: Android
  • Host machine & OS version: Windows
  • Issue Environment: On Device
  • ARDK version: 3.5.0
  • Unity version: 2021.3.28f1

Description of the issue:

Hello, I am working on a digital art gallery in the streets of Paris for the 2024 Olympic Games, but I am encountering several problems: the optimized 3D assets take a long time to load and sometimes do not appear at all.
I am working with VPS technology in Unity, using a basic architecture consisting of an AR Session, an XR Origin, and the AR Location Manager at public locations, but I still run into issues at times.

  1. Does the location mesh detect the user, or is the user detected from a certain distance based on the Niantic scan? Can an existing Niantic scan be “reinforced” or consolidated with your own scan?

  2. What can cause 3D assets to disappear 30 seconds after being displayed, or to change size after being displayed? Do you have any ideas for a solution?

  3. Is it possible to improve the anchoring of a 3D asset to prevent it from drifting? In the AR Location script I don’t include the mesh in the build; should I, and would that improve stability?

Thank you for your help

Hi Adrien,

Your project sounds exciting! Let’s get your problems solved so your event can go off without a hitch:

For your 3D asset loading issues, I would recommend loading the assets asynchronously (in the background) at app startup while displaying some kind of visual feedback, such as a loading screen, so users don’t think your app is frozen or unresponsive.
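As a rough sketch of that pattern, here is a startup preloader using a coroutine and `Resources.LoadAsync`. The asset paths and the `loadingScreen` object are placeholders for your own project; Addressables would follow the same shape via `Addressables.LoadAssetAsync`.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Sketch: load gallery assets asynchronously at app startup behind a
// loading screen, so the first localization doesn't stall on asset I/O.
public class GalleryPreloader : MonoBehaviour
{
    [SerializeField] private GameObject loadingScreen; // simple canvas shown while loading
    [SerializeField] private string[] assetPaths;      // e.g. "Art/Sculpture01" under a Resources folder (placeholder)

    private readonly List<GameObject> _loaded = new List<GameObject>();

    private IEnumerator Start()
    {
        loadingScreen.SetActive(true);

        foreach (var path in assetPaths)
        {
            ResourceRequest request = Resources.LoadAsync<GameObject>(path);
            yield return request;                      // frames keep rendering while we wait
            if (request.asset != null)
                _loaded.Add((GameObject)request.asset);
            else
                Debug.LogWarning($"Asset not found: {path}");
        }

        loadingScreen.SetActive(false);                // assets are ready to instantiate on localization
    }
}
```

Keeping the load in a coroutine (rather than a blocking `Resources.Load` in `Awake`) is what keeps the loading screen animating instead of the app appearing frozen.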

  1. Scans taken of points of interest (POIs) train our algorithm to “recognize” them. Instead of having Lightship guess against our gigantic catalog of data, you as the developer tell Lightship which points of interest your app will encounter, so it can monitor the camera feed for the datapoints it has for those particular locations. This learning process is iterative: every scan builds on the last and teaches the algorithm more about what it should be looking for. In other words, the scans you take and add to a location strengthen the algorithm’s ability to recognize that location or POI, especially if you take them in different lighting, weather, etc. than previous scans. More on our Visual Positioning System (VPS):
  2. Placed content can disappear or change size or orientation for many reasons, but the most common are that the content is improperly anchored, that occlusion or render prioritization is interfering, or that Lightship has insufficient datapoints to recognize the POI accurately (for example, Lightship has only seen the POI in excellent lighting on a sunny day, but you’re asking it to recognize the same POI at dusk on a cloudy night with light drizzle). More on content placement:
  3. The mesh you download from the Geospatial Browser should be viewed as a visual aid we provide so developers can place content accurately, rather than as a requirement for functionality, so you don’t need to include it in your build. To manage issues with content drift, I would recommend reviewing AR Foundation’s anchor guides as well as our experimental Location Drift Mitigation feature.
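To illustrate point 3, here is a minimal anchoring sketch using AR Foundation’s `ARAnchor` component, which asks the AR subsystem to keep the object pinned to a tracked real-world point rather than a fixed world-space coordinate. `artPrefab` and the placement pose are placeholders for your own content.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: anchor placed artwork so it is corrected along with tracking
// updates instead of drifting in world space.
public class AnchoredPlacement : MonoBehaviour
{
    [SerializeField] private GameObject artPrefab; // your artwork (placeholder)

    public GameObject PlaceAnchored(Pose pose)
    {
        GameObject art = Instantiate(artPrefab, pose.position, pose.rotation);

        // Adding an ARAnchor component registers this transform with the
        // anchor subsystem, which keeps nudging it to match tracking.
        art.AddComponent<ARAnchor>();
        return art;
    }
}
```

Parenting content under the tracked AR Location’s transform (so it inherits localization corrections) is an alternative that achieves a similar effect for VPS-placed content.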

Kind regards,
Maverick L.

Hi Maverick,

Thank you for your detailed response! I’ve taken your suggestions into account.

However, regarding the experimental Location Drift Mitigation feature, I’m still a bit uncertain about how the Temporal Fusion feature works within Lightship ARDK. I understand it involves averaging localization results over a certain period to provide more stable localization, but I’m not entirely sure how to implement this correctly. Does it work just by enabling Temporal Fusion and Continuous Localization? Or do I need to add AR locations from the Geospatial Browser or manually set them up in some other way?

Thanks again,


Hi Adrien,

You’re very welcome! Temporal Fusion is a setting that takes note of where Lightship has determined you are over the last five “pings,” or location collections, and averages them. It has nothing to do with having a certain number of unique points of interest (AR locations) in your app. Other than enabling Temporal Fusion and Continuous Localization, there is nothing else you need to do to reap the benefits.
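For reference, the two settings can also be toggled from code. This sketch uses the property names as I understand them from the ARDK 3.x persistent anchor API; please verify them against the docs for your ARDK version, since both can simply be checked on the manager component in the Inspector instead.

```csharp
using UnityEngine;
using Niantic.Lightship.AR.PersistentAnchors;

// Sketch: enable Continuous Localization and Temporal Fusion at startup.
// Property and namespace names are assumptions based on ARDK 3.x docs --
// confirm against your installed ARDK version.
public class DriftMitigationSetup : MonoBehaviour
{
    [SerializeField] private ARPersistentAnchorManager anchorManager;

    private void Awake()
    {
        anchorManager.ContinuousLocalizationEnabled = true; // keep re-localizing after the first success
        anchorManager.TemporalFusionEnabled = true;         // average recent localization results for stability
    }
}
```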


This topic was automatically closed 2 hours after the last reply. New replies are no longer allowed.