Does VPS work with symmetric scans?

I’m not entirely certain how the VPS system determines the user’s position, but I’m wondering whether it can work with scans that have a high degree of symmetry. By “symmetry” here, I mean when the scanned location appears identical from various viewpoints. For instance, when observing the scan presented in this Niantic Labs video: https://youtu.be/iRM_VVJUR84, I’m struggling to understand how VPS could determine the user’s position.

Actually, I was just trying to debug an issue with this. At certain symmetric locations, the VPS anchor gets restored on the wrong side of the object being scanned, resulting in all the content being massively off.

Like you, I wonder if there’s something that can be done about this.

@Julian_Anderson In my case, I didn’t see any anchor showing up. Perhaps if it really were a symmetry issue for me, as you described, the anchor would still appear, just not in the correct location. While doing some research, I discovered that Lightship also registers your phone’s orientation during a scan.
Source: Lightship VPS: Engineering the World’s Most Dynamic 3D AR Map – Niantic Labs
Therefore, I believe that if the symmetry is not radial (as in the case of a streetlight), the current orientation of the phone could potentially be utilized by Lightship to determine the correct position among the several options.
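To illustrate what I mean, here is a toy sketch in Python (purely hypothetical on my part, not Lightship’s actual logic): when the geometry alone cannot distinguish between candidate poses, the heading recorded with the scan could be compared against the phone’s current heading to break the tie.

```python
# Hypothetical illustration, not Lightship's actual logic: use the compass
# heading to choose between candidate poses that look geometrically identical.

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def pick_candidate(candidates, current_heading):
    """candidates: list of (label, expected_heading_deg) pairs."""
    return min(candidates, key=lambda c: angle_diff(c[1], current_heading))

# Two candidates that look the same but face opposite directions.
candidates = [("front", 10.0), ("back", 190.0)]
print(pick_candidate(candidates, current_heading=25.0))  # -> ('front', 10.0)
```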

But this is just a theory, and I would like to know more about how it works and whether symmetry should be avoided.

So yours was just failing?

I think having the context around the symmetric object also helps to determine the side, but that context isn’t always available, nor is it always asymmetric itself. I wonder how one would avoid symmetry to begin with. I think this is best handled on the Niantic server by rejecting those spots. But the spot I’m having issues with is marked with a Good localization score.

Hey,
So I can’t really tell you the minute details of how localization works; it is a bit of a black box by design. But I can give you a high-level rundown of the workflow that hopefully clarifies why this might happen. Keep in mind this workflow is set to change in version 3.0, currently in beta, so you might want to keep an eye out for that too.
On ARDK 2.5, when you try to localize to a VPS wayspot, the wayspot may have multiple scans with different “origin points”. Internally we align them so that if you localize to any of them we can still tell your position, but, as in @Julian_Anderson’s situation, this can cause these kinds of edge cases. Now, the description of a “symmetric object” might be confusing: there are more considerations that go into this alignment than just the “object of interest” being scanned, so it’s unlikely that object symmetry is the problem. What really goes against the scanning best practices, and might cause this sort of alignment problem, is scanning things that are uniformly patterned, so think less of the clock in the video and more of patterned tiles on walls/floors.
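To make that alignment idea a bit more concrete, here is a toy sketch (illustrative only, not our actual pipeline or data structures) of how one misaligned scan origin can restore the device pose, and everything anchored relative to it, on the wrong side:

```python
# Illustrative only: each scan has its own origin, and an alignment transform
# maps that scan's frame into the wayspot's shared frame. Localizing against
# any scan then yields a pose in the same frame -- unless one alignment is
# off, e.g. flipped 180 degrees, which flips content to the opposite side.
import numpy as np

def pose2d(x, z, yaw_deg):
    """Homogeneous 2D transform (top-down view): rotation plus translation."""
    r = np.radians(yaw_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, x],
                     [s,  c, z],
                     [0,  0, 1]])

# Alignment transforms: scan frame -> shared wayspot frame.
scan_a_to_wayspot = pose2d(0.0, 0.0, 0.0)    # reference scan, aligned
scan_b_to_wayspot = pose2d(0.0, 0.0, 180.0)  # hypothetically misaligned

# The device localizes 2 m "in front of" the object in each scan's own frame.
device_in_scan = pose2d(0.0, 2.0, 0.0)

print((scan_a_to_wayspot @ device_in_scan)[:2, 2])  # -> [ 0.  2.]: correct side
print((scan_b_to_wayspot @ device_in_scan)[:2, 2])  # -> [~0. -2.]: wrong side
```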
Having said all this, what @Julian_Anderson is facing makes a bit more sense to me; @Joachim_Peignaux’s situation is weirder, though. It sounds like a localization failure that’s probably unrelated to the scanned object (although maybe the scan quality could still be the cause). Could you give me more details on the situation? Maybe we can find a root cause for your issue.

Hello, thank you for this response. It’s interesting to know this about objects with uniform patterns; I’ll take that into consideration. In my case, the issue is that I need to work on AR using scans that someone else is taking for me, as it’s in a different city. Therefore, I can’t test them any way other than by sending a build to that person. Unfortunately, the last time, none of the three scans worked even after several attempts, while the tests I conducted in my own city worked very well. While it’s possible that the problem could be due to poor scan quality, before attempting again, I’m trying to understand if there might be other reasons that would lead to the same issue even with better scans.

Here are the three scans in question:



These three scans were private since we are still in the proof of concept phase.
For the first one, could the problem have been the overly uniform patterns on the ground?
For the second one, I thought that the pattern on the ground might help. But perhaps VPS recognition isn’t really designed to rely on the ground?
For the third one, even though the quality is quite poor, do you think there might be something else preventing the recognition from working?
I’m really trying to understand all the best practices so that the person taking my scans has to travel as little as possible.

Hey @Joachim_Peignaux,
OK, so the first thing to verify: private scans are tied to your account and your API key, so if two people are scanning/testing, they have to use the same account; otherwise the scan will not be recognized and will not localize at all.
Alright, having said that: at first glance, the first and third scans could be problematic, while the second one should in theory be good if the scan quality is good (the mesh might look fine to me, but the scan quality could still be bad).
The problem with the first and third scans is that VPS expects scans to have a clear and recognizable point of interest; in those scans you’re scanning an area rather than a thing. In the second one, the post would be the recognizable point of interest, so in theory that should be good.
So yea, to answer your questions directly:

For the first one, could the problem have been the overly uniform patterns on the ground?

Yes, and also the fact that it is a scan of the ground rather than of something recognizable on it.

For the second one, I thought that the pattern on the ground might help. But perhaps VPS recognition isn’t really designed to rely on the ground?

This one should in theory be good; having patterns on the ground should not affect the localization. You are right that it isn’t designed to rely on the ground, but rather on the post, and it should work regardless of whether the post is “symmetric” or not.

For the third one, even though the quality is quite poor, do you think there might be something else preventing the recognition from working?

Yes, again, this is a scan of an area and not of an object.
Hope this helped clarify some things; let me know if you have more questions.
