Image Detection documentation fails to mention that only JPG files are supported

Include the following details while filing a bug report (edit as applicable):

  • Issue category: Documentation
  • Device type & OS version: All
  • Host machine & OS version: Any
  • Issue Environment: Unity
  • Xcode version: N/A
  • ARDK version: 2.4.1
  • Unity version: 2021

Bug reproduction steps:

The ARDK documentation for Image Tracking is missing critical information and is insufficient for building an Image Detection workflow.
Detecting Images — Niantic Lightship Augmented Reality Developer Kit release-2.4.1 documentation

The primary issues with the article are as follows:

  • Limited information regarding Tracked Image formats - the docs don't mention it, but I found that ONLY JPG files work; all other types simply and quietly log an "Invalid Reference Image" message (not even reported as an error)
  • The logic to set the scale of the instantiated prefab does not work (or the scale returned from image detection is wrong); instantiated objects are not scaled at all.
  • There is no “Tracked Image Lost” event. This seems linked to an underlying issue in ARCore / ARKit, which are currently unable to raise “Lost” events, indicating that Image Tracking is only using the underlying capabilities of ARCore and ARKit. This needs identifying, with links to where to find information on it.
  • It needs to be made clear that Image Detection ALSO does plane detection, identifying planes as tracked images. Even though this is apparent in the EXAMPLE, it needs to be made clearer, else developers will trip over something that is already known to the SDK.

Overall I am liking the implementation, but the documentation needs to be a LOT clearer, and the framework needs to report properly when images are not accepted, lost or destroyed.


Hi @SimonDarksideJ, thank you for writing in! I’ll try to address each bullet point separately below.

  1. At this time, .JPG files are indeed required for image detection purposes, as outlined in the documentation here.

  2. Are you referring to the local scale that gets set during the UpdatePlaneTransform() method? If so, we tested and it appears to set the instantiated plane object’s scale in both the sample script and in the ImageDetection sample scene located within the ARDK Examples package. If you have any further information on instantiated example objects not being scaled properly, please let us know!

  3. Thank you for bringing this up! This is a known shortcoming of ARDK and we’re planning on addressing this in a future release. I’ve added your feedback to our internal issue tracking for added visibility to the team.

  4. Although the instantiated GameObjects in the Image Detection samples are simple colorful planes for display purposes, this is distinct from the real-time plane detection that is computed via an ARPlaneManager component.

The documentation mentions .jpg files and some examples of their use. It DOES NOT say that ONLY JPGs are supported, which is not the same thing.
The documentation should be clearer on the subject, especially since ARCore and ARKit natively support a multitude of formats for trackable images.
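For context, this is roughly how I'm loading reference images — only the raw bytes of a .jpg are accepted. This is a sketch from memory; the exact ARReferenceImageFactory.Create signature and the physicalWidth parameter should be checked against the 2.4.1 "Detecting Images" docs:

```csharp
// Sketch only — class and method names based on the ARDK 2.x reference image
// docs; verify the exact factory signature in your ARDK release.
using Niantic.ARDK.AR.ReferenceImage;
using UnityEngine;

public class JpgReferenceImageLoader : MonoBehaviour
{
    // Must contain the raw bytes of a .jpg file — any other format silently
    // logs "Invalid Reference Image" with no error raised.
    [SerializeField] private TextAsset _jpgBytes;

    public IARReferenceImage CreateReferenceImage()
    {
        var bytes = _jpgBytes.bytes;

        // physicalWidth is the real-world width of the printed image in meters.
        return ARReferenceImageFactory.Create("card", bytes, bytes.Length, physicalWidth: 0.1f);
    }
}
```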

I will try and create a sample test and post as a separate issue, but ultimately what I found was:

  1. When the detected image was on a wall, the detected plane was orientated in the same position, but the attached plane was NOT set to the scale of the tracked image.

  2. When the detected image is on a surface, e.g. a table, the detected plane appears ON THE FLOOR below the tracked image and is still at its original scale.

I think we have crossed wires here, as I was stating that “ImageDetection”, while running with the DepthManager, detects BOTH the tracked image and its plane, but it also detects the planes from the DepthManager, identified by anchor.AnchorType.
Although it is in the sample, it’s not clearly documented that this is the case; I found it by validating what was being tracked while ignoring that filter.
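In other words, with both managers running I had to filter the incoming anchors by type before treating them as images — something like this (a sketch; check the exact enum members against your ARDK version):

```csharp
// Sketch: distinguishing image anchors from plane anchors when both the
// image detector and plane/depth detection are active. Enum member names
// are assumed from the ARDK 2.x anchor API.
using Niantic.ARDK.AR.Anchors;

public static class AnchorFilter
{
    public static bool IsImageAnchor(IARAnchor anchor)
    {
        // With a DepthManager / plane detection running, anchor updates
        // include plane anchors too; filter by type before treating an
        // update as a detected image.
        return anchor.AnchorType == AnchorType.Image;
    }
}
```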

Hello @SimonDarksideJ ,
I think you’re right that we could/should make it clearer that any other file type won’t work; I’ll look into how we’d want to document that.
I’m still confused about what you mean when you say “plane detected”, though:

When the detected image was on a wall, the detected plane was orientated in the same position, but the attached plane was NOT set to the scale of the tracked image.

There is no plane detection happening in our example scene, so I’m guessing you mean the plane prefab that gets instantiated. From my own tests, though, the instantiated plane does get scaled and rotated to match the image, and you can see it happen in the UpdatePlaneTransform method in the sample code.
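For reference, the relevant part of the sample does something along these lines (paraphrased and simplified, not the verbatim sample code — property names like ReferenceImage.PhysicalSize are assumptions to check against the docs):

```csharp
// Illustrative sketch of how the sample positions and scales the spawned
// plane to match a detected image. Names are paraphrased from the ARDK 2.x
// ImageDetection sample, not copied verbatim.
using Niantic.ARDK.AR.Anchors;
using Niantic.ARDK.Utilities;
using UnityEngine;

public class ImagePlaneVisualizer : MonoBehaviour
{
    private void UpdatePlaneTransform(IARImageAnchor imageAnchor, GameObject plane)
    {
        // Pose comes straight from the anchor's transform matrix.
        plane.transform.position = imageAnchor.Transform.ToPosition();
        plane.transform.rotation = imageAnchor.Transform.ToRotation();

        // The anchor reports the physical size of the detected image; the
        // sample applies it to the spawned object's local scale, which is
        // why a 1-unit-sized prefab is assumed.
        var size = imageAnchor.ReferenceImage.PhysicalSize;
        plane.transform.localScale = new Vector3(size.x, 1f, size.y);
    }
}
```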

When the detected image is on a surface, e.g. a table, the detected plane appears ON THE FLOOR below the tracked image and is still at its original scale.

This I have not seen happen in my tests, but then again, you mention “detected plane” rather than “instantiated”, so I feel like I’m missing some information here.

Although it is in the sample, it’s not clearly documented that this is the case; I found it by validating what was being tracked while ignoring that filter.

I am not sure what you are asking for here… Could you suggest how or where you’d document this? I’m just not sure I’m following, and I think an example would clear things up…

Ok, to clarify, when I say detected plane, I mean the plane prefab that is spawned, positioned and scaled to the target image.

Although, I think I may have found a cause for the disparity in the detection. In my original tests I was using a prefab that contained just a plane; however, I noticed that in the samples the prefab has an empty GameObject as a parent and a plane as a separate child GameObject. When I updated my prefab to match, it worked much better and tracking improved.

So the question is, why is there a behaviour difference between using a single plane as a prefab and having a prefab with a parent GO and a Plane GO as a child?

I think the issues relate to the above, and apologies for potentially adding confusion comparing the plane detection, depth detection and the image tracking.

But it would be good to have a new example, with documentation of a real world example, showing both:

  • Spawning a model on a detected tracked image, in a more real-world use case where a user can detect a tracked image, have a model placed, and be able to “step back” to view the model in the position of the card.

  • An example with a Tracked Image acting as an anchor for a small scene that is aligned to the surface the image was detected on.

Hey @SimonDarksideJ ,
Alright, so the description of the first sample you’re looking for sounds a lot like what we already have, but maybe this template is a better fit? If it isn’t, please let me know what is missing — I just need clear specifics.
Speaking of specifics, because some of these terms are easy to mix up, I feel I should clarify: Image Detection is not meant to be used to “track” images, but to “detect” them (the main difference being the detection and correction of changes/movement)… more info here.
So if what you’re asking for is just a more complete “experience”, then I’m sure I can push for that, though I cannot guarantee it will happen; if you want something that tracks a moving image, that will probably be harder to get approved.

Regarding the behavior of the sample:

So the question is, why is there a behaviour difference between using a single plane as a prefab and having a prefab with a parent GO and a Plane GO as a child?

Right, so the short answer here is: the empty GameObject parent is there to scale down the plane. Admittedly, it is a bit of a hidden operation, but if you open the Plane Prefab and select the plane object inside, you’ll notice its Scale is 0.1 across all dimensions.
I don’t want to go into a lot of detail, since this is more Unity’s thing, but it is important to note the difference between an object’s Size and an object’s Scale. Planes by default are 10x10 meshes, while 3D objects like cubes or spheres default to 1 unit in size (1x1x1, or 1 unit diameter). The sample scales the prefab to match the image’s Size, but it kind of assumes 1-unit-sized (not scaled) objects, or close to it. Normally this isn’t that big a problem, because you’d know the size of your mesh and could change it to fit the scene, but with planes, because it is a somewhat automated process and size is not easy to see, that gets a bit lost.
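To make the numbers concrete, here’s a sketch of what that prefab arrangement amounts to (illustrative code, not the actual sample prefab):

```csharp
// Worked example of the Size-vs-Scale point: Unity's default Plane mesh is
// 10x10 units, so the child is pre-scaled to 0.1 to behave like a 1-unit
// object, letting the parent's scale map directly to meters.
using UnityEngine;

public static class ImagePlaneBuilder
{
    public static GameObject Build(Vector2 imagePhysicalSize)
    {
        // Empty parent — the "hidden operation" lives in the child below.
        var parent = new GameObject("ImagePlane");

        var plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
        plane.transform.SetParent(parent.transform, false);

        // 10-unit mesh * 0.1 scale = 1 unit, matching the 1-unit-sized
        // objects the sample's scaling logic assumes.
        plane.transform.localScale = Vector3.one * 0.1f;

        // Scaling the parent by the image's physical size now yields a
        // plane exactly the size of the detected image.
        parent.transform.localScale =
            new Vector3(imagePhysicalSize.x, 1f, imagePhysicalSize.y);

        return parent;
    }
}
```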

Let me know if that makes sense or if you got more questions on that though.

Thank you for the clarifications @Gilberto

I believe the distinctions for the “Image Detection” should probably be made clearer in the documentation, mainly because those coming from ARFoundation, Vuforia and others will expect tracking as part of the image detection solution, it just needs pointing out better.

However, as for enhancements: if there “technically” is no tracking (although the samples do give a false expectation, as Tracked Images are updated/tracked?), then maybe there should be a feature to automatically create an anchor at the detected image location.
The solution I have implemented feels very clunky in this regard, as I need to transpose the position of the tracked image into screen coordinates in order to use the current ARSession frame to do a hit test against the plane manager.
Ultimately, for stability, this should either be automatic, an option on the ImageDetector, or an API to pass a Tracked Image definition and get an anchor back; either would be good (or both, if I’m being cheeky).
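For illustration, my workaround looks roughly like this (a sketch; the hit-test API names are from the ARDK 2.x tutorials and should be verified against the 2.4.1 reference):

```csharp
// Sketch of the "clunky" workaround described above: project the detected
// image's world position into screen space, then hit-test the current frame
// against detected planes to get a stable position for an anchor.
using Niantic.ARDK.AR;
using Niantic.ARDK.AR.HitTest;
using Niantic.ARDK.Utilities;
using UnityEngine;

public static class ImageAnchorHelper
{
    public static bool TryHitTestBelowImage(
        IARSession session, Camera arCamera, Vector3 imageWorldPosition,
        out Vector3 planeHitPosition)
    {
        planeHitPosition = default;

        // Transpose the tracked image's world position into screen coordinates.
        var screenPoint = arCamera.WorldToScreenPoint(imageWorldPosition);

        // Hit test the current frame against detected plane extents.
        var results = session.CurrentFrame.HitTest(
            arCamera.pixelWidth,
            arCamera.pixelHeight,
            screenPoint,
            ARHitTestResultType.ExistingPlaneUsingExtent);

        if (results.Count == 0)
            return false;

        // Use the closest hit as the stable anchor position.
        planeHitPosition = results[0].WorldTransform.ToPosition();
        return true;
    }
}
```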

Actual Image tracking would be better, but the above is a good compromise to make “stable” detected image anchors.

And thank you for the clarification on the plane.