- Issue category: Unity Example Package / Sample App aka ‘AR Voyage’ / ARDK Virtual Studio Tools / Developer Tools / VPS
- Device type & OS version: Android
- Host machine & OS version: Windows
- Issue Environment: Unity Remote / On Device / Dev Portal
- ARDK version: 2.4.2
- Unity version: 2021.3.12f
Description of the issue:
I want to use Niantic Lightship’s VPS and ARCore’s (or maybe AR Foundation’s?) image detection together in one scene. Is that possible? (I am not talking about Niantic’s image detection.)
I am asking because when I installed ARCore through the Package Manager in a Unity project that already had the ARDK package installed, an error occurred and I couldn’t build to my device. (I can’t remember which error it was because I didn’t capture the log.)
And if it is possible, I have a second question: is it possible to use the AR Foundation Samples image detection example scene in the project?
I would also like to know briefly how ARDK works alongside AR Foundation / ARCore.
Thanks
Hello Olivia,
Unfortunately, ARDK cannot be mixed with AR Foundation or ARCore at the moment. When using ARDK, no other AR API can be in use. ARDK is built on top of ARCore and wraps many of its API calls, so ARCore cannot be used alongside ARDK, and AR Foundation can’t be used either because it would conflict with ARDK.
Hey @olivia0901,
There is a framework called NatML for Unity that I’ve been working with.
I’m trying out a couple of ways of feeding this predictor Lightship data instead of the AR Foundation data it’s built on; a rough sketch of the idea is below.
I’ll let you know if I find a solution next week.
If you want to try it out yourself, you’ll find it here: Getting Started - Unity
On NatML Hub you can find the predictors that are currently available: https://hub.natml.ai/
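To give an idea of the basic flow: you create a predictor (which fetches its model from NatML Hub) and then call Predict on a camera texture each frame. Here is a minimal sketch, assuming the MoveNet predictor and ARDK’s ARRenderingManager.CPUTexture that are used in the full sample further down this thread:

using NatML.Vision;
using Niantic.ARDK.Extensions;
using UnityEngine;

public class NatMLQuickStart : MonoBehaviour
{
    // ARDK rendering manager that exposes the camera image as a texture.
    public ARRenderingManager RenderingManager;

    private MoveNetPredictor predictor;

    private async void Start()
    {
        // Creating the predictor is asynchronous; it fetches the model on first use.
        predictor = await MoveNetPredictor.Create();
    }

    private void Update()
    {
        // Wait until the predictor has finished loading.
        if (predictor == null)
            return;

        // Feed the ARDK camera texture to the predictor instead of an
        // AR Foundation camera image.
        var pose = predictor.Predict(RenderingManager.CPUTexture);
        // ...use the joints in `pose` here.
    }
}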
Thank you so much Merjin!
Hey @olivia0901,
It took some time to figure out, but we’ve been able to pass the CPU/GPU texture to a couple of NatML predictors. The only thing I haven’t got working yet is the depth textures.
Here is a sample of the MoveNet predictor being used to draw canvas elements at the detected joints at runtime, tested on iOS:
using System.Collections.Generic;
using NatML.Vision;
using Niantic.ARDK.Extensions;
using UnityEngine;

public class BodyTracking : MonoBehaviour
{
    public ARRenderingManager RenderingManager;
    public GameObject PoseDebugPrefab;
    public Transform TrackerParent;

    private MoveNetPredictor predictor;
    private List<GameObject> poseDebugs = new();
    private RenderTexture _renderTexture;
    private int _width;
    private int _height;

    private async void Awake()
    {
        predictor = await MoveNetPredictor.Create();
        _width = Screen.width;
        _height = Screen.height;
    }

    private void Update()
    {
        // The predictor initializes asynchronously, so wait for it to load.
        if (predictor == null)
            return;

        // MoveNet is still a 2D prediction, but you could do an ARDK depth test
        // at those positions to get a 3D location.
        MoveNetPredictor.Pose pose = predictor.Predict(RenderingManager.CPUTexture);

        // Create the canvas pose debug circles once.
        if (poseDebugs.Count == 0)
        {
            foreach (var position in pose)
                poseDebugs.Add(Instantiate(PoseDebugPrefab, TrackerParent));
        }

        // For each joint, x and y are the positions; z is the certainty/confidence.
        for (var index = 0; index < pose.Count; index++)
        {
            var position = pose[index];
            poseDebugs[index].transform.position = new Vector2(_width * position.x, _height * position.y);
            poseDebugs[index].SetActive(position.z > .3f);
        }
    }
}
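As mentioned in the comment above, MoveNet only gives 2D joints, but you could sample ARDK’s depth buffer at each joint to get a 3D world position. A minimal sketch, assuming ARDK’s ARDepthManager exposes a DepthBufferProcessor with a GetWorldPosition(screenX, screenY) method (check the ARDK depth/awareness API for the exact type and method names):

using Niantic.ARDK.Extensions;
using UnityEngine;

public class JointDepthSampler : MonoBehaviour
{
    // ARDK depth manager; assumed to expose a DepthBufferProcessor.
    public ARDepthManager DepthManager;

    // `joint` is a MoveNet keypoint: x and y are the normalized positions,
    // z is the confidence (same layout as in the sample above).
    public Vector3 GetJointWorldPosition(Vector3 joint)
    {
        var screenX = (int)(Screen.width * joint.x);
        var screenY = (int)(Screen.height * joint.y);

        // Sample the depth buffer at the joint's screen position and
        // back-project it into world space.
        return DepthManager.DepthBufferProcessor.GetWorldPosition(screenX, screenY);
    }
}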
Alternatives I’ve tested are:
- Barracuda, which was too slow for our purposes.
- Google MediaPipe (GitHub - homuler/MediaPipeUnityPlugin: Unity plugin to run MediaPipe graphs), which was perfect but which we couldn’t get working in combination with Lightship due to conflicting TensorFlow frameworks on iOS.
- ARKit itself, which is not supported.
- OpenCV, which might have worked depending on my implementation, but it was too slow and performance-heavy for what we were trying to do.