Hmmm, I’m not sure this workaround actually works for me - I only enable meshing after all the peers are Stable (I changed it to work that way because of this issue) - and I even tried waiting until they were all Stable and then destroying the ARMeshManager and instantiating a new one… and I still got the same problem.
Hi @Ross_Styants1, thank you for letting us know. We’ve verified on our end that the workaround no longer works in the latest version of ARDK (currently 1.3.0). Since this is a regression in functionality, we’ve filed a bug to get it fixed. Apologies for the delay, and thanks again.
@David_Quevedo any update on this issue?
This issue doesn’t only happen with AR meshing; it also affects ARDepth data.
The peers’ depth pixels are offset.
As the image shows, the depth and mesh were both offset.
Hello folks - is there any ETA on when this bug might get looked at / fixed? It’s a complete show-stopper for the game I’m working on.
Thanks
@David_Quevedo Hello! Any updates?
Hello @Ross_Styants1,
I apologize for the delay in getting back to you.
I’ve reached out to the team again to verify the status of this issue. I will update this post again as soon as I receive any information regarding a solution or workaround. Thank you for your patience.
Here is the solution in concept.
As far as I know, in ARNetworking each client’s world-zero position is changed/synced to the host’s world-zero, so any object created from a client’s position will be offset.
In general, locally created objects need to be synced using either the HLAPI or the LLAPI.
Therefore, we can “sync” the host’s ARMesh to all peers, so that the meshes are network-synced.
To do that, the host should serialize its mesh to a byte[] and broadcast it to the clients; each client then deserializes the byte[] back into a Mesh, spawns it as a GameObject, and wraps it in a container exactly matching the host’s.
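For illustration, here’s a minimal sketch of that round trip in Unity C#. The serialization uses only standard Unity Mesh APIs; how you actually send the bytes is left to whichever HLAPI/LLAPI message channel your project uses, so the class and method names below are just placeholders, not ARDK API.

```csharp
using System.IO;
using UnityEngine;

// Hypothetical helper: host serializes, clients deserialize.
public static class MeshSync
{
    // Host side: flatten vertices and triangles into a byte[]
    // suitable for broadcasting over your networking layer.
    public static byte[] Serialize(Mesh mesh)
    {
        var vertices = mesh.vertices;
        var triangles = mesh.triangles;

        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            writer.Write(vertices.Length);
            foreach (var v in vertices)
            {
                writer.Write(v.x);
                writer.Write(v.y);
                writer.Write(v.z);
            }

            writer.Write(triangles.Length);
            foreach (var index in triangles)
                writer.Write(index);

            return stream.ToArray();
        }
    }

    // Client side: rebuild the Mesh from the received byte[].
    public static Mesh Deserialize(byte[] data)
    {
        using (var stream = new MemoryStream(data))
        using (var reader = new BinaryReader(stream))
        {
            int vertexCount = reader.ReadInt32();
            var vertices = new Vector3[vertexCount];
            for (int i = 0; i < vertexCount; i++)
            {
                vertices[i] = new Vector3(
                    reader.ReadSingle(), reader.ReadSingle(), reader.ReadSingle());
            }

            int indexCount = reader.ReadInt32();
            var triangles = new int[indexCount];
            for (int i = 0; i < indexCount; i++)
                triangles[i] = reader.ReadInt32();

            var mesh = new Mesh
            {
                // Large AR meshes can exceed the default 65k vertex limit.
                indexFormat = UnityEngine.Rendering.IndexFormat.UInt32
            };
            mesh.SetVertices(vertices);
            mesh.SetTriangles(triangles, 0);
            mesh.RecalculateNormals();
            return mesh;
        }
    }
}
```

On the host you’d pass the result of MeshSync.Serialize(...) to your broadcast call; on each client, feed the received bytes to MeshSync.Deserialize(...), assign the result to a MeshFilter, and parent it under a container positioned the same way as the host’s.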
This should work in an ARNetworking setting.
Thanks @Alexson_Chu - I can see that should work for applications that want a shared mesh.
For my use case I really want each player to build a mesh locally, and for shared position to still be based on an underlying scene understanding. I don’t require a shared mesh and was hoping to keep refining each player’s local mesh throughout play.
The reason is that players will be standing around the room in different areas, and it’s really important that the fidelity of the mesh for each player is at its best locally, where they point their phones. I think the game won’t really work if the host has to keep moving around the room to improve mesh detail on the other side of the table, etc.
Is there any way we could have a separate world center for networked positions and meshing on the clients?
Hello again! Sorry to keep banging on about this same thing… but can anybody from Niantic tell me whether this is something that is likely one day to actually work the way I want it to (as described above), or is it just a side effect of the engine and networking that will never work that way? It would be very useful to know! Thanks.
Hello all,
Apologies again for the massive delay regarding this issue.
If you’re still interested in this feature, we recommend trying Shared AR in ARDK 3.0. Please see our Shared AR documentation for general information and our How to Use Shared AR page for how to set up your project. If you come across any issues while setting it up, please create a new post so we can troubleshoot and find a solution.
Thank you all again for your patience.