I'm using the ARRenderingManager in order to render the camera feed to a texture.
I then want to display this texture exactly as it would be displayed if the ARRenderingManager were set to ‘Camera.’
(I’m doing this because I want to use some pixels from the live feed at various points in the application.)
The way I’m currently attempting to do this is:
At startup I create a RenderTexture at screen resolution.
I then assign that RenderTexture to the ARRenderingManager - and also to the material of a plane (which fits the screen exactly) that is drawn by its own camera behind everything else - roughly as sketched below.
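To make the setup concrete, this is roughly what it looks like in code. The class and field names are just placeholders of mine, and the ARRenderingManager's render target is pointed at the same texture separately (via whatever your ARDK version exposes for that), so this only shows the texture creation and the plane assignment:

```csharp
using UnityEngine;

public class CameraFeedToTexture : MonoBehaviour
{
    [SerializeField] private Renderer _backgroundPlane; // full-screen plane drawn by its own camera behind everything

    private RenderTexture _feedTexture;

    private void Awake()
    {
        // Screen-resolution texture that the ARRenderingManager renders the live feed into.
        _feedTexture = new RenderTexture(Screen.width, Screen.height, 0);
        _feedTexture.Create();

        // The same texture is sampled by the background plane's material.
        _backgroundPlane.material.mainTexture = _feedTexture;
    }

    private void OnDestroy()
    {
        if (_feedTexture != null)
            _feedTexture.Release();
    }
}
```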
This works OK, except there is a slight lack of synchronisation between the live feed and the mesh - suggesting that I should be cropping the live feed slightly / transforming it in some way.
It’s different on different devices - the video is from an iPhone 13 Pro…
Can you give me any clues on how to get to the correct transformation?
Thanks!
Do you copy the projectionMatrix of the Camera connected to the ARRenderingManager and paste it onto the Camera that renders your virtual scene?
Because that’s what I did to fix that synchronisation error.
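In case it helps, the core of it is literally just copying the matrix across; something like this (the camera names are placeholders for your own references):

```csharp
using UnityEngine;

public static class ProjectionSync
{
    // arCamera: the camera the ARRenderingManager renders with.
    // backgroundCamera: the camera drawing the full-screen feed plane.
    public static void Match(Camera arCamera, Camera backgroundCamera)
    {
        backgroundCamera.projectionMatrix = arCamera.projectionMatrix;
    }
}
```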
Wow, thanks so much for the quick reply. Is the orthographic camera a child of the ARCamera? Where in the code do you set the projectionMatrix? Is it in an update loop? Thanks again!
Hello, nope, the orthographic camera is just on its own in the scene somewhere - it just renders the live feed as the background -
and I set the projection matrix in this callback, I think -
ARSessionFactory.SessionInitialized += OnSessionInitialized;
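Something like this, sketched from memory, so treat the details as approximate: the camera references are placeholders, and I'm assuming the ARDK version where SessionInitialized passes AnyARSessionInitializedArgs:

```csharp
using Niantic.ARDK.AR;
using UnityEngine;

public class FeedProjectionSetup : MonoBehaviour
{
    [SerializeField] private Camera _arCamera;         // camera the ARRenderingManager uses
    [SerializeField] private Camera _backgroundCamera; // orthographic camera rendering the feed plane

    private void OnEnable()
    {
        ARSessionFactory.SessionInitialized += OnSessionInitialized;
    }

    private void OnDisable()
    {
        ARSessionFactory.SessionInitialized -= OnSessionInitialized;
    }

    private void OnSessionInitialized(AnyARSessionInitializedArgs args)
    {
        // Once the AR session exists, copy the AR camera's projection onto the
        // background camera so the feed texture is cropped/scaled to match.
        _backgroundCamera.projectionMatrix = _arCamera.projectionMatrix;
    }
}
```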