XRCpuImage in ARDK

Hello,

Is there an equivalent of ARFoundation's XRCpuImage in ARDK?

I want to migrate an ARFoundation project to ARDK. In this project I use XRCpuImage and ZXing to read a QR code, so I would like to know the equivalent.

BR

Hi Sergio,

Our equivalent to XRCpuImage is CapturedImageBuffer, which can be accessed via IARFrame. For Android it's just one BGRA plane, while for iOS there are two planes for YCbCr: one for Y (index 0) and one for CbCr (index 1).
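
For example, something along these lines will get you the raw bytes and dimensions from a frame (a minimal sketch using the members mentioned above, not a complete sample from our docs):

using Niantic.ARDK.AR;

// Minimal sketch: copy the raw bytes of the first image plane out of an IARFrame.
// Plane 0 is the whole image for a single-plane format, or the Y plane for YCbCr.
public static class CapturedImageBufferExample
{
    public static byte[] GetPlaneZero(IARFrame frame, out int width, out int height)
    {
        width = frame.Camera.CPUImageResolution.width;
        height = frame.Camera.CPUImageResolution.height;

        // Copy the native buffer into a managed byte array.
        return frame.CapturedImageBuffer.Planes[0].Data.ToArray();
    }
}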

We also have ZXingBarcodeParser, which is our wrapper around the ZXing library.

If possible, I would recommend making sure that ARDK's alternatives meet the needs of your project before fully porting over :slight_smile:

Hello,

Thanks for your recommendation,
The reason I plan to migrate my project is that I have realized ARDK's IARAnchor performs better on Android devices than ARFoundation's ARAnchor, and this feature is vital for my application.

Regarding QR code scanning, which is another vital feature for my application:

Using CapturedImageBuffer as you mentioned, I managed to read and decode the QR code. I followed the ARFrameMarkerScanner script as a guide, but I noticed one thing: the example does not implement the logic you describe.

  • First, in the InitializeFrameSettings function:
        private void InitializeFrameSettings()
        {
#if AR_NATIVE_SUPPORT && UNITY_ANDROID
            _textureType = TextureType.BGRA;
#else
            _textureType = TextureType.YCbCr;
#endif
        }

The BGRA branch is never triggered, neither for Android nor in the Editor (UNITY_EDITOR).

  • Secondly, in the Update function, when reading the planes, it always takes the plane at index 0:
IARFrame frame = updateArgs.Frame;
_arCamera = frame.Camera;
_timestamp = _coordinatedClock.CurrentCorrectedTime;

// Use raw data instead of doing extra step of getting pixels from texture
// For Android this will get the BGRA data
// For iOS this will get the Y of the YCbCr data, which is all the parser needs
_rawPixels = frame.CapturedImageBuffer.Planes[0].Data.ToArray();
_rawWidth = frame.Camera.CPUImageResolution.width;
_rawHeight = frame.Camera.CPUImageResolution.height;
  • Third, in the ConvertTextureAndDecode function, when converting the raw data into a texture, the Android path also takes the YCbCr branch:
            var pixels = new Color32[_rawWidth * _rawHeight];

            var rawIndex = 0;
            for (var idx = 0; idx < _rawWidth * _rawHeight; idx++)
            {
                if (_textureType == TextureType.YCbCr)
                {
                    // Use Y value in YCbCr texture to create a greyscale texture
                    var val = _rawPixels[idx];
                    pixels[idx] = new Color32(val, val, val, 255);
                }
                else
                {
                    // BGRA: read 4 bytes per pixel and swizzle them into RGBA
                    pixels[idx] =
                      new Color32
                      (
                        _rawPixels[rawIndex + 2],
                        _rawPixels[rawIndex + 1],
                        _rawPixels[rawIndex],
                        _rawPixels[rawIndex + 3]
                      );

                    rawIndex += 4;
                }
            }

In short, the ARFrameMarkerScanner script seems to work, but the logic is not well implemented.
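
As an aside on the third point: once that greyscale Color32[] is built, it can be handed straight to ZXing. A minimal sketch, assuming the Unity build of ZXing.Net (the class and member names here are just placeholders):

using UnityEngine;
using ZXing;
using ZXing.Common;

// Sketch only: decode the greyscale Color32[] produced by the loop above.
// Assumes the Unity build of ZXing.Net, which exposes Decode(Color32[], width, height).
public class QrDecodeSketch
{
    // Cache the reader so it is not re-allocated on every decode attempt.
    private readonly BarcodeReader _reader = new BarcodeReader
    {
        AutoRotate = true,
        Options = new DecodingOptions { TryHarder = true }
    };

    public string TryDecode(Color32[] pixels, int width, int height)
    {
        var result = _reader.Decode(pixels, width, height);
        return result != null ? result.Text : null;
    }
}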

Another thing: as I said, I managed to read and decode the QR code, but after testing on several Android devices, it seems that it doesn't work correctly on all of them. For example, it worked perfectly on a Pixel and on a Huawei, but on an Oppo it doesn't work.
(I have not yet tested on iOS)

Do you know what could be the reason?
I know this is a very difficult question due to the large number of Android devices. I was wondering if it could be due to some property of the device's camera, or something like that. I've been trying to fix it for a couple of days, but I'm stuck, so any help would be great.

Thank you,

New update,
I was checking the information that comes in the “CapturedImageBuffer”.

Testing on Android, I found the following:

frame.CapturedImageBuffer.Format → YCvCr21
frame.CapturedImageBuffer.Planes.Count → 2

When I check the size of the two planes, I find that plane 1 (frame.CapturedImageBuffer.Planes[0].Data.ToArray().Length) is double the size of plane 2 (frame.CapturedImageBuffer.Planes[1].Data.ToArray().Length).

This is contrary to what you told me earlier.

To replicate on Android, just build a scene with an ARSessionManager and this script:

using Niantic.ARDK.AR;
using Niantic.ARDK.AR.ARSessionEventArgs;
using UnityEngine;

public class BufferCamera : MonoBehaviour
{
    void Start()
    {
        ARSessionFactory.SessionInitialized += ARSessionFactory_SessionInitialized;
    }

    private void ARSessionFactory_SessionInitialized(AnyARSessionInitializedArgs args)
    {
        args.Session.FrameUpdated += Session_FrameUpdated;
    }

    private void Session_FrameUpdated(FrameUpdatedArgs args)
    {
        if (args.Frame == null)
        {
            return;
        }

        var capturedImageBufferargs = args.Frame.CapturedImageBuffer;
        var format = capturedImageBufferargs.Format;
        var planes = capturedImageBufferargs.Planes;

        Debug.Log("img format: " + format + " count planes: " + planes.Count);
    }
}

You will get this output:

[screenshot of the log showing the image format and plane count]

Hi Sergio,

I do apologize for the confusion. It appears that when the texture is grabbed on the CPU, the Android image is not BGRA; it is also YCbCr. You are right that the logic is technically incorrect, but because of this oversight it still works: since both platforms produce the same texture type, the Y channel is the only one the parser needs in both cases.

The reason the Y plane is larger than the combined Cb/Cr plane is that ARCore uses YCbCr 4:2:0. With 4:2:0 subsampling, Cb and Cr are each sampled at a quarter of the Y resolution, so the interleaved CbCr plane ends up with half as many bytes as the Y plane.
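
As a rough sanity check (my own arithmetic, assuming tightly packed planes with no row padding, which the actual buffers may not guarantee), the numbers work out along these lines:

using Niantic.ARDK.AR;
using UnityEngine;

// Rough sanity check: with 4:2:0 subsampling, Y is one byte per pixel, while Cb
// and Cr are each at quarter resolution and interleaved into a single plane, so
// the CbCr plane should be roughly half the size of the Y plane.
public static class PlaneSizeCheck
{
    public static void Log(IARFrame frame)
    {
        int width = frame.Camera.CPUImageResolution.width;
        int height = frame.Camera.CPUImageResolution.height;

        int expectedY = width * height;         // e.g. 1920 x 1080 -> 2,073,600 bytes
        int expectedCbCr = width * height / 2;  // e.g. 1920 x 1080 -> 1,036,800 bytes

        var planes = frame.CapturedImageBuffer.Planes;
        Debug.Log("Y plane bytes: " + planes[0].Data.Length + " (expected ~" + expectedY + ")");
        Debug.Log("CbCr plane bytes: " + planes[1].Data.Length + " (expected ~" + expectedCbCr + ")");
    }
}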

Lastly, for the issue with scanning barcodes, I was advised that you can try running ConvertTextureAndDecode roughly every 250 ms instead of letting it run every update. You can tweak this value to your liking, but if it doesn't resolve the scanning issue, please let me know.
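
Something along these lines is what I have in mind (just a rough sketch with placeholder names, to be folded into the scanner script):

using UnityEngine;

// Minimal throttling sketch: only run the expensive conversion/decode roughly
// every 250 ms instead of on every frame. Field and class names are placeholders.
public class ThrottledDecodeSketch : MonoBehaviour
{
    private const float DecodeInterval = 0.25f; // seconds; tweak to taste
    private float _timeSinceLastDecode;

    private void Update()
    {
        _timeSinceLastDecode += Time.unscaledDeltaTime;

        if (_timeSinceLastDecode < DecodeInterval)
            return;

        _timeSinceLastDecode = 0f;

        // Call the existing conversion/decode step here, e.g. the
        // ConvertTextureAndDecode method from ARFrameMarkerScanner.
    }
}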

Thank you for the clarifications,
I tried it as you recommended, but unfortunately it didn't work. I tried different time intervals: 100 ms, 250 ms, and 500 ms.
I think the problem is related to the QR code size: I am trying to read a 2 cm × 2 cm QR code, and when I try a 5 cm × 5 cm QR code it works.
But I also think it is an ARDK issue, because if I take a screenshot of the observed QR code and process that image with ZXing, everything works perfectly.