Shader Problems

Description of the issue:
We need the ratio of the area masked by the semantic texture, to check the percentage of a layer (e.g. sky) currently in the camera view.
We would like to ask whether this value is already available in the API, or whether you could add it (e.g. a float inside ISemanticBuffer, or a thread-safe function that calculates it).
We also had the idea of calculating the value in a shader by accessing the pixel data, but that requires a higher HLSL/GLSL version that is not yet supported on a wide range of mobile devices.
Another question is about ARVideoFeed::GPUTexture, which contains white in the sky area rather than the actual colour of the sky. Is there another way to get this data, or could it be a bug?
We are trying to render a transparent video, which currently comes out black when we blend it with the ARVideoFeed texture.

Hello Ash,

We are still looking into your question about how to get the percentage of pixels/screen space a layer takes up.

As for your other question about the GPUTexture in ARVideoFeed: this tutorial on Semantic Textures also shows how you can incorporate a shader to change the texture that GPUTexture outputs.

https://lightship.dev/docs/moderate_semantictextures_tutorial.html

There isn’t a built-in function for this, but it’s fairly easy to do CPU-side.

Here is a snippet from an upcoming semantics tutorial that does what you’re asking. It keeps a running average of each semantic channel’s on-screen percentage so you can make decisions based on it.

    List<float> _channelAverages = new List<float>();

    // Number of semantic channels, set on the first frame.
    int _channelCt;

    // Flag for the first frame.
    bool _firstFrame = true;

    private void OnSemanticsBufferUpdated(ContextAwarenessStreamUpdatedArgs<ISemanticBuffer> args)
    {
        // Get the current buffer.
        ISemanticBuffer currentBuffer = args.Sender.AwarenessBuffer;

        if (_firstFrame)
        {
            // On the first run, grab the number of channels and set up our storage list.
            _channelCt = _semanticManager.SemanticBufferProcessor.ChannelCount;

            foreach (var c in _semanticManager.SemanticBufferProcessor.Channels)
            {
                _channelAverages.Add(0.0f);
            }

            _firstFrame = false;
        }

        // Clear the running counts.
        for (int i = 0; i < _channelCt; i++)
        {
            _channelAverages[i] = 0;
        }

        // Walk the buffer. If this is too slow, you can sample a fraction of it
        // (e.g. 10%) by just skipping locations; it is set to every 10th entry here.
        int iteration = 10;
        for (int j = 0; j < currentBuffer.Data.Length; j += iteration)
        {
            // Directly access the packed buffer data.
            var semanticPixel = currentBuffer.Data[j];
            for (int i = 0; i < _channelCt; i++)
            {
                // Each pixel packs multiple semantic channels into a bit mask,
                // so we walk through the channel masks and test each one.
                var mask = currentBuffer.GetChannelTextureMask(i);
                if ((semanticPixel & mask) != 0u)
                {
                    _channelAverages[i] += 1.0f;
                    // Speed-up: break after the first hit. A pixel can belong to more
                    // than one channel, so remove the break for full accuracy, but that
                    // is more expensive and multi-channel pixels are an edge case.
                    break;
                }
            }
        }

        // Average it.
        int sampleCount = currentBuffer.Data.Length / iteration;
        for (int i = 0; i < _channelCt; i++)
        {
            _channelAverages[i] /= sampleCount;
            // Debug.Log(_semanticManager.SemanticBufferProcessor.Channels[i] + " " + _channelAverages[i]);
        }
    }
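Language aside, the core of the computation is just bit-mask counting and dividing by the number of sampled pixels. Here is a minimal Python sketch of the same idea; the packed pixel values and channel masks below are made up for illustration (in ARDK they come from ISemanticBuffer.Data and GetChannelTextureMask):

```python
# Two hypothetical channels: "sky" = bit 0, "ground" = bit 1.
channel_masks = [0b01, 0b10]

# A tiny packed buffer: 8 pixels, some sky, some ground, some neither.
buffer = [0b01, 0b01, 0b01, 0b10, 0b10, 0b00, 0b01, 0b10]

counts = [0] * len(channel_masks)
for pixel in buffer:
    for i, mask in enumerate(channel_masks):
        if pixel & mask:
            counts[i] += 1
            break  # first-hit speed-up, as in the C# snippet above

# Fraction of sampled pixels per channel.
averages = [c / len(buffer) for c in counts]
print(averages)  # [0.5, 0.375] -> 50% sky, 37.5% ground
```

The same caveat applies as in the C# version: the `break` means a pixel that belongs to more than one channel is only counted for the first matching channel.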
