List:       openjdk-openjfx-dev
Subject:    Re: PixelBuffer API threading model?
From:       Michael Paus <mp () jugs ! org>
Date:       2020-02-25 19:18:28
Message-ID: a51510e4-40c0-d560-e050-ff8001494621 () jugs ! org

Am 25.02.20 um 15:51 schrieb Neil C Smith:
> In there, we have a small amount of locking between the
> GStreamer callback and the OpenGL thread to cover buffer swap and
> texture upload.  Being able to do likewise with the PixelBuffer API,
> to swap or null the underlying buffer, would cover both use cases
> here?  Swapping the underlying buffer also makes sense for a number of
> APIs like GStreamer that ping pong between various memory locations,
> otherwise we need to do some careful image caching.

The API can do a limited form of buffer swapping. I did this for a
proof-of-concept demo recently. Whether it works depends on how much
control you have over the allocation of the native buffer.

What I did was allocate a contiguous native buffer twice as large as
needed for the image. Let's say the image was 1000x500 pixels; I then
allocated a buffer for 1000x1000 pixels.

In C I could then define two addresses for two buffers: the first one
at offset 0 and the second one in the middle of the memory. These two
addresses can be passed independently to the native renderer. When you
get a notification from the native renderer that one of these buffers
has been written, you report that to the PixelBuffer API in the callback
function by setting the viewport appropriately: (0, 0, 1000, 500) or
(0, 500, 1000, 500), depending on which half of the buffer was written.
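
Roughly, the JavaFX side of this looks like the sketch below. It is a
simplified illustration, not the actual demo code: the backing buffer is
allocated in Java here for brevity (in the demo the memory came from the
native allocation), the pixel format is assumed to be BGRA pre-multiplied,
onNativeFrameReady() is a hypothetical stand-in for whatever JNI
notification your native renderer provides, and switching the ImageView
viewport is one way to realize the viewport setting described above.

import java.nio.ByteBuffer;

import javafx.application.Platform;
import javafx.geometry.Rectangle2D;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelBuffer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.WritableImage;

public class TwoHalvesPixelBuffer {

    static final int WIDTH  = 1000;
    static final int HEIGHT = 500;

    // One contiguous buffer holding both halves (1000x1000 BGRA pixels).
    // In the real setup this memory is shared with the native renderer.
    private final ByteBuffer backing =
            ByteBuffer.allocateDirect(WIDTH * 2 * HEIGHT * 4);

    private final PixelBuffer<ByteBuffer> pixelBuffer =
            new PixelBuffer<>(WIDTH, 2 * HEIGHT, backing,
                              PixelFormat.getByteBgraPreInstance());

    private final ImageView view = new ImageView(new WritableImage(pixelBuffer));

    // Hypothetical notification from the native renderer (e.g. via JNI):
    // half == 0 means the top 1000x500 region was written, half == 1 the bottom.
    public void onNativeFrameReady(int half) {
        Rectangle2D region = new Rectangle2D(0, half * HEIGHT, WIDTH, HEIGHT);
        Platform.runLater(() -> pixelBuffer.updateBuffer(pb -> {
            view.setViewport(region); // show only the freshly written half
            return region;            // report that half as the dirty region
        }));
    }

    public ImageView getView() {
        return view;
    }
}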

This concept could even be extended to more than two buffers.
The key point is that the allocated memory must be contiguous.
