List:       hurd-bug
Subject:    Re: Sound support for Hurd and shortfalls of ALSA
From:       Damien Zammit <damien () zamaudio ! com>
Date:       2020-07-09 5:54:56
Message-ID: 1efef5aa-e29c-0c3d-7aa2-50923ff3e557 () zamaudio ! com

On 29/6/20 7:52 pm, Ricardo Wurmus wrote:
> What is the API provided to user applications?  Or would it be enough to
> add support for this new API to JACK, so that all audio applications
> using JACK automatically benefit from this?

I'm not sure whether it would be best to use the NetBSD implementation of the
SunOS audio stack, with some modifications to provide a sub-sample accurate
buffer pointer, via a userspace rump interface. This would allow JACK to run,
since it already has Sun audio support. But would it really be a good design?
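
For reference, this is roughly what a playback client looks like against the
Sun audio API that NetBSD inherited, assuming a rump server exposed the device
as /dev/audio (the device path and parameters are only illustrative, and error
handling is abbreviated):

    /* Sketch of a Sun-audio playback client; assumes /dev/audio exists. */
    #include <sys/audioio.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main (void)
    {
      audio_info_t info;
      static short buf[48000 * 2];    /* one second of 16-bit stereo silence */
      int fd = open ("/dev/audio", O_WRONLY);
      if (fd < 0)
        return 1;

      AUDIO_INITINFO (&info);         /* mark every field as "unchanged" */
      info.play.sample_rate = 48000;
      info.play.channels = 2;
      info.play.precision = 16;
      info.play.encoding = AUDIO_ENCODING_SLINEAR_LE;
      if (ioctl (fd, AUDIO_SETINFO, &info) < 0)
        return 1;

      write (fd, buf, sizeof buf);    /* the classic push model: write() */
      close (fd);
      return 0;
    }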

I am suggesting that we put the audio driver stack completely in userspace,
but with a completely redesigned server API (one that can be implemented as a
JACK backend, so we maintain compatibility with JACK applications). We need to
start from the audio driver side, and we can, because we are not restricted by
what ALSA provides.
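
On the Hurd, such a userspace server would naturally be a translator. Just to
fix ideas, a minimal server skeleton along the lines of the trivfs example in
the Hurd Hacking Guide would look roughly like this (the audio RPCs themselves
are exactly the part that still needs to be designed):

    /* Bare trivfs translator skeleton; no audio logic yet. */
    #include <hurd/trivfs.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <error.h>

    int trivfs_fstype = FSTYPE_MISC;
    int trivfs_fsid = 0;
    int trivfs_support_read = 0;
    int trivfs_support_write = 0;
    int trivfs_support_exec = 0;
    int trivfs_allow_open = O_READ | O_WRITE;

    void
    trivfs_modify_stat (struct trivfs_protid *cred, io_statbuf_t *st)
    {
    }

    error_t
    trivfs_goaway (struct trivfs_control *cntl, int flags)
    {
      exit (EXIT_SUCCESS);
    }

    int
    main (void)
    {
      error_t err;
      mach_port_t bootstrap;
      struct trivfs_control *fsys;

      task_get_bootstrap_port (mach_task_self (), &bootstrap);
      if (bootstrap == MACH_PORT_NULL)
        error (1, 0, "must be started as a translator");

      err = trivfs_startup (bootstrap, 0, 0, 0, 0, 0, &fsys);
      if (err)
        error (2, err, "trivfs_startup");

      ports_manage_port_operations_one_thread (fsys->pi.bucket,
                                               trivfs_demuxer, 0);
      return 0;
    }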

> Enthusiastic implementation in the Hurd is unlikely if adoption required
> active support from audio application developers.

I don't want to reinvent more inferior wheels, but Paul Davis
(linuxaudiosystems) has said that the biggest problem with audio on GNU/Linux
is that there is no single unified API for the audio subsystem that all user
applications use. We need a pull model at the lowest level, just like the DRM
video subsystem: something like a framebuffer interface for audio. Can you
imagine trying to use the video subsystem via "open()", "read()" and "write()"
calls? In Paul's words, "... We don't do this for video, why do we do it for
audio?" I need help to design this; I don't have all the know-how by myself.
But if we have the right design, I'm sure we can move forward and create
something wonderful for audio on GNU/Hurd.
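
To be concrete about what a pull model means for the application: with JACK,
the server calls the client back whenever it wants audio, and the client
never issues read() or write() itself. A minimal client against the real JACK
C API (it just plays silence; error handling is abbreviated):

    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *out_port;

    /* The server pulls nframes of audio from us when it needs them. */
    static int
    process (jack_nframes_t nframes, void *arg)
    {
      jack_default_audio_sample_t *buf =
        jack_port_get_buffer (out_port, nframes);
      memset (buf, 0, nframes * sizeof (*buf));   /* silence */
      return 0;
    }

    int
    main (void)
    {
      jack_client_t *client =
        jack_client_open ("pull-demo", JackNullOption, NULL);
      if (client == NULL)
        return 1;

      jack_set_process_callback (client, process, NULL);
      out_port = jack_port_register (client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                     JackPortIsOutput, 0);
      jack_activate (client);
      sleep (10);                 /* let the server pull audio for a while */
      jack_client_close (client);
      return 0;
    }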

A summary from Paul's 2009 slides includes the following suggestions for the
server API (a hypothetical sketch of such an interface follows the list):

>>>
    Formulating a server API that handles:

    - Data format, including sample rate
    - Signal routing
    - Start/stop
    - Latency inquiries
    - Synchronization
    - A server daemon that handles device interaction

    Details of timing considerations:

    - Capture and communication of best-possible timestamps of sound buffer positions
    - Communication of the latency/buffering sizes to userspace
    - Driver <-> userspace audio communication (e.g. an "audio framebuffer"
      interface instead of read()/write())
    - Communicating audio data
    - Communicating timing information
    - Communicating control flow
<<<
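
To make that concrete, here is a purely hypothetical C sketch of what such an
interface could cover; every type and function name below is invented for
discussion, and nothing like it exists yet:

    /* Hypothetical "audio framebuffer" client API; all names invented. */
    #include <stdint.h>

    typedef struct afb_format
    {
      uint32_t sample_rate;       /* frames per second */
      uint16_t channels;
      uint16_t bits_per_sample;
    } afb_format_t;

    typedef struct afb_position
    {
      uint64_t frames;            /* frames played since afb_start() */
      uint64_t timestamp_ns;      /* best-possible timestamp of that frame */
    } afb_position_t;

    /* Data format, including sample rate. */
    int afb_set_format (int stream, const afb_format_t *fmt);
    /* Signal routing: connect a client port to a device channel. */
    int afb_route (int stream, int client_port, int device_channel);
    /* Start/stop. */
    int afb_start (int stream);
    int afb_stop (int stream);
    /* Latency inquiries: total frames between the ring and the speaker. */
    int afb_latency (int stream, uint32_t *frames);
    /* Synchronization: map a shared ring instead of read()/write(), and
       read timestamped buffer positions out of it. */
    void *afb_map_ring (int stream, uint32_t *ring_frames);
    int afb_get_position (int stream, afb_position_t *pos);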

Damien

