
List:       kde-core-devel
Subject:    Re: aRts as KDE2.0 audio server
From:       Stefan Westerfeld <stefan () space ! twc ! de>
Date:       1999-09-11 20:19:10

   Hi!

On Wed, Sep 08, 1999 at 06:32:26PM +0200, Martin Vogt wrote:
> On Wed, Sep 08, 1999 at 12:59:13AM +0200, Stefan Westerfeld wrote:
> > On Tue, Sep 07, 1999 at 10:26:38PM +0200, Martin Vogt wrote:
> > > ok here the new results, with the "example_mixer_nosat.arts" 
> > > This looks better :)
> > > But still it uses :
> > > 
> > > 19288 m_vogt    15   0  6512 6512  3464 R       0 21.4 10.3   0:10 artsserver.b
> > > 19297 m_vogt     8   0  1044 1044   824 R       0  3.6  1.6   0:00 top
> > >  
> > > 20 % of my cpu, when nothing is played (!).
> > 
> > Well, yes - I think I need to implement a "power saving" function or
> > something like that, which does disable unused components of a flow
> > graph. I will probably work on a special low CPU usage only audio
> > server set of structures/modules, which should bring down the CPU load
> > quite a bit further.
> >
> This is a must.
> It's unacceptable to use 20 % of CPU when nothing is being done.
It's due to the mixer stuff, which was designed for real audio mixers
with millions of buttons (like the ones in studios), which are meant to
run all the time. However, since ESD has no means to set the volume of an
incoming stream anyway, to be fair we should probably do the benchmarking
without any kind of "processing" besides simple 1:1 mixing of all
channels.

> > By the way: standard mixing behaviour... what would be good here? I know
> > that you would feel better if calling artsmp3 just started playing,
> > without any further user interaction. I also know you want dynamically
> > added mixer channels (instead of statically configured ones).
> >
> This is a must as well.
> You don't need a mixer for one channel at all.
> (I think this is the case for 99% of the time)
> 
> But you need one if you have more than one channel.

Well - yes - busses do just that. You don't need the mixer at all - you
can simply assign everything to the same channel and it will get mixed
anyway.

> > So the question arises: how loud shall the first mp3 be played? 100%?
> > 50%? What should happen if 16 bit are not enough to do all the output?
> > Clipping? Saturation filter?
> 
> For one channel the setVolume call goes directly to the mixer.
> For more, it must be done in software.
I won't implement that in aRts - you can't combine hardware mixer control
with arbitrary software signal processing and integrate the two cleanly.

If somebody wants to use his hardware mixer to achieve a task (he might
for instance be able to replay two streams with his soundcard, having
two output stereo channels for that), he should use kmix. If he wants to
have it handled in software, he should use the controls in aRts. If he
doesn't need a "nice" software mixer emulation, there is no need to
provide one - aRts can do 1:1 mixing just as esd does.

> Btw: ESD can't do this. You have no setVolume call to give
> the daemon a hint how loud this sample should be.
> 
> ESD has a few other shortcomings as well (it cannot reinitialize
> streams, because the settings of frame size, channels and rate are
> tightly bound to the socket open call).
> 
> 
> The problem of having different sample sizes (e.g. mono 8 bit and
> stereo 16 bit) with different frequencies, well.
> BeOS can't handle this. ESD can't handle this.
> Does anyone know what Windows does in this case (crash?)
> 
> I still think:
> 
> audioserver != multimedia framework.
> 
> Why is it not possible to use esd _and_ the CORBA interface together?
> 
> I would prefer a small, tested audioserver which does no more
> than take streams, mix them and pass them to /dev/dsp.

My opinion is:

IF you have an audio server (or dsp wrapper), it should be possible to
achieve all tasks with that. After all, if you have only one application
which isn't able to use it, you'll always need to start and stop it, and
while that one application is running, you won't have an audio server at
all.

So an audio server is only a good audio server if you can achieve every
task with it that you could achieve without an audio server as well. Of
course, the audio server may require you to do the task differently
(e.g. use a special API).

The problem is that esd excludes the whole range of realtime applications.
aRts in its current form can synthesize just in time what I play on the
keyboard; it couldn't if it sent its output to esd. KHdRec can do harddisk
recording; it couldn't when sending its output to esd.

Sure - there are not many "real" audio applications for linux yet, only
some mp3 players and some trackers.

But I want them to be there for linux in the future, and so I can't accept
esd as audio server.

Of course this argument only works if everybody adapts their
application to work with a complete framework solution like an aRts
based KDE multimedia framework. You gain nothing from avoiding esd
if nobody uses the better alternative instead.

> The multimedia framework (with the whole CORBA overhead) is separate.
> The framework should support "remote decoding of e.g. mp3", maybe have
> these saturation filters (what do they actually do?) and maybe
The saturation filter tries to avoid clipping by distorting the signal
when it is too loud (softening the peaks). While I think it is useful
for music applications, where you can never know how many voices you'll
have and how loud they might be, you probably don't need that kind of
effort when you are just replaying one mp3, for instance (you know how
high the maximum peaks can get anyway).

> have some other filters as well (eg fft for these nice winamp analysis)
> But then the framework sends the data (finally) to the audioserver.

Well, I can't see how this separation could be done while still allowing
all multimedia apps. If you have a better solution than mine, please tell
me.

> I don't like the idea to use CORBA only to play a simple wav file.

Other people might not like the idea of using CORBA just to open a jpg
file with Konqueror - but as far as I know, it will be done.

Now to the rational reasons why one might not want to use CORBA:

- you may think it is slower (in the CPU sense)

  as CORBA is NOT used for signal transmission, there is no reason why a
  CORBA based audio server should be slower than a non-CORBA based one

- you may think it's more complicated

  as you can write wrapper libs for CORBA just as for esd internals, I see
  no reason why this should be the case

- you may think it uses more memory

  I agree here - there have been no benchmarks on that yet, but before
  you start doing them, I'd like to have aRts running with tinymico (that
  is, with ministl)


So I personally can't see any valid reason why you wouldn't want to use
CORBA. Sure - aRts is a lot more complex than esd - but that's not due
to CORBA, but because a multimedia framework *IS* more complicated to
achieve than a simple program which mixes some data in a fixed manner.
In fact, I think I could write the latter in a week and it wouldn't be
worse than esd, while the former took me two years (well, not
uninterrupted work, to be fair) and isn't finished yet.

   Cu... Stefan
-- 
  -* Stefan Westerfeld, stefan@space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-

