List:       kde-devel
Subject:    Re: aKtion! [was: Re: empath]
From:       Stefan Westerfeld <stefan@space.twc.de>
Date:       2000-01-06 19:00:00

   Hi!

On Thu, Jan 06, 2000 at 07:50:25AM -1000, Greg Lee wrote:
> ...
> > > I'm not really happy with the way that KDE multimedia is shaping up for
> > > KDE 2.0.  We discussed lots of advances at KDE II but none of them have
> > > really occurred.
> ...
> > There are of course people who work on various other multimedia projects,
> > like Martin Vogt "kmp3/yaf-lib", Christian Esken "knotify", Antonio
> > Larrossa "kmid", Greg Lee "kmidi" or Jan Wuerthner "Brahms".
> > 
> ...
> > I hope to come up with a primitive-but-working soundserver like esd implemented
> > using the new MCOP code in some time.
> 
> I wouldn't like kmidi to be obsoleted by the new soundserver.
> There are a couple of issues I'm worried about.  Kmidi now calls
> the system soundcard driver to find out what sample is currently
> being played and also to find out how much data the driver has
> buffered up.  This is necessary to synchronize sound with
> visual display and also to decide whether to economize on
> computations in case kmidi is falling behind.  I hope I can
> still get this information if I have to work through an
> intermediate driver.

KMidi seems to be a really great way of creating nice output for MIDI
data, even if you only have a $20 soundcard. What I could imagine is that
people who are playing a game, for instance, may want to listen to the
kmidi-rendered output while they still get the sound effects from the
game.

Or, for instance, musicians (like me) who are desperately looking forward
to being able to use kmidi as an "output port" in Brahms (which is sequencing
software), right alongside synthesized instruments from aRts and real MIDI
instruments (via external MIDI).

It would be soo great!!

So I certainly don't want to obsolete anything - just to use the code that
is already there in other application contexts. This is why I am working
on this multimedia stuff.

So now to the technical issues:

I am not sure how kmidi works internally, but would it be possible to split
it into a component which receives (timestamped) MIDI data and outputs a
continuous stream of samples? And could this component be separated from the
rest in such a way that you can still do your GUI updates, etc.?
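
Just to illustrate the kind of split I have in mind (a rough C++ sketch
only - this is not real MCOP or kmidi code, all names are made up):

  class MidiRenderer {
  public:
      virtual ~MidiRenderer() {}

      // accept one timestamped midi event (time in ms, raw midi bytes)
      virtual void processEvent(long time, unsigned char status,
                                unsigned char data1, unsigned char data2) = 0;

      // render the next block of samples; called regularly by the server
      virtual void calculateBlock(float *left, float *right,
                                  unsigned long samples) = 0;

      // things the GUI could query for its display updates
      virtual long currentTime() = 0;
  };

The GUI part (piano roll, lyrics, whatever) would then stay in the kmidi
process and only talk to such an object over MCOP.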

You could still communicate with it over MCOP, and I assume it must be
possible to make this fast enough... after all, the X11 server and your
application are also not in the same process space, and you only need
to do screen updates 50 times per second at most.

If that were possible, the kmidi playing component could live inside the
sound server, which would run at real-time priority and contain other sound
facilities such as software synthesis, game sounds, effect processing and
I/O. This architecture guarantees that no matter how much you stress your
desktop, sound always gets priority (and never breaks).
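
By "real-time priority" I mean something like the following on Linux (this
is plain POSIX scheduling, nothing aRts-specific, and it needs root or a
setuid helper):

  #include <sched.h>
  #include <stdio.h>

  // put the calling process into the SCHED_FIFO class, so it preempts
  // ordinary desktop processes whenever it has samples to compute
  void enableRealtimeScheduling()
  {
      struct sched_param param;
      param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;

      if (sched_setscheduler(0, SCHED_FIFO, &param) == -1)
          perror("sched_setscheduler");
  }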

So far I have not thought about measuring how much the driver has buffered
in order to reduce CPU usage (mainly because I always work with 10 ms
buffers, for real-time behaviour), but I am sure that if you really need it,
we can find a way to make it possible inside the sound server as well.
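
For reference, this is the kind of information you can get from the OSS
driver today (I am assuming kmidi talks to OSS - I haven't checked its
code); the sound server would have to export something equivalent:

  #include <sys/ioctl.h>
  #include <sys/soundcard.h>

  // free space in the driver's output buffer - lets you decide whether
  // you are falling behind and should economize on computations
  int freeFragments(int dspFd)
  {
      audio_buf_info info;
      if (ioctl(dspFd, SNDCTL_DSP_GETOSPACE, &info) == -1)
          return -1;
      return info.fragments;
  }

  // how many bytes the driver has played so far - lets you synchronize
  // the visual display with what is actually audible
  int playedBytes(int dspFd)
  {
      count_info ci;
      if (ioctl(dspFd, SNDCTL_DSP_GETOPTR, &ci) == -1)
          return -1;
      return ci.bytes;
  }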

The other option (besides making a kmidi rendering component) might be that
kmidi stays as it is, but sends its output as a stream to the sound server.
But this is time-critical and doesn't give you the benefits of, for instance,
using it as a backend for Brahms.
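
A minimal sketch of that streaming variant (the fd and the sample format
are of course made up here; the actual protocol would have to be defined):

  #include <unistd.h>

  // kmidi keeps its own rendering loop and just pushes finished blocks
  // of samples to the sound server over a socket or pipe
  void pushBlock(int serverFd, const short *samples, size_t count)
  {
      const char *p = reinterpret_cast<const char *>(samples);
      size_t left = count * sizeof(short);

      while (left > 0) {
          ssize_t n = write(serverFd, p, left);  // blocks if the server lags
          if (n <= 0)
              break;
          p += n;
          left -= n;
      }
  }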

   Cu... Stefan
-- 
  -* Stefan Westerfeld, stefan@space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-
