List:       kde-multimedia
Subject:    Re: KDE2.1 multimedia planning
From:       Stefan Westerfeld <stefan () space ! twc ! de>
Date:       2000-10-30 21:00:24

   Hi!

On Sun, Oct 29, 2000 at 02:16:15AM +0200, Antonio Larrosa wrote:
> Stefan Westerfeld wrote:
> > 1.2 Pluggable effects
> > 
> > aRts allows much more than vanilla playback. Filters can be used to affect
> > how things should sound. Currently, this feature has been mostly unavailable
> > in the media player(s), mainly for two reasons:
> > 
> >  1. there were not too many effects
> >  2. binding a GUI to effects was not-implemented-yet
> > 
> > The fix for (1) seems obvious: write more effects. One starting point could
> > be porting the FreeVerb code to the aRts architecture.
> > 
> > The fix for (2) is less obvious. The problem is that a way needs to be found
> > for connecting GUI objects (which run in the player process) to the effects
> > (which run in the sound server process). The clean way to do this is to
> > extend MCOP to provide a signals & slots technology.
> > 
> 
> I'd like to propose doing an important change in aRts, and let it link
> to DCOP.
> 
> [...]
>
> Please don't reinvent the wheel. There are reasons to use mcop
> for large data transfers, but signals between apps are a job
> for dcop.

I've thought a bit about it, and I have a somewhat different view on things.
You are right that aRts/MCOP currently does things differently from the way
you would do them in a pure KDE app. This means

 * it uses its own object model (as opposed to the Qt object model)
 * it uses STL instead of QTL/QString/QList and friends
 * it uses well-defined marshalling (not Qt serialization)
 * it uses its own IO subsystem (as opposed to QSocketNotifiers)

and maybe some more. The fact is that it already does these things,
and that I am quite happy with it.
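
To make the difference concrete, here is a minimal sketch of the general
shape such an object model takes: reference-counted objects, CORBA-like
smart references, and plain STL types instead of Qt ones. All names here
are made up for illustration; this is not the actual MCOP API:

// Minimal sketch, all names hypothetical (not the real aRts/MCOP API):
// a reference-counted base class plus a smart reference, which is the
// general shape a CORBA-like object model takes without QObject/moc.
#include <string>
#include <vector>
#include <iostream>

class RefCounted {
    long refs;
public:
    RefCounted() : refs(1) {}
    virtual ~RefCounted() {}
    void _copy()    { ++refs; }
    void _release() { if (--refs == 0) delete this; }
};

// An "interface" in plain C++, using STL types instead of QString/QList.
class Effect : public RefCounted {
public:
    virtual std::string name() = 0;
    virtual void process(std::vector<float> &samples) = 0;
};

class GainEffect : public Effect {
    float gain;
public:
    GainEffect(float g) : gain(g) {}
    std::string name() { return "gain"; }
    void process(std::vector<float> &samples) {
        for (size_t i = 0; i < samples.size(); i++) samples[i] *= gain;
    }
};

// Smart reference: copies and releases the underlying object
// automatically, the way generated stubs usually do.
class EffectRef {
    Effect *obj;
public:
    EffectRef(Effect *o) : obj(o) {}
    EffectRef(const EffectRef &r) : obj(r.obj) { obj->_copy(); }
    ~EffectRef() { obj->_release(); }
    Effect *operator->() { return obj; }
private:
    EffectRef &operator=(const EffectRef &);  // not needed for the sketch
};

int main() {
    EffectRef e(new GainEffect(0.5f));
    std::vector<float> buf(4, 1.0f);
    e->process(buf);
    std::cout << e->name() << ": " << buf[0] << std::endl;
    return 0;
}

The point is simply that none of this needs moc or a QObject base class,
and the same structure can be generated from IDL for other language
bindings as well.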

The object model is built much more towards IPC (reference counting, CORBA-
like stubs) than the one Qt offers. The STL is nice, and the language
bindings build quite nicely upon it. The marshalling has a proper formal
definition and foundation, as opposed to Qt-style marshalling (which is
write-whatever-you-can-and-read-it-back-again-sometime). Having a custom IO
subsystem leaves room for later optimization towards multimedia (e.g. timer
prioritization, multithreaded scheduling, ...), which is much easier than
changing the deep secrets of Qt.
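
To illustrate what I mean by a proper formal definition of the marshalling,
here is a tiny sketch with hypothetical helpers (not the real MCOP code):
every type gets exactly one documented encoding, so any implementation can
read back what any other wrote:

// Sketch of a fixed wire format, hypothetical helper names:
// long   = 4 bytes, most significant byte first
// string = a long with the length, followed by the raw bytes
#include <string>
#include <vector>
#include <iostream>

typedef std::vector<unsigned char> Buffer;

void writeLong(Buffer &b, unsigned long v) {
    b.push_back((unsigned char)((v >> 24) & 0xff));
    b.push_back((unsigned char)((v >> 16) & 0xff));
    b.push_back((unsigned char)((v >> 8) & 0xff));
    b.push_back((unsigned char)(v & 0xff));
}

unsigned long readLong(const Buffer &b, size_t &pos) {
    unsigned long v = ((unsigned long)b[pos] << 24)
                    | ((unsigned long)b[pos + 1] << 16)
                    | ((unsigned long)b[pos + 2] << 8)
                    | b[pos + 3];
    pos += 4;
    return v;
}

void writeString(Buffer &b, const std::string &s) {
    writeLong(b, s.size());
    for (size_t i = 0; i < s.size(); i++) b.push_back(s[i]);
}

std::string readString(const Buffer &b, size_t &pos) {
    unsigned long len = readLong(b, pos);
    std::string s((const char *)&b[pos], len);
    pos += len;
    return s;
}

int main() {
    Buffer b;
    writeString(b, "freeverb");              // marshal two parameters
    writeLong(b, 44100);

    size_t pos = 0;
    std::string name = readString(b, pos);   // any peer can unmarshal them
    unsigned long rate = readLong(b, pos);   // again, the format is fixed
    std::cout << name << " " << rate << std::endl;
    return 0;
}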

And it is consistent in itself, and keeping it consistent from the point
where we are now is quite feasible.

Now, you can of course start mixing and tweaking together the different
IPC concepts and philosophies in aRts. Add DCOP somewhere, at some point.
Add the Qt object model, moc and such. Add Qt data types. Even so, you will
be bound in doing this by restrictions like binary compatibility with 2.0.

Rewriting everything from scratch on top of Qt is an option, but ... well,
... do it if you like, but keep me out of that for now. I have already done
my CORBA -> MCOP rewrite, and I would be more than happy to actually *do
something* with the clean and consistent result rather than start all over
again.

I am quite convinced that adding signals and slots consistent with the-MCOP-
way-of-doing-things is easier and will produce a better result than
mixing in DCOP in a very undefined way. I have done some experiments
with the pure MCOP way. I think that if you want to provide good technical
reasons, you would need to do some practical experiments to prove that mixing
DCOP signals and Qt objects into the MCOP context can be done elegantly.
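
To show the direction I am experimenting in, here is a purely illustrative
sketch with made-up names (the real thing would of course route the emission
through the MCOP dispatcher and marshalling layer instead of a direct call):
the effect side owns a signal, the GUI side registers a slot, and the
connection is handled without moc or QObject:

// Purely illustrative, made-up names, no real MCOP calls: the general
// shape of a signal/slot connection that does not depend on moc.
#include <vector>
#include <iostream>

// Slot: anything that can receive a float parameter change.
class Slot {
public:
    virtual ~Slot() {}
    virtual void invoke(float value) = 0;
};

// Signal, as it might live inside the effect (sound server process).
// In the real system emitValue() would go through the marshalling
// layer; here it calls the slots directly to keep the sketch runnable.
class Signal {
    std::vector<Slot *> slots;
public:
    void connect(Slot *s) { slots.push_back(s); }
    void emitValue(float v) {
        for (size_t i = 0; i < slots.size(); i++) slots[i]->invoke(v);
    }
};

// GUI-side slot: would normally update a widget, here it just prints.
class VolumeLabel : public Slot {
public:
    void invoke(float value) {
        std::cout << "volume changed to " << value << std::endl;
    }
};

int main() {
    Signal volumeChanged;           // lives with the effect
    VolumeLabel label;              // lives with the player GUI
    volumeChanged.connect(&label);
    volumeChanged.emitValue(0.8f);  // effect reports a change, GUI reacts
    return 0;
}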

> > If - for instance - Gnome were to start using aRts, having it mixed into
> > the CVS and into the packaging with KDE is probably a bad idea.
> 
> Are you sure they're going to use artsd and not their own server ?

Well, currently you can only try astrology to answer such questions ;).

> > aRts in the CVS already provides realtime midi synthesis. You can do midisend,
> > and run instruments in artsbuilder. You can also combine it with Brahms or
> > artstracker to compose songs. The problem with it is that unless you are a
> > rocket scientist and study the code or the collected READMEs for a while, you
> > will probably not manage to do it. That has to change.
> > 
> > A good start would be providing an instrument manager, where you can
> > graphically assign which instrument each midi channel should get, without
> > bothering about the details.
> > 
> 
> If alsa is doing this, I don't see any need to put midi synth
> very high on the TODO list; better to put real midi support there.

Having started from the code base of a modular synthesizer, and having most
of the structure and know-how in place, it would be silly not to offer midi
synthesis again in the future.
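
Just to illustrate how small the instrument manager idea mentioned above is
at its core, here is a made-up sketch (not existing code): essentially a
mapping from midi channel to instrument, which a small GUI would let you
edit:

// Made-up sketch: the core of an instrument manager is just a mapping
// from midi channel to instrument, which a small GUI would let you edit.
#include <map>
#include <string>
#include <iostream>

int main() {
    std::map<int, std::string> instrumentForChannel;
    instrumentForChannel[0] = "organ";
    instrumentForChannel[9] = "drumkit";

    std::map<int, std::string>::iterator i;
    for (i = instrumentForChannel.begin(); i != instrumentForChannel.end(); ++i)
        std::cout << "channel " << i->first << " -> " << i->second << std::endl;
    return 0;
}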

I definitely don't think ALSA is doing what I have in mind. It does
midi, but to be comparable with CuBase/VST or AudioLogic, apps like Brahms
or Anthem will have to build on technology like aRts (which is mostly
modular audio processing and synthesis) in addition to pure midi output.
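
A miniature, made-up example of what "building on modular audio processing
and synthesis" means in practice (hypothetical module classes, nothing from
the aRts sources): a sequencer note event only becomes sound once it drives
a chain of synthesis modules:

// Hypothetical module classes, nothing from the aRts sources: a note
// event only becomes sound once it drives a chain of audio modules.
#include <vector>
#include <cmath>
#include <iostream>

class Module {
public:
    virtual ~Module() {}
    virtual void calculate(std::vector<float> &buf) = 0;
};

class SineOsc : public Module {
    float freq, phase;
public:
    SineOsc(float f) : freq(f), phase(0) {}
    void calculate(std::vector<float> &buf) {
        for (size_t i = 0; i < buf.size(); i++) {
            buf[i] = std::sin(phase);
            phase += 2 * 3.14159265f * freq / 44100.0f;
        }
    }
};

class Gain : public Module {
    float g;
public:
    Gain(float gain) : g(gain) {}
    void calculate(std::vector<float> &buf) {
        for (size_t i = 0; i < buf.size(); i++) buf[i] *= g;
    }
};

int main() {
    // a "note on" for A4 at velocity 100 becomes an oscillator plus a
    // gain stage; a real synth would add envelopes, filters, effects ...
    SineOsc osc(440.0f);
    Gain vel(100 / 127.0f);

    std::vector<float> buf(64);
    osc.calculate(buf);   // synthesis
    vel.calculate(buf);   // velocity scaling
    std::cout << "second sample: " << buf[1] << std::endl;
    return 0;
}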

In fact, the audio processing part is getting more and more important
in the "virtual studio" idea that recent developments follow, and I think
aRts is in a good position to develop in that direction.

> I miss a section of "optimizations on 2.1" :)

Oh yes, I will add it. In fact, aRts is already much better in quite a
few respects than it was in KDE2.0. ;)

   Cu... Stefan
-- 
  -* Stefan Westerfeld, stefan@space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-         
_______________________________________________
Kde-multimedia mailing list
Kde-multimedia@master.kde.org
http://master.kde.org/mailman/listinfo/kde-multimedia
