Re: RFC: GNOME 2.0 Multimedia strategy



   Hi!

On Fri, May 18, 2001 at 10:35:56PM +0200, Christian Schaller wrote:
> During GUADEC 2 there was some discussion about what we should do with
> multimedia support in regards to GNOME 2.0. The general consensus is
> that ESD has to go, and there was also a leaning towards replacing it
> with artsd in order to both get a better soundserver and to have
> increased compatibility with KDE. In this document I will try to outline
> why I think that will be a too limited approach. I would like to point
> out that I have personally been trying to help out with GStreamer for
> some time now, so I could be perceived as biased :)
> 
> 
> Why is this important to decide now:
> 
> Due to the limited time until the GNOME 2.0 freeze we need to quickly
> finalize the strategy for Multimedia in GNOME 2.0 in order for audio and
> video applications developers to be able to port their applications in
> time.

I see that it is important to decide the basic strategy now, so let me add
my point of view to the debate as well. I was at GUADEC and talked to a lot
of people about the sound server issue (having implemented aRts myself). Note
that I'll use the term sound server throughout, although artsd also handles
MIDI and video, so it could just as well be called a media server.

Anyway, here is my view of the problem:


* Why do you want a sound server?

Sound servers are generally considered useful because they share the user's
hardware - network transparently - between different applications. That is,
you can run applications concurrently and let them share sound input/output.

* Why do you want only ONE sound server?

As the sound server itself uses the hardware and exports it over the network,
having two sound servers will result in incompatibilities. Imagine that KDE
and GNOME (just as an example) used a K display server and a G display
server, running - at best - on different virtual consoles. Imagine the pain
for the user. That is not what you want to achieve.

* Should a sound server do more than mixing?

Yes, I think it should. Just as X11 applications have a variety of ways in
which they interact (overlapping windows, copy & paste, drag & drop, virtual
desktops, transparency), and a variety of ways in which they achieve
performance through intelligence on the server (3D rendering, the X-render
extension, accelerated pixmap operations), sound - or rather media -
applications have similar needs.

A sequencer could send data to a synthesizer. An effect could operate on the
server. A game could add plugins to the server to do 3D spatialization right
there and save latency. Latency in particular is the critical thing you want
to minimize, and you can do that by running latency-critical operations on
the server rather than in the client.

* What is artsd?

artsd is the sound server used by KDE. It is based on the aRts technology,
which allows component-oriented media development in a similar way to how
Bonobo allows component-oriented application development. aRts is implemented
in C++, but has no further dependencies.

aRts components are network transparent, support streaming, and so on. aRts
is successfully used in music applications (e.g. Brahms), but can also be
used to play audio or video files.
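
For applications that don't care about the component framework, artsd also
ships a small plain C client library (artsc). A minimal sketch of pushing
PCM data through it could look roughly like this - I'm writing it from
memory, so check the actual artsc.h before copying, and link with whatever
artsc-config reports:

    #include <artsc.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        short buf[4096];
        arts_stream_t stream;
        int err, i;

        if ((err = arts_init()) != 0) {        /* connect to artsd */
            fprintf(stderr, "arts_init: %s\n", arts_error_text(err));
            return 1;
        }

        /* 44.1 kHz, 16 bit, mono output stream named "example" */
        stream = arts_play_stream(44100, 16, 1, "example");

        for (i = 0; i < 4096; i++)             /* short 440 Hz burst */
            buf[i] = (short)(32000 * sin(2 * M_PI * 440 * i / 44100));

        arts_write(stream, buf, sizeof(buf));  /* count is in bytes */

        arts_close_stream(stream);
        arts_free();
        return 0;
    }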

* Do all applications require a media framework?

No, in fact a lot of applications are happy with the open() write() close()
scheme for sound and don't want to bother with details. Those include games,
command line players (mpg123, timidity, ...) and lots of other apps. Although
some of them would benefit from being reimplemented on top of a media
framework, that code isn't going to cease to exist, and often just keeping it
as it is will be a good option.
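
To make the open() write() close() scheme concrete, this is roughly what
such an app does today on plain OSS (a sketch - error handling and
fragment/buffer tuning omitted):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        short silence[8192] = { 0 };
        int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
        int fd = open("/dev/dsp", O_WRONLY);

        if (fd < 0)
            return 1;

        /* negotiate format, channel count and sample rate */
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        write(fd, silence, sizeof(silence));   /* push PCM data */
        close(fd);
        return 0;
    }

The point is exactly that this kind of code knows nothing about any server,
which is why a thin portability layer suits it better than a full framework.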

* What is CSL?

CSL (Common Sound Layer) is a project by Tim Janik and me to allow
device-independent sound output. Currently, a lot of applications implement
their own wrappers around OS-specific interfaces - open() write() close() for
AIX, Solaris, OSS, plus aRts, ALSA, ESD, ... CSL is an attempt to unify these
behind one simple-to-use API.

It will also offer a simple play sample "foo.wav" API call.
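
I won't paste the actual header here; the flavour we are aiming for is
roughly the following. (The identifiers below are made up for this mail just
to show the idea - do not take them as the real CSL entry points.)

    /* Illustration only: invented names, not the actual CSL API. */
    #include <csl/csl.h>                  /* hypothetical header */

    int main(void)
    {
        /* the one-liner for apps with casual needs */
        csl_play_file("foo.wav");

        /* or, for apps that stream PCM themselves: open an output
         * stream and let the library pick OSS, ALSA, aRts, ESD, ...
         * at runtime */
        CslStream *out = csl_stream_open_output(44100, 16, 2);
        /* csl_stream_write(out, buffer, n_bytes); ... */
        csl_stream_close(out);
        return 0;
    }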



So... what to do with GStreamer? At GUADEC, it seemed that aRts and
GStreamer fulfill complementary needs. GStreamer is good at byte streaming,
aRts is good at network transparency. aRts is good at music and low latency,
GStreamer is good at video. aRts is very client/server oriented, whereas
GStreamer is very multithreading oriented.

I can imagine quite a few applications where a sane combination of both
fills the application's needs (which could mean sinking data via the
gst_arts sink at one end, or running a GStreamer decoder inside artsd at
the other).
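
To sketch the first of those combinations: a player that decodes with
GStreamer and sinks its output into artsd could be wired up along these
lines. (Element names like "mad" and "artsdsink" and the exact calls are
from memory and may not match whatever GStreamer version you have, so read
it as an illustration, not as working code.)

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *pipeline, *src, *decoder, *sink;

        gst_init(&argc, &argv);

        pipeline = gst_pipeline_new("player");
        src      = gst_element_factory_make("filesrc",   "source");
        decoder  = gst_element_factory_make("mad",       "decoder");
        sink     = gst_element_factory_make("artsdsink", "output");

        g_object_set(G_OBJECT(src), "location", "foo.mp3", NULL);

        gst_bin_add_many(GST_BIN(pipeline), src, decoder, sink, NULL);
        gst_element_link_many(src, decoder, sink, NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        /* ... run/iterate the pipeline until end of stream ... */

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(GST_OBJECT(pipeline));
        return 0;
    }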


I'd argue that applications that do not want to use GStreamer (as I said:
there are authors who want to keep programming the way they always have)
are probably better off with CSL.

CSL depends on nothing, so it can find acceptance as wide as X11's. CSL
operates fine with aRts; it also works fine without it. Many apps that you
will see on your desktop just have casual media needs (such as playing a wav
file occasionally), so a lightweight solution like CSL fits them better than
forcing anybody to program against a full media framework.

> All GNOME
> applications based upon GStreamer will then be able to output either
> directly to the native sound architecture or through sound servers such
> as ESD, artsd or gnostream. 

That's what CSL could do. But still, as mentioned above, there are /reasons/
for using a more sophisticated sound server: namely latency and rich
inter-application communication.

Saying: "oh well, our applications will run with esd and artsd and gnostream"
is probably *not* true, unless applications just use the lowest common
denominator of all.

So what I am trying to say is: it is *good* to standardize on a common server
technology and stick to it. For instance, the KDE media player noatun will
not even start if there is no aRts around - not because it is badly
implemented, but because it actually exploits all the possibilities the
server offers (such as server-side effects and decoding).

> While artsd is really good as a soundserver, its design is not that
> ideal for videoserving. Using GStreamer we can output to artsd for KDE
> compatibility (gstreamer already supports artsd) but also switch to a
> more video&audio based setup like gnostream
> (http://gnostream.sourceforge.net)
> when it becomes available without having to rewrite applications.
> 
> Having something like GStreamer as part of the GNOME development
> platform means that we could for instance use it in Nautilus to easily
> create a music view which supports all audio formats GStreamer supports
> instead of having to create new code for each format. We could also do
> video preview if that is of interest.

Just as a side note: the aRts framework allows you to do the same.


Conclusion:

 * I think the first choice GNOME should make is: what are we going to use
   for apps that "just" play a sound or a stream, because that's the worst
   portability problem, and the largest group of affected apps. I'd say CSL
   is the way to go.

 * Another challenge to keep in mind is: /integration/ and /interoperability/.

   We don't have different K and G display servers, and that is GOOD. I don't
   think it is reasonable to go that way for sound (media) stuff. The issues
   here are latency and interoperability of apps.

   Saying "well, we can use gnostream or artsd or esd or something" is NOT
   going to be a solution, because all apps should always run, so either all
   apps don't use advanced features (bad idea), or we standarize on backend
   technology.

   Running GStreamer inside the aRts server should be technically possible.

   Running a GNoStream server and an aRts server will be a problem, as both
   will by design want to claim exclusive access to the hardware (like a G
   and K display server would ... correct me if I am wrong).

   So standardizing on aRts as the server sounds good to me.

 * Please - whatever you do - keep in mind that the message after GUADEC was:
   we want to enable application developers, in the long term, to develop with
   either toolkit under either desktop, and the application should run
   everywhere. Mixing and matching your desktop should be possible and easy.
   
Cu... Stefan
-- 
  -* Stefan Westerfeld, stefan space twc de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-         



