Linux Audio Conference 2007 impressions



   Hi!

Here is a short summary of my impressions from the Linux Audio
Conference 2007, which I attended. Besides the official talks, there
were private conversations with other Linux audio developers.

Talks
=====
I of course didn't attend every talk, but I think two are worth mentioning:

1. The talk on "Blue" by Steven Yi (http://csounds.com/stevenyi/blue/), a
graphical frontend to Csound. I always found Csound rather scary to use,
because it requires you to write your music as text files to get anything out
of it.

Especially interesting (when compared to, for instance, BEAST, Rosegarden or
MusE) is that it allows you to arrange not only notes on the timeline but also
"Sound Objects", which are more general. That way you can, for instance, write
a Python script instead of notes, which allows more "modern" musical ideas to
be expressed that don't necessarily take the form of notes (on the other hand
the result can sound really scary - at least to me - as some demos showed).
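
To give a rough idea of what "a Python script instead of notes" can mean in
practice, here is a tiny illustration that generates Csound score events
programmatically; this is only the general idea, not Blue's actual Sound
Object API, and the instrument number and parameter values are made up:

    # Illustration only: generate Csound score events ("i" statements)
    # from a short Python loop instead of writing each note by hand.
    import random

    random.seed(1)
    start = 0.0
    for step in range(16):
        dur = random.choice([0.125, 0.25, 0.5])
        freq = 220.0 * 2 ** (random.randint(0, 12) / 12.0)  # random semitone above A3
        # fields: i <instr> <start> <dur> <amplitude> <frequency>
        print("i 1 %.3f %.3f 0.5 %.2f" % (start, dur, freq))
        start += dur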

2. The talk on volume metering, which with scientific exactness covered not
only the standard (simple) solutions implemented, for instance, in BEAST and
aRts, but also gave some insight into the industry standards that broadcasting
companies like the BBC require to get their programs adjusted to a
standardized volume.

Slides: http://www.nescivi.nl/presentations/lac07_slides_Cabrera.pdf
Paper:  http://www.nescivi.nl/papers/lac07_Cabrera.pdf
Code is available here: http://sourceforge.net/projects/postqc
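
As a rough illustration of the "simple" approach mentioned above (and not of
the broadcast-grade algorithms the talk was really about), a block-wise RMS
level converted to dBFS can be computed more or less like this; the function
name and block size are my own invention:

    # Naive block-wise RMS meter in dBFS.  Broadcast meters add filtering
    # and ballistics on top of this; this is only the simple approach.
    import math

    def rms_dbfs(samples):
        """RMS level of a block of float samples (-1.0 .. 1.0) in dBFS."""
        if not samples:
            return float("-inf")
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20.0 * math.log10(rms) if rms > 0.0 else float("-inf")

    # A full-scale sine wave should read about -3 dBFS.
    block = [math.sin(2.0 * math.pi * 440.0 * n / 44100.0) for n in range(4410)]
    print("%.1f dBFS" % rms_dbfs(block))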

Exchanging Ideas
================
Then there was some more informal exchange of ideas.

Stefan Kost (Buzztard) and I discussed what kind of code/ideas could possibly
be exchanged between Buzztard and BEAST. The ability to run Buzz machines
could be incorporated into BEAST by using the library that Buzztard uses. This
would allow running binary-only machines on x86, and machines for which the
source is available on any architecture. However, open source Buzz machines
seem to be far less common than closed source ones.

Another candidate for code sharing could be the code that reads control
events from external devices, such as joysticks.
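
As a rough sketch of what such shared code would have to do at the lowest
level, the Linux joystick interface can be read more or less like this
(assuming a device at /dev/input/js0; this is just my own illustration, not
code from either project):

    # Sketch: read events from the Linux joystick interface.  The event
    # layout matches struct js_event from <linux/joystick.h>:
    # u32 time (ms), s16 value, u8 type, u8 number.
    import struct

    JS_EVENT_BUTTON = 0x01
    JS_EVENT_AXIS = 0x02
    JS_EVENT_INIT = 0x80   # synthetic events sent once when the device is opened

    EVENT_FORMAT = "IhBB"
    EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

    with open("/dev/input/js0", "rb") as js:
        while True:   # stop with Ctrl-C
            time_ms, value, ev_type, number = struct.unpack(
                EVENT_FORMAT, js.read(EVENT_SIZE))
            if ev_type & JS_EVENT_AXIS:
                print("axis %d -> %d" % (number, value))
            elif ev_type & JS_EVENT_BUTTON:
                print("button %d -> %d" % (number, value))  # 1 = pressed, 0 = released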

Finally, there seems to be some interest in sharing widgets between different
Gtk+ based audio programs, such as a tracker widget, volume metering, GUI
controls, a sample view and so on.

Marc-André Lureau and I had some discussion on desktop audio.  As this list
isn't really the place to discuss this, here are just a few questions (without
claiming to be complete):
 * Gnome sound events still depend on ESound. What needs to be done to
   change that?
 * How can PulseAudio be used without making new applications depend on
   yet another sound server?
 * If Phonon suits the needs of KDE4, is there a need for something
   equivalent for Gnome?
 * How can everything "just work" out of the box, if you mix KDE4 and
   Gnome?
 * What form could a freedesktop.org standard take?

Wave Field Synthesis
====================
Finally, listening to heavily spatialized sound in the wave field synthesis
installation of the TU Berlin sounded really cool... :)

From the conference page: "In 2006/2007, the TU Berlin launched a project to
equip one of the lecture halls with a large WFS system, of in total 840
loudspeaker channels, both for sound reinforcement during the regular
lectures, as well as to have a large scale WFS system for both scientific and
artistic research purposes."

   Cu... Stefan
-- 
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan


