Re: DataCache n_channels alignment



On Wed, 14 Dec 2005, Stefan Westerfeld wrote:

On Tue, Dec 13, 2005 at 11:12:06PM +0100, Tim Janik wrote:

Index: bse/bseloader.c
===================================================================
RCS file: /cvs/gnome/beast/bse/bseloader.c,v
retrieving revision 1.23
diff -u -p -r1.23 bseloader.c
--- bse/bseloader.c	19 Apr 2005 14:13:12 -0000	1.23
+++ bse/bseloader.c	7 Dec 2005 14:53:29 -0000
@@ -373,7 +373,7 @@ bse_wave_chunk_create (BseWaveDsc   *wav

  /* FIXME: we essentially create a dcache for each wchunk here ;( */

-  dcache = gsl_data_cache_from_dhandle (dhandle, gsl_get_config ()->wave_chunk_padding * wave_dsc->n_channels);
+  dcache = gsl_data_cache_from_dhandle (dhandle, gsl_get_config ()->wave_chunk_padding * wave_dsc->n_channels, wave_dsc->n_channels);

hm, this looks like you now need to make sure that the padding is
n_channels aligned. with the new n_channels argument passed in,
i'd rather see us specify per-channel padding, and also reorder
arguments, so we get:
GslDataCache*     gsl_data_cache_new            (GslDataHandle *dhandle,
                                                 guint          n_channels,
                                                 guint          per_channel_padding);

Ok, I changed the API. However, I am wondering if we should start
introducing the word "frame" into our terminology. Then the API becomes

GslDataCache*     gsl_data_cache_new (GslDataHandle      *dhandle,
                                     guint               n_channels,
                                     guint               n_frames_padding);

Let me know if you think that's better than per_channel_padding. But I
can live with either.

that looks ok to me.

+  g_return_val_if_fail (n_channels <= node_size, NULL);

hmmm...
840 is probably a reasonable upper channel limit.
however we need corresponding checks at relevant GUI interfaces
to properly deal with waves that have too many channels (so we show
an error/warning there instead of crashing).

that means we need to determine a fixed limit here, and export it
somewhere (maybe per GslConfig) and at least identify all GUI interfaces
affected by this. (identifying the relevant GUI/UI portions is
good enough for you; the implementation, at least for the GUI parts,
can be left up to me if you don't want to tackle it, since i'm probably
more experienced with the GUI stuff).

Ok, I think maybe the solution that is easier to implement is dealing
correctly with all values of n_channels in the data cache, so as not to
impose arbitrary limits on the files that can be loaded. I do this by
rounding _up_ instead of _down_ to the next n_channels-aligned node size
in the new patch. Of course dealing with a 1024 channel wave file will
probably be inefficient.

ok, sounds reasonable and spares significant amounts of work then ;)

As for the GUI, the only thing that I see failing is the wave editor. It
should probably check if the n_channels of a BseWave is within
reasonable limits, and otherwise refuse to draw it (or draw channels >
some limit). Or you invest days of coding work into displaying 1024
channels files as efficiently as 1 channel files. But I don't think
that's worth the work.

yeah, the editor is broken on my machine atm anyway.
checking that it works for more than 2 channels should probably be added
to the TODO then though.

@@ -545,13 +561,13 @@ gsl_data_cache_unref_node (GslDataCache

  if (check_cache)
    {
-      guint node_size = CONFIG_NODE_SIZE ();
+      guint node_mem_size = (dcache->node_size + (dcache->padding << 1)) * sizeof (GslDataType);

i usually prefer writing *2 instead of <<1, since modern compilers can
optimize that if needed, and *2 looks more natural to most people.

Ok, I changed it (I like *2 better, too, I just stuck to what you used
before). But then, I changed _all_ occurrences of (dcache->padding << 1)
in the file, not only those I added.  Attached is the new version.



Index: ChangeLog
===================================================================
RCS file: /cvs/gnome/beast/ChangeLog,v
retrieving revision 1.911
diff -u -p -r1.911 ChangeLog
--- ChangeLog	4 Dec 2005 17:15:13 -0000	1.911
+++ ChangeLog	14 Dec 2005 15:56:25 -0000
@@ -1,3 +1,9 @@
+Wed Dec 14 16:51:54 2005  Stefan Westerfeld  <stefan space twc de>
+
+	* tools/bseloopfuncs.c: Adapt to the data cache API change: pass
+	additional n_channels argument; padding is given as
+	per_channel_padding.
+
 Sun Dec  4 18:07:03 2005  Stefan Westerfeld  <stefan space twc de>

 	* configure.in:
Index: bse/ChangeLog
===================================================================
RCS file: /cvs/gnome/beast/bse/ChangeLog,v
retrieving revision 1.588
diff -u -p -r1.588 ChangeLog
--- bse/ChangeLog	1 Dec 2005 12:24:44 -0000	1.588
+++ bse/ChangeLog	14 Dec 2005 15:56:29 -0000
@@ -1,3 +1,19 @@
+Wed Dec 14 16:51:54 2005  Stefan Westerfeld  <stefan space twc de>
+
+	* gsldatacache.[hc]: Extend the data cache API to allow specifying
+	n_channels. Ensure that the data cache node size is n_channels
+	aligned, so that data cache nodes always contain whole frames, and no
+	frames wrap around the end of one data cache node and continue within
+	the next.
+	Padding is specified as per_channel_padding now. The data cache itself
+	ensures that padding is n_channels * per_channel_padding.


hm, it just occurred to me that specifying the number of channels to
data caches is superfluous and may create inconsistencies.
since data caches simply wrap data handles, including their open semantics,
the number of channels and similar information is implicitly determined
by the data handle properties upon opening anyway.
so while the data cache code needs adaptations to varying numbers of channels,
the actual number of channels is determined by the data handle and only
available for opened data handles.

i.e. you'll need to revert the n_channels API changes of your patch again...
sorry for not catching that earlier.

+
+	* tests/testwavechunk.c:
+	* bseloader.c:
+	* bsewave.c:
+	* gsldatautils.c: Adapt to the data cache API change: pass additional
+	n_channels argument; padding is given as per_channel_padding.
+
 Thu Nov 17 18:10:56 2005  Stefan Westerfeld  <stefan space twc de>

 	* gslfilter.[hc]: Added new argument to gsl_filter_fir_approx, to
Index: bse/bseloader.c
===================================================================
RCS file: /cvs/gnome/beast/bse/bseloader.c,v
retrieving revision 1.23
diff -u -p -r1.23 bseloader.c
--- bse/bseloader.c	19 Apr 2005 14:13:12 -0000	1.23
+++ bse/bseloader.c	14 Dec 2005 15:56:29 -0000
@@ -373,7 +373,7 @@ bse_wave_chunk_create (BseWaveDsc   *wav

   /* FIXME: we essentially create a dcache for each wchunk here ;( */

-  dcache = gsl_data_cache_from_dhandle (dhandle, gsl_get_config ()->wave_chunk_padding * wave_dsc->n_channels);
+  dcache = gsl_data_cache_from_dhandle (dhandle, wave_dsc->n_channels, gsl_get_config ()->wave_chunk_padding);
   gsl_data_handle_unref (dhandle);
   if (!dcache)
     return NULL;
Index: bse/bsewave.c
===================================================================
RCS file: /cvs/gnome/beast/bse/bsewave.c,v
retrieving revision 1.38
diff -u -p -r1.38 bsewave.c
--- bse/bsewave.c	28 Jul 2005 12:05:50 -0000	1.38
+++ bse/bsewave.c	14 Dec 2005 15:56:29 -0000
@@ -632,7 +632,8 @@ bse_wave_restore_private (BseObject  *ob
               gsl_data_handle_unref (tmp_handle);
             }
 	  GslDataCache *dcache = gsl_data_cache_from_dhandle (parsed_wchunk.data_handle,
-                                                              gsl_get_config ()->wave_chunk_padding * parsed_wchunk.wh_n_channels);
+							      parsed_wchunk.wh_n_channels,
+                                                              gsl_get_config ()->wave_chunk_padding);
           const gchar *ltype = bse_xinfos_get_value (parsed_wchunk.xinfos, "loop-type");
           GslWaveLoopType loop_type = ltype ? gsl_wave_loop_type_from_string (ltype) : GSL_WAVE_LOOP_NONE;
           SfiNum loop_start = bse_xinfos_get_num (parsed_wchunk.xinfos, "loop-start");
Index: bse/gsldatacache.c
===================================================================
RCS file: /cvs/gnome/beast/bse/gsldatacache.c,v
retrieving revision 1.18
diff -u -p -r1.18 gsldatacache.c
--- bse/gsldatacache.c	31 Dec 2004 12:46:26 -0000	1.18
+++ bse/gsldatacache.c	14 Dec 2005 15:56:29 -0000
@@ -88,16 +88,31 @@ _gsl_init_data_caches (void)

 GslDataCache*
 gsl_data_cache_new (GslDataHandle *dhandle,
-		    guint	   padding)
+		    guint          n_channels,
+		    guint	   per_channel_padding)

no n_channels here then, and padding is always specified in frames.

 {
-  guint node_size = CONFIG_NODE_SIZE () / sizeof (GslDataType);
+  /*
+   * We ensure that (node_size % n_channels) == 0, so that data cache nodes
+   * always contain whole frames, and no frames wrap around the end of one
+   * data cache node and continue within the next.

you'll have to leave the node size set to 0 until opening then and determine
it after that (and reset upon close).

+   *
+   * The default node_size is 840, because 840 = 2 * 2 * 2 * 3 * 5 * 7.
+   *
+   * This is suitable for many common values of n_channels, such as 1-8, 10,
+   * 12, 24, and of course many others. For other values of n_channels (such
+   * as 9), we adapt the node_size below.
+   */
+  guint node_size = 840;
   GslDataCache *dcache;

   g_return_val_if_fail (dhandle != NULL, NULL);
-  g_return_val_if_fail (padding > 0, NULL);
+  g_return_val_if_fail (per_channel_padding > 0, NULL);
   g_return_val_if_fail (dhandle->name != NULL, NULL);
-  g_assert (node_size == sfi_alloc_upper_power2 (node_size));
-  g_return_val_if_fail (padding < node_size / 2, NULL);
+
+  /* adapt node_size to be n_channels aligned */
+  node_size += n_channels - 1;
+  node_size /= n_channels;
+  node_size *= n_channels;


;)
i think writing this would also be ok:

  node_size = n_channels * ((node_size + n_channels - 1) / n_channels);

   /* allocate new closed dcache if necessary */
   dcache = sfi_new_struct (GslDataCache, 1);
@@ -106,7 +121,7 @@ gsl_data_cache_new (GslDataHandle *dhand
   sfi_mutex_init (&dcache->mutex);
   dcache->ref_count = 1;
   dcache->node_size = node_size;
-  dcache->padding = padding;
+  dcache->padding = per_channel_padding * n_channels;

you'll probably want to rename this to padding_frames then.
and maybe provide an accessor macro (valid for opened data caches only):

#define data_cache_padding(dcache) ((dcache)->padding_frames * (dcache)->dhandle->setup.n_channels)

   dcache->max_age = 0;
   dcache->high_persistency = FALSE;
   dcache->n_nodes = 0;
@@ -200,7 +215,7 @@ dcache_free (GslDataCache *dcache)
       GslDataCacheNode *node = dcache->nodes[i];
       guint size;

-      size = dcache->node_size + (dcache->padding << 1);
+      size = dcache->node_size + (dcache->padding * 2);
       sfi_delete_structs (GslDataType, size, node->data - dcache->padding);
       sfi_delete_struct (GslDataCacheNode, node);
     }
@@ -307,13 +322,13 @@ data_cache_new_node_L (GslDataCache *dca
   g_memmove (node_p + 1, node_p, (i - pos) * sizeof (*node_p));
   dnode = sfi_new_struct (GslDataCacheNode, 1);
   (*node_p) = dnode;
-  dnode->offset = offset & ~(dcache->node_size - 1);
+  dnode->offset = offset - (offset % dcache->node_size);
   dnode->ref_count = 1;
   dnode->age = 0;
   dnode->data = NULL;
   GSL_SPIN_UNLOCK (&dcache->mutex);

-  size = dcache->node_size + (dcache->padding << 1);
+  size = dcache->node_size + (dcache->padding * 2);
   data = sfi_new_struct (GslDataType, size);
   node_data = data + dcache->padding;
   offset = dnode->offset;
@@ -340,7 +355,7 @@ data_cache_new_node_L (GslDataCache *dca
       GslDataType *prev_node_data = prev_node->data;

       /* padding around prev_node */
-      prev_node_size += dcache->padding << 1;
+      prev_node_size += dcache->padding * 2;
       prev_node_offset -= dcache->padding;
       prev_node_data -= dcache->padding;

@@ -478,7 +493,7 @@ data_cache_free_olders_Lunlock (GslDataC
   if (0)
     g_print ("start sweep: dcache (%p) with %u nodes, max_age: %u, rejuvenate: %u (max_lru: %u)\n",
 	     dcache, dcache->n_nodes, dcache->max_age, rejuvenate, max_lru);
-  size = dcache->node_size + (dcache->padding << 1);
+  size = dcache->node_size + (dcache->padding * 2);

if you do that, the parentheses should also vanish.

   slot_p = NULL;
   for (i = 0; i < dcache->n_nodes; i++)
     {
@@ -545,13 +560,13 @@ gsl_data_cache_unref_node (GslDataCache
   if (check_cache)
     {
-      guint node_size = CONFIG_NODE_SIZE ();
+      guint node_mem_size = (dcache->node_size + (dcache->padding * 2)) * sizeof (GslDataType);

you don't need parentheses to give '*' precedence over '+'

       guint cache_mem = gsl_get_config ()->dcache_cache_memory;
       guint current_mem;

       GSL_SPIN_LOCK (&global_dcache_mutex);
       global_dcache_n_aged_nodes++;
-      current_mem = node_size * global_dcache_n_aged_nodes;
+      current_mem = node_mem_size * global_dcache_n_aged_nodes;
       if (current_mem > cache_mem)              /* round-robin cache trashing */
 	{
 	  guint dcache_count, needs_unlock;
@@ -575,7 +590,7 @@ gsl_data_cache_unref_node (GslDataCache
               current_mem -= cache_mem;		/* overhang */
               current_mem += cache_mem >> 4;	/* overflow = overhang + 6% */
-              current_mem /= node_size;		/* n_nodes to free */
+              current_mem /= node_mem_size;	/* n_nodes to free */
               current_mem = MIN (current_mem, dcache->n_nodes);
               guint max_lru = dcache->n_nodes;
               max_lru >>= 1;                    /* keep at least 75% of n_nodes */
@@ -594,8 +609,8 @@ gsl_data_cache_unref_node (GslDataCache
              g_printerr ("shrunk dcache by: dhandle=%p - %s - highp=%d: %d bytes (kept: %d)\n",
                         dcache->dhandle, gsl_data_handle_name (dcache->dhandle),
                         dcache->high_persistency,
-                        -(gint) node_size * (debug_gnaged - global_dcache_n_aged_nodes),
-                        node_size * dcache->n_nodes);
+                        -(gint) node_mem_size * (debug_gnaged - global_dcache_n_aged_nodes),
+                        node_mem_size * dcache->n_nodes);
 #endif
 	  if (needs_unlock)
 	    GSL_SPIN_UNLOCK (&dcache->mutex);
@@ -619,18 +634,20 @@ gsl_data_cache_free_olders (GslDataCache

 GslDataCache*
 gsl_data_cache_from_dhandle (GslDataHandle *dhandle,
-			     guint          min_padding)
+			     guint          n_channels,
+			     guint          per_channels_min_padding)

here, n_channels can also be retrieved from dhandle.


 {
   SfiRing *ring;

   g_return_val_if_fail (dhandle != NULL, NULL);
+  g_return_val_if_fail (n_channels > 0, NULL);

   GSL_SPIN_LOCK (&global_dcache_mutex);
   for (ring = global_dcache_list; ring; ring = sfi_ring_walk (ring, global_dcache_list))
     {
       GslDataCache *dcache = ring->data;

-      if (dcache->dhandle == dhandle && dcache->padding >= min_padding)
+      if (dcache->dhandle == dhandle && dcache->n_channels == n_channels && dcache->padding >= (per_channels_min_padding * n_channels))

ah, and here is the consistency assertion i suspected.
this particular code enables triggering an assertion in the application
(library) just by exchanging a sample file during runtime. that should
never happen; it is why the opened state was introduced for data handles
in the first place.

 	{
 	  gsl_data_cache_ref (dcache);
 	  GSL_SPIN_UNLOCK (&global_dcache_mutex);
@@ -639,5 +656,5 @@ gsl_data_cache_from_dhandle (GslDataHand
     }
   GSL_SPIN_UNLOCK (&global_dcache_mutex);

-  return gsl_data_cache_new (dhandle, min_padding);
+  return gsl_data_cache_new (dhandle, n_channels, per_channels_min_padding);
 }
Index: bse/gsldatacache.h
===================================================================
RCS file: /cvs/gnome/beast/bse/gsldatacache.h,v
retrieving revision 1.7
diff -u -p -r1.7 gsldatacache.h
--- bse/gsldatacache.h	29 Dec 2004 03:24:43 -0000	1.7
+++ bse/gsldatacache.h	14 Dec 2005 15:56:30 -0000
@@ -36,8 +36,9 @@ struct _GslDataCache
   guint			open_count;
   SfiMutex		mutex;
   guint			ref_count;
-  guint			node_size;	        /* power of 2, const for all dcaches */
+  guint			node_size;	        /* multiple of n_channels */

will only be valid while open now. (you should check the beast source
for dependencies on this, but there shouldn't be any that depend on this
being available for closed data handles)

   guint			padding;	        /* n_values around blocks */

probably better renamed to padding_frames.

+  guint                 n_channels;

needs to go, is available from dhandle.

   guint			max_age;
   gboolean		high_persistency;       /* valid for opened caches only */
   guint			n_nodes;
@@ -60,7 +61,8 @@ typedef enum

 /* --- prototypes --- */
 GslDataCache*	  gsl_data_cache_new		(GslDataHandle	    *dhandle,
-						 guint		     padding);
+						 guint               n_channels,
+						 guint		     per_channel_padding);
 GslDataCache*	  gsl_data_cache_ref		(GslDataCache	    *dcache);
 void		  gsl_data_cache_unref		(GslDataCache	    *dcache);
 void		  gsl_data_cache_open		(GslDataCache	    *dcache);
@@ -73,7 +75,8 @@ void		  gsl_data_cache_unref_node	(GslDa
 void		  gsl_data_cache_free_olders	(GslDataCache	    *dcache,
 						 guint		     max_age);
 GslDataCache*	  gsl_data_cache_from_dhandle	(GslDataHandle	    *dhandle,
-						 guint		     min_padding);
+						 guint               n_channels,
+						 guint		     per_channel_min_padding);

 G_END_DECLS


  Cu... Stefan

---
ciaoTJ

