[gparted] Display usage for multi-device btrfs file systems (#723842)



commit 1712809e016af4f471dd06be64eea860ef78cddb
Author: Mike Fleetwood <mike.fleetwood@googlemail.com>
Date:   Sat Mar 29 21:12:01 2014 +0000

    Display usage for multi-device btrfs file systems (#723842)
    
    Currently GParted fails to report the usage of a multi-device btrfs file
    system if it is mounted or if the used space is larger than the size of
    an individual member device.  When GParted does display usage figures,
    it also incorrectly reports the file system wide used figure against
    every member device.
    
    Mounted case:
        statvfs() provides an FS size which is larger than any individual
        device so is rejected.  See:
            GParted_Core::mounted_set_used_sectors()
                Utils::get_mounted_filesystem_usage()
                partition .set_sector_usage()
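
        As an illustration only (a minimal hypothetical sketch, not
        GParted's actual code), statvfs() reports figures for the whole
        multi-device file system, so a size check against the single
        member device fails:

            #include <sys/statvfs.h>

            // Hypothetical helper: true when the whole file system size
            // reported by statvfs() fits within this member device.  For
            // the 2 x 2.00GB btrfs in the example output below the FS size
            // exceeds either 2.00GB device, so the usage figures get
            // rejected.
            bool fs_size_fits_device( const char * mountpoint, long long ptn_bytes )
            {
                    struct statvfs vfs ;
                    if ( statvfs( mountpoint, & vfs ) != 0 )
                            return false ;
                    long long fs_size = (long long)vfs .f_blocks * (long long)vfs .f_frsize ;
                    return fs_size <= ptn_bytes ;
            }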
    
    Unmounted case, FS used > device size:
        FS used figure is larger than any individual device so free space is
        calculated as a negative number and rejected.  See:
            btrfs::set_used_sectors()
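
        For example (hypothetical figures), with a 2.00GB member device and
        3.00GB of file system wide used space:
            free = 2.00GB - 3.00GB = -1.00GB
        which is negative and so the usage figures are rejected.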
    
    Btrfs has a volume manager layer within the file system which allows it
    to provide multiple levels of data redundancy, RAID levels, and use
    multiple devices both of which can be changed while the file system is
    mounted.  To achieve this btrfs has to allocate space at two different
    levels: (1) chunks of 256 MiB or more at the volume manager level; and
    (2) extents at the file data level.
    References:
    *   Btrfs: Working with multiple devices
        https://lwn.net/Articles/577961/
    *   Btrfs wiki: Glossary
        https://btrfs.wiki.kernel.org/index.php/Glossary
    
    This makes the question of how much disk space is being used in an
    individual device a complicated question to answer.  Further, the
    current btrfs tools don't provide the required information.
    
    Btrfs filesystem show only provides space usage information at the chunk
    level per device.  At the file extent level only a single figure for the
    whole file system is provided.  It also reports the size of the data and
    metadata being stored, not the larger figure of the amount of space
    taken after redundancy is applied.  So it is impossible to answer the
    question of how much disk space is being used in an individual device.
    Example output:
    
        Label: none  uuid: 36eb51a2-2927-4c92-820f-b2f0b5cdae50
                Total devices 2 FS bytes used 156.00KB
                devid    2 size 2.00GB used 512.00MB path /dev/sdb2
                devid    1 size 2.00GB used 240.75MB path /dev/sdb1
    
    Fix by guesstimating the per device used figure as the fraction of the
    file system wide extent usage based on chunk usage per device.
    Calculation:
        ptn fs used = total fs used * devid used / sum devid used
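
    For example, using the figures from the example output above, and
    assuming the partition being refreshed is /dev/sdb1 (an illustrative
    assumption), the guesstimate works out as roughly:
        ptn fs used = 156.00KB * 240.75MB / (512.00MB + 240.75MB)
                    ~= 49.9KB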
    
    Positives:
    1) Per device used figure will correctly be between zero and allocated
       chunk size.
    
    Known inaccuracies:
    [for single and multi-device btrfs file systems]
    1) Btrfs filesystem show reports file system wide file extent usage
       without considering redundancy applied to that data.  (By default
       btrfs stores two copies of metadata and one copy of data).
    2) At minimum size when all data has been consolidated there will be a
       few partly filled chunks of 256 MiB or more for data and metadata of
       each storage profile (RAID level).
    [for multi-device btrfs file systems only]
    3) Data may be far from evenly distributed between the chunks on
       multiple devices.
    4) Extents can be and are relocated to other devices within the file
       system when shrinking a device.
    
    Bug #723842 - GParted resizes the wrong filesystem (does not pass the
                  devid to btrfs filesystem resize)

 src/btrfs.cc |  125 +++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 files changed, 106 insertions(+), 19 deletions(-)
---
diff --git a/src/btrfs.cc b/src/btrfs.cc
index 6e8ec4d..30b668f 100644
--- a/src/btrfs.cc
+++ b/src/btrfs.cc
@@ -113,7 +113,7 @@ FS btrfs::get_filesystem_support()
                fs .move = GParted::FS::GPARTED ;
        }
 
-       fs .online_read = FS::GPARTED ;
+       fs .online_read = FS::EXTERNAL ;
 #ifdef ENABLE_ONLINE_RESIZE
        if ( Utils::kernel_version_at_least( 3, 6, 0 ) )
        {
@@ -156,34 +156,121 @@ bool btrfs::check_repair( const Partition & partition, OperationDetail & operati
 
 void btrfs::set_used_sectors( Partition & partition )
 {
+       //Called when the file system is unmounted *and* when mounted.
+       //
+       //  Btrfs has a volume manager layer within the file system which allows it to
+       //  provide multiple levels of data redundancy, RAID levels, and use multiple
+       //  devices both of which can be changed while the file system is mounted.  To
+       //  achieve this btrfs has to allocate space at two different levels: (1) chunks
+       //  of 256 MiB or more at the volume manager level; and (2) extents at the file
+       //  data level.
+       //  References:
+       //  *   Btrfs: Working with multiple devices
+       //      https://lwn.net/Articles/577961/
+       //  *   Btrfs wiki: Glossary
+       //      https://btrfs.wiki.kernel.org/index.php/Glossary
+       //
+       //  This makes the question of how much disk space is being used in an individual
+       //  device a complicated question to answer.  Further, the current btrfs tools
+       //  don't provide the required information.
+       //
+       //  Btrfs filesystem show only provides space usage information at the chunk level
+       //  per device.  At the file extent level only a single figure for the whole file
+       //  system is provided.  It also reports size of the data and metadata being
+       //  stored, not the larger figure of the amount of space taken after redundancy is
+       //  applied.  So it is impossible to answer the question of how much disk space is
+       //  being used in an individual device.  Example output:
+       //
+       //      Label: none  uuid: 36eb51a2-2927-4c92-820f-b2f0b5cdae50
+       //              Total devices 2 FS bytes used 156.00KB
+       //              devid    2 size 2.00GB used 512.00MB path /dev/sdb2
+       //              devid    1 size 2.00GB used 240.75MB path /dev/sdb1
+       //
+       //  Guesstimate the per device used figure as the fraction of the file system wide
+       //  extent usage based on chunk usage per device.
+       //
+       //  Positives:
+       //  1) Per device used figure will correctly be between zero and allocated chunk
+       //     size.
+       //
+       //  Known inaccuracies:
+       //  [for single and multi-device btrfs file systems]
+       //  1) Btrfs filesystem show reports file system wide file extent usage without
+       //     considering redundancy applied to that data.  (By default btrfs stores two
+       //     copies of metadata and one copy of data).
+       //  2) At minimum size when all data has been consolidated there will be a few
+       //     partly filled chunks of 256 MiB or more for data and metadata of each
+       //     storage profile (RAID level).
+       //  [for multi-device btrfs file systems only]
+       //  3) Data may be far from evenly distributed between the chunks on multiple
+       //     devices.
+       //  4) Extents can be and are relocated to other devices within the file system
+       //     when shrinking a device.
        if ( btrfs_found )
                exit_status = Utils::execute_command( "btrfs filesystem show " + partition .get_path(), output, error, true ) ;
        else
                exit_status = Utils::execute_command( "btrfs-show " + partition .get_path(), output, error, true ) ;
        if ( ! exit_status )
        {
-               //FIXME: Improve free space calculation for multi-device
-               //  btrfs file systems.  Currently uses the size of the
-               //  btrfs device in this partition (spot on) and the
-               //  file system wide used bytes (wrong for multi-device
-               //  file systems).
-
-               Byte_Value ptn_bytes = partition .get_byte_length() ;
+               //Extract the per device size figure.  Guesstimate the per device used
+               // figure as discussed above.  Example output:
+               //
+               //      Label: none  uuid: 36eb51a2-2927-4c92-820f-b2f0b5cdae50
+               //              Total devices 2 FS bytes used 156.00KB
+               //              devid    2 size 2.00GB used 512.00MB path /dev/sdb2
+               //              devid    1 size 2.00GB used 240.75MB path /dev/sdb1
+               //
+               // Calculations:
+               //      ptn fs size = devid size
+               //      ptn fs used = total fs used * devid used / sum devid used
+
+               Byte_Value ptn_size = partition .get_byte_length() ;
+               Byte_Value total_fs_used = -1 ;  //total fs used
+               Byte_Value sum_devid_used = 0 ;  //sum devid used
+               Byte_Value devid_used = -1 ;     //devid used
+               Byte_Value devid_size = -1 ;     //devid size
+
+               //Btrfs file system wide used bytes (extents and items)
                Glib::ustring str ;
-               //Btrfs file system device size
-               Glib::ustring regexp = "devid .* size ([0-9\\.]+( ?[KMGTPE]?i?B)?) .* path " + partition .get_path() ;
-               if ( ! ( str = Utils::regexp_label( output, regexp ) ) .empty() )
-                       T = btrfs_size_to_num( str, ptn_bytes, true ) ;
-
-               //Btrfs file system wide used bytes
                if ( ! ( str = Utils::regexp_label( output, "FS bytes used ([0-9\\.]+( ?[KMGTPE]?i?B)?)" ) ) .empty() )
-                       N = T - btrfs_size_to_num( str, ptn_bytes, false ) ;
+                       total_fs_used = Utils::round( btrfs_size_to_gdouble( str ) ) ;
+
+               Glib::ustring::size_type offset = 0 ;
+               Glib::ustring::size_type index ;
+               while ( ( index = output .find( "devid ", offset ) ) != Glib::ustring::npos )
+               {
+                       Glib::ustring devid_path = Utils::regexp_label( output .substr( index ),
+                                                                       "devid .* path (/dev/[[:graph:]]+)" ) ;
+                       if ( ! devid_path .empty() )
+                       {
+                               //Btrfs per devid used bytes (chunks)
+                               Byte_Value used = -1 ;
+                               if ( ! ( str = Utils::regexp_label( output .substr( index ),
+                                                                   "devid .* used ([0-9\\.]+( ?[KMGTPE]?i?B)?) path" ) ) .empty() )
+                               {
+                                       used = btrfs_size_to_num( str, ptn_size, false ) ;
+                                       sum_devid_used += used ;
+                                       if ( devid_path == partition .get_path() )
+                                               devid_used = used ;
+                               }
+
+                               if ( devid_path == partition .get_path() )
+                               {
+                                       //Btrfs per device size bytes (chunks)
+                                       if ( ! ( str = Utils::regexp_label( output .substr( index ),
+                                                                           "devid .* size ([0-9\\.]+( ?[KMGTPE]?i?B)?) used " ) ) .empty() )
+                                               devid_size = btrfs_size_to_num( str, ptn_size, true ) ;
+                               }
+                       }
+                       offset = index + 5 ;  //Next find starts immediately after current "devid"
+               }
 
-               if ( T > -1 && N > -1 )
+               if ( total_fs_used > -1 && devid_size > -1 && devid_used > -1 && sum_devid_used > 0 )
                {
-                       T = Utils::round( T / double(partition .sector_size) ) ;
-                       N = Utils::round( N / double(partition .sector_size) ) ;
-                       partition .set_sector_usage( T, N );
+                       T = Utils::round( devid_size / double(partition .sector_size) ) ;               //ptn fs size
+                       double ptn_fs_used = total_fs_used * ( devid_used / double(sum_devid_used) ) ;  //ptn fs used
+                       N = T - Utils::round( ptn_fs_used / double(partition .sector_size) ) ;
+                       partition .set_sector_usage( T, N ) ;
                }
        }
        else

