Re: [PATCH 1/6] xstat: Add a pair of system calls to make extended file stats available
- From: David Howells <dhowells@redhat.com>
- To: Steve French <smfrench@gmail.com>
- Cc: "J. Bruce Fields" <bfields@fieldses.org>, linux-nfs@vger.kernel.org, nautilus-list@gnome.org, libc-alpha@sourceware.org, kfm-devel@kde.org, linux-cifs@vger.kernel.org, wine-devel@winehq.org, samba-technical@lists.samba.org, dhowells@redhat.com, linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org
- Subject: Re: [PATCH 1/6] xstat: Add a pair of system calls to make extended file stats available
- Date: Thu, 26 Apr 2012 14:45:54 +0100
Steve French <smfrench@gmail.com> wrote:
> I also would prefer that we simply treat the time granularity as part
> of the superblock (mounted volume) ie returned on statfs rather than on
> every stat of the filesystem. For cifs mounts we could conceivably
> have different time granularity (1 or 2 second) on mounts to old
> servers rather than 100 nanoseconds.
The question is whether you want to have to do a statfs in addition to a stat?
I suppose you can potentially cache the statfs based on device number.
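
To sketch what I mean by caching on the device number - this is purely
userspace-side and nothing to do with the patch itself; the fixed-size table
and the cached_statfs() helper are invented just for illustration:

/* Illustration only: remember one statfs result per st_dev so that the
 * granularity lookup doesn't cost an extra syscall for every file.  The
 * fixed-size array and linear scan are only for the sketch.
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/statfs.h>

struct fs_cache_ent {
	dev_t		dev;
	struct statfs	sfs;
	int		valid;
};

static struct fs_cache_ent fs_cache[64];

static int cached_statfs(const char *path, const struct stat *st,
			 struct statfs *out)
{
	unsigned int i;

	/* Reuse a previously fetched statfs for this device number. */
	for (i = 0; i < 64; i++) {
		if (fs_cache[i].valid && fs_cache[i].dev == st->st_dev) {
			*out = fs_cache[i].sfs;
			return 0;
		}
	}

	/* First time we see this device: pay for one statfs call. */
	if (statfs(path, out) < 0)
		return -1;

	for (i = 0; i < 64; i++) {
		if (!fs_cache[i].valid) {
			fs_cache[i].dev = st->st_dev;
			fs_cache[i].sfs = *out;
			fs_cache[i].valid = 1;
			break;
		}
	}
	return 0;
}

The caller would stat the file first and pass the result in, so a given
st_dev only ever costs one statfs.
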
That said, there are cases where caching filesystem-level info based on i_dev
doesn't work. OpenAFS springs to mind, as it has only one superblock and thus
one set of device numbers, but keeps the inodes for all the different volumes
it may have mounted there.
I don't know whether this would be a problem for CIFS too - say on a Windows
server you fabricate P: by joining together several filesystems (with
junctions?). How does this appear on a Linux client when it steps from one
filesystem to another within a mounted share?
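
Concretely, what I'm wondering is whether st_dev changes as you cross the
join - the paths in this little test are made up, of course:

/* Illustration only: see whether a directory and its parent report
 * different device numbers, i.e. whether the client can even tell that
 * it has crossed into another filesystem within the share.
 */
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
	struct stat parent, child;

	if (stat("/mnt/share", &parent) < 0 ||
	    stat("/mnt/share/joined-volume", &child) < 0) {
		perror("stat");
		return 1;
	}

	if (parent.st_dev != child.st_dev)
		printf("different filesystem: dev %lx -> %lx\n",
		       (unsigned long)parent.st_dev,
		       (unsigned long)child.st_dev);
	else
		printf("same device number on both sides\n");
	return 0;
}

If everything under the share reports the same st_dev, a per-device statfs
cache can't tell the pieces apart, which is the same shape of problem as the
OpenAFS case above.
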
David