How large file is really large - pathconf results

Started by Zdenek Kotala · almost 18 years ago · 3 messages
#1Zdenek Kotala
Zdenek.Kotala@Sun.COM
1 attachment(s)

Following the discussion about a larger segment size for table files, I
tested the pathconf function (see
http://www.opengroup.org/onlinepubs/009695399/functions/pathconf.html).

The output is below:

OS                 FS    _PC_FILESIZEBITS  _PC_LINK_MAX  _PC_NAME_MAX  _PC_PATH_MAX
Solaris Nevada     ZFS   64                -1            255           1024
Solaris Nevada     UFS   41                32767         255           1024
Solaris Nevada     FAT   33                1             8             1024
Solaris Nevada     NFS   41                32767         255           1024
Solaris 8          UFS   41                32767         255           1024
Solaris 8          NFS   40                32767         255           1024
CentOS 4 (2.6.11)  EXT3  64                32000         255           4096
CentOS 4 (2.6.11)  XFS   64                2147483647    255           4096
Mac OS X Leopard   HFS+  64                32767         255           1024

The results are not really good :(. I also tested HP-UX 11.11/11.23,
Tru64 V4.0, and Mac OS X Tiger (big thanks to Tomas Honzak for machine
access): Tiger and Tru64 do not recognize the _PC_FILESIZEBITS
definition at all, and HP-UX returns errno=EINVAL. I also don't trust
the Linux result on EXT3. It seems that only Solaris and Leopard return
relatively correct results (33 bits on a FAT FS is probably not correct).

I've attached my test program; please let me know your results from your
favorite OS/FS (the binary must be saved on the tested FS, because the
program queries the path of its own binary, argv[0]).

However, I think we cannot use this method to determine the maximum file size on an FS :(.

Comments, ideas?

Zdenek

PS: Does pg_dump strip a large file or not?

Attachments:

pathconf.c (text/x-csrc)
#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(int argc, char **argv)
{
	/* Table of the pathconf variables to query. _PC_FILESIZEBITS is
	 * guarded because some platforms (e.g. Tru64, Mac OS X Tiger) do
	 * not define it. */
	static const struct
	{
		const char *name;
		int			var;
	}			vars[] = {
#ifdef _PC_FILESIZEBITS
		{"_PC_FILESIZEBITS", _PC_FILESIZEBITS},
#endif
		{"_PC_LINK_MAX", _PC_LINK_MAX},
		{"_PC_NAME_MAX", _PC_NAME_MAX},
		{"_PC_PATH_MAX", _PC_PATH_MAX},
	};
	int			i;

	for (i = 0; i < (int) (sizeof(vars) / sizeof(vars[0])); i++)
	{
		long		ret;

		/* pathconf returns -1 both for "no limit" (errno unchanged)
		 * and for errors (errno set), so reset errno first. */
		errno = 0;
		ret = pathconf(argv[0], vars[i].var);
		if (ret == -1)
		{
			if (errno == 0)
				printf("%s = unlimited\n", vars[i].name);
			else
				printf("%s = %s\n", vars[i].name, strerror(errno));
		}
		else
			printf("%s = %li\n", vars[i].name, ret);
	}

	return 0;
}
#2Reini Urban
rurban@x-ray.at
In reply to: Zdenek Kotala (#1)
Re: How large file is really large - pathconf results

Zdenek Kotala schrieb:

[quoted results table snipped]

Cygwin 1.5 on NTFS. But 1.7 will have a much larger _PC_PATH_MAX.

_PC_FILESIZEBITS undefined
_PC_LINK_MAX = 8
_PC_NAME_MAX = 260
_PC_PATH_MAX = 257

So this is really bad.
--
Reini Urban

#3Zdenek Kotala
Zdenek.Kotala@Sun.COM
In reply to: Reini Urban (#2)
Re: How large file is really large - pathconf results

Reini Urban napsal(a):

cygwin 1.5 on NTFS. But 1.7 will a have much larger _PC_PATH_MAX.

_PC_FILESIZEBITS undefined
_PC_LINK_MAX = 8
_PC_NAME_MAX = 260
_PC_PATH_MAX = 257

So this is really bad.

Thanks for reporting. That does not look good, because PostgreSQL assumes that
_PC_PATH_MAX is at least 1024 on all platforms. If these values are correct,
then users on Cygwin could run into trouble. Please, could you test postgres
with a long path?

thanks Zdenek