Re: PostgreSQL under BSD/OS

Started by Bruce Momjian over 27 years ago · 9 messages · hackers
#1 Bruce Momjian
bruce@momjian.us

Thanks to some answers you gave me some time back, I've been doing some
testing of PostgreSQL under BSD/OS-3.1 in preparation for a project that
I'm working on. So far, it all seems to be working fine, apart from one
(relatively minor) bug in psql -- it fails to close files it reads for a
COPY command, meaning it can keep a multi-megabyte file open for days.
The workaround is to do a new connect to the same database after the
COPY, at which point the data file gets closed. Maybe you can get that
fixed in a future release.

This is the first I have heard of this. The file commands/copy.c does
use a file descriptor cache, but that is really just used for allowing
more file opens than the OS permits. Actual opens and closes are
happening.

I assume the files you are talking about are the database table files.
Yes, they stay open because the backend may want to use them someday.

Is that a problem?

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
#2 Greg Black
gjb@acm.org
In reply to: Bruce Momjian (#1)

(relatively minor) bug in psql -- it fails to close files it reads for a
COPY command, meaning it can keep a multi-megabyte file open for days.
The workaround is to do a new connect to the same database after the
COPY, at which point the data file gets closed. Maybe you can get that
fixed in a future release.

This is the first I have heard of this. The file commands/copy.c does
use a file descriptor cache, but that is really just used for allowing
more file opens than the OS permits. Actual opens and closes are
happening.

I assume the files you are talking about are the database table files.
Yes, they stay open because the backend may want to use them someday.

No, that's not what I meant. Perhaps my attempt to be concise made my
explanation unclear. Here's a more complete explanation of the problem.

I am planning to use PostgreSQL to manage some databases that have been
handled by completely different software up until now. Therefore, there
is a lot of data to be extracted from the old databases and loaded into
PostgreSQL databases.

I do this with C programs which output two principal files: the first is
an input file for psql which contains various commands to create tables
and indexes, etc.; the second, much larger, file contains the actual
data in a suitable format.

The first file is input to psql by the \i command. The last thing in
that first file is a SQL COPY command which copies the data from the big
data file into the appropriate table. After an hour or so, the data
input completes and I can issue psql commands to play with the data to
see if it's the way I expect. At this point, psql ought to close the
data file that it copied the data from so that I can delete it. As
things stand, I have three copies of all the data -- the original
database (which I can't remove until this process is completed in a few
weeks); the temporary data file, used as input to psql (which I want to
delete since it can be recreated if needed); and the PostgreSQL database
which I have just created.

But if I remove the temporary file, I don't get any disk space back
because psql still has it open. If I \connect to the same database,
then psql closes the input file, but my contention is that I should not
have to do that. I hope this explanation is clear.

--
Greg Black <gjb@acm.org>

#3 Bruce Momjian
bruce@momjian.us
In reply to: Greg Black (#2)

(relatively minor) bug in psql -- it fails to close files it reads for a
COPY command, meaning it can keep a multi-megabyte file open for days.
The workaround is to do a new connect to the same database after the
COPY, at which point the data file gets closed. Maybe you can get that
fixed in a future release.

This is the first I have heard of this. The file commands/copy.c does
use a file descriptor cache, but that is really just used for allowing
more file opens than the OS permits. Actual opens and closes are
happening.

I assume the files you are talking about are the database table files.
Yes, they stay open because the backend may want to use them someday.

No, that's not what I meant. Perhaps my attempt to be concise made my
explanation unclear. Here's a more complete explanation of the problem.

I am planning to use PostgreSQL to manage some databases that have been
handled by completely different software up until now. Therefore, there
is a lot of data to be extracted from the old databases and loaded into
PostgreSQL databases.

I do this with C programs which output two principal files: the first is
an input file for psql which contains various commands to create tables
and indexes, etc.; the second, much larger, file contains the actual
data in a suitable format.

The first file is input to psql by the \i command. The last thing in
that first file is a SQL COPY command which copies the data from the big
data file into the appropriate table. After an hour or so, the data
input completes and I can issue psql commands to play with the data to
see if it's the way I expect. At this point, psql ought to close the
data file that it copied the data from so that I can delete it. As
things stand, I have three copies of all the data -- the original
database (which I can't remove until this process is completed in a few
weeks); the temporary data file, used as input to psql (which I want to
delete since it can be recreated if needed); and the PostgreSQL database
which I have just created.

But if I remove the temporary file, I don't get any disk space back
because psql still has it open. If I \connect to the same database,
then psql closes the input file, but my contention is that I should not
have to do that. I hope this explanation is clear.

OK, the file you are using for COPY is still open. Let me try and find
the cause, and I can fix it.

Are you using the COPY command, or psql's \copy command? After the
copy, if you run 'ls -i data_file', you get the inode number. If you
grep the output of 'fstat' for that inode, which process is holding the
file open? Is it psql or the postgres backend process? During the copy,
is it failing or succeeding? I can see a case where a copy failure will
not properly close the file.

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
#4 Greg Black
gjb@acm.org
In reply to: Bruce Momjian (#3)

I do this with C programs which output two principal files: the first is
an input file for psql which contains various commands to create tables
and indexes, etc.; the second, much larger, file contains the actual
data in a suitable format.

The first file is input to psql by the \i command. The last thing in
that first file is a SQL COPY command which copies the data from the big
data file into the appropriate table. After an hour or so, the data
input completes and I can issue psql commands to play with the data to
see if it's the way I expect. At this point, psql ought to close the
data file that it copied the data from so that I can delete it. As
things stand, I have three copies of all the data -- the original
database (which I can't remove until this process is completed in a few
weeks); the temporary data file, used as input to psql (which I want to
delete since it can be recreated if needed); and the PostgreSQL database
which I have just created.

But if I remove the temporary file, I don't get any disk space back
because psql still has it open. If I \connect to the same database,
then psql closes the input file, but my contention is that I should not
have to do that. I hope this explanation is clear.

OK, the file you are using for COPY is still open. Let me try and find
the cause, and I can fix it.

Are you using the COPY command, or psql's \copy command?

I've tried to be clear above, showing `\i' and `SQL COPY command'. No,
I did not use psql's `\copy' command.

After the
copy, if you run 'ls -i data_file', you get the inode number. If you
grep the output of 'fstat' for that inode, which process is holding the
file open? Is it psql or the postgres backend process?

The command is `postgres'. It has two FDs for the file that was read in
as the target of the COPY command. If you do `\connect dbname' in psql,
the open files are closed.

BTW, because I didn't have an hour to wait while I did this with the
real data, I tried it with a test file with only four rows of data. The
first time, it happened as described. However, after doing the \connect,
the problem did not repeat on the next couple of tries. It was
consistent when I was working with a table with 50,000 rows, however.

During the copy, is it failing or
succeeding? I can see a case where a copy failure will not properly
close the file.

The copy completes successfully.

--
Greg Black <gjb@acm.org>

#5 Bruce Momjian
bruce@momjian.us
In reply to: Greg Black (#2)

(relatively minor) bug in psql -- it fails to close files it reads for a
COPY command, meaning it can keep a multi-megabyte file open for days.
The workaround is to do a new connect to the same database after the
COPY, at which point the data file gets closed. Maybe you can get that
fixed in a future release.

This is the first I have heard of this. The file commands/copy.c does
use a file descriptor cache, but that is really just used for allowing
more file opens than the OS permits. Actual opens and closes are
happening.

I assume the files you are talking about are the database table files.
Yes, they stay open because the backend may want to use them someday.

No, that's not what I meant. Perhaps my attempt to be concise made my
explanation unclear. Here's a more complete explanation of the problem.

I am planning to use PostgreSQL to manage some databases that have been
handled by completely different software up until now. Therefore, there
is a lot of data to be extracted from the old databases and loaded into
PostgreSQL databases.

I do this with C programs which output two principal files: the first is
an input file for psql which contains various commands to create tables
and indexes, etc.; the second, much larger, file contains the actual
data in a suitable format.

The first file is input to psql by the \i command. The last thing in
that first file is a SQL COPY command which copies the data from the big
data file into the appropriate table. After an hour or so, the data
input completes and I can issue psql commands to play with the data to
see if it's the way I expect. At this point, psql ought to close the
data file that it copied the data from so that I can delete it. As
things stand, I have three copies of all the data -- the original
database (which I can't remove until this process is completed in a few
weeks); the temporary data file, used as input to psql (which I want to
delete since it can be recreated if needed); and the PostgreSQL database
which I have just created.

But if I remove the temporary file, I don't get any disk space back
because psql still has it open. If I \connect to the same database,
then psql closes the input file, but my contention is that I should not
have to do that. I hope this explanation is clear.

OK, in pgsql/src/backend/commands/copy.c, you should see a function call
to FreeFile(). That is what is supposed to be called to free the open
file. AllocateFile opens the file a few lines above it, in either read
or write mode.

If you can, put a little printf() statement just before the
FreeFile() call and see if it is getting called. You have to look in the
postmaster log file to see the output of the printf(). If it is getting
called, I have no idea why it would still be holding the file
descriptor. If it is not calling that function, I am confused because I
can't see how it could get out of that function without calling it.

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
#6 Bruce Momjian
bruce@momjian.us
In reply to: Greg Black (#4)

OK, the file you are using for COPY is still open. Let me try and find
the cause, and I can fix it.

Are you using the COPY command, or psql's \copy command?

I've tried to be clear above, showing `\i' and `SQL COPY command'. No,
I did not use psql's `\copy' command.

After the
copy, if you run 'ls -i data_file', you get the inode number. If you
grep the output of 'fstat' for that inode, which process is holding the
file open? Is it psql or the postgres backend process?

The command is `postgres'. It has two FDs for the file that was read in
as the target of the COPY command. If you do `\connect dbname' in psql,
the open files are closed.

Two file descriptors. That seems strange to me.

I just tried it on a file with 200000 integers:

#$ fstat |grep 'tmp'
postgres postmaster 29288 27 /tmp 5 -rw-r--r-- 1288895 r
maillist ema 29265 3 /tmp 6 -rwx------ 622165 r
maillist sh 29264 3 /tmp 6 -rwx------ 622165 r
maillist sh 29263 3 /tmp 6 -rwx------ 622165 r
maillist elm 26332 3 /tmp 6 -rwx------ 622165 r
#$ fstat |grep 'tmp'
maillist ema 29265 3 /tmp 6 -rwx------ 622165 r
maillist sh 29264 3 /tmp 6 -rwx------ 622165 r
maillist sh 29263 3 /tmp 6 -rwx------ 622165 r
maillist elm 26332 3 /tmp 6 -rwx------ 622165 r

As you can see, the file with inode 5 in /tmp was open during the copy,
but closed after the copy completed. I tried it several times, and it
worked every time. Now, if I got an error in the COPY, it did not close
the file descriptor as it should have. I'm not quite sure how to fix
that yet, but it should be fixed.

BTW, because I didn't have an hour to wait while I did this with the
real data, I tried it with a test file with only four rows of data. The
first time, it happened as described. However, after doing the \connect,
the problem did not repeat on the next couple of tries. It was
consistent when I was working with a table with 50,000 rows, however.

During the copy, is it failing or
succeeding? I can see a case where a copy failure will not properly
close the file.

The copy completes successfully.

--
Greg Black <gjb@acm.org>

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
#7 Greg Black
gjb@acm.org
In reply to: Bruce Momjian (#6)

OK, in pgsql/src/backend/commands/copy.c, you should see a function call
to FreeFile(). That is what is supposed to be called to free the open
file. AllocateFile opens the file a few lines above it, in either read
or write mode.

If you can, put a little printf() statement just before the
FreeFile() call and see if it is getting called. You have to look in the
postmaster log file to see the output of the printf(). If it is getting
called, I have no idea why it would still be holding the file
descriptor. If it is not calling that function, I am confused because I
can't see how it could get out of that function without calling it.

I'm willing to have a look at this in a few weeks, but I don't have time
to do it now. I'll get back to you when I've got some additional info.

--
Greg Black <gjb@acm.org>

#8 Bruce Momjian
bruce@momjian.us
In reply to: Greg Black (#7)

OK, in pgsql/src/backend/commands/copy.c, you should see a function call
to FreeFile(). That is what is supposed to be called to free the open
file. AllocateFile opens the file a few lines above it, in either read
or write mode.

If you can, put a little printf() statement just before the
FreeFile() call and see if it is getting called. You have to look in the
postmaster log file to see the output of the printf(). If it is getting
called, I have no idea why it would still be holding the file
descriptor. If it is not calling that function, I am confused because I
can't see how it could get out of that function without calling it.

I'm willing to have a look at this in a few weeks, but I don't have time
to do it now. I'll get back to you when I've got some additional info.

I am going to add some code to copy.c so that if COPY is called again
and the file descriptor was not closed because of an error somewhere
deep in the code, it will close the old descriptor before opening a new
one. That should prevent file descriptor leaks.

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
#9 Bruce Momjian
bruce@momjian.us
In reply to: Greg Black (#7)

We have a 6.4 beta starting September 1, with the final release on
October 1. It is
possible that somehow there is a problem with 6.3.2 that I am not seeing
in the current sources. The new code that closes a file descriptor left
open from a failed COPY will be in there, so you should try that release
first.

OK, in pgsql/src/backend/commands/copy.c, you should see a function call
to FreeFile(). That is what is supposed to be called to free the open
file. AllocateFile opens the file a few lines above it, in either read
or write mode.

If you can, put a little printf() statement just before the
FreeFile() call and see if it is getting called. You have to look in the
postmaster log file to see the output of the printf(). If it is getting
called, I have no idea why it would still be holding the file
descriptor. If it is not calling that function, I am confused because I
can't see how it could get out of that function without calling it.

I'm willing to have a look at this in a few weeks, but I don't have time
to do it now. I'll get back to you when I've got some additional info.

--
Greg Black <gjb@acm.org>

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)