VACUUM ANALYZE problem on linux
Hello!
A week ago I reported a problem with VACUUM ANALYZE on Linux, along with a
memory error. Three good guys looked at my database, and two of them, I hope,
looked into the VACUUM problem (Tom Lane and Thomas Lockhart).
Have you reproduced the case?
I ran VACUUM ANALYZE table by table. Results:
----- Script -----
VACUUM ANALYZE sections;
VACUUM ANALYZE subsections;
VACUUM ANALYZE positions;
VACUUM ANALYZE cities;
VACUUM ANALYZE districts;
VACUUM ANALYZE shop_types;
VACUUM ANALYZE shops;
VACUUM ANALYZE producers;
VACUUM ANALYZE products;
VACUUM ANALYZE correspondents;
VACUUM ANALYZE shop_corr;
VACUUM ANALYZE money4corr;
VACUUM ANALYZE raw_maillog;
VACUUM ANALYZE corr_mail_errors;
VACUUM ANALYZE pos_rating;
VACUUM ANALYZE motd;
VACUUM ANALYZE central;
VACUUM ANALYZE bad_data;
VACUUM ANALYZE today_history;
VACUUM ANALYZE currencies;
VACUUM ANALYZE currency_exch;
VACUUM ANALYZE param_int;
VACUUM ANALYZE param_str;
VACUUM ANALYZE param_float;
VACUUM ANALYZE param_datetime;
VACUUM ANALYZE palette;
VACUUM ANALYZE units;
VACUUM ANALYZE mail_ecod;
----- Log -----
VACUUM
VACUUM
VACUUM
VACUUM
VACUUM
VACUUM
VACUUM
VACUUM
----- Errors -----
VACUUM ANALYZE sections;
VACUUM ANALYZE subsections;
VACUUM ANALYZE positions;
VACUUM ANALYZE cities;
VACUUM ANALYZE districts;
VACUUM ANALYZE shop_types;
VACUUM ANALYZE shops;
VACUUM ANALYZE producers;
VACUUM ANALYZE products;
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally before or while processing the request.
We have lost the connection to the backend, so further processing is impossible. Terminating.
----- End -----
If I remove "products" from the script, the next table to fail is "palette".
If I also remove "palette", everything goes well.
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
Oleg Broytmann <phd@sun.med.ru> writes:
Hello!
A week ago I reported a problem with VACUUM ANALYZE on Linux, along with a
memory error. Three good guys looked at my database, and two of them, I hope,
looked into the VACUUM problem (Tom Lane and Thomas Lockhart).
Have you reproduced the case?
Oh! I'm sorry, I thought I saw a report that someone had already fixed
the problem, so I didn't look at it.
regards, tom lane
Oleg Broytmann <phd@sun.med.ru> writes:
Hello!
A week ago I reported a problem with VACUUM ANALYZE on Linux, along with a
memory error. Three good guys looked at my database, and two of them, I hope,
looked into the VACUUM problem (Tom Lane and Thomas Lockhart).
Have you reproduced the case?
Oh! I'm sorry, I thought I saw a report that someone had already fixed
the problem, so I didn't look at it.
Maybe a little misunderstanding. Oleg reported a memory
exhaustion problem on COPY FROM in the same context (which
also happened on large updates). I've tracked that down in
the executor. It was because his table had a CHECK clause and
that got stringToNode()'ed for each single tuple.
This problem is fixed in CURRENT, along with a speedup of
factor 2++ for the case of CHECK on large ranges. The checks
are stringToNode()'ed only once now and live until the
executor's memory context gets destroyed (the portal level
the plan is executed in).
I don't know if the same caused the VACUUM problem. Oleg,
could you please check against the CURRENT source tree if the
problem still exists?
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck@debis.com (Jan Wieck) #
I reported a problem with VACUUM ANALYZE on Linux, along with a memory
error. Three good guys looked at my database, and two of them, I hope,
looked into the VACUUM problem (Tom Lane and Thomas Lockhart).
Have you reproduced the case?
I understood someone else was starting to look at it, so I have not
(yet). Can do so later if needed...
- Tom
Hello!
I'll try CURRENT a bit later (things are gonna get real slow these days :).
But I am sure these are two different problems.
First, I had the memory problem on a big table, and the VACUUM ANALYZE
problem on two very small tables (a few rows each).
Second, I have the memory problem on 3 systems - RedHat 5.1 on Pentium,
Debian 2.0 on Pentium, and Solaris on Ultra-1.
But I have the VACUUM ANALYZE problem only on the Linux boxes.
BTW, I noted both Linux boxes are glibc2-based. It would be interesting to
try a libc5-based system. Maybe we can narrow the problem down to
glibc2-based Linux?
Does someone have a libc5-based Linux ready to test?
On Mon, 8 Feb 1999, Jan Wieck wrote:
Oleg Broytmann <phd@sun.med.ru> writes:
Hello!
A week ago I reported a problem with VACUUM ANALYZE on Linux, along with a
memory error. Three good guys looked at my database, and two of them, I hope,
looked into the VACUUM problem (Tom Lane and Thomas Lockhart).
Have you reproduced the case?
Oh! I'm sorry, I thought I saw a report that someone had already fixed
the problem, so I didn't look at it.
Maybe a little misunderstanding. Oleg reported a memory
exhaustion problem on COPY FROM in the same context (which
also happened on large updates). I've tracked that down in
the executor. It was because his table had a CHECK clause and
that got stringToNode()'ed for each single tuple.
This problem is fixed in CURRENT, along with a speedup of
factor 2++ for the case of CHECK on large ranges. The checks
are stringToNode()'ed only once now and live until the
executor's memory context gets destroyed (the portal level
the plan is executed in).
I don't know if the same caused the VACUUM problem. Oleg,
could you please check against the CURRENT source tree if the
problem still exists?
Oleg.
----
Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/
Programmers don't die, they just GOSUB without RETURN.
Second, I have the memory problem on 3 systems - RedHat 5.1 on Pentium,
Debian 2.0 on Pentium, and Solaris on Ultra-1.
But I have the VACUUM ANALYZE problem only on the Linux boxes.
BTW, I noted both Linux boxes are glibc2-based. It would be interesting to
try a libc5-based system. Maybe we can narrow the problem down to
glibc2-based Linux?
Does someone have a libc5-based Linux ready to test?
I can test on libc5 if you still see trouble after you have verified
Jan's fixes for your memory exhaustion problem.
Let me know...
- Tom
Hello!
On Tue, 9 Feb 1999, Thomas G. Lockhart wrote:
I can test on libc5 if you still see trouble after you have verified
Jan's fixes for your memory exhaustion problem.
I've downloaded the latest snapshot (9 Feb) and reproduced the problem with
VACUUM ANALYZE on Debian 2.0 (glibc2).
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
Oleg Broytmann <phd@sun.med.ru> writes:
I can test on libc5 if you still see trouble after you have verified
Jan's fixes for your memory exhaustion problem.
I've downloaded the latest snapshot (9 Feb) and reproduced the problem with
VACUUM ANALYZE on Debian 2.0 (glibc2).
I am not able to reproduce the problem on HPUX, using either current
sources or 6.4.2. Looks like it must be platform specific.
Could you build the backend with -g and send a gdb backtrace from the
corefile produced when the crash occurs?
regards, tom lane
On Thu, 11 Feb 1999, Tom Lane wrote:
I am not able to reproduce the problem on HPUX, using either current
sources or 6.4.2. Looks like it must be platform specific.
Of course it is platform-specific. I reported the problem on
glibc2-based Linux boxes, but the same database works fine (and allows
VACUUM ANALYZE) on sparc-solaris.
I don't know about libc5 Linux - I don't have one at hand.
Could you build the backend with -g and send a gdb backtrace from the
corefile produced when the crash occurs?
I'll do it this Saturday.
Oleg.
----
Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/
Programmers don't die, they just GOSUB without RETURN.
Hi!
On Thu, 11 Feb 1999, Tom Lane wrote:
Could you build the backend with -g and send a gdb backtrace from the
corefile produced when the crash occurs?
Problem compiling with -g:
make[3]: Entering directory
`/usr/local/src/PostgreSQL/postgresql-6.4.2/src/interfaces/ecpg/preproc'
gcc -I../../../include -I../../../backend -g -Wall -Wmissing-prototypes
-I../include -DMAJOR_VERSION=2 -DMINOR_VERSION=4 -DPATCHLEVEL=4
-DINCLUDE_PATH=\"/usr/local/stow/pgsql-debug/include\"
-c y.tab.c -o y.tab.o
preproc.y:2389: parse error at end of input
preproc.y:20: warning: `struct_level' defined but not used
preproc.y:22: warning: `QueryIsRule' defined but not used
preproc.y:23: warning: `actual_type' defined but not used
preproc.y:24: warning: `actual_storage' defined but not used
preproc.y:219: warning: `remove_variables' defined but not used
preproc.y:254: warning: `reset_variables' defined but not used
preproc.y:263: warning: `add_variable' defined but not used
preproc.y:332: warning: `make1_str' defined but not used
preproc.y:341: warning: `make2_str' defined but not used
preproc.y:353: warning: `cat2_str' defined but not used
preproc.y:366: warning: `make3_str' defined but not used
preproc.y:380: warning: `cat3_str' defined but not used
preproc.y:396: warning: `make4_str' defined but not used
preproc.y:412: warning: `cat4_str' defined but not used
preproc.y:431: warning: `make5_str' defined but not used
preproc.y:449: warning: `cat5_str' defined but not used
preproc.y:471: warning: `make_name' defined but not used
preproc.y:481: warning: `output_statement' defined but not used
make[3]: *** [y.tab.o] Error 1
make[3]: Leaving directory
`/usr/local/src/PostgreSQL/postgresql-6.4.2/src/interfaces/ecpg/preproc'
Oleg.
----
Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/
Programmers don't die, they just GOSUB without RETURN.
Could you build the backend with -g and send a gdb backtrace from
the corefile produced when the crash occurs?
Problem compiling with -g:
I'd be surprised if "-g" would do that to you. Are you sure the input
file is well-formed?
- Tom
Hi!
On Sat, 13 Feb 1999, Thomas G. Lockhart wrote:
Could you build the backend with -g and send a gdb backtrace from
the corefile produced when the crash occurs?
Problem compiling with -g:
I'd be surprised if "-g" would do that to you. Are you sure the input
file is well-formed?
Maybe it was not enough memory or disk space. At the beginning of next week
I'll have a new Linux box, so I'll retry.
Oleg.
----
Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/
Programmers don't die, they just GOSUB without RETURN.
Hello!
Sorry for making this late :(
On Thu, 11 Feb 1999, Tom Lane wrote:
Could you build the backend with -g and send a gdb backtrace from the
corefile produced when the crash occurs?
I have compiled with -g, but postgres didn't produce a core file. Do I need
something special on startup to generate a core file on a crash?
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
Oleg Broytmann <phd@sun.med.ru> writes:
I have compiled with -g, but postgres didn't produce a core file. Do I need
something special on startup to generate a core file on a crash?
Ordinarily not, but perhaps you have a shell 'limit' setting in place
that prevents a corefile from being made? I think csh has such a
setting but I forget the details. Anyway, if postmaster is started from
a shell with any limit variables enabled, they will apply to the
backends too.
Or you might just not be looking in the right place. Backend crashes
produce corefiles in the database subdirectory, eg,
/usr/local/pgsql/data/base/MyDatabase/core
regards, tom lane
Hi!
On Wed, 24 Feb 1999, Tom Lane wrote:
Oleg Broytmann <phd@sun.med.ru> writes:
I have compiled with -g, but postgres didn't produce a core file. Do I need
something special on startup to generate a core file on a crash?
Ordinarily not, but perhaps you have a shell 'limit' setting in place
that prevents a corefile from being made? I think csh has such a
setting but I forget the details. Anyway, if postmaster is started from
a shell with any limit variables enabled, they will apply to the
backends too.
I am using bash all the time.
Ok, I'll retest this.
Or you might just not be looking in the right place. Backend crashes
produce corefiles in the database subdirectory, eg,
/usr/local/pgsql/data/base/MyDatabase/core
I searched with "find / -name core". I got /dev/core and
/usr/src/linux/.../core :)
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
On Wed, 24 Feb 1999, Oleg Broytmann wrote:
Or you might just not be looking in the right place. Backend crashes
produce corefiles in the database subdirectory, eg,
/usr/local/pgsql/data/base/MyDatabase/core
I searched with "find / -name core". I got /dev/core and
/usr/src/linux/.../core :)
Try this instead:
# find / -name '*.core'
and you should find the other core dumps.
Vince.
--
==========================================================================
Vince Vielhaber -- KA8CSH email: vev@michvhf.com flame-mail: /dev/null
# include <std/disclaimers.h> TEAM-OS2
Online Campground Directory http://www.camping-usa.com
Online Giftshop Superstore http://www.cloudninegifts.com
==========================================================================
Hi!
I ran postmaster from the command line (usually I run it from
/etc/init.d/), connected to it, and ran VACUUM ANALYZE.
It worked.
I don't know whether I should use :) or :( - it failed on the production
server and worked on the debugging server...
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
Followup to myself...
On Wed, 24 Feb 1999, Oleg Broytmann wrote:
I ran postmaster from command line (usually I run it from /etc/init.d/),
connected to it and ran VACUUM ANALYZE.
It worked.
I tested the following way:
1. Run postmaster without parameters; connect and run VACUUM ANALYZE -
worked.
2. Run postmaster -b -D/usr/local/pgsql/data -o -Fe
and run VACUUM ANALYZE - worked
3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)
and run VACUUM ANALYZE - worked
(I took these parameters from script /etc/init.d/postgres)
4. Run /etc/init.d/postgres start
and run VACUUM ANALYZE - failed, no core file.
Oleg.
----
Oleg Broytmann http://members.xoom.com/phd2/ phd2@earthling.net
Programmers don't die, they just GOSUB without RETURN.
Oleg Broytmann <phd@sun.med.ru> writes:
3. Run postmaster -b -D/usr/local/pgsql/data -o -Fe -S (to detach it)
and run VACUUM ANALYZE - worked
(I took these parameters from script /etc/init.d/postgres)
4. Run /etc/init.d/postgres start
and run VACUUM ANALYZE - failed, no core file.
So there is something different about the environment of your postmaster
when it's started by init.d versus when it's started by hand. Now you
just have to figure out what.
I thought of environment variables, ulimit settings,
ownership/permission settings ... but it's not clear why any of these
would affect VACUUM in particular yet leave you able to do other stuff
successfully. Puzzling.
regards, tom lane