BUG #8013: Memory leak

Started by Rae Stiening · about 13 years ago · 10 messages · bugs
#1 Rae Stiening
stiening@comcast.net

The following bug has been logged on the website:

Bug reference: 8013
Logged by: Rae Stiening
Email address: stiening@comcast.net
PostgreSQL version: 9.2.3
Operating system: Suse Linux 9.3
Description:

The query:
SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key;

Which is executed as:
GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
  Filter: (count(*) <> 1)
  ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
        Sort Key: pts_key
        ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)

uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.

PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit

compile options:

env CFLAGS='-O3 -march=native' ./configure --with-segsize=128

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

#2 ajmcello
ajmcello78@gmail.com
In reply to: Rae Stiening (#1)
Re: BUG #8013: Memory leak

unsubscribe

On Sat, Mar 30, 2013 at 7:01 AM, <stiening@comcast.net> wrote:


#3 Jeff Lake
admin@michiganwxsystem.com
In reply to: Rae Stiening (#1)
Re: BUG #8013: Memory leak

memory leak with 500 million rows??
sounds like too big of a db

-----------------------------------------------------
-Jeff Lake K8JSL
MichiganWxSystem.com
AllisonHouse.com
TheWeatherCenter.net
GRLevelXStuff.com

On 3/30/2013 10:01, stiening@comcast.net wrote:


#4 Bruce Momjian
bruce@momjian.us
In reply to: Rae Stiening (#1)
Re: BUG #8013: Memory leak

On Sat, Mar 30, 2013 at 2:01 PM, <stiening@comcast.net> wrote:

uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.

Please post the schema definition and all the log messages that occur
from this. If Postgres runs out of memory, it should include a dump of
the memory usage.

--
greg


#5 Peter Geoghegan
In reply to: Rae Stiening (#1)
Re: BUG #8013: Memory leak

On 30 March 2013 14:01, <stiening@comcast.net> wrote:

env CFLAGS='-O3 -march=native' ./configure --with-segsize=128

Why did you build with a segment size of 128GB? Postgres binaries
built with a non-standard segment size are not widely used.
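
For comparison, a stock source build uses the default 1GB segment size and the tested default optimization level; a minimal sketch (the install prefix and the absence of other flags are assumptions):

```shell
# Default 1GB segment size, default optimization (-O2); no --with-segsize.
./configure --prefix=/usr/local/pgsql
make
make install
```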

--
Regards,
Peter Geoghegan


#6 ajmcello
ajmcello78@gmail.com
In reply to: Jeff Lake (#3)
Re: BUG #8013: Memory leak

unsubscribe

On Sat, Mar 30, 2013 at 8:42 PM, Jeff Lake <admin@michiganwxsystem.com> wrote:


#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Rae Stiening (#1)
Re: BUG #8013: Memory leak

stiening@comcast.net writes:

The query:
SELECT pts_key,count(*)
FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
pts_key

Which is executed as:
GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
  Filter: (count(*) <> 1)
  ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
        Sort Key: pts_key
        ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)

uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.

That query plan doesn't look like it should produce any undue memory
consumption on the server side. How many distinct values of pts_key are
there, and what are you using to collect the query result client-side?
psql, for instance, would try to absorb the whole query result
in-memory, so there'd be a lot of memory consumed by psql if there are
a lot of pts_key values. (You can set FETCH_COUNT to alleviate that.)
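
The FETCH_COUNT suggestion can be sketched as a hypothetical psql session (the batch size here is arbitrary):

```sql
-- In psql: fetch the result via a cursor, 10000 rows at a time,
-- instead of absorbing the entire result set in client memory.
\set FETCH_COUNT 10000
SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key;
```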

A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
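
That theory is easy to rule in or out; a sketch, assuming the session is allowed to change its own work_mem:

```sql
SHOW work_mem;           -- inspect the current setting (the 9.2 default is 1MB)
SET work_mem = '64MB';   -- cap per-sort-node memory for this session only
-- then re-run the query and compare the backend's memory usage
```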

regards, tom lane


#8 Jeff Janes
jeff.janes@gmail.com
In reply to: Tom Lane (#7)

On Sunday, March 31, 2013, Tom Lane wrote:

A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.

I don't think that can be the problem here, because memtuples can
never be more than 1GB even if work_mem is much larger than that. Even if
his sort is using pass-by-reference values (I don't think it would be), the
tuples should be skinny enough that that limitation prevents the sort from
blowing out memory.

Cheers,

Jeff

#9 Daniel Farina
daniel@heroku.com
In reply to: ajmcello (#2)
Re: BUG #8013: Memory leak

On Sat, Mar 30, 2013 at 8:41 PM, ajmcello <ajmcello78@gmail.com> wrote:

unsubscribe

That's not how you unsubscribe from this list; you'll want to do that here:

http://www.postgresql.org/community/lists/subscribe/

--
fdr


#10 Rae Stiening
stiening@comcast.net
In reply to: Tom Lane (#7)
Re: BUG #8013: Memory leak

I found that after replacing the postgresql.conf file with the original one produced by initdb, the query ran without a memory problem. I looked at the "bad" configuration file and couldn't see anything wrong with it. Unfortunately, because of a typing error, the bad file was accidentally deleted, and I have since been unable to reproduce the bad behavior.

After editing the original file to match what I had intended for the erased file, the query still ran without a problem. Memory usage topped out at about 2.1 GB. Even setting work_mem and maintenance_work_mem to 30000MB did not change the maximum memory usage during the query.
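
A suspect postgresql.conf can also be audited from SQL; a sketch using the pg_settings view (present in 9.2):

```sql
-- Settings whose value comes from somewhere other than the built-in defaults
-- (e.g. the configuration file or the session).
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default';
```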

Regards,
Rae Stiening

On Mar 31, 2013, at 1:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
