BUG #16443: Too much memory usage on insert query

Started by PG Bug reporting form · almost 6 years ago · 4 messages · pgsql-bugs
#1 PG Bug reporting form
noreply@postgresql.org

The following bug has been logged on the website:

Bug reference: 16443
Logged by: Kurt Roeckx
Email address: kurt@roeckx.be
PostgreSQL version: 12.3
Operating system: Debian
Description:

Hi,

I have this in my config file:
shared_buffers = 2048MB # min 128kB
work_mem = 1024MB # min 64kB

I was executing this query:
insert into ct_entry_chain (ct_entry_id, certificate_id) select id,
unnest(certificate_chain_id) from ct_entry;
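
The query uses unnest() to expand each array element into its own row, so one row is inserted per chain entry. A minimal editorial illustration (not from the thread):

```sql
-- unnest() turns each array element into its own output row:
SELECT id, unnest(chain)
FROM (VALUES (1, ARRAY[10, 20])) AS t(id, chain);
--  id | unnest
-- ----+--------
--   1 |     10
--   1 |     20
```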

The process was using at least 14 GB, while only 8 GB of RAM is available.
Swap usage was also near its 8 GB maximum, whereas normally only around
200 MB is swapped out. I didn't expect this process to use more than around 3 GB.

This has resulted in an out of memory condition.

I was converting my schema, ct_entry_chain is a new table that looks like:
                     Table "public.ct_entry_chain"
     Column     |  Type  | Collation | Nullable |                  Default
----------------+--------+-----------+----------+--------------------------------------------
 id             | bigint |           | not null | nextval('ct_entry_chain_id_seq'::regclass)
 ct_entry_id    | bigint |           | not null |
 certificate_id | bigint |           | not null |
Indexes:
    "ct_entry_chain_pkey" PRIMARY KEY, btree (id)
    "ct_entry_chain_ct_entry_id_certificate_id_key" UNIQUE CONSTRAINT, btree (ct_entry_id, certificate_id)
Foreign-key constraints:
    "ct_entry_chain_certificate_id_fkey" FOREIGN KEY (certificate_id) REFERENCES raw_certificates(id)
    "ct_entry_chain_ct_entry_id_fkey" FOREIGN KEY (ct_entry_id) REFERENCES ct_entry(id)

ct_entry itself looks like:
                                     Table "public.ct_entry"
        Column        |           Type           | Collation | Nullable |               Default
----------------------+--------------------------+-----------+----------+--------------------------------------
 id                   | bigint                   |           | not null | nextval('ct_entry_id_seq'::regclass)
 ct_log_id            | integer                  |           | not null |
 raw_certificate_id   | bigint                   |           | not null |
 log_entry            | bigint                   |           | not null |
 timestamp            | timestamp with time zone |           | not null |
 version              | integer                  |           | not null |
 entry_type           | integer                  |           | not null |
 leaf_type            | integer                  |           | not null |
 extensions           | bytea                    |           |          |
 issuer_key_hash      | bytea                    |           |          |
 pre_certificate_id   | bigint                   |           |          |
 certificate_chain_id | bigint[]                 |           |          |
Indexes:
    "ct_entry_pkey" PRIMARY KEY, btree (id)
    "ct_entry_raw_certificate_id_idx" btree (raw_certificate_id)
Foreign-key constraints:
    "ct_entry_ct_log_id_fkey" FOREIGN KEY (ct_log_id) REFERENCES ct_logs(id)
    "ct_entry_pre_certificate_id_fkey" FOREIGN KEY (pre_certificate_id) REFERENCES raw_certificates(id)
    "ct_entry_raw_certificate_id_fkey" FOREIGN KEY (raw_certificate_id) REFERENCES raw_certificates(id)
Referenced by:
    TABLE "ct_entry_chain" CONSTRAINT "ct_entry_chain_ct_entry_id_fkey" FOREIGN KEY (ct_entry_id) REFERENCES ct_entry(id)

#2 Kurt Roeckx
kurt@roeckx.be
In reply to: PG Bug reporting form (#1)
Re: BUG #16443: Too much memory usage on insert query

If I retry the query, I see the memory grow slowly. If I
cancel the query, memory returns to normal.

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: PG Bug reporting form (#1)
Re: BUG #16443: Too much memory usage on insert query

PG Bug reporting form <noreply@postgresql.org> writes:

> I was executing this query:
> insert into ct_entry_chain (ct_entry_id, certificate_id) select id,
> unnest(certificate_chain_id) from ct_entry;

How much data is that?

> The process was using at least 14 GB, of the 8 GB of RAM that's available.

My first guess is that the space was being eaten by trigger list entries
to verify the foreign-key constraints on the target table. You might be
better advised to fill the new table first and then create its FK
constraints. (Building the indexes afterwards wouldn't be a bad idea,
either.)

We are looking at better mechanisms for handling FK verification, but
that won't see the light of day before v14 at the earliest.

regards, tom lane
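
[Editorial note: Tom's load-first, constrain-later suggestion could look roughly like the sketch below. The DDL is illustrative, using table and constraint names from the report; it assumes the sequence already exists and that nothing else writes to the table during the load.]

```sql
-- 1. Create the table bare: no FK constraints, no secondary indexes.
CREATE TABLE ct_entry_chain (
    id             bigint NOT NULL DEFAULT nextval('ct_entry_chain_id_seq'),
    ct_entry_id    bigint NOT NULL,
    certificate_id bigint NOT NULL
);

-- 2. Bulk load; no per-row FK trigger events are queued.
INSERT INTO ct_entry_chain (ct_entry_id, certificate_id)
    SELECT id, unnest(certificate_chain_id) FROM ct_entry;

-- 3. Add indexes and constraints afterwards; validation scans the table
--    in bulk instead of queueing one trigger event per inserted row.
ALTER TABLE ct_entry_chain ADD PRIMARY KEY (id);
ALTER TABLE ct_entry_chain
    ADD CONSTRAINT ct_entry_chain_ct_entry_id_certificate_id_key
        UNIQUE (ct_entry_id, certificate_id),
    ADD CONSTRAINT ct_entry_chain_ct_entry_id_fkey
        FOREIGN KEY (ct_entry_id) REFERENCES ct_entry(id),
    ADD CONSTRAINT ct_entry_chain_certificate_id_fkey
        FOREIGN KEY (certificate_id) REFERENCES raw_certificates(id);
```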

#4 Kurt Roeckx
kurt@roeckx.be
In reply to: Tom Lane (#3)
Re: BUG #16443: Too much memory usage on insert query

On Sun, May 17, 2020 at 11:40:53AM -0400, Tom Lane wrote:

> PG Bug reporting form <noreply@postgresql.org> writes:
>
>> I was executing this query:
>> insert into ct_entry_chain (ct_entry_id, certificate_id) select id,
>> unnest(certificate_chain_id) from ct_entry;
>
> How much data is that?

The ct_entry table contains 305 GB of data across about 2.1e9 rows.
certificate_chain_id contains about 2.2 entries per row.
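
[Editorial note: those numbers explain the blow-up. Roughly 2.1e9 × 2.2 ≈ 4.6e9 rows are inserted in a single statement, and with two FK constraints each row queues two after-trigger events. A back-of-envelope, where the 20 bytes per event is an assumed ballpark rather than a measured PostgreSQL constant:]

```sql
-- ~2.1e9 rows x ~2.2 array entries = ~4.6e9 inserted rows; two FK
-- constraints mean roughly two queued after-trigger events per row.
SELECT 2.1e9 * 2.2                      AS inserted_rows,  -- ~4.6e9
       2.1e9 * 2.2 * 2                  AS queued_events,  -- ~9.2e9
       2.1e9 * 2.2 * 2 * 20 / 2 ^ 30    AS approx_gib;     -- ~170 GiB at an
                                                           -- assumed 20 B/event
```

Even at a few bytes per queued event, the trigger queue dwarfs the 8 GB of physical RAM, which matches the observed swap exhaustion.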

>> The process was using at least 14 GB, of the 8 GB of RAM that's available.
>
> My first guess is that the space was being eaten by trigger list entries
> to verify the foreign-key constraints on the target table. You might be
> better advised to fill the new table first and then create its FK
> constraints. (Building the indexes afterwards wouldn't be a bad idea,
> either.)
>
> We are looking at better mechanisms for handling FK verification, but
> that won't see the light of day before v14 at the earliest.

I will try that, thanks.

Kurt