Postgres Analog of Oracle APPEND hint
All,
Using PostgreSQL 13.1
I am new to PostgreSQL, transitioning from Oracle. One of the many Oracle
tricks I learned is that large inserts can be sped up by adding the direct
path load hint /*+ APPEND */. I am faced with having to perform many large
inserts (100K to 100M rows) in my PostgreSQL database.
My questions are:
- Is there something comparable within the PostgreSQL community edition
product?
- Are my only options to un-log the table and tune instance memory
parameters?
I've googled for this and can't find a definitive statement on this point.
--
Rumpi Gravenstein
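[For reference, the "un-log the table" option mentioned above can be sketched as follows; the table names are purely illustrative, and note the trade-off that an unlogged table is truncated after a crash and is not replicated:]

```sql
-- Hypothetical staging table; UNLOGGED skips WAL writes during the load,
-- but the table is emptied after a crash and does not replicate.
CREATE UNLOGGED TABLE staging_orders (LIKE orders INCLUDING ALL);

-- ... bulk load into staging_orders here ...

-- Since PostgreSQL 9.5 an existing table can be switched afterwards;
-- this rewrites the table and writes it to WAL once:
ALTER TABLE staging_orders SET LOGGED;
```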
I remember trying this some time ago. It is not part of the PG community
edition. Still, worth a look:
https://ossc-db.github.io/pg_bulkload/pg_bulkload.html
Regards,
Jayadevan
Since INSERT /*+ APPEND */ is generally used when bulk loading data into
Oracle from external files, you should probably look at the PostgreSQL COPY
command (https://www.postgresql.org/docs/13/sql-copy.html) and additional
utilities like pg_bulkload (https://github.com/ossc-db/pg_bulkload).
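[A minimal COPY sketch; the table name and file path are purely illustrative:]

```sql
-- Server-side COPY reads the file on the database server:
COPY target_table FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);

-- The psql client-side equivalent reads the file on the client instead:
-- \copy target_table FROM 'data.csv' WITH (FORMAT csv, HEADER true)
```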
On Thu, Feb 25, 2021 at 9:45 AM Rumpi Gravenstein <rgravens@gmail.com>
wrote:
Unfortunately, I am not looking to load from an external source. My
process is moving data from source PostgreSQL tables to target PostgreSQL
tables.
On Thu, Feb 25, 2021 at 10:36 AM Mark Johnson <remi9898@gmail.com> wrote:
--
Rumpi Gravenstein
Rumpi Gravenstein <rgravens@gmail.com> writes:
Unfortunately, I am not looking to load from an external source. My
process is moving data from source PostgreSQL tables to target PostgreSQL
tables.
The hints in
https://www.postgresql.org/docs/current/populate.html
would still largely apply, though of course not the advice to use COPY.
regards, tom lane
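[The main points from that page, sketched for an illustrative table and index; the memory value and names are assumptions, not recommendations:]

```sql
-- Raise memory for index rebuilds in this session only (value illustrative):
SET maintenance_work_mem = '1GB';

BEGIN;                               -- one transaction avoids per-row commit cost
DROP INDEX IF EXISTS target_idx;     -- rebuild indexes after the load, not during
INSERT INTO target_table SELECT * FROM source_table;  -- the bulk move itself
CREATE INDEX target_idx ON target_table (id);
COMMIT;

ANALYZE target_table;                -- refresh planner statistics afterwards
```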
On 2/25/21 9:26 AM, Rumpi Gravenstein wrote:
Unfortunately, I am not looking to load from an external source. My
process is moving data from source PostgreSQL tables to target
PostgreSQL tables.
Are you trying to duplicate the source tables in the target tables? If
so, there are replication tools for this. If not, temporary tables in
the target db (possibly loaded with COPY) and smallish batches of
inserts from those to the target might be an option.
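[A sketch of that staging approach, with hypothetical names and an assumed integer key to batch on:]

```sql
-- Temporary staging table, dropped automatically at session end:
CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);

-- Load it, e.g. from psql:
-- \copy staging FROM 'extract.csv' WITH (FORMAT csv)

-- Then insert in smallish batches, repeating per key range:
INSERT INTO target_table
SELECT * FROM staging
WHERE id BETWEEN 1 AND 100000;
```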
My use case involves complicated joins on source tables in one schema
loading a target table in the same or a different schema.
On Thu, Feb 25, 2021 at 11:41 AM Rob Sargent <robjsargent@gmail.com> wrote:
--
Rumpi Gravenstein
On 2021-02-25 14:52:20 -0500, Rumpi Gravenstein wrote:
My use case involves complicated joins on source tables in one schema loading a
target table in the same or a different schema.
So source(s) and target are in the same database? That wasn't clear from
your earlier mails.
If you are doing "complicated joins on source tables" that's probably
where the bottleneck will be, so you shouldn't worry about the insert
speed unless (or until) you notice that the bottleneck is writing the
data, not reading it.
hp
--
_ | Peter J. Holzer | Story must make more sense than reality.
|_|_) | |
| | | hjp@hjp.at | -- Charles Stross, "Creative writing
__/ | http://www.hjp.at/ | challenge!"
On Thu, Feb 25, 2021 at 10:26 AM Rumpi Gravenstein <rgravens@gmail.com> wrote:
Unfortunately, I am not looking to load from an external source. My process is moving data from source PostgreSQL tables to target PostgreSQL tables.
INSERT INTO ... SELECT ... is one of the fastest ways possible to move
data around.
merlin
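[For the join-based use case described earlier in the thread, that pattern might look like this; all schema, table, and column names are hypothetical:]

```sql
-- Set-based move: the join and the load happen in one statement,
-- with no intermediate round trips through the client.
INSERT INTO target_schema.target_table (id, name, total)
SELECT s1.id, s1.name, SUM(s2.amount)
FROM schema_a.source_one s1
JOIN schema_a.source_two s2 ON s2.parent_id = s1.id
GROUP BY s1.id, s1.name;
```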