Insertion time is very high for inserting data in postgres

Started by prachi surangalikar · about 5 years ago · 5 messages · general
#1 prachi surangalikar
surangalikarprachi100@gmail.com

Hello Team,
Greetings!

We are using Postgres 12.2.1 to fetch per-minute data for about 25
machines, running in parallel via a single thread in Python.
But suddenly the insertion time has increased to a very high level, about
30 seconds for one machine.
This is a serious problem for us, as data fetching is becoming slow.

If anyone could help us solve this problem, it would be of great help to
us.

#2 Ganesh Korde
ganeshakorde@gmail.com
In reply to: prachi surangalikar (#1)
Re: Insertion time is very high for inserting data in postgres

On Wed, 10 Feb 2021, 1:56 pm prachi surangalikar, <
surangalikarprachi100@gmail.com> wrote:

Hello Team,
Greetings!

We are using Postgres 12.2.1 to fetch per-minute data for about 25
machines, running in parallel via a single thread in Python.
But suddenly the insertion time has increased to a very high level, about
30 seconds for one machine.
This is a serious problem for us, as data fetching is becoming slow.

If anyone could help us solve this problem, it would be of great help to
us.

Are you running VACUUM ANALYZE on the table regularly? If not, that might
delay insertion.
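As a minimal sketch of that suggestion (assuming the psycopg2 driver and a hypothetical table name `machine_data`; note that VACUUM cannot run inside a transaction block, so autocommit must be enabled for the call):

```python
# Sketch: running VACUUM ANALYZE from Python via psycopg2.
# "machine_data" is a hypothetical table name; substitute your own.
VACUUM_SQL = "VACUUM ANALYZE machine_data;"

def vacuum_analyze(conn):
    """Run VACUUM ANALYZE outside a transaction block.

    VACUUM refuses to run inside a transaction, so autocommit is
    switched on for the duration of the call and restored afterwards.
    """
    old_autocommit = conn.autocommit
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            cur.execute(VACUUM_SQL)
    finally:
        conn.autocommit = old_autocommit
```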

#3 Niels Jespersen
NJN@dst.dk
In reply to: prachi surangalikar (#1)
SV: Insertion time is very high for inserting data in postgres

From: prachi surangalikar <surangalikarprachi100@gmail.com>

Hello Team,
Greetings!

We are using Postgres 12.2.1 to fetch per-minute data for about 25 machines, running in parallel via a single thread in Python.
But suddenly the insertion time has increased to a very high level, about 30 seconds for one machine.
This is a serious problem for us, as data fetching is becoming slow.

If anyone could help us solve this problem, it would be of great help to us.

Get your data into an in-memory text buffer (e.g. io.StringIO) and then use COPY: https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from

This is THE way to do high-performance inserts with Postgres.
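A sketch of that approach, assuming psycopg2 and a hypothetical table `machine_data(ts, machine_id, value)`; adjust names to your schema:

```python
import io

def rows_to_buffer(rows):
    """Serialize rows into a tab-separated in-memory text buffer,
    the default format that COPY ... FROM STDIN expects."""
    buf = io.StringIO()
    for ts, machine_id, value in rows:
        buf.write(f"{ts}\t{machine_id}\t{value}\n")
    buf.seek(0)
    return buf

def copy_rows(conn, rows):
    """Bulk-load rows with a single COPY instead of row-by-row INSERTs."""
    buf = rows_to_buffer(rows)
    with conn.cursor() as cur:
        # psycopg2's copy_from uses tab as the default separator.
        cur.copy_from(buf, "machine_data",
                      columns=("ts", "machine_id", "value"))
    conn.commit()
```

One COPY per batch of rows replaces 25 separate per-machine INSERT round trips, which is where the bulk of the speedup comes from.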

Regards Niels Jespersen

#4 Dave Cramer
pg@fastcrypt.com
In reply to: Niels Jespersen (#3)
Re: Insertion time is very high for inserting data in postgres

On Wed, 10 Feb 2021 at 06:11, Niels Jespersen <NJN@dst.dk> wrote:

From: prachi surangalikar <surangalikarprachi100@gmail.com>

Hello Team,
Greetings!

We are using Postgres 12.2.1 to fetch per-minute data for about 25
machines, running in parallel via a single thread in Python.
But suddenly the insertion time has increased to a very high level, about
30 seconds for one machine.
This is a serious problem for us, as data fetching is becoming slow.

Before anyone can help you, you will have to provide much more information:
the schema, the data you are inserting, the size of the machine,
configuration settings, etc.

Dave


#5 Peter J. Holzer
hjp-pgsql@hjp.at
In reply to: Niels Jespersen (#3)
Re: SV: Insertion time is very high for inserting data in postgres

On 2021-02-10 11:10:41 +0000, Niels Jespersen wrote:

From: prachi surangalikar <surangalikarprachi100@gmail.com>

We are using Postgres 12.2.1 to fetch per-minute data for about 25
machines, running in parallel via a single thread in Python.
But suddenly the insertion time has increased to a very high level, about
30 seconds for one machine.
This is a serious problem for us, as data fetching is becoming slow.

If anyone could help us solve this problem, it would be of great help to us.

Get your data into an in-memory text buffer (e.g. io.StringIO) and then use COPY:
https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from

This is THE way to do high-performance inserts with Postgres.

True, but Prachi wrote that the insert times "suddenly ... increased to a
very high level". It's better to investigate what went wrong than to
blindly make some changes to the code.

As a first measure I would at least turn on statement logging and/or
pg_stat_statements to see which statements are slow, and then
investigate the slow statements further. auto_explain might also be
useful.
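A minimal sketch of querying pg_stat_statements for the slowest statements (assumptions: the extension is listed in shared_preload_libraries and has been created with CREATE EXTENSION pg_stat_statements; on Postgres 12 the column is mean_time, renamed to mean_exec_time in Postgres 13):

```python
# Postgres 12 column names (mean_time, total_time); in Postgres 13+
# these were renamed to mean_exec_time / total_exec_time.
SLOW_STATEMENTS_SQL = """
SELECT query, calls, mean_time, total_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT %s;
"""

def slowest_statements(conn, limit=10):
    """Return the `limit` statements with the highest mean execution
    time, as (query, calls, mean_time, total_time) tuples."""
    with conn.cursor() as cur:
        cur.execute(SLOW_STATEMENTS_SQL, (limit,))
        return cur.fetchall()
```

If the slow statement turns out to be the per-minute INSERT itself, that points at the table (bloat, indexes, triggers) rather than the Python side.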

hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"