Fwd: postgresql performance question

Started by 許耀彰 about 8 years ago · 3 messages · bugs
#1 許耀彰
kpm906@gmail.com

Dear Support Team,
I created a table with the following command:

CREATE TABLE public.log2
(
    d3 text COLLATE pg_catalog."default"
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

Then I used the following command to import data into the log2 table:

COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;

My purpose is to import log information into the log2 table for analysis,
but I have run into one situation: for example, I have 166856 records in
the log2 table, and when I use a SELECT command to list the data it takes
a long time. Can we improve this?

select * from log2;

Thank you for your kind assistance.
Additional information: the attached file shows the log format.
Best Regards, Anderson Hsu

Attachments:

0101.csv (text/csv; charset=US-ASCII)
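
For reference, the steps in this message can be condensed into one script. This is a minimal sketch assuming the same single-column layout as the post; note that with a one-column text table, every query must scan and return whole raw lines, so bounding the result set is a quick first check:

```sql
-- Sketch of the workflow from the message above (path as posted).
CREATE TABLE public.log2
(
    d3 text COLLATE pg_catalog."default"
);

-- Load the CSV; HEADER skips the first line of the file.
COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;

-- Returning all 166856 rows to the client is often the dominant cost;
-- fetching a bounded sample is a quick sanity check.
SELECT d3 FROM log2 LIMIT 100;
```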
#2 Pavan Teja
pavan.postgresdba@gmail.com
In reply to: 許耀彰 (#1)
Re: Fwd: postgresql performance question

Hi,

You can filter using an error_severity condition in the WHERE clause, for
example:

SELECT * FROM log2 WHERE error_severity = 'error';

Regards,
Pavan
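
Note that this suggestion assumes log2 has an error_severity column, while the table posted in #1 has only a single text column, d3. For the filter to work, the log would first need to be loaded into a structured table. A hedged sketch of that assumption, with column names that are purely illustrative and an index so the filter can avoid a full sequential scan:

```sql
-- Hypothetical structured version of the log table (column names assumed,
-- not taken from the poster's actual CSV).
CREATE TABLE log2_structured
(
    log_time       timestamptz,
    error_severity text,
    message        text
);

-- An index on the filtered column lets PostgreSQL avoid scanning
-- every row when the predicate is selective.
CREATE INDEX ON log2_structured (error_severity);

SELECT * FROM log2_structured WHERE error_severity = 'ERROR';
```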

On Feb 11, 2018 9:31 PM, "許耀彰" <kpm906@gmail.com> wrote:


#3 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: 許耀彰 (#1)
Re: Fwd: postgresql performance question

On 02/11/2018 04:15 PM, 許耀彰 wrote:

> Dear Support Team,
> I created a table with the following command:
>
> CREATE TABLE public.log2
> (
>     d3 text COLLATE pg_catalog."default"
> )
> WITH (
>     OIDS = FALSE
> )
> TABLESPACE pg_default;
>
> Then I used the following command to import data into the log2 table:
>
> COPY log2 FROM '/home/anderson/0107.csv' CSV HEADER;
>
> My purpose is to import log information into the log2 table for analysis,
> but I have run into one situation: for example, I have 166856 records in
> the log2 table, and when I use a SELECT command to list the data it takes
> a long time. Can we improve this?
>
> select * from log2;
>
> Thank you for your kind assistance.

Sorry for being annoying, but this mailing list is for bug reports, and
your post is clearly not one. Please send it to pgsql-performance; more
people are watching that list, and you're more likely to get help there.

Furthermore, I strongly recommend reading this:

https://wiki.postgresql.org/wiki/Slow_Query_Questions

It may actually have the answer to your question, and in case it does not,
it lists the things you need to include in your post. For example, the
query plan or information about the hardware/system would be very helpful.
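
For reference, the query plan mentioned here can be captured with EXPLAIN. A minimal example for the query from the original post:

```sql
-- EXPLAIN ANALYZE actually executes the query and reports real row
-- counts and timings; BUFFERS adds I/O detail. This is the kind of
-- output the wiki page above asks posters to include.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM log2;
```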

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services