SELECT INTO

Started by Roberto (SmartBit) over 23 years ago · 7 messages · general
#1Roberto (SmartBit)
roberto@smartbit.inf.br

Hi all

I'm trying to write a trigger and I need to use a SELECT INTO;
look at this:

CREATE OR REPLACE FUNCTION f_lancamentos_pagos() RETURNS opaque AS
' DECLARE V_VALUE NUMERIC(12,2);
DECLARE V_PAYMENT TIMESTAMP;
begin
if tg_op = ''INSERT'' then
SELECT INTO V_PAYMENT "DATE_PAYMENT",
V_VALUE "VALUE"
FROM "PAYMENTS"
WHERE
"ID" = NEW.ID;

....
end if
end'
LANGUAGE plpgsql;

but, when this function is called I'm getting the following error message:
ERROR: parser: parse error at or near ''''''

but if I do it in 2 SELECTs it runs normally..

SELECT INTO V_PAYMENT "DATE_PAYMENT"
FROM "PAYMENTS"
WHERE
"ID" = NEW.ID;

SELECT INTO V_VALUE "VALUE"
FROM "PAYMENTS"
WHERE
"ID" = NEW.ID;

so, if I want to get 10 columns, must I do 10 SELECTs???

what's wrong there??

Roberto de Amorim - +55 48 346-2243
Software engineer at SmartBit Software
Delphi and Interbase consultant
roberto@smartbit.inf.br

#2Areski
areski5@hotmail.com
In reply to: Roberto (SmartBit) (#1)
Performance : Optimize query

Hi Everybody,

I have the following query: select count(*) from "Data" where
"IDOrigin"='29';
It takes more or less 2 minutes to run... It's really slow...
Does anyone maybe know how to optimize "select count"?

Below is the explain analyze of the query.

EXPLAIN ANALYZE select count(*) from "Data" where "IDOrigin"='29';
NOTICE: QUERY PLAN:

Aggregate (cost=108945.77..108945.77 rows=1 width=0) (actual
time=108845.29..108845.30 rows=1 loops=1)
-> Seq Scan on Data (cost=0.00..107152.11 rows=717462 width=0) (actual
time=18233.46..106927.60 rows=1457826 loops=1)
Total runtime: 108845.50 msec.

I already did a lot of optimization, such as modifying shared_buffers and
sort_mem...
But always too slow...

Thanks, Areski

#3Bjoern Metzdorf
bm@turtle-entertainment.de
In reply to: Areski (#2)
Re: Performance : Optimize query

EXPLAIN ANALYZE select count(*) from "Data" where "IDOrigin"='29';
NOTICE: QUERY PLAN:

Aggregate (cost=108945.77..108945.77 rows=1 width=0) (actual
time=108845.29..108845.30 rows=1 loops=1)
-> Seq Scan on Data (cost=0.00..107152.11 rows=717462 width=0) (actual
time=18233.46..106927.60 rows=1457826 loops=1)
Total runtime: 108845.50 msec.

Well, do you have an index on Data.IDOrigin ?

Regards,
Bjoern
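
[Editor's note: as a sketch of Bjoern's suggestion, assuming the quoted "Data" table and "IDOrigin" column from the thread, an index could be created like this:]

```sql
-- Create an index so the planner can avoid a full sequential scan
-- (identifier names assumed from the quoted query).
CREATE INDEX "idx_data_idorigin" ON "Data" ("IDOrigin");

-- Refresh planner statistics so the new index is considered.
VACUUM ANALYZE "Data";
```

Note that count(*) over most of a 2-million-row table may still scan much of it; an index helps most when the WHERE clause is selective.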

#4Richard Huxton
dev@archonet.com
In reply to: Roberto (SmartBit) (#1)
Re: SELECT INTO

On Wednesday 13 Nov 2002 10:24 am, Roberto de Amorim wrote:

Hi all

I'm trying to write a trigger and I need to use a SELECT INTO;
look at this:

CREATE OR REPLACE FUNCTION f_lancamentos_pagos() RETURNS opaque AS
' DECLARE V_VALUE NUMERIC(12,2);
DECLARE V_PAYMENT TIMESTAMP;
begin
if tg_op = ''INSERT'' then
SELECT INTO V_PAYMENT "DATE_PAYMENT",
V_VALUE "VALUE"
FROM "PAYMENTS"

I think perhaps it should be:
SELECT INTO V_PAYMENT,V_VALUE "DATE_PAYMENT","VALUE"...

so, if I want to get 10 columns, must I do 10 SELECTs???

If you want 10 columns, you might prefer to select into a record variable and
reference the columns as required.

http://www.postgresql.org/idocs/index.php?plpgsql-statements.html

--
Richard Huxton
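
[Editor's note: a minimal sketch of Richard's record-variable suggestion, in the PL/pgSQL style of the era (quoted function body with doubled single quotes); the RETURN NEW, which the original snippet omitted, is added here:]

```sql
CREATE OR REPLACE FUNCTION f_lancamentos_pagos() RETURNS opaque AS '
DECLARE
    rec RECORD;
BEGIN
    IF TG_OP = ''INSERT'' THEN
        -- One SELECT fetches the whole row into the record variable.
        SELECT INTO rec * FROM "PAYMENTS" WHERE "ID" = NEW.ID;
        -- Columns are then referenced as rec."DATE_PAYMENT", rec."VALUE", etc.
    END IF;
    RETURN NEW;
END' LANGUAGE plpgsql;
```

This avoids declaring a separate variable per column when many columns are needed.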

#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Roberto (SmartBit) (#1)
Re: SELECT INTO

"Roberto de Amorim" <roberto@smartbit.inf.br> writes:

SELECT INTO V_PAYMENT "DATE_PAYMENT",
V_VALUE "VALUE"
FROM "PAYMENTS"
WHERE
"ID" = NEW.ID;

That's not the right syntax. Try

SELECT INTO V_PAYMENT, V_VALUE "DATE_PAYMENT", "VALUE"
FROM ... etc ...

regards, tom lane
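
[Editor's note: putting Tom's correction back into the original function, as a sketch: the INTO target list comes first, then the column list. The statement terminators and RETURN that the original snippet omitted are also added here.]

```sql
CREATE OR REPLACE FUNCTION f_lancamentos_pagos() RETURNS opaque AS '
DECLARE
    V_VALUE   NUMERIC(12,2);
    V_PAYMENT TIMESTAMP;
BEGIN
    IF TG_OP = ''INSERT'' THEN
        -- INTO targets first, then the columns, in matching order.
        SELECT INTO V_PAYMENT, V_VALUE
               "DATE_PAYMENT", "VALUE"
          FROM "PAYMENTS"
         WHERE "ID" = NEW.ID;
    END IF;
    RETURN NEW;
END' LANGUAGE plpgsql;
```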

#6Areski
areski5@hotmail.com
In reply to: Bjoern Metzdorf (#3)
Re: Performance : Optimize query

Yes of course...
I have more or less 2 million records in this table...

----- Original Message -----
From: "Björn Metzdorf" <bm@turtle-entertainment.de>
To: <pgsql-general@postgresql.org>
Sent: Wednesday, November 13, 2002 12:25 PM
Subject: Re: [GENERAL] Performance : Optimize query

EXPLAIN ANALYZE select count(*) from "Data" where "IDOrigin"='29';
NOTICE: QUERY PLAN:

Aggregate (cost=108945.77..108945.77 rows=1 width=0) (actual
time=108845.29..108845.30 rows=1 loops=1)
-> Seq Scan on Data (cost=0.00..107152.11 rows=717462 width=0) (actual
time=18233.46..106927.60 rows=1457826 loops=1)
Total runtime: 108845.50 msec.

Well, do you have an index on Data.IDOrigin ?

Regards,
Bjoern


#7Stephan Szabo
sszabo@megazone23.bigpanda.com
In reply to: Areski (#2)
Re: Performance : Optimize query

On Wed, 13 Nov 2002, Areski wrote:

Hi Everybody,

I have the following query: select count(*) from "Data" where
"IDOrigin"='29';
It takes more or less 2 minutes to run... It's really slow...
Does anyone maybe know how to optimize "select count"?

Below is the explain analyze of the query.

EXPLAIN ANALYZE select count(*) from "Data" where "IDOrigin"='29';
NOTICE: QUERY PLAN:

Aggregate (cost=108945.77..108945.77 rows=1 width=0) (actual
time=108845.29..108845.30 rows=1 loops=1)
-> Seq Scan on Data (cost=0.00..107152.11 rows=717462 width=0) (actual
time=18233.46..106927.60 rows=1457826 loops=1)
Total runtime: 108845.50 msec.

Let's go through the normal stuff :)

Have you used vacuum analyze recently? How many rows are in the table?
How many rows actually have IDOrigin=29 (ie, is 717462 a valid estimate)?
If it's not a reasonable estimate, you might want to raise the number of
statistics buckets the column is getting (ALTER TABLE "Data" ALTER
COLUMN "IDOrigin" SET STATISTICS <number>, where the default value is 10)
and run vacuum analyze again.
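
[Editor's note: a sketch of Stephan's suggestions, assuming the quoted table and column names; the statistics target of 100 is an illustrative value, not a recommendation:]

```sql
-- Refresh planner statistics first; stale stats alone can
-- explain a bad row estimate.
VACUUM ANALYZE "Data";

-- Raise the number of statistics buckets for the column
-- (default is 10), then re-analyze so the planner's estimate
-- for "IDOrigin"='29' improves.
ALTER TABLE "Data" ALTER COLUMN "IDOrigin" SET STATISTICS 100;
VACUUM ANALYZE "Data";
```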