Losing memory references - SRF + SPI

Started by Anderson Carniel, over 9 years ago · 4 messages
#1Anderson Carniel
accarniel@gmail.com

I am writing a function that returns a set of tuples and also uses
PostGIS; thus, I am using an SRF as well. It successfully returns the
expected result when there are at most 4 tuples, but not when more than
4 tuples have to be returned. When I debugged the code, I found that the
problem is in my function that converts a text attribute to a cstring
after an SPI connection. It seems that this cstring is no longer valid
at the moment of the conversion (see my comment below). I know that SPI
uses different memory contexts when it initializes and finishes its
work, but I don't understand why I have this problem here. Please note
that I tried copying the whole tuple, but I have the same problem: the
system crashes after the fourth call of the function. Also note that I
call this function only in the first (init) call of the SRF. I would
appreciate any suggestion and help.

----------- code of the problematic function here ---------------

LWGEOM *retrieve_geom_from_postgis(int row_id) {
    char query[100];
    int err;
    char *wkt;
    int srid;
    LWGEOM *lwgeom = NULL;
    HeapTuple cop;
    bool null;
    TupleDesc tupdesc;

    /* refinplan is a prepared SELECT command that returns 2 columns */
    sprintf(query, "EXECUTE refinplan(%d);", row_id);

    if (SPI_OK_CONNECT != SPI_connect()) {
        SPI_finish();
        _DEBUG(ERROR, "retrieve_geom_from_postgis: could not connect to SPI manager");
        return NULL;
    }
    err = SPI_execute(query, false, 1);
    if (err < 0) {
        SPI_finish();
        _DEBUG(ERROR, "retrieve_geom_from_postgis: could not execute the EXECUTE command");
        return NULL;
    }

    if (SPI_processed <= 0) {
        SPI_finish();
        _DEBUGF(ERROR, "the row_id (%d) does not exist in the table", row_id);
        return NULL;
    }
    cop = SPI_copytuple(SPI_tuptable->vals[0]);
    tupdesc = SPI_tuptable->tupdesc;

    /* disconnect from SPI */
    SPI_finish();

    wkt = text2cstring(DatumGetTextP(heap_getattr(cop, 1, tupdesc, &null)));
    srid = DatumGetInt32(heap_getattr(cop, 2, tupdesc, &null));

    lwgeom = lwgeom_from_wkt(wkt, LW_PARSER_CHECK_NONE); /* error here... only after the fourth call */
    lwgeom_set_srid(lwgeom, srid);

    lwfree(wkt);

    return lwgeom;
}

#2Joe Conway
mail@joeconway.com
In reply to: Anderson Carniel (#1)
Re: Losing memory references - SRF + SPI

On 05/13/2016 09:35 PM, Anderson Carniel wrote:

> I am writing a function that returns a set of tuples and also uses
> PostGIS; thus, I am using an SRF as well. It successfully returns the
> expected result when there are at most 4 tuples, but not when more
> than 4 tuples have to be returned. When I debugged the code, I found
> that the problem is in my function that converts a text attribute to a
> cstring after an SPI connection. It seems that this cstring is no
> longer valid at the moment of the conversion (see my comment below). I
> know that SPI uses different memory contexts when it initializes and
> finishes its work, but I don't understand why I have this problem
> here. Please note that I tried copying the whole tuple, but I have the
> same problem: the system crashes after the fourth call of the
> function. Also note that I call this function only in the first (init)
> call of the SRF. I would appreciate any suggestion and help.

You probably need to allocate your returned values in a per-query memory
context. Take a look at how it is done in, for example, crosstab() in
contrib/tablefunc.
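
[Editor's note: to make the suggestion above concrete, here is a hedged
sketch of that pattern, in the style of crosstab(). It is not the
poster's actual code; it assumes a materialize-mode SRF where fcinfo is
the function-call info and the prepared statement is the same one used
above. The key point is that SPI_finish() frees SPI_tuptable, including
its tupdesc, so anything still needed must first be copied into
longer-lived memory.]

```c
/* Sketch: copy the needed values into per-query memory before SPI_finish() */
ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
MemoryContext per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
MemoryContext oldcontext;
char *wkt;
int   srid;
bool  isnull;

SPI_connect();
SPI_execute(query, true, 1);

/* switch contexts so the palloc'd copies land in per-query memory,
 * which outlives the SPI session */
oldcontext = MemoryContextSwitchTo(per_query_ctx);
wkt  = SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1);
srid = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
                                   SPI_tuptable->tupdesc, 2, &isnull));
MemoryContextSwitchTo(oldcontext);

SPI_finish();   /* safe: wkt and srid no longer reference SPI memory */
```

In the original function, by contrast, tupdesc still points into
SPI_tuptable after SPI_finish(), so heap_getattr() decodes the copied
tuple through a freed descriptor.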

HTH,

Joe

--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development

#3Anderson Carniel
accarniel@gmail.com
In reply to: Joe Conway (#2)
Re: Losing memory references - SRF + SPI

Thank you very much, Joe.

I have followed the crosstab() implementation and understood the idea of
a per-query memory context. Now I use a single SPI session (in which I
perform several SQL queries), process the result, transform it into a
tuplestore, close SPI, and I am done. It works perfectly.

I have a question regarding the tuplestore: is there a performance
problem if my tuplestore forms a big table with millions of tuples?
Another question regards SPI: is there a problem with using only one SPI
session (for instance, if multiple users call the same function)?

Thank you again,
Anderson Carniel


#4Michael Paquier
michael.paquier@gmail.com
In reply to: Anderson Carniel (#3)
Re: Losing memory references - SRF + SPI

On Sun, May 15, 2016 at 10:22 AM, Anderson Carniel <accarniel@gmail.com> wrote:

> I have a question regarding the tuplestore: is there a performance
> problem if my tuplestore forms a big table with millions of tuples?

When using a tuplestore, one performance concern is the moment the data
spills to disk, something that is controlled by maxKBytes in
tuplestore_begin_heap(). Passing work_mem is the usual recommendation,
though you can tune it better depending on your needs.
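
[Editor's note: in other words, with the parameter names from
tuplestore.h; work_mem is measured in kilobytes.]

```c
/* maxKBytes bounds the in-memory footprint; beyond it the tuplestore
 * transparently spills to a temporary file */
Tuplestorestate *ts = tuplestore_begin_heap(true,     /* randomAccess */
                                            false,    /* interXact */
                                            work_mem  /* maxKBytes */);
```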
--
Michael

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers