update problem?
Can it really be true?
I have a table with fewer than 46,000 rows, and a simple update like
UPDATE foo set xxx = 'X';
takes about 15 seconds.
What's wrong?
Greetings
Juergen
On Tue, 9 Jul 2002, Juergen G. Mischke wrote:
Can it really be true?
I have a table with fewer than 46,000 rows, and a simple update like
UPDATE foo set xxx = 'X';
takes about 15 seconds.
What's wrong?
Does the table have any foreign key constraints or other
tables that reference it in foreign key constraints? What's
the schema of the table?
Hi Stephan,
Nothing at all. There are no constraints on the table.
Take a look at the schema :
create table KARTE_ARCHIV
(
AR_KARTENNR VARCHAR(8) not null,
AR_KARTE_OBJEKT VARCHAR(8) not null,
AR_K_ERST_DATUM DATE ,
AR_K_RUECK_DATUM DATE ,
AR_KARTENDRUCKTYP CHAR ,
AR_K_NUMMER VARCHAR(8) ,
AR_K_OBJ_NR VARCHAR(8) ,
AR_K_TOUR VARCHAR(2) ,
AR_K_GEDRUCK DATE ,
AR_K_GERECHNET CHAR ,
AR_K_RECHDAT DATE ,
AR_K_ANZ_POS INTEGER ,
AR_K_ANZ_MIT INTEGER ,
AR_K_BEMERKUNG VARCHAR(200) ,
AR_K_LEISTUNG TEXT ,
AR_RECHGEDRUCKT CHAR ,
AR_K_ERLEDIGT_AM DATE ,
AR_K_KUNDE VARCHAR(60) ,
AR_K_OBJEKT VARCHAR(60) ,
AR_K_RECHMOD CHAR
);
(and some indexes too)
That's all. Any ideas?
Hi,
I'm working with Juergen on the same problem.
Consider the following table:
create table test (a char);
Now fill the table with some data from test.txt
(100,000 rows, one character per row):
\copy test from test.txt
Now update the table
update test set a='x';
This takes nearly 3 seconds, which would be fine for us.
But after adding a column and filling it with some data
alter table test add b text;
update test set b='foobar...foobar...foobar... ...';
now the update
update test set a='y';
takes 8 seconds. When I add further columns or increase
the amount of data a column holds, it gets even worse.
Before the column was added explain said:
explain update test set a='x';
Seq Scan on test2 (cost=0.00..20.00 rows=1000 width=6)
(BTW: why rows=1000? There are 100,000 rows in the table.)
And after it was added:
explain update test set a='y';
Seq Scan on test2 (cost=0.00..20.00 rows=1000 width=38)
And for our "real world" table (plain table, no foreign keys) it says:
explain update karte_archiv set ar_k_rechmod='2';
Seq Scan on karte_archiv (cost=0.00..1609.99 rows=45399 width=359)
This update takes over 30 seconds!
It uses 7 or 8 logfiles in the pg_xlog directory, 16 MB each.
But does that explain why it takes *so* long?
Jörg
Hi,
Is there a way to create or make a table shared across all databases in
the cluster?
I created a table in template1 and updated its definition in
pg_class as in the example below:
$ psql template1
.....
template1=# CREATE TABLE shared_table (ID SERIAL8, name VARCHAR UNIQUE);
create sequence......
create index ...........
CREATE
template1=# UPDATE pg_class SET relisshared = TRUE WHERE relname = 'shared_table';
UPDATE 1
template1=# insert into shared_table values (1, 'test');
NOTICE: RelationBuildDesc: can't open shared_table: No such file or directory
ERROR: cannot open shared_table: No such file or directory
NOTICE: RelationBuildDesc: can't open shared_table: No such file or directory
ERROR: cannot open shared_table: No such file or directory
So I investigated and realised that shared tables are stored
in C:\cygwin\usr\local\pgsql\data\global,
but the shared_table I created is stored in
C:\cygwin\usr\local\pgsql\data\base\test.
Is there a command that allows creating shared tables?
Thanks for your help.
Mourad.
On Wed, 10 Jul 2002, Jürgen Mischke wrote:
Hi Stephan,
Nothing at all. There are no constraints on the table.
That's all. Any ideas?
Have you vacuumed the table recently? I wonder if you've
got a lot of dead rows to ignore.
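One quick way to check is VACUUM VERBOSE, which reports how many dead row versions it removes and how many pages the table occupies. A minimal sketch, using the karte_archiv table from the earlier message:

VACUUM VERBOSE karte_archiv;
-- The NOTICE output lists removed tuple counts and total pages; large
-- numbers here suggest bloat left behind by earlier full-table updates.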
On Wed, 10 Jul 2002, Jörg Schulz wrote:
Before the column was added explain said:
explain update test set a='x';
Seq Scan on test2 (cost=0.00..20.00 rows=1000 width=6)
(BTW: why rows=1000? There are 100,000 rows in the table.)
Because you haven't vacuum analyzed since loading the data.
If you're doing an update of every row of a table, you almost
certainly want to at least do a normal vacuum (and possibly
a vacuum analyze) of the table afterwards.
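Roughly what that looks like with the test table from the example above (a sketch, not verbatim output):

VACUUM ANALYZE test;
-- refresh the planner statistics after the bulk load/updates
EXPLAIN UPDATE test SET a = 'x';
-- the row estimate should now be close to the real 100,000 rows
-- instead of the default 1000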
"JS" == J�rg Schulz <jschulz@sgbs.de> writes:
JS> This update takes over 30 seconds!
JS> It uses 7 or 8 logfiles in the pg_xlog directory 16MB each.
JS> But does that explain why it takes *so* long?
Try running a vacuum between your updates. Perhaps you're using more
disk pages than you can fit into memory, causing lots of disk I/O.
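For example, along the lines Vivek suggests (a sketch using the test table from earlier):

UPDATE test SET a = 'x';
VACUUM test;
-- reclaim the dead row versions left by the previous full-table update
UPDATE test SET a = 'y';
-- the second update can reuse the freed space instead of growing the table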
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: khera@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
Hi there,
I've just moved up from non-free OSes to Debian Linux, and installed
PostgreSQL, with the hope of getting started on some projects I've been
thinking about. Several of these projects involve web archives. The
idea is that a URL is entered along with a bunch of bibliographic-type
data in other fields (keywords, author, date, etc.). The HTML (and,
hopefully, accompanying images/CSS/etc.) is then grabbed using curl and
archived in a PostgreSQL database. A web or other GUI interface then
provides fully searchable access to the archive for later use.
So my question: does anyone know of a similar tool that already
exists? I'm a complete novice at database programming (and at PHP, too,
which is what I figured I'd use as the scripting language, though I'd
consider learning Perl or Java if folks think that's a much better
idea), and I'd rather work with some pre-existing code than start from
the ground up. Any suggestions? Is this the right list to be asking
this question on?
Thanks loads,
Matt
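Just to make the idea concrete, here is a hypothetical table sketch for such an archive; all names and column choices are purely illustrative, not an existing tool:

CREATE TABLE page_archive (
    id        SERIAL PRIMARY KEY,
    url       TEXT NOT NULL,
    author    TEXT,
    keywords  TEXT,
    comments  TEXT,
    fetched   DATE DEFAULT CURRENT_DATE,
    html_body TEXT               -- page contents fetched with curl
);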
Not to discourage you from using PostgreSQL or writing it yourself, but
you might want to take a look at wget (for downloading the web pages) and
mnoGoSearch or htdig for searching them.
mnoGoSearch supports PostgreSQL and has a PHP interface, so you can have fun
with that...
Hi Philip, et al.,
Well, wget is nice, and htdig/mnoGoSearch both seem great; but I want to
be able to enter extra data about the web pages (author names, comments,
subject/keyword entries...) so that the database starts to resemble a
bibliographic database. That is, I want other people to be able to take
advantage of the work that I and other data-entry slaves do when we enter
the URLs.
Does that seem silly?
Matt
On Thu, 2002-07-11 at 12:52, Matt Price wrote:
Hi Philip, et al.,
Well, wget is nice, and htdig/mnoGoSearch both seem great; but I want to
be able to enter extra data about the web pages (author names, comments,
subject/keyword entries...) so that the database starts to resemble a
bibliographic database. That is, I want other people to be able to take
advantage of the work that I and other data-entry slaves do when we enter
the URLs. Does that seem silly?
What you could do is use wget to store the HTML "trees" in individual
directories. Then you can store the web "metadata" plus location in
the database. That would minimize the size of the db, plus store the
HTML in its "natural habitat": the file system, where it's available
to Apache/etc.
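That approach might look something like the hypothetical page_archive sketch above, but storing a filesystem path instead of the page body (names again made up for illustration):

CREATE TABLE page_meta (
    id       SERIAL PRIMARY KEY,
    url      TEXT NOT NULL,
    author   TEXT,
    keywords TEXT,
    html_dir TEXT NOT NULL       -- directory where wget mirrored the page
);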
--
+-----------------------------------------------------------------+
| Ron Johnson, Jr. Home: ron.l.johnson@cox.net |
| Jefferson, LA USA http://ronandheather.dhs.org:81 |
| |
| "Experience should teach us to be most on our guard to protect |
| liberty when the government's purposes are beneficent. Men |
| born to freedom are naturally alert to repel invasion of their |
| liberty by evil minded rulers. The greatest dangers to liberty |
| lurk in insidious encroachment by men of zeal, well-meaning |
| but without understanding." |
| Justice Louis Brandeis, dissenting, Olmstead v US (1928) |
+-----------------------------------------------------------------+