Great Bridge benchmark results for Postgres, 4 others

Started by Ned Lilly · over 25 years ago · 36 messages · general
#1Ned Lilly
ned@greatbridge.com

Greetings all,

At long last, here are the results of the benchmarking tests that
Great Bridge conducted in its initial exploration of PostgreSQL. We
held it up so we could test the shipping release of the new
Interbase 6.0. This is a news release that went out today.

The release is also on our website at
http://www.greatbridge.com/news/p_081420001.html. Graphics of the
AS3AP and TPC-C test results are at
http://www.greatbridge.com/img/as3ap.gif and
http://www.greatbridge.com/img/tpc-c.gif respectively.

I'll try and field any questions anyone has, or refer you to someone
who can.

Best regards,

Ned Lilly
VP Hacker Relations
Great Bridge, LLC

--

Open source database routs competition in new benchmark tests

PostgreSQL meets or exceeds speed and scalability of proprietary
database leaders, and significantly surpasses open source
competitors

NORFOLK, Va., August 14, 2000 - PostgreSQL, the world's most advanced
open source database, routed the competition in recent benchmark
testing, topping the proprietary database leaders in
industry-standard transaction-processing tests. PostgreSQL, also
known as "Postgres," is an object-relational database management
system (DBMS) that newly formed Great Bridge LLC will professionally
market, service and support. Postgres also consistently outperformed
open source competitors, including MySQL and Interbase, in the
benchmark tests. Great Bridge will market Postgres-based open source
solutions as a highly reliable and lower cost option for businesses
seeking an alternative to proprietary databases.

On the ANSI SQL Standard Scalable And Portable (AS3AP) benchmark, a
rudimentary information retrieval test that measures raw speed and
scalability, Postgres performed an average of four to five times
faster than every other database tested, including two major
proprietary DBMS packages, the MySQL open source database, and
Interbase, a formerly proprietary product which was recently made
open source by Inprise/Borland. (See Exhibit 1)

In the Transaction Processing Council's TPC-C test, which simulates
a real-world online transaction processing (OLTP) environment,
Postgres consistently matched the performance of the two leading
proprietary database applications. (See Exhibit 2) The two industry
leaders cannot be mentioned by name because their restrictive
licensing agreements prohibit anyone who buys their closed-source
products from publishing their company names in benchmark testing
results without the companies' prior approval.

"The test results show that Postgres is a robust, well-built product
that must be considered in the same category as enterprise-level
competition," said Robert Gilbert, Great Bridge President and CEO.
"Look at the trendlines in the AS3AP test: Postgres, like the
proprietary leaders, kept a relatively consistent output level all
the way up to 100 concurrent users - and that output was four to
five times faster than the proprietary products. Interbase and
MySQL fell apart under heavy usage. That's a strong affirmation
that Postgres today is a viable alternative to the market-leading
proprietary databases in terms of performance and scalability - and
the clear leader among open source databases."

The tests were conducted by Xperts Inc. of Richmond, Virginia, an
independent technology solutions company, using Quest Software's
Benchmark Factory application. Both the AS3AP and the TPC-C
benchmarks simulated transactions by one to 100 simultaneous users
in a client-server environment. One hundred concurrent users
approximates the middle range of a traditional database user pool;
many applications never see more than a few users on the system at
any given time, while other more sophisticated enterprise platforms
number concurrent users in the thousands. In a Web-based
application, where the connection to the database is measured in
milliseconds, 100 simultaneous users would represent a substantial
load - the equivalent of 100 customers hitting the "submit" button on
an order form at exactly the same time.
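
The load described above can be sketched as a small harness: each simulated user loops issuing transactions for a fixed window, and aggregate throughput is the total count divided by the window. This is a hypothetical illustration (the `do_transaction` stub stands in for a real query over ODBC), not the Benchmark Factory code:

```python
# Hypothetical sketch of a concurrent-user benchmark driver, not the
# actual Benchmark Factory harness. Each thread plays one "user"
# submitting transactions until time runs out.
import threading
import time

def do_transaction():
    # Placeholder for one benchmark query; a real client would go
    # through the database's ODBC driver here.
    time.sleep(0.001)

def run_clients(n_clients=10, duration=0.5):
    counts = [0] * n_clients
    stop_at = time.monotonic() + duration

    def client(i):
        while time.monotonic() < stop_at:
            do_transaction()
            counts[i] += 1

    threads = [threading.Thread(target=client, args=(i,))
               for i in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Aggregate transactions per second across all simulated users.
    return sum(counts) / duration

if __name__ == "__main__":
    print(f"approx TPS: {run_clients():.1f}")
```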

The AS3AP test measures raw database data retrieval power, showing
an application's scalability, portability and ease of use and
interpretation through the use of simple ANSI standard SQL queries.
The TPC-C test simulates a warehouse distribution system, including
order creation, customer payments, order status checking, delivery,
and inventory management.
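
The TPC-C mix of those five transaction types can be sketched as a weighted random chooser. The weights below approximate the shares the TPC-C specification mandates (Payment, Order-Status, Delivery, and Stock-Level carry fixed minimum shares, with New-Order taking roughly the remainder); the names and driver are illustrative, not Benchmark Factory's actual code:

```python
# Illustrative sketch of the TPC-C transaction mix; weights are
# approximate shares per the TPC-C spec, not measured from the tests.
import random

TPCC_MIX = [
    ("new_order",    45),  # order creation
    ("payment",      43),  # customer payments
    ("order_status",  4),  # order status checking
    ("delivery",      4),  # delivery
    ("stock_level",   4),  # inventory management
]

def next_transaction(rng=random):
    """Pick the next simulated transaction type, weighted per the mix."""
    total = sum(weight for _, weight in TPCC_MIX)
    roll = rng.uniform(0, total)
    for name, weight in TPCC_MIX:
        roll -= weight
        if roll <= 0:
            return name
    return TPCC_MIX[-1][0]
```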

"What stood out for us was the consistent performance of Postgres, which
stayed the same as or tested better than that of the leading proprietary
applications. Postgres performed consistently whether it
was being used by one or 100 people," said Richard Brosnahan, senior
software developer at Xperts.

Postgres is a standards-based object-relational SQL database
designed for e-business and enterprise applications. The software is
open source and freely owned, continuously augmented by a global
collaborative community of elite programmers who volunteer their
time and expertise to improve the product. In the last two years,
with the introduction of versions 6.5 and 7.0 of the software,
Postgres has seen rapid enhancement through a series of high-level
refinements.

"Postgres' performance is a powerful affirmation of the open source
method of development," said Gilbert of Great Bridge. "Hundreds,
even thousands, of open source developers work on this software,
demonstrating a rate of innovation and improvement that the
proprietary competition simply can't match. And it's only going to
get better."

A closer look

Xperts ran the benchmark tests on Compaq Proliant ML350 servers with
512 MB of RAM and two 18.2 GB hard disks, equipped with Intel
Pentium III processors and Red Hat Linux 6.1 and Windows NT
operating systems. The company ensured the tests' consistency by
using the same computers for each test, with each product connecting
to the tests through its own preferred ODBC driver. While Benchmark
Factory does provide native drivers for some commercial databases,
using each product's own ODBC ensured the most valid "apples to
apples" comparison.

In the AS3AP tests, PostgreSQL 7.0 significantly outperformed both
the leading commercial and open source applications in speed and
scalability. In the tested configuration, Postgres peaked at 1127.8
transactions per second with five users, and still processed at a
steady rate of 1070.4 with 100 users. The proprietary leader also
performed consistently, with a high of 314.15 transactions per
second with eight users, which fell slightly to 288.37 transactions
per second with 100 users. The other leading proprietary database
also demonstrated consistency, running at 200.21 transactions per
second with six users and 197.4 with 100.

The other databases tested against the AS3AP benchmarks, open source
competitors MySQL 3.22 and Interbase 6.0, demonstrated some speed
with a low number of users but a distinct lack of scalability. MySQL
reached a peak of 803.48 with two users, but its performance fell
precipitously under the stress of additional users to a rate of
117.87 transactions per second with 100 users. Similarly, Interbase
reached 424 transactions per second with four users, but its
performance declined steadily with additional users, dropping off to
146.86 transactions per second with 100 users.

"It's just astounding, and unexpected," said Xperts' Brosnahan of
Postgres' performance. "I ran the test twice to make sure it was
running right. Postgres is just a really powerful database."

In the TPC-C tests, Postgres performed neck and neck with the two
leading proprietary databases. The test simultaneously runs five
different types of simulated transactions; the attached graph of
test results (Exhibit 2) shows steadily ascending intertwined lines
representing all three databases, suggesting the applications scaled
at comparable rates. With all five transactions running with 100
users, the three databases performed at a rate of slightly above
five transactions per second.

"The TPC-C is a challenging test with five transactions running at
once while querying against the database and the stress of a growing
number of users. It showed that all the databases we tested handle
higher loads very well, the way they should," Brosnahan explained.

Neither Interbase nor MySQL could be tested for TPC-C benchmarks.
MySQL could not run the test because the application is not
adequately compliant with minimal ANSI SQL standards set in 1992.
Interbase 6.0, recently released as open source, does not have a
stable ODBC driver yet; while Xperts was able to adapt the version 5
ODBC driver for the AS3AP tests, the TPC-C test would not run.
"With MySQL it's an inherent design issue. Interbase 6 should run
the TPC-C test, and perhaps would with tweaking of the test's code,"
said Brosnahan.

Great Bridge's Gilbert attributes Postgres' high performance to a
quality differential that comes from the open source development
process; the source code for Postgres has been subjected to years of
rigorous peer review by some of the best programmers in the world,
many of whom use the product in their work environments. "Great
Bridge believes that Postgres is by far the most robust open source
database available. These tests provide strong affirmation of that
belief," he said. The company intends to work with hardware vendors
and other interested parties to continue larger-scale testing of
Postgres and other leading open source technologies.

About Great Bridge

Great Bridge LLC provides open source solutions powered by
PostgreSQL, the world's most advanced open source database. Great
Bridge delivers value-added open source software and support
services based on PostgreSQL, empowering e-business builders with an
enterprise-class database and tools at a fraction of the cost of
closed, proprietary alternatives.

Headquartered in Norfolk, Virginia, Great Bridge is a privately held
company funded by Landmark Communications, Inc., the media company
that also owns The Weather Channel, weather.com, and national and
international interests in newspapers, broadcasting, electronic
publishing, and interactive media.

# # #

#2Bryan White
bryan@arcamax.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others


This looks great. Better than I would have expected. However I have some
concerns.

1) Using only ODBC drivers. I don't know how much of an impact a driver can
make, but it would seem that using native drivers would shut down one source
of objections.

2) Postgres has the 'vacuum' process, which is typically run nightly and
which, if not accounted for in the benchmark, would give Postgres an
artificial edge. I don't know how you would account for it, but in fairness
I think it should be acknowledged. Do the other big databases have similar
maintenance issues?

3) The test system has 512MB RAM. Given the licensing structure and high
licensing fees, users have an incentive to use much larger amounts of RAM.
Someone who can only afford 512MB probably can't afford the big names
anyway.

4) The article does not mention the speed or number of CPUs, or anything
about the disks other than size. I can halfway infer that they are SCSI, but
how are they laid out?

I am not trying to tear the benchmark down. Just wanting it more immune to
such attempts.

#3Steve Wolfe
steve@iboats.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

1) Using only ODBC drivers. I don't know how much of an impact a driver can
make but it would seem that using native drivers would shut down one source
of objections.

Using ODBC is guaranteed to slow down the benchmark. I've seen native
database drivers beat ODBC by anywhere from a factor of two to an order of
magnitude.

steve

#4The Hermit Hacker
scrappy@hub.org
In reply to: Steve Wolfe (#3)
Re: Great Bridge benchmark results for Postgres, 4 others

On Mon, 14 Aug 2000, Steve Wolfe wrote:

1) Using only ODBC drivers. I don't know how much of an impact a driver can
make but it would seem that using native drivers would shut down one source
of objections.

Using ODBC is guaranteed to slow down the benchmark. I've seen native
database drivers beat ODBC by anywhere from a factor of two to an order of
magnitude.

I haven't had a chance to take a look at the benchmarks yet, having just
seen this, but *if* Great Bridge performed their benchmarks such that all
the databases were accessed via ODBC, then they are using an
'apples-to-apples' approach, as each will have similar slowdowns as a
result ...

#5Ned Lilly
ned@greatbridge.com
In reply to: The Hermit Hacker (#4)
Re: Great Bridge benchmark results for Postgres, 4 others

Marc's right... we opted for ODBC to ensure as much of an "apples to apples"
comparison as possible. Of the 5 databases we tested, a native driver existed for
only the two (ahem) unnamed proprietary products - Postgres, Interbase, and MySQL
had to rely on ODBC. So we used the vendor's own ODBC for each of the other two
cases.

<disclaimer>
As with all benchmarks, your mileage will vary according to hardware, OS, and of
course the specific application. What we attempted to do here was use two
industry-standard benchmarks and treat all five products the same.
</disclaimer>

Presumably, if the vendor had taken the time to write a native driver for
Postgres, the results would have seen an even bigger kick. We don't have any
reason to think that the results for all five tests in native driver mode would be
out of proportion to the results we got through ODBC.

Regards,
Ned


#6Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

Bryan, see my earlier post re: ODBC... will try and answer your other questions
here...

2) Postgres has the 'vacuum' process which is typically run nightly which if
not accounted for in the benchmark would give Postgres an artificial edge.
I don't know how you would account for it but in fairness I think it should
be acknowledged. Do the other big databases have similar maintenance
issues?

Don't know how this would affect the results directly. The benchmark app builds
the database clean each time, and takes about 18 hours to run for the full 100
users (for each product). So each database created was coming in with a clean
slate, with no issues of unclaimed space or what have you...

3) The test system has 512MB RAM. Given the licensing structure and high
licencing fees, users have an incentive to use much larger amounts of RAM.
Someone who can only afford 512MB probably can't afford the big names
anyway.

True, and it's a fair question how each database would make use of more RAM. My
guess, however, is that it wouldn't boost the transactions per second number -
where more RAM would impact the numbers would be more sustained performance in
higher numbers of concurrent users. Postgres and the two proprietary databases
all kept fairly flat lines (good) as the number of users edged up. We plan to
continuously re-run the tests with more users and bigger iron, so as we do that,
we'll keep the community informed.

4) The article does not mention the speed or number of CPUs or anything
about the disks other than size. I can halfway infer that they are SCSI but
how are they laid out.

Yep, the disks were 2x 18 GB Wide SCSI, hot pluggable. The CPU was a single
600 MHz Pentium III.

I am not trying to tear the benchmark down. Just wanting it more immune to
such attempts.

Not a problem, happy to try and answer any questions. Again, this is not
intended as a categorical statement of Postgres' superiority in any and all
circumstances. It's an attempt to share with the community our best attempt
at a first-pass "apples to apples" comparison among the 5
products. I should also note that since the source to the benchmarks was not
available to us, including in many cases even the SQL queries, we couldn't do
much in the way of "tuning" that you'd normally want your DBA to do. Although
again, that limitation applied for all five products.

Regards,
Ned

#7The Hermit Hacker
scrappy@hub.org
In reply to: Ned Lilly (#6)
Re: Great Bridge benchmark results for Postgres, 4 others

On Mon, 14 Aug 2000, Ned Lilly wrote:

Bryan, see my earlier post re: ODBC... will try and answer your other questions
here...

2) Postgres has the 'vacuum' process which is typically run nightly which if
not accounted for in the benchmark would give Postgres an artificial edge.
I don't know how you would account for it but in fairness I think it should
be acknowledged. Do the other big databases have similar maintenance
issues?

Don't know how this would affect the results directly. The benchmark
app builds the database clean each time, and takes about 18 hours to
run for the full 100 users (for each product). So each database
created was coming in with a clean slate, with no issues of unclaimed
space or what have you...

do the tests only perform SELECTs? Any UPDATEs or DELETEs will create
unclaimed space ...

True, and it's a fair question how each database would make use of
more RAM. My guess, however, is that it wouldn't boost the
transactions per second number - where more RAM would impact the
numbers would be more sustained performance in higher numbers of
concurrent users. Postgres and the two proprietary databases all kept
fairly flat lines (good) as the number of users edged up. We plan to
continuously re-run the tests with more users and bigger iron, so as
we do that, we'll keep the community informed.

Actually, more RAM would permit you to increase both the -B parameter as
well as the -S one ... which are both noted for providing performance
increases ... -B more on repetitive queries and -S on anything involving
ORDER BY or GROUP BY ...

Again, without knowing the specifics of the queries, whether either of the
above would make a difference is unknown ...
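
In concrete terms, assuming PostgreSQL 7.0 defaults (-B counts shared disk buffers of 8 KB each; -S is per-sort memory in kilobytes, charged once per concurrently running sort), the arithmetic is simple. A sketch, with illustrative numbers rather than tuning advice:

```python
# Back-of-envelope sizing for the -B and -S knobs, assuming PostgreSQL
# 7.0 defaults: -B counts shared buffers of 8 KB each, and -S is
# per-sort memory in KB (multiply by expected concurrent sorts).
# Figures below are illustrative assumptions, not recommendations.
BLOCK_SIZE_KB = 8  # default page size assumed for 7.0

def shared_buffer_kb(b_buffers):
    """RAM pinned by the -B shared buffer pool, in KB."""
    return b_buffers * BLOCK_SIZE_KB

def worst_case_sort_kb(s_kb, concurrent_sorts):
    """Worst-case RAM if every backend runs one sort at -S KB each."""
    return s_kb * concurrent_sorts

# e.g. on the 512 MB test box: -B 8192 pins 64 MB for shared buffers,
# and -S 2048 with 100 users could claim up to 200 MB for sorts.
```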

#8Andrew Snow
als@fl.net.au
In reply to: Ned Lilly (#6)
Re: Great Bridge benchmark results for Postgres, 4 others

On Mon, 14 Aug 2000, Ned Lilly wrote:

Bryan, see my earlier post re: ODBC... will try and answer your other questions
here...

2) Postgres has the 'vacuum' process which is typically run nightly which if
not accounted for in the benchmark would give Postgres an artificial edge.
I don't know how you would account for it but in fairness I think it should
be acknowledged. Do the other big databases have similar maintenance
issues?

Don't know how this would affect the results directly. The benchmark app builds
the database clean each time, and takes about 18 hours to run for the full 100
users (for each product). So each database created was coming in with a clean
slate, with no issues of unclaimed space or what have you...

Does a vacuum analyze not get run at all? Could this affect performance, or
is that not relevant in these benchmarks?

Regards,
Andrew
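
The vacuum concern in this subthread comes down to Postgres' non-overwriting storage: an UPDATE writes a new tuple version and leaves the old one dead until a vacuum reclaims it. A toy model (a conceptual sketch, not PostgreSQL's actual internals) of how a write-heavy benchmark grows a table:

```python
# Toy model of a non-overwriting storage manager: every UPDATE or
# DELETE leaves a dead tuple behind, so the table grows on disk until
# a vacuum reclaims the space. Conceptual sketch only.
class Table:
    def __init__(self, rows):
        self.live = rows   # visible tuples
        self.dead = 0      # unclaimed space from old versions

    def update(self, n):
        # Old versions become dead; new versions are written.
        self.dead += n

    def delete(self, n):
        self.live -= n
        self.dead += n

    def vacuum(self):
        # Reclaim all dead space, as a nightly vacuum would.
        reclaimed = self.dead
        self.dead = 0
        return reclaimed

    def physical_rows(self):
        return self.live + self.dead
```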

#9Alex Pilosov
alex@pilosoft.com
In reply to: Ned Lilly (#5)
TPC (was Great Bridge benchmark results for Postgres, 4 others)

A more interesting benchmark would be to compare TPC-C results on the same
kind of hardware other vendors use for THEIR TPC benchmarks, which are
posted on tpc.org, as well as comparing the price/performance of each.
A TPC benchmark run by a company commissioned by GB cannot be validated and
accepted into the TPC database; such runs must be conducted under close
supervision by TPC-approved monitors. I hope GB actually springs for the
price of running the REAL TPC benchmark (last I heard it was around $25k).

To see how Postgres performs on the low end (for TPC, low-end is <8
processors) would be interesting, to say the least.

A problem with a real TPC is the strong suggestion to run a transaction
manager to improve speed. No transaction manager supports Postgres yet.

Another note on TPC: they require a support contract to be included in the
final price, on which Great Bridge should be able to compete.


#10Ned Lilly
ned@greatbridge.com
In reply to: Alex Pilosov (#9)
Re: TPC (was Great Bridge benchmark results for Postgres, 4others)

Hi Alex,

Absolutely, as I said, we did this benchmarking for our own internal due diligence in
understanding PostgreSQL's capabilities. It's not intended to be a formal big-iron TPC
test, like you see at tpc.org. The software we used was one commercial vendor's
implementation of the published AS3AP and TPC-C specs - it's the same one used by a lot
of trade magazines.

Benchmarking will be a significant part of Great Bridge's ongoing contribution to the
PostgreSQL community - starting with these relatively simple tests, and scaling up to
larger systems over time.

Regards,
Ned


#11Dan Browning
danb@cyclonecomputers.com
In reply to: Ned Lilly (#10)
RE: TPC (was Great Bridge benchmark results for Postgres, 4others)

This benchmark had a lot of value for the job that was going to use ODBC.
Pretty obvious that Postgres blows away everyone else in the ODBC dept. I'm
not sure if this shows that PGsql is the best performer, or if it just shows
that the other DBs have sucky ODBC implementations.

Too bad it doesn't show us what the performance would have been with native
drivers. Hopefully someone will develop an interface that each db supports
at full speed on a minimum-functionality level (like ODBC, but faster).
Maybe that would shed some more light. Perl::DBI comes close to this, but
you're still relying on the quality of the module's implementation. Oh well.

But I should mention that I will be using PGsql for my current .com project
(online ordering, etc.), choosing it over Sybase, Interbase, and MySQL. I
found the price / performance / features just couldn't be beat with PGsql.


#12Jeff Hoffmann
jeff@propertykey.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others


i haven't played with interbase yet, but my understanding is they have
two types of server -- the "classic" (process per connection?) and a
"superserver" (multithreaded). i'm guessing the multithreaded is faster
(why bother with the added complexity if it isn't?) so which version
did you run this test against?

the other question i have is if it was possible that the disks were a
bottleneck in the test process. it seems strange that three databases
would perform nearly identically for so long if there wasn't a
bottleneck somewhere. were the drives striped? did you consider
performing the test with faster raid arrays? on a related note, i was
looking through a couple of back issues of db2 magazine, and it struck
me how much optimization and other performance hints there were
available there & how little there was for postgres. is great bridge
planning on creating a knowledge base of these optimizations for the
public? or are you planning optimization as one of the commercial
services you provide? or some of both?

jeff

#13Chris Bitmead
chrisb@nimrod.itg.telstra.com.au
In reply to: The Hermit Hacker (#4)
Re: Great Bridge benchmark results for Postgres, 4 others

Can you tell us what version of the (ahem) unnamed proprietary products
you used? :-). For example, if you used version 8i of an unnamed
proprietary product, that might be informative :-).

Ned Lilly wrote:

Show quoted text

Marc's right... we opted for ODBC to ensure as much of an "apples to apples"
comparison as possible. Of the 5 databases we tested, a native driver existed
only for the two (ahem) unnamed proprietary products; Postgres, Interbase, and
MySQL had to rely on ODBC. So we used the vendor's own ODBC driver in each of
the other two cases.

<disclaimer>
As with all benchmarks, your mileage will vary according to hardware, OS, and of
course the specific application. What we attempted to do here was use two
industry-standard benchmarks and treat all five products the same.
</disclaimer>

Presumably, if the vendor had taken the time to write a native driver for
Postgres, the results would have seen an even bigger kick. We don't have any
reason to think that the results for all five tests in native driver mode would be
out of proportion to the results we got through ODBC.

Regards,
Ned

The Hermit Hacker wrote:

On Mon, 14 Aug 2000, Steve Wolfe wrote:

1) Using only ODBC drivers. I don't know how much of an impact a driver can
make, but it would seem that using native drivers would shut down one source
of objections.

Using ODBC is guaranteed to slow down the benchmark. I've seen native
database drivers beat ODBC by anywhere from a factor of two to an order of
magnitude.

I haven't had a chance to take a look at the benchmarks yet, having just
seen this, but *if* Great Bridge performed their benchmarks such that all
the databases were accessed via ODBC, then they are using an
'apples-to-apples' approach, as each will have similar slowdowns as a
result ...

#14Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

Greetings all,

At long last, here are the results of the benchmarking tests that
Great Bridge conducted in its initial exploration of PostgreSQL. We
held it up so we could test the shipping release of the new
Interbase 6.0. This is a news release that went out today.

The release is also on our website at
http://www.greatbridge.com/news/p_081420001.html. Graphics of the
AS3AP and TPC-C test results are at
http://www.greatbridge.com/img/as3ap.gif and
http://www.greatbridge.com/img/tpc-c.gif respectively.

I'll try and field any questions anyone has, or refer you to someone
who can.

Great work!

BTW, was the postmaster configured to have an option "-o -F" to
disable fsync()?
--
Tatsuo Ishii
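
For readers unfamiliar with the flag Tatsuo mentions: on a 7.0-era
installation, the postmaster would be started roughly like this (the data
directory path is just an example):

```shell
# Start the postmaster with fsync() disabled.
# -o passes the option string through to each backend process;
# -F in the backend turns off fsync, trading crash safety for speed.
postmaster -D /usr/local/pgsql/data -o -F
```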

#15Dan Browning
danb@cyclonecomputers.com
In reply to: Chris Bitmead (#13)
RE: Great Bridge benchmark results for Postgres, 4 others

Can you tell us what version of the (ahem) unnamed proprietary products
you used? :-). For example, if you used version 8i of an unnamed
proprietary product, that might be informative :-).

Oh, but even if you can't tell us what version was used, I'm sure you could
tell us that story about the monster you saw last week. But which monster
was it? Was it the monster that ATE EYEs? And I remember you once said
there was a second monster, could you describe it as well?

#16Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Dan Browning (#15)
Re: Great Bridge benchmark results for Postgres, 4 others

Excellent result!

Great to see some benchmarking of Postgresql and the competition.... and to
see it kick ass!

.... but a cautionary note about test "even-handedness": certain current
versions of "proprietary databases" will exhaust 512MB RAM with 100 users. I
know this because I have performed similar tests of Postgresql + "other
unspecified databases" myself. It would be interesting to see memory + swap +
disk utilization profiles of the test machine with the various databases.

To give the show away a bit: against a certain well-known "proprietary
database" I had to enable "nofsync" to match its performance (which
invalidates a TPC-C benchmark, I think - no failsafe...).

Not to be a negative Elephant about this - the low memory footprint of
Postgresql is a great strength, and should be marketed as such!

In a related vein, is it possible that any relevant database parameter
settings might be published to help folk get the best out of their Postgresql
systems? (Apologies if they are there and I missed them.)

Regards

Mark
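
Mark's suggested utilization profile is easy to capture on a Linux test box;
something like the following, run alongside each benchmark, would do (the log
file name is just an example):

```shell
# Sample memory, swap, and disk activity every 5 seconds during a run
# and log it for later comparison across the five databases.
vmstat 5 | tee as3ap_100users_vmstat.log
```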

#17Adrian Phillips
adrianp@powertech.no
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

"Ned" == Ned Lilly <ned@greatbridge.com> writes:

<snip>

Ned> The other databases tested against the AS3AP benchmarks, open
Ned> source competitors MySQL 3.22 and Interbase 6.0, demonstrated
Ned> some speed with a low number of users but a distinct lack of
Ned> scalability. MySQL reached a peak of 803.48 with two users,
Ned> but its performance fell precipitously under the stress of
Ned> additional users to a rate of 117.87 transactions per second
Ned> with 100 users. Similarly, Interbase reached 424 transactions
Ned> per second with four users, but its performance declined
Ned> steadily with additional users, dropping off to 146.86
Ned> transactions per second with 100 users.

It would have been more interesting if MySQL 3.23 had been tested, as it
has reached what seems to be a fairly stable beta, appears to perform some
operations significantly faster than 3.22, and I believe may scale somewhat
better as well. Of course it may not be so interesting for most PostgreSQL
users :-)

Sincerely,

Adrian Phillips

--
Your mouse has moved.
Windows NT must be restarted for the change to take effect.
Reboot now? [OK]

#18Steve Heaven
steve@thornet.co.uk
In reply to: Ned Lilly (#1)
warning message

When I do a
vacuum analyze ma_b;

I get this message:
NOTICE: Index ma_idx: NUMBER OF INDEX' TUPLES (17953) IS NOT THE SAME AS HEAP' (17952)

Is it anything to worry about, and how do I fix it?

Thanks

Steve
--
thorNET - Internet Consultancy, Services & Training
Phone: 01454 854413
Fax: 01454 854412
http://www.thornet.co.uk
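
For what it's worth, the usual advice of the era for this kind of index/heap
tuple-count mismatch (a sketch, not taken from the replies in this thread) was
to drop and recreate the index with its original definition; the column list
below is a placeholder:

```sql
-- Rebuild the out-of-sync index (substitute the index's real definition).
DROP INDEX ma_idx;
CREATE INDEX ma_idx ON ma_b (...);
-- Then re-run the check; the NOTICE should be gone:
VACUUM ANALYZE ma_b;
```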

#19Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

Hi Jeff,

i haven't played with interbase yet, but my understanding is they have
two types of server -- the "classic" (process per connection?) and a
"superserver" (multithreaded). i'm guessing the multithreaded is faster
(why bother with the added complexity if it isn't?) so which version
did you run this test against?

Classic. Superserver didn't work with the ODBC driver. Richard Brosnahan,
the lead engineer on the project at Xperts, could connect, but could not
successfully build tables and load them due to SQL errors. Feel free to
contact him directly (he's cc'ed here).

the other question i have is if it was possible that the disks were a
bottleneck in the test process. it seems strange that three databases
would perform nearly identically for so long if there wasn't a
bottleneck somewhere. were the drives striped? did you consider
performing the test with faster raid arrays?

The disks were not striped. We may look at RAID in the future, but again,
this was only a simple low-end test. It seems reasonable to assume that the
disks were a bottleneck, but they would have been a bottleneck for all of
the databases.

on a related note, i was
looking through a couple of back issues of db2 magazine, and it struck
me how much optimization and other performance hints there were
available there & how little there was for postgres. is great bridge
planning on creating a knowledge base of these optimizations for the
public? or are you planning optimization as one of the commercial
services you provide? or some of both?

Yes, yes, and yes. We'll have more to say about our commercial services in
the near future, but there will always be a substantial free, publicly
available knowledgebase as part of our commitment to the open source
community.

Regards,
Ned

#20Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
Re: Great Bridge benchmark results for Postgres, 4 others

Good question, Tatsuo... We ran it with and without fsync() - there was
only a 2-3% difference between the two.

Tatsuo Ishii wrote:

Show quoted text

Great work!

BTW, was the postmaster configured to have an option "-o -F" to
disable fsync()?
--
Tatsuo Ishii

#21Ned Lilly
ned@greatbridge.com
In reply to: Dan Browning (#15)
#22Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
#23Ned Lilly
ned@greatbridge.com
In reply to: Mark Kirkwood (#16)
#24Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
#25Ross J. Reedstrom
reedstrm@rice.edu
In reply to: Ned Lilly (#21)
#26Steve Wolfe
steve@iboats.com
In reply to: The Hermit Hacker (#7)
#27Ned Lilly
ned@greatbridge.com
In reply to: Dan Browning (#15)
#28Chris Bitmead
chrisb@nimrod.itg.telstra.com.au
In reply to: Dan Browning (#15)
#29Ned Lilly
ned@greatbridge.com
In reply to: Dan Browning (#15)
#30Chris Bitmead
chrisb@nimrod.itg.telstra.com.au
In reply to: Dan Browning (#15)
#31Alfred Perlstein
bright@wintelcom.net
In reply to: Ned Lilly (#29)
#32Fabrice Scemama
fabrice@scemama.org
In reply to: Ned Lilly (#1)
#33Ned Lilly
ned@greatbridge.com
In reply to: Ned Lilly (#1)
#34Adam Ruth
aruth@intercation.com
In reply to: Ned Lilly (#33)
#35Tom Lane
tgl@sss.pgh.pa.us
In reply to: Steve Wolfe (#26)
#36Tom Lane
tgl@sss.pgh.pa.us
In reply to: Steve Heaven (#18)