skip WAL on COPY patch

Started by Steve Singer · over 14 years ago · 10 messages
#1 Steve Singer
ssinger@ca.afilias.info
1 attachment(s)

The attached patch adds an option to the COPY command to skip writing
WAL when the following conditions are all met:

1) The table is empty (zero size on disk)
2) The copy command can obtain an access exclusive lock on the table
without blocking.
3) The WAL isn't needed for replication

For example:

COPY a FROM '/tmp/a.txt' (SKIP_WAL);

A non-default option to the COPY command is required because the copy
will block out any concurrent access to the table, which would be
undesirable in some cases and differs from the current behaviour.

This can safely be done because, if the transaction does not commit,
the empty version of the data files is still available. The COPY command
already skips WAL if the table was created in the current transaction.
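The existing optimisation being extended here can be seen with something like the following sketch (table and file names are placeholders):

```sql
BEGIN;
CREATE TABLE a (x int4);
-- Because the table was created in this same transaction, COPY can
-- already skip WAL today (when WAL isn't needed for replication);
-- on abort the relation's new file is simply removed.
COPY a FROM '/tmp/a.txt';
COMMIT;
```

The patch applies the same logic to a table that existed before the transaction but is still empty.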

There was a discussion on something similar before [1], but I didn't see
any discussion of having it obtain the lock only if it can do so without
waiting (nor could I find in the archives what happened to that patch).
I'm not attached to SKIP_WAL (vs. LOCK) as the option name.

[1] http://archives.postgresql.org/pgsql-patches/2005-12/msg00206.php

Steve

Attachments:

skip_wal_copy.diff (text/x-patch)
diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml
index a73b022..3a0e521 100644
*** a/doc/src/sgml/ref/copy.sgml
--- b/doc/src/sgml/ref/copy.sgml
*************** COPY { <replaceable class="parameter">ta
*** 42,47 ****
--- 42,48 ----
      FORCE_QUOTE { ( <replaceable class="parameter">column</replaceable> [, ...] ) | * }
      FORCE_NOT_NULL ( <replaceable class="parameter">column</replaceable> [, ...] ) |
      ENCODING '<replaceable class="parameter">encoding_name</replaceable>'
+     SKIP_WAL 
  </synopsis>
   </refsynopsisdiv>
  
*************** COPY { <replaceable class="parameter">ta
*** 293,298 ****
--- 294,312 ----
        for more details.
       </para>
      </listitem>
+ 	</varlistentry>
+ 	<varlistentry>
+ 	<term><literal>SKIP_WAL</></term>
+      <listitem>
+ 	   <para>
+         Specifies that the writing of WAL should be skipped if possible.
+         WAL can be skipped if the table being copied into is empty and
+         if an exclusive lock can be obtained without waiting.  If this
+         option is specified and WAL is skipped then the transaction will
+         hold an exclusive lock on the table being copied until the transaction
+         commits.
+ 		</para>
+ 	   </listitem>
     </varlistentry>
  
    </variablelist>
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 528a3a1..bd81a4b 100644
*** a/src/backend/commands/copy.c
--- b/src/backend/commands/copy.c
***************
*** 29,34 ****
--- 29,35 ----
  #include "commands/defrem.h"
  #include "commands/trigger.h"
  #include "executor/executor.h"
+ #include "commands/tablecmds.h"
  #include "libpq/libpq.h"
  #include "libpq/pqformat.h"
  #include "mb/pg_wchar.h"
***************
*** 37,42 ****
--- 38,44 ----
  #include "parser/parse_relation.h"
  #include "rewrite/rewriteHandler.h"
  #include "storage/fd.h"
+ #include "storage/lmgr.h"
  #include "tcop/tcopprot.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
*************** typedef struct CopyStateData
*** 120,125 ****
--- 122,128 ----
  	bool	   *force_quote_flags;		/* per-column CSV FQ flags */
  	List	   *force_notnull;	/* list of column names */
  	bool	   *force_notnull_flags;	/* per-column CSV FNN flags */
+ 	bool		skip_wal;				/* skip WAL if able */
  
  	/* these are just for error messages, see CopyFromErrorCallback */
  	const char *cur_relname;	/* table name for error messages */
*************** ProcessCopyOptions(CopyState cstate,
*** 965,970 ****
--- 968,978 ----
  						 errmsg("argument to option \"%s\" must be a valid encoding name",
  								defel->defname)));
  		}
+ 		else if (strcmp(defel->defname, "skip_wal") == 0)
+ 		{
+ 			cstate->skip_wal = true;
+ 		}
  		else
  			ereport(ERROR,
  					(errcode(ERRCODE_SYNTAX_ERROR),
*************** CopyFrom(CopyState cstate)
*** 1910,1915 ****
--- 1918,1957 ----
  		if (!XLogIsNeeded())
  			hi_options |= HEAP_INSERT_SKIP_WAL;
  	}
+ 	
+ 	/*
+ 	 * If SKIP_WAL was requested, try to avoid writing WAL: the table
+ 	 * must be 0 bytes on disk (empty) and we must be able to obtain
+ 	 * an exclusive lock on it without blocking.
+ 	 */
+ 	if (cstate->skip_wal && !XLogIsNeeded() &&
+ 	    ConditionalLockRelationOid(cstate->rel->rd_id, AccessExclusiveLock))
+ 	{
+ 		Datum size = DirectFunctionCall2(pg_relation_size,
+ 										 ObjectIdGetDatum(cstate->rel->rd_id),
+ 										 PointerGetDatum(cstring_to_text("main")));
+ 
+ 		if (DatumGetInt64(size) == 0)
+ 		{
+ 			/*
+ 			 * The relation is empty and unused.  Truncate it so that
+ 			 * if this transaction rolls back, the changes to the
+ 			 * relation files will disappear (the current relation
+ 			 * files will remain untouched).
+ 			 */
+ 			truncate_relation(cstate->rel);
+ 			hi_options |= HEAP_INSERT_SKIP_FSM;
+ 			hi_options |= HEAP_INSERT_SKIP_WAL;
+ 		}
+ 		else
+ 		{
+ 			UnlockRelation(cstate->rel, AccessExclusiveLock);
+ 		}
+ 	}
+ 
  
  	/*
  	 * We need a ResultRelInfo so we can use the regular executor's
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 4509cda..ff5bf8d 100644
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
*************** ExecuteTruncate(TruncateStmt *stmt)
*** 1053,1095 ****
  		}
  		else
  		{
! 			Oid			heap_relid;
! 			Oid			toast_relid;
! 
! 			/*
! 			 * This effectively deletes all rows in the table, and may be done
! 			 * in a serializable transaction.  In that case we must record a
! 			 * rw-conflict in to this transaction from each transaction
! 			 * holding a predicate lock on the table.
! 			 */
! 			CheckTableForSerializableConflictIn(rel);
! 
! 			/*
! 			 * Need the full transaction-safe pushups.
! 			 *
! 			 * Create a new empty storage file for the relation, and assign it
! 			 * as the relfilenode value. The old storage file is scheduled for
! 			 * deletion at commit.
! 			 */
! 			RelationSetNewRelfilenode(rel, RecentXmin);
! 
! 			heap_relid = RelationGetRelid(rel);
! 			toast_relid = rel->rd_rel->reltoastrelid;
! 
! 			/*
! 			 * The same for the toast table, if any.
! 			 */
! 			if (OidIsValid(toast_relid))
! 			{
! 				rel = relation_open(toast_relid, AccessExclusiveLock);
! 				RelationSetNewRelfilenode(rel, RecentXmin);
! 				heap_close(rel, NoLock);
! 			}
! 
! 			/*
! 			 * Reconstruct the indexes to match, and we're done.
! 			 */
! 			reindex_relation(heap_relid, REINDEX_REL_PROCESS_TOAST);
  		}
  	}
  
--- 1053,1059 ----
  		}
  		else
  		{
! 			truncate_relation(rel);
  		}
  	}
  
*************** AtEOSubXact_on_commit_actions(bool isCom
*** 9752,9754 ****
--- 9716,9759 ----
  		}
  	}
  }
+ 
+ void truncate_relation(Relation rel)
+ {
+ 	Oid			heap_relid;
+ 	Oid			toast_relid;
+ 	
+ 	/*
+ 	 * This effectively deletes all rows in the table, and may be done
+ 	 * in a serializable transaction.  In that case we must record a
+ 	 * rw-conflict in to this transaction from each transaction
+ 	 * holding a predicate lock on the table.
+ 	 */
+ 	CheckTableForSerializableConflictIn(rel);
+ 	
+ 	/*
+ 	 * Need the full transaction-safe pushups.
+ 	 *
+ 	 * Create a new empty storage file for the relation, and assign it
+ 	 * as the relfilenode value. The old storage file is scheduled for
+ 	 * deletion at commit.
+ 	 */
+ 	RelationSetNewRelfilenode(rel, RecentXmin);
+ 	
+ 	heap_relid = RelationGetRelid(rel);
+ 	toast_relid = rel->rd_rel->reltoastrelid;
+ 
+ 	/*
+ 	 * The same for the toast table, if any.
+ 	 */
+ 	if (OidIsValid(toast_relid))
+ 	{
+ 		rel = relation_open(toast_relid, AccessExclusiveLock);
+ 		RelationSetNewRelfilenode(rel, RecentXmin);
+ 		heap_close(rel, NoLock);
+ 	}
+ 	
+ 	/*
+ 	 * Reconstruct the indexes to match, and we're done.
+ 	 */
+ 	reindex_relation(heap_relid, REINDEX_REL_PROCESS_TOAST);
+ }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e9f3896..354cc3f 100644
*** a/src/backend/parser/gram.y
--- b/src/backend/parser/gram.y
*************** static void processCASbits(int cas_bits,
*** 553,560 ****
  
  	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
  	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
! 	SHOW SIMILAR SIMPLE SMALLINT SOME STABLE STANDALONE_P START STATEMENT
! 	STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
  	SYMMETRIC SYSID SYSTEM_P
  
  	TABLE TABLES TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN TIME TIMESTAMP
--- 553,560 ----
  
  	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
  	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
! 	SHOW SIMILAR SIMPLE SKIP_WAL SMALLINT SOME STABLE STANDALONE_P 
! 	START STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
  	SYMMETRIC SYSID SYSTEM_P
  
  	TABLE TABLES TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN TIME TIMESTAMP
*************** copy_opt_item:
*** 2297,2302 ****
--- 2297,2306 ----
  				{
  					$$ = makeDefElem("encoding", (Node *)makeString($2));
  				}
+ 			| SKIP_WAL
+ 				{
+ 				  $$ = makeDefElem("skip_wal", (Node *)makeString("skip_wal"));
+ 				}
  		;
  
  /* The following exist for backward compatibility with very old versions */
diff --git a/src/include/commands/tablecmds.h b/src/include/commands/tablecmds.h
index 0e8bbe0..5627c70 100644
*** a/src/include/commands/tablecmds.h
--- b/src/include/commands/tablecmds.h
*************** extern void AtEOXact_on_commit_actions(b
*** 71,75 ****
  extern void AtEOSubXact_on_commit_actions(bool isCommit,
  							  SubTransactionId mySubid,
  							  SubTransactionId parentSubid);
! 
  #endif   /* TABLECMDS_H */
--- 71,75 ----
  extern void AtEOSubXact_on_commit_actions(bool isCommit,
  							  SubTransactionId mySubid,
  							  SubTransactionId parentSubid);
! extern void truncate_relation(Relation rel);
  #endif   /* TABLECMDS_H */
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 8e2bc0c..d4915e7 100644
*** a/src/test/regress/expected/copy2.out
--- b/src/test/regress/expected/copy2.out
*************** a\.
*** 239,244 ****
--- 239,285 ----
  \.b
  c\.d
  "\."
+ -- test SKIP_WAL option
+ BEGIN;
+ CREATE TABLE test_notemp ( 
+ 	   a int4);
+ COMMIT;
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+  pg_relation_size 
+ ------------------
+                 0
+ (1 row)
+ 
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+  pg_relation_size 
+ ------------------
+              8192
+ (1 row)
+ 
+ truncate test_notemp;
+ BEGIN;
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ ROLLBACK;
+ --expect size of 0
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+  pg_relation_size 
+ ------------------
+                 0
+ (1 row)
+ 
+ BEGIN;
+ COPY test_notemp FROM stdin csv;
+ ROLLBACK;
+ --expect non-zero size
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+  pg_relation_size 
+ ------------------
+              8192
+ (1 row)
+ 
+ DROP TABLE test_notemp;
  DROP TABLE x, y;
  DROP FUNCTION fn_x_before();
  DROP FUNCTION fn_x_after();
diff --git a/src/test/regress/sql/copy2.sql b/src/test/regress/sql/copy2.sql
index 6322c8f..e41a9d8 100644
*** a/src/test/regress/sql/copy2.sql
--- b/src/test/regress/sql/copy2.sql
*************** c\.d
*** 164,169 ****
--- 164,206 ----
  
  COPY testeoc TO stdout CSV;
  
+ -- test SKIP_WAL option
+ 
+ BEGIN;
+ CREATE TABLE test_notemp ( 
+ 	   a int4);
+ COMMIT;
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ 1
+ 2
+ \.
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ 1
+ 2
+ \.
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+ truncate test_notemp;
+ BEGIN;
+ COPY test_notemp FROM stdin WITH (skip_wal);
+ 1
+ 2
+ \.
+ ROLLBACK;
+ --expect size of 0
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+ 
+ BEGIN;
+ COPY test_notemp FROM stdin csv;
+ 1
+ 2
+ \.
+ ROLLBACK;
+ --expect non-zero size
+ select pg_relation_size(oid) from pg_class where relname='test_notemp';
+ DROP TABLE test_notemp;
+ 
+ 
  DROP TABLE x, y;
  DROP FUNCTION fn_x_before();
  DROP FUNCTION fn_x_after();
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Steve Singer (#1)
Re: skip WAL on COPY patch

Steve Singer <ssinger@ca.afilias.info> writes:

The attached patch adds an option to the COPY command to skip writing
WAL when the following conditions are all met:

1) The table is empty (zero size on disk)
2) The copy command can obtain an access exclusive lock on the table
without blocking.
3) The WAL isn't needed for replication

Exposing this as a user-visible option seems a seriously bad idea.
We'd have to support that forever. ISTM it ought to be possible to
avoid the exclusive lock ... maybe not with this particular
implementation, but somehow.

regards, tom lane

#3 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#2)
Re: skip WAL on COPY patch

On Tue, Aug 23, 2011 at 3:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Steve Singer <ssinger@ca.afilias.info> writes:

The attached patch adds an option to the COPY command to skip writing
WAL when the following conditions are all met:

1) The table is empty (zero size on disk)
2) The copy command can obtain an access exclusive lock on the table
without blocking.
3) The WAL isn't needed for replication

Exposing this as a user-visible option seems a seriously bad idea.
We'd have to support that forever.  ISTM it ought to be possible to
avoid the exclusive lock ... maybe not with this particular
implementation, but somehow.

Also, if it only works when the table is zero size on disk, you might
as well just let people truncate their already-empty tables when they
want this optimization.
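Concretely, the truncate-based alternative would be roughly the following (a sketch; table and file names are placeholders):

```sql
BEGIN;
-- TRUNCATE assigns the table a new relfilenode owned by this
-- transaction, so the COPY that follows can already skip WAL
-- (when WAL isn't needed for replication), just as for a table
-- created in the same transaction.
TRUNCATE my_empty_table;
COPY my_empty_table FROM '/tmp/a.txt';
COMMIT;
```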

What I think would be really interesting is a way to make this work
when the table *isn't* empty. In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length. Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#3)
Re: skip WAL on COPY patch

Robert Haas <robertmhaas@gmail.com> writes:

What I think would be really interesting is a way to make this work
when the table *isn't* empty. In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length. Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

What are you going to do with the table's indexes?

regards, tom lane

#5 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Robert Haas (#3)
Re: skip WAL on COPY patch

Excerpts from Robert Haas's message of Tue Aug 23 17:08:50 -0300 2011:

What I think would be really interesting is a way to make this work
when the table *isn't* empty. In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length. Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

It seems to me this would be relatively simple if we allowed segments
that are not a full GB in length. That way, COPY could write into a
whole segment and "attach" it to the table at commit time (say, by
renaming).

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#6 Steve Singer
ssinger@ca.afilias.info
In reply to: Tom Lane (#4)
Re: skip WAL on COPY patch

On 11-08-23 04:17 PM, Tom Lane wrote:

Robert Haas<robertmhaas@gmail.com> writes:

What I think would be really interesting is a way to make this work
when the table *isn't* empty. In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length. Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

What are you going to do with the table's indexes?

regards, tom lane

What about not updating the indexes during the copy operation, then
doing an automatic rebuild of the indexes after the copy (but during the
same transaction)? If you're only adding a few rows to a large table
this wouldn't be what you want, but if you're only adding a few rows
then a small amount of WAL isn't a big concern either.
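In terms of existing commands, the idea sketched above amounts to something like this (hypothetical: SKIP_WAL is the option proposed in this thread, and the names are placeholders):

```sql
BEGIN;
COPY big_table FROM '/tmp/big.txt' (SKIP_WAL);
-- Rebuild the indexes once, after the bulk load, rather than
-- maintaining them row by row during the COPY.
REINDEX TABLE big_table;
COMMIT;
```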

#7 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#4)
Re: skip WAL on COPY patch

On Tue, Aug 23, 2011 at 4:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

What I think would be really interesting is a way to make this work
when the table *isn't* empty.  In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length.  Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

What are you going to do with the table's indexes?

Oh, hmm. That's awkward.

I suppose you could come up with some solution that involved saving
preimages of each already-existing index page that was modified until
commit. If you crash before commit, you truncate away all the added
pages and roll back to the preimages of any modified pages. That's
pretty complex, though, and I'm not sure that it would be enough of a
win to justify the effort.

It also sounds suspiciously like a poor-man's implementation of a
rollback segment; and if we ever decide we want to have an option for
rollback segments, we probably want more than a poor man's version.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#8 Alvaro Herrera
alvherre@commandprompt.com
In reply to: Robert Haas (#7)
Re: skip WAL on COPY patch

Excerpts from Robert Haas's message of Tue Aug 23 17:43:13 -0300 2011:

On Tue, Aug 23, 2011 at 4:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

What I think would be really interesting is a way to make this work
when the table *isn't* empty.  In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length.  Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be exclusive-locked and freespace within existing heap pages won't
be reused.

What are you going to do with the table's indexes?

Oh, hmm. That's awkward.

If you see what I proposed, it's simple: you can scan the new segment(s)
and index the tuples found there (maybe in bulk, which would be even
faster).

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#9 Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#2)
Re: skip WAL on COPY patch

On Tue, 2011-08-23 at 15:05 -0400, Tom Lane wrote:

Steve Singer <ssinger@ca.afilias.info> writes:

The attached patch adds an option to the COPY command to skip writing
WAL when the following conditions are all met:

1) The table is empty (zero size on disk)
2) The copy command can obtain an access exclusive lock on the table
without blocking.
3) The WAL isn't needed for replication

Exposing this as a user-visible option seems a seriously bad idea.

In that particular way, I agree. But it might be useful if there were a
more general declarative option like "BULKLOAD". We might then use that
information for a number of optimizations that make sense for large
loads.

Regards,
Jeff Davis

#10 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#8)
Re: skip WAL on COPY patch

On Tue, Aug 23, 2011 at 4:51 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:

Excerpts from Robert Haas's message of Tue Aug 23 17:43:13 -0300 2011:

On Tue, Aug 23, 2011 at 4:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

What I think would be really interesting is a way to make this work
when the table *isn't* empty.  In other words, have a COPY option that
(1) takes an exclusive lock on the table, (2) writes the data being
inserted into new pages beyond the old EOF, and (3) arranges for crash
recovery or transaction abort to truncate the table back to its
previous length.  Then you could do fast bulk loads even into a table
that's already populated, so long as you don't mind that the table
will be excusive-locked and freespace within existing heap pages won't
be reused.

What are you going to do with the table's indexes?

Oh, hmm.  That's awkward.

If you see what I proposed, it's simple: you can scan the new segment(s)
and index the tuples found there (maybe in bulk which would be even
faster).

You can do that much even if you just append to the file - you don't
need variable-length segments to make that part work.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company