Fake async rep target
How easy would it be to implement a fake async rep target?
Perhaps even as something that a server could allow a connection to
request? (i.e. a suitably permissioned connection could convert itself
to receive an async replication stream, rather than being statically
configured?)
I know that it sounds a bit bonkers, but a while back I worked on a
system where we configured a rep target (using OpenServer) so that we
could observe changes to tables and enqueue secondary processing.
Rather painful in that case because of the way that repserver is
configured, and I'm not sure it was worth the pain when configuring
test and dev environments.
However, in principle, it seems that this is quite an elegant stand-in
for a whole raft of trigger functions - and probably a lot cheaper to
execute. The key, I think, is to be able to allow dynamic attachment of
such a 'change feed' by an account that has god-like read access.
Is the existing async rep code amenable to this sort of abuse?
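To make the proposed interaction concrete, here is a purely
hypothetical stub sketch in C. Neither request_change_feed() nor
feed_next() exists in PostgreSQL or libpq; the stub bodies below only
stand in for a server that lets a suitably privileged,
already-authenticated connection flip itself into a streaming consumer.

#include <stdbool.h>
#include <stdio.h>

typedef struct FeedEvent {
    const char *table_name;
    const char *operation;          /* "insert", "update" or "delete" */
} FeedEvent;

/* Stub: would send a "convert me to a change feed" request over an
 * already-authenticated connection.  Hypothetical - no such protocol
 * message exists. */
static void *request_change_feed(const char *conninfo)
{
    (void) conninfo;
    return NULL;                    /* no real server to talk to */
}

/* Stub: would block until the next change arrives on the feed. */
static bool feed_next(void *conn, FeedEvent *ev)
{
    (void) conn;
    (void) ev;
    return false;                   /* the stub feed is always empty */
}

int main(void)
{
    void *conn = request_change_feed("host=master user=feed_reader");
    FeedEvent ev;

    /* Enqueue secondary processing per change, instead of defining
     * per-table triggers on the master. */
    while (feed_next(conn, &ev))
        printf("%s on %s\n", ev.operation, ev.table_name);

    return 0;
}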
On 5/29/12 2:46 PM, james wrote:
Is the existing async rep code amenable to this sort of abuse?
How would a client actually use the *binary* information it was handed? There is no ability to turn it into SQL or anything similar; what is sent is in a completely proprietary, internal-use-only format. Unless that changes (which there has been some discussion of) I doubt such a connection would be of any value.
--
Jim C. Nasby, Database Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net
Well, I was assuming that there was some intelligence in the receiver
that could effectively parse this for the application; are you
suggesting that it is effectively binary deltas to apply to raw pages?
Certainly, Sybase rep server works by creating function calls or SQL
updates (depending on how you set it all up), and a fairly simple parser
can be used to process the received stream.
I was hoping that the receiver code could be used to at least identify
which tuples are affected and their before/after primary keys.
Ideally the result would be a packet by transaction, listing the
impacted tables and for each table a list of affected primary keys and
whether they were inserted, deleted or updated.
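As a sketch of the shape of that per-transaction packet - every name
here is hypothetical, not an actual PostgreSQL structure:

#include <stdint.h>
#include <stdlib.h>

typedef enum ChangeKind {
    CHANGE_INSERT,
    CHANGE_UPDATE,
    CHANGE_DELETE
} ChangeKind;

/* One affected row: the kind of change plus its primary key before
 * and after. */
typedef struct RowChange {
    ChangeKind  kind;
    char       *pk_before;          /* textual PK, NULL for inserts */
    char       *pk_after;           /* textual PK, NULL for deletes */
} RowChange;

/* All changes a transaction made to one table. */
typedef struct TableChanges {
    char       *table_name;
    size_t      nrows;
    RowChange  *rows;
} TableChanges;

/* One packet per committed transaction, listing the impacted tables. */
typedef struct TxnChangePacket {
    uint32_t      xid;              /* transaction id on the master */
    size_t        ntables;
    TableChanges *tables;
} TxnChangePacket;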
James
On May 30, 2012, at 22:28, james wrote:
Well, I was assuming that there was some intelligence in the receiver that could effectively parse this for the application; are you suggesting that it is effectively binary deltas to apply to raw pages?
In parts. The log that is streamed to replication slaves is the same as the write-ahead log that postgres uses for crash recovery. Some records in this log are simply page images, while others describe operations (like inserting a tuple). But none of them include enough information to convert them into, say, SQL statements which would redo that operation.
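A deliberately simplified sketch of those two record shapes; the real
on-disk structures (the XLogRecord header plus per-resource-manager
bodies) carry more fields, and the names below are illustrative only:

#include <stdint.h>

typedef enum WalRecordKind {
    WAL_FULL_PAGE_IMAGE,    /* a raw copy of a whole data page         */
    WAL_HEAP_INSERT         /* "insert this binary tuple at this spot" */
} WalRecordKind;

typedef struct WalRecordSketch {
    WalRecordKind kind;
    uint32_t      rel_id;   /* which relation, as an internal OID      */
    uint32_t      block_no; /* which page within the relation          */
    uint32_t      data_len; /* length of the payload that follows      */
    /* payload: either a raw page image or a bare binary tuple - in
     * neither case anything resembling SQL text */
} WalRecordSketch;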
Tuple insertion records, for example, simply contain the binary representation of the tuple to be inserted. To convert that back into SQL, you need meta-data (like the tuple layout for the table in question). Now, in theory you could query that meta-data from the masters. In practice, however, doing that correctly would be extremely hard, since you'd need to get the meta-data present at the time the record was created, *not* at the time it is shipped out. Otherwise, there'll be a window after every DDL statement where you decode the records incorrectly, since the records refer to a different table structure than currently present on the master.
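A stub C sketch of that timing hazard; both lookup functions are
hypothetical (nothing like them existed in 2012-era PostgreSQL), and
their bodies are empty stand-ins:

#include <stdint.h>
#include <stdio.h>

typedef struct TupleLayout {
    int ncols;              /* plus per-column offsets, types, ...     */
} TupleLayout;

/* Stub: the layout the master has *right now*. */
static TupleLayout get_current_layout(uint32_t rel_id)
{
    TupleLayout l = { 0 };
    (void) rel_id;
    return l;
}

/* Stub: the layout in effect when the record at record_lsn was
 * written - this is the hard part. */
static TupleLayout get_layout_as_of(uint32_t rel_id, uint64_t record_lsn)
{
    TupleLayout l = { 0 };
    (void) rel_id;
    (void) record_lsn;
    return l;
}

static void decode_tuple(uint32_t rel_id, uint64_t record_lsn)
{
    /* WRONG: if an ALTER TABLE ran on the master after record_lsn was
     * written, this layout no longer matches the bytes in the record,
     * and everything in that window is decoded incorrectly. */
    TupleLayout now = get_current_layout(rel_id);

    /* RIGHT, but much harder: interpret the raw tuple with the layout
     * as of the record's creation, not as of shipping time. */
    TupleLayout then = get_layout_as_of(rel_id, record_lsn);

    printf("ncols now=%d, ncols then=%d\n", now.ncols, then.ncols);
}

int main(void)
{
    decode_tuple(16384, 0);         /* illustrative OID and LSN */
    return 0;
}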
Also, records generally describe pretty low-level operations. An actual INSERT, for example, will produce records for inserting the tuple into the heap, and separate records for inserting them into whatever indexes are defined on the table. If one of the inserted fields is larger than the TOAST threshold, you'll also get a separate record for the TOAST-table insertion, and the main tuple will only contain references to the chunks in the TOAST table.
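As an illustration, a single INSERT into a table with two indexes and
one over-TOAST-threshold column could yield a record stream roughly
like the following - descriptive names, not PostgreSQL's actual record
identifiers:

/* Illustrative only: the separate low-level records one SQL INSERT
 * can generate, in rough order of emission. */
enum InsertRecordStream {
    REC_TOAST_CHUNK_INSERT_1,   /* first chunk of the oversized value */
    REC_TOAST_CHUNK_INSERT_2,   /* ...one record per chunk stored...  */
    REC_HEAP_INSERT,            /* main tuple, holding only TOAST refs */
    REC_INDEX_INSERT_PKEY,      /* entry in the primary key index     */
    REC_INDEX_INSERT_SECONDARY  /* entry in the second index          */
};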
best regards,
Florian Pflug