libpq compression
Hi,
There was already some discussion about compressing libpq data [1][2][3].
Recently, I faced a scenario that would be less problematic if we had
compression support. The scenario is frequent data load (aka COPY) over
slow/unstable links, executed on a few hundred PostgreSQL servers all
over Brazil. Someone could argue that I could use an ssh tunnel to solve
the problem, but (i) it is complex because it involves a different port
in the firewall, and (ii) this is an opportunity to improve other
scenarios, like reducing bandwidth consumption during replication or
normal operation over slow/unstable links.
AFAICS there are no objections to implementing compression in libpq. The
problem is which algorithm to use for compression; there are a lot of
patents in this area. As others spotted at [4], we should not implement
algorithms in core that could infringe patents. Derived products are free
to plug in whatever algorithms they want; there will be an API for that.
This work will be sponsored by a company that is interested in this feature.
=== Design ===
- algorithm: zlib, bzip2, (another patent-free and BSD-licensed one?)
- compiled-in option: --with-bzip2
- PGCOMPRESSMODE env
* disable: only try non-compressed connection (default)
* prefer: try compressed connection; if that fails, try a non-compressed
connection
* require: only try compressed connection
- PGCOMPRESSALGO env
* zlib
* bzip2
- compressmode and compressalgo connection string parameters
- compress all data
- compress before send() and decompress after recv()
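The last two design points, compressing everything and doing it transparently around send()/recv(), can be sketched with zlib's streaming API. This is only an illustration in Python (the real work would be C inside libpq), and CompressedChannel is an invented name; the key detail is Z_SYNC_FLUSH, which makes every message decodable by the peer as soon as it arrives.

```python
import zlib

class CompressedChannel:
    """Toy stand-in for a socket with transparent compression on both ends."""

    def __init__(self, level=6):
        self.compressor = zlib.compressobj(level)
        self.decompressor = zlib.decompressobj()
        self.wire = bytearray()  # bytes "on the wire"

    def send(self, payload: bytes) -> None:
        # Compress before send(); Z_SYNC_FLUSH emits a complete, decodable
        # unit per message, so the peer never waits on buffered input.
        self.wire += self.compressor.compress(payload)
        self.wire += self.compressor.flush(zlib.Z_SYNC_FLUSH)

    def recv(self) -> bytes:
        # Decompress after recv(); the decompressor keeps stream state, so
        # back-references into earlier messages still resolve.
        data = self.decompressor.decompress(bytes(self.wire))
        self.wire.clear()
        return data

channel = CompressedChannel()
message = b"42\tBrazil\tsome COPY data row\n" * 1000
channel.send(message)
assert len(channel.wire) < len(message)  # repetitive COPY data shrinks a lot
assert channel.recv() == message         # round-trips intact
```

Because the compressor state spans messages, repeated column values in a COPY stream compress against earlier rows, which is one argument for compressing the whole connection rather than individual messages.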
I am all ears for improving this design. Some of my choices are based on my
research into compression in protocols and on PostgreSQL internals.
Keep in mind that I prefer compressing all data instead of a selected set of
messages because (i) every new data message gets compression support without
extra code, and (ii) it avoids turning the protocol code into spaghetti.
I'll try to post a patch soon with the ideas discussed at this thread.
[1]: http://archives.postgresql.org/pgsql-hackers/2012-03/msg00929.php
[2]: http://archives.postgresql.org/pgsql-hackers/2011-01/msg00337.php
[3]: http://archives.postgresql.org/pgsql-hackers/2002-03/msg00664.php
[4]: http://archives.postgresql.org/pgsql-performance/2009-08/msg00053.php
--
Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
Euler Taveira <euler@timbira.com> writes:
There was already some discussion about compressing libpq data [1][2][3].
Recently, I faced a scenario that would be less problematic if we had
compression support. The scenario is frequent data load (aka COPY) over
slow/unstable links, executed on a few hundred PostgreSQL servers all
over Brazil. Someone could argue that I could use an ssh tunnel to solve
the problem, but (i) it is complex because it involves a different port
in the firewall, and (ii) this is an opportunity to improve other
scenarios, like reducing bandwidth consumption during replication or
normal operation over slow/unstable links.
I still think that pushing this off to openssl (not an ssh tunnel, but
the underlying transport library) would be an adequate solution.
If you are shoving data over a connection that is long enough to need
compression, the odds that every bit of it is trustworthy seem pretty
small, so you need encryption too.
We do need the ability to tell openssl to use compression. We don't
need to implement it ourselves, nor to bring a bunch of new library
dependencies into our builds. I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.
regards, tom lane
Euler Taveira wrote:
There was already some discussion about compressing libpq data [1][2][3].

Recently, I faced a scenario that would be less problematic if we had
compression support. The scenario is frequent data load (aka COPY) over
slow/unstable links, executed on a few hundred PostgreSQL servers all
over Brazil. Someone could argue that I could use an ssh tunnel to solve
the problem, but (i) it is complex because it involves a different port
in the firewall, and (ii) this is an opportunity to improve other
scenarios, like reducing bandwidth consumption during replication or
normal operation over slow/unstable links.
Maybe I'm missing something obvious, but shouldn't a regular SSL
connection (sslmode=require) do what you are asking for?
At least from OpenSSL 0.9.8 on, data is compressed by default.
You don't need an extra port in the firewall for that.
Yours,
Laurenz Albe
On 14-06-2012 02:19, Tom Lane wrote:
I still think that pushing this off to openssl (not an ssh tunnel, but
the underlying transport library) would be an adequate solution.
If you are shoving data over a connection that is long enough to need
compression, the odds that every bit of it is trustworthy seem pretty
small, so you need encryption too.
I don't want to pay the SSL connection overhead. Also, I just want
compression; encryption is not required. OpenSSL gives us encryption
with or without compression; we need an option to obtain compression in
non-SSL connections.
We do need the ability to tell openssl to use compression. We don't
need to implement it ourselves, nor to bring a bunch of new library
dependencies into our builds. I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.
I see, the idea is that bzip2 would be a compiled-in option (not enabled by
default) just to give another compression option. I don't have a strong
opinion about including it as another dependency. We already depend on zlib
and implementing compression using it won't add another dependency.
What do you think about adding a hook in libpq to load an extension that does
the compression? That way we don't add another dependency to libpq, and a
lot of extensions could be written to cover a variety of algorithms without
exposing us to patent-infringement trouble.
--
Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
On Jun14, 2012, at 15:28 , Euler Taveira wrote:
On 14-06-2012 02:19, Tom Lane wrote:
I still think that pushing this off to openssl (not an ssh tunnel, but
the underlying transport library) would be an adequate solution.
If you are shoving data over a connection that is long enough to need
compression, the odds that every bit of it is trustworthy seem pretty
small, so you need encryption too.

I don't want to pay the SSL connection overhead. Also I just want compression,
encryption is not required. OpenSSL give us encryption with/without
compression; we need an option to obtain compression in non-SSL connections.
AFAIR, openssl supports a NULL cipher which doesn't do any encryption. We
could have a connection parameter, say compress=on, which selects that
cipher (unless sslmode is set to prefer or higher, of course).
SSL NULL-cipher connections would be treated like unencrypted connections
when matching against pg_hba.conf.
best regards,
Florian Pflug
On Thu, Jun 14, 2012 at 10:14 AM, Florian Pflug <fgp@phlo.org> wrote:
On Jun14, 2012, at 15:28 , Euler Taveira wrote:
On 14-06-2012 02:19, Tom Lane wrote:
I still think that pushing this off to openssl (not an ssh tunnel, but
the underlying transport library) would be an adequate solution.
If you are shoving data over a connection that is long enough to need
compression, the odds that every bit of it is trustworthy seem pretty
small, so you need encryption too.

I don't want to pay the SSL connection overhead. Also I just want
compression, encryption is not required. OpenSSL give us encryption
with/without compression; we need an option to obtain compression in
non-SSL connections.

AFAIR, openssl supports a NULL cipher which doesn't do any encryption. We
could have a connection parameter, say compress=on, which selects that
cipher (unless sslmode is set to prefer or higher, of course).

SSL NULL-cipher connections would be treated like unencrypted connections
when matching against pg_hba.conf.

best regards,
Florian Pflug
It doesn't sound like there is a lot of support for this idea, but I
think it would be nice to get something like lz4
(http://code.google.com/p/lz4/) or snappy
(http://code.google.com/p/snappy/) support. Both are BSD-ish licensed.
It could be useful for streaming replication as well. A hook (as Euler
mentioned) might be a nice compromise.
On Thu, Jun 14, 2012 at 9:57 AM, Phil Sorber <phil@omniti.com> wrote:
It doesn't sound like there is a lot of support for this idea, but I
think it would be nice to get something like lz4
(http://code.google.com/p/lz4/) or snappy
(http://code.google.com/p/snappy/) support. Both are BSD-ish licensed.
It could be useful for streaming replication as well. A hook (as Euler
mentioned) might be a nice compromise.
There is a lot of support for the idea: it's one of the more requested
features. I think a well thought out framework that bypassed the
dependency issues via plugging might get some serious traction.
Emphasis on 'well thought out' :-).
merlin
Euler Taveira <euler@timbira.com> writes:
On 14-06-2012 02:19, Tom Lane wrote:
... I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.
I see, the idea is that bzip2 would be a compiled-in option (not enabled by
default) just to give another compression option.
I'm not particularly thrilled with "let's have more compression options
just to have options". Each such option you add is another source of
fail-to-connect incompatibilities (when either the client or the server
doesn't have it). Moreover, while there are a lot of compression
algorithms out there, a lot of them are completely unsuited for this
use-case. If memory serves, bzip2 for example requires fairly large
data blocks in order to get decent compression, which means you are
either going to get terrible compression or suffer very bad latency
when trying to apply it to a connection data stream.
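The block-size point is easy to check with the stdlib bindings: zlib can flush a decodable unit mid-stream, while bzip2's API only flushes by ending the stream, so matching zlib's per-message latency forces a fresh bzip2 stream, and its fixed header overhead, for every message. A sketch (Python stdlib, 100 small messages):

```python
import bz2
import zlib

msg = b"small protocol message\n"
N = 100

# zlib: one long-lived stream, sync-flushed after every message so the
# peer can decode each message immediately.
z = zlib.compressobj()
zlib_bytes = sum(
    len(z.compress(msg)) + len(z.flush(zlib.Z_SYNC_FLUSH)) for _ in range(N)
)

# bzip2: BZ2Compressor.flush() ends the stream, so achieving the same
# per-message latency means a brand-new stream per message.
bz2_bytes = 0
for _ in range(N):
    c = bz2.BZ2Compressor()
    bz2_bytes += len(c.compress(msg)) + len(c.flush())

assert zlib_bytes < bz2_bytes    # per-stream bzip2 overhead dominates
assert bz2_bytes > N * len(msg)  # bzip2 actually *expands* tiny messages
```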
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.
regards, tom lane
On Thu, Jun 14, 2012 at 1:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.
Well, for toast compression the right choice is definitely one of the
lz based algorithms (not libz!). For transport compression you have
the case of sending large data over very slow and/or expensive links
in which case you want to use bzip type methods. But in the majority
of cases I'd probably be using lz there too. So if I had to pick just
one, there you go. But which one? The lz algorithm with arguably the
best pedigree (lzo) is GPL-licensed, but there are many other decent candidates,
some of which have really tiny implementations.
merlin
On Thu, Jun 14, 2012 at 02:38:02PM -0500, Merlin Moncure wrote:
On Thu, Jun 14, 2012 at 1:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.

Well, for toast compression the right choice is definitely one of the
lz based algorithms (not libz!). For transport compression you have
the case of sending large data over very slow and/or expensive links
in which case you want to use bzip type methods. But in the majority
of cases I'd probably be using lz there too. So if I had to pick just
one, there you go. But which one? the lz algorithm with arguably the
best pedigree (lzo) is gnu but there are many other decent candidates,
some of which have really tiny implementations.

merlin
+1 for a very fast compressor/de-compressor. lz4 has a BSD license; at
8.5X faster compression than zlib(-1), 5X faster decompression than
zlib(-1), and 2X faster than LZO, it would be my pick.
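Figures like these are worth re-measuring on the target workload. lz4 and LZO bindings are not in the Python standard library, so this sketch only benchmarks zlib's fastest and slowest levels to show the method; a real comparison would swap the candidate codec's compress callable into bench():

```python
import time
import zlib

def bench(compress, payload, runs=5):
    """Return (seconds per run, compression ratio) for one codec setting."""
    start = time.perf_counter()
    for _ in range(runs):
        out = compress(payload)
    seconds = (time.perf_counter() - start) / runs
    return seconds, len(payload) / len(out)

# Stand-in for COPY traffic: repetitive, text-heavy rows.
payload = b"42\t12345\tBrazil\tsome COPY-like row of text\n" * 5000

fast_time, fast_ratio = bench(lambda d: zlib.compress(d, 1), payload)
best_time, best_ratio = bench(lambda d: zlib.compress(d, 9), payload)

assert fast_ratio > 2 and best_ratio > 2  # both genuinely compress
# Level 9 should not produce a larger result on this payload.
assert len(zlib.compress(payload, 9)) <= len(zlib.compress(payload, 1))
```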
Regards,
Ken
On Thu, Jun 14, 2012 at 02:43:04PM -0400, Tom Lane wrote:
Euler Taveira <euler@timbira.com> writes:
On 14-06-2012 02:19, Tom Lane wrote:
... I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.

I see, the idea is that bzip2 would be a compiled-in option (not enabled by
default) just to give another compression option.

I'm not particularly thrilled with "let's have more compression options
just to have options". Each such option you add is another source of
fail-to-connect incompatibilities (when either the client or the server
doesn't have it). Moreover, while there are a lot of compression
algorithms out there, a lot of them are completely unsuited for this
use-case. If memory serves, bzip2 for example requires fairly large
data blocks in order to get decent compression, which means you are
either going to get terrible compression or suffer very bad latency
when trying to apply it to a connection data stream.

So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.
Do we just need to document SSL's NULL encryption option?
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
On Fri, Jun 15, 2012 at 10:19 AM, Bruce Momjian <bruce@momjian.us> wrote:
On Thu, Jun 14, 2012 at 02:43:04PM -0400, Tom Lane wrote:
Euler Taveira <euler@timbira.com> writes:
On 14-06-2012 02:19, Tom Lane wrote:
... I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.

I see, the idea is that bzip2 would be a compiled-in option (not enabled by
default) just to give another compression option.

I'm not particularly thrilled with "let's have more compression options
just to have options". Each such option you add is another source of
fail-to-connect incompatibilities (when either the client or the server
doesn't have it). Moreover, while there are a lot of compression
algorithms out there, a lot of them are completely unsuited for this
use-case. If memory serves, bzip2 for example requires fairly large
data blocks in order to get decent compression, which means you are
either going to get terrible compression or suffer very bad latency
when trying to apply it to a connection data stream.
Agreed. I think there's probably arguments to be had for supporting
compression without openssl (see below), but I don't think we need to
have a whole set of potentially incompatible ways of doing it. Picking
one that's good for the common case and not completely crap for the
corner cases would be a better choice (meaning bzip2 is probably a
very bad choice).
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.

Do we just need to document SSL's NULL encryption option?
Does the SSL NULL encryption+compression thing work if you're not
using openssl?
For one thing, some of us still hold a hope to support non-openssl
libraries in both libpq and server side, so it's something that would
need to be supported by the standard and thus available in most
libraries not to invalidate that.
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. The JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.

Do we just need to document SSL's NULL encryption option?
Does the SSL NULL encryption+compression thing work if you're not
using openssl?
The compression support is defined in RFC 3749, and according to
http://en.wikipedia.org/wiki/Comparison_of_TLS_Implementations it's
supported in openssl and gnutls.
gnutls also seems to support a NULL cipher - gnutls-cli on my Ubuntu
10.04 box prints
Ciphers: AES-256-CBC, AES-128-CBC, 3DES-CBC, DES-CBC, ARCFOUR-128,
ARCFOUR-40, RC2-40, CAMELLIA-256-CBC, CAMELLIA-128-CBC, NULL.
For one thing, some of us still hold a hope to support non-openssl
libraries in both libpq and server side, so it's something that would
need to be supported by the standard and thus available in most
libraries not to invalidate that.
Well, it's a standard at least, and both openssl and gnutls seem to
support it. Are there any other ssl implementations beside gnutls and
openssl that we need to worry about?
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?
Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.
best regards,
Florian Pflug
On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
So I've got very little patience with the idea of "let's put in some
hooks and then great things will happen". It would be far better all
around if we supported exactly one, well-chosen, method. But really
I still don't see a reason not to let openssl do it for us.

Do we just need to document SSL's NULL encryption option?

Does the SSL NULL encryption+compression thing work if you're not
using openssl?

The compression support is defined in RFC 3749, and according to
http://en.wikipedia.org/wiki/Comparison_of_TLS_Implementations it's
supported in openssl and gnutls.

gnutls also seems to support a NULL cipher - gnutls-cli on my Ubuntu
10.04 box prints

Ciphers: AES-256-CBC, AES-128-CBC, 3DES-CBC, DES-CBC, ARCFOUR-128,
ARCFOUR-40, RC2-40, CAMELLIA-256-CBC, CAMELLIA-128-CBC, NULL.
ah, thanks for looking that up for me!
The other big one to consider would be GNUTLS - which also has support
for compression, I see.
I guess a related question is if they all allow us to turn it *off*,
which we now do support on openssl :) gnutls does; I didn't look into
nss.
For one thing, some of us still hold a hope to support non-openssl
libraries in both libpq and server side, so it's something that would
need to be supported by the standard and thus available in most
libraries not to invalidate that.

Well, it's a standard at least, and both openssl and gnutls seem to
support it. Are there any other ssl implementations beside gnutls and
openssl that we need to worry about?
NSS would be the big one, and in theory Microsoft SChannel if we were
to go there (that would give us access to easy use of the Windows
certificate store, so there might be a reason - but not a very big one -
to support that).
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?

Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.
Yeah, but that alone is IMO a rather big blocker for claiming that
this is the only way to do it :( And I think the fact that that
wikipedia page doesn't list any other ones is a sign that there might
not be a lot of other choices out there in reality - especially not
open source...
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Jun15, 2012, at 12:09 , Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?

Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.

Yeah, but that alone is IMO a rather big blocker for claiming that
this is the only way to do it :( And I think the fact that that
wikipedia page doesn't list any other ones, is a sign that there might
not be a lot of other choices out there in reality - especially not
open source...
Hm, but things get even harder for the JDBC and .NET folks if we go
with a third-party compression method. Or would we require that the
existence of a free Java (and maybe .NET) implementation of such a
method would be an absolute must?
The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.
best regards,
Florian Pflug
On Fri, Jun 15, 2012 at 5:48 AM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 12:09 , Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?

Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.

Yeah, but that alone is IMO a rather big blocker for claiming that
this is the only way to do it :( And I think the fact that that
wikipedia page doesn't list any other ones, is a sign that there might
not be a lot of other choices out there in reality - expecially not
open source...

Hm, but things get even harder for the JDBC and .NET folks if we go
with a third-party compression method. Or would we require that the
existence of a free Java (and maybe .NET) implementation of such a
method would be an absolute must?

The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.
hm, that's a really excellent point.
merlin
On Fri, Jun 15, 2012 at 07:18:34AM -0500, Merlin Moncure wrote:
On Fri, Jun 15, 2012 at 5:48 AM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 12:09 , Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?

Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.

Yeah, but that alone is IMO a rather big blocker for claiming that
this is the only way to do it :( And I think the fact that that
wikipedia page doesn't list any other ones, is a sign that there might
not be a lot of other choices out there in reality - expecially not
open source...

Hm, but things get even harder for the JDBC and .NET folks if we go
with a third-party compression method. Or would we require that the
existence of a free Java (and maybe .NET) implementation of such a
method would be an absolute must?

The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.

hm, that's a really excellent point.
merlin
I agree and think that SSL-based compression is an excellent default
compression scheme. The pluggable compression approach allows for the
choice of the most appropriate compression implementation based on the
application's needs. It really addresses corner cases such as
high-performance systems.
Regards,
Ken
On Fri, Jun 15, 2012 at 6:48 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 12:09 , Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 5:52 PM, Florian Pflug <fgp@phlo.org> wrote:
On Jun15, 2012, at 07:50 , Magnus Hagander wrote:
Second, we also have things like the JDBC driver and the .Net driver
that don't use libpq. the JDBC driver uses the native java ssl
support, AFAIK. Does that one support the compression, and does it
support controlling it?

Java uses pluggable providers with standardized interfaces for most
things related to encryption. SSL support is provided by JSSE
(Java Secure Socket Extension). The JSSE implementation included with
the oracle JRE doesn't seem to support compression according to the
wikipedia page quoted above. But chances are that there exists an
alternative implementation which does.

Yeah, but that alone is IMO a rather big blocker for claiming that
this is the only way to do it :( And I think the fact that that
wikipedia page doesn't list any other ones, is a sign that there might
not be a lot of other choices out there in reality - expecially not
open source...

Hm, but things get even harder for the JDBC and .NET folks if we go
with a third-party compression method. Or would we require that the
existence of a free Java (and maybe .NET) implementation of such a
method would be an absolute must?
As long as a free implementation exists, it can be ported to
Java/.Net. Sure, it takes more work, but it *can be done*.
The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.
I only partially agree. If there *is* no third party SSL library that
does support it, then they're stuck reimplementing an *entire SSL
library*, which is surely many orders of magnitude more work, and
suddenly steps into writing encryption code which is a lot more
sensitive. Basically if they have to do that, then they're stuck
*never* being able to fix the problem.
If we can prove such a third party library *exists*, that makes it
different. But from what I can tell so far, I haven't seen a single
one - let alone one that supports compression.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On 15.06.2012 17:39, Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 6:48 PM, Florian Pflug<fgp@phlo.org> wrote:
The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.

I only partially agree. If there *is* no third party SSL library that
does support it, then they're stuck reimplementing an *entire SSL
library*, which is surely many orders of magnitude more work, and
suddenly steps into writing encryption code which is a lot more
sensitive.
You could write a dummy SSL implementation that only does compression,
not encryption, i.e. only supports the 'null' encryption method. That
should be about the same amount of work as writing an implementation of
compression using whatever protocol we would decide to use for
negotiating the compression.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Fri, Jun 15, 2012 at 10:56 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
On 15.06.2012 17:39, Magnus Hagander wrote:
On Fri, Jun 15, 2012 at 6:48 PM, Florian Pflug<fgp@phlo.org> wrote:
The way I see it, if we use SSL-based compression then there's at least
a chance of non-libpq clients being able to use it easily
(if their SSL implementation supports it). If we go with a third-party
compression method, they *all* need to add yet another dependency, or may
even need to re-implement the compression method in their implementation
language of choice.

I only partially agree. If there *is* no third party SSL library that
does support it, then they're stuck reimplementing an *entire SSL
library*, which is surely many orders of magnitude more work, and
suddenly steps into writing encryption code which is a lot more
sensitive.

You could write a dummy SSL implementation that only does compression, not
encryption. Ie. only support the 'null' encryption method. That should be
about the same amount of work as writing an implementation of compression
using whatever protocol we would decide to use for negotiating the
compression.
Sure, but then what do you do if you actually want both?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/