how to show time zone with numerical offset in CSV log?
Hello everybody,
I am saving PostgreSQL log file data (CSV logs) with Logstash and
Elasticsearch. My problem with this is that the time zone value is
logged with the name of the time zone, like
2015-09-22 12:02:59.836 CEST
which Logstash cannot process.
What Logstash needs are date/time stamps like
2015-09-22 12:02:59.836 +0200.
How can I set up Postgres to log with a numerical offset in the CSV
logs instead of the name of the time zone?
Any hints and links to the corresponding documentation would be appreciated.
Regards,
Michael
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi
Not exactly the same format, but this may help:
kofadmin@kofdb.localhost=> SET datestyle TO 'ISO';
SET
kofadmin@kofdb.localhost=> select now();
             now
----------------------------
 2015-09-22 12:53:38.123+02
(1 row)
If you want the change to be persistent you can use:
ALTER DATABASE database_name SET datestyle TO 'ISO';
And then reconnect to see the change.
Bye
Charles
Hi Charles,
thanks for the quick response. It looked promising but did not work
as expected.
I can set the datestyle to ISO on the database level, but this does not
seem to affect the way the CSV logs are written. I still get
2015-09-22 13:06:01.658 UTC (or CEST and so on) in the log files. And
as far as I can see it is not only in the CSV logs, but also in the
non-CSV logs I have.
Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?
Regards,
Michael
On 09/22/2015 06:31 AM, Michael Zoet wrote:
Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?
I don't know of a way, but it seems Logstash can be told how to do the
right thing. See the date filter's match setting and:
http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html
z       time zone       text    Pacific Standard Time; PST
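For reference, a Logstash date filter along these lines could parse a numeric offset. This is only a sketch, and the "log_time" field name is a hypothetical placeholder for wherever the CSV timestamp column ends up after parsing:

```
filter {
  date {
    # "Z" in the Joda pattern matches a numeric offset such as +0200.
    match => [ "log_time", "yyyy-MM-dd HH:mm:ss.SSS Z" ]
  }
}
```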
--
Adrian Klaver
adrian.klaver@aklaver.com
Hi
I can set the datestyle to ISO on the database level but this does not
seem to affect the way the CSV logs are written.
I guess this means that datestyle affects the way the client displays the information.
Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?
Unfortunately I am a bit short of time at the moment. I would suggest that you look into how the tools you are using generate the CSV.
Here what I could see using psql:
db=> create table testdate (timedate timestamptz);
CREATE TABLE
db=> \d testdate
          Table "public.testdate"
  Column  |           Type           | Modifiers
----------+--------------------------+-----------
 timedate | timestamp with time zone |
db=> insert into testdate values(clock_timestamp());
INSERT 0 1
db=> insert into testdate values(clock_timestamp());
INSERT 0 1
db=> insert into testdate values(clock_timestamp());
INSERT 0 1
db=> SET datestyle TO "GERMAN";
SET
db=> \copy testdate to testdate.csv csv header ;
COPY 3
Content of file:
timedate
22.09.2015 15:53:48.268 CEST
22.09.2015 15:53:49.612 CEST
22.09.2015 15:53:50.44 CEST
db=> SET datestyle TO "ISO";
SET
db=> \copy testdate to testdate.csv csv header ;
COPY 3
Content of file:
timedate
2015-09-22 15:53:48.268+02
2015-09-22 15:53:49.612+02
2015-09-22 15:53:50.44+02
So it is actually a matter of instructing the client to print the date in the format that you wish.
Hope this helps.
Bye
Charles
I don't know of a way, but it seems Logstash can be told how to do
the right thing:
Yes and no. I asked this already for Logstash ;-):
https://discuss.elastic.co/t/logstash-timestamp-error-when-cest-is-at-the-end/27843
....
http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html
z time zone text Pacific Standard Time; PST
A little further down it says:
Zone names: Time zone names ('z') cannot be parsed.
This means Logstash (and the Joda Java time library it uses) cannot
parse the time zone if it is provided as a name. That's why I
need/want to change it at the Postgres level. Everything else is
really complicated to do in Logstash: I would need to provide a
conversion of the time zone name to the numerical value in my Logstash
configuration. But I still hope that this is easily fixable at the
Postgres level. Otherwise it gets unnecessarily complicated...
Michael
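The limitation is easy to reproduce outside Logstash. Python's strptime shows the same asymmetry (a sketch for illustration, not part of the thread):

```python
from datetime import datetime

# A numeric offset parses fine with %z ...
ok = datetime.strptime("2015-09-22 12:02:59.836 +0200",
                       "%Y-%m-%d %H:%M:%S.%f %z")
print(ok.utcoffset())  # 2:00:00

# ... but a zone abbreviation such as CEST is ambiguous and is rejected.
try:
    datetime.strptime("2015-09-22 12:02:59.836 CEST",
                      "%Y-%m-%d %H:%M:%S.%f %z")
except ValueError as e:
    print("cannot parse zone name:", e)
```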
Michael Zoet <Michael.Zoet@zoet.de> writes:
Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?
Try something like
log_timezone = '<-0400>+4'
See the discussion of POSIX timezone names here:
http://www.postgresql.org/docs/9.4/static/datatype-datetime.html#DATATYPE-TIMEZONES
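Concretely, that is a one-line change in postgresql.conf (the sample timestamp below is a sketch of the expected result, not taken from a real log):

```
# POSIX-style zone: "abbreviation" -0400, 4 hours west of Greenwich, no DST.
log_timezone = '<-0400>+4'

# Log timestamps would then look like:
# 2015-09-22 08:02:59.836 -0400
```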
regards, tom lane
On 09/22/2015 07:04 AM, Michael Zoet wrote:
Zone names: Time zone names ('z') cannot be parsed.
Hmm, that would be a problem. Sorry for the misdirection.
--
Adrian Klaver
adrian.klaver@aklaver.com
On 09/22/2015 07:04 AM, Michael Zoet wrote:
Zone names: Time zone names ('z') cannot be parsed.
Some more digging found that DateTimeFormat can deal with 'Z', which is
either the offset or the time zone id, in particular as 'ZZZ':
http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html
Zone: 'Z' outputs offset without a colon, 'ZZ' outputs the offset with a
colon, 'ZZZ' or more outputs the zone id.
The timezone names in Postgres are available from:
select * from pg_timezone_names ;
So in addition to Tom's suggestion, you might try setting the
log_timezone to a name. Examples: Europe/Brussels for CEST, Etc/UTC for UTC
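To see what a given zone name resolves to, pg_timezone_names can be queried directly (a sketch; the reported utc_offset depends on whether DST is in effect on the day you run it, so no sample output is shown):

```
db=> select name, abbrev, utc_offset, is_dst
db->   from pg_timezone_names
db->  where name in ('Europe/Brussels', 'Etc/UTC');
```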
--
Adrian Klaver
adrian.klaver@aklaver.com
Hi Tom,
Michael Zoet <Michael.Zoet@zoet.de> writes:
Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?
Try something like
log_timezone = '<-0400>+4'
OK, this points me in a direction I haven't read anything about in the
Postgres documentation so far.
Can you explain what '<-0400>+4' exactly means? And why does the string
'<+0200>-2' print the date & time with the correct time and +0200 for
my time zone CEST?
And how can this automatically be changed when Germany switches from
summer time (CEST, +0200) to winter time (CET, +0100)?
See the discussion of POSIX timezone names here:
http://www.postgresql.org/docs/9.4/static/datatype-datetime.html#DATATYPE-TIMEZONES
I have read this several times today but could not really connect it to
my problem of setting the date/time output format for the log files.
But I am beginning to understand ;-).
THX
Michael
----- Nachricht von Adrian Klaver <adrian.klaver@aklaver.com> ---------
Datum: Tue, 22 Sep 2015 07:46:24 -0700
Zone names: Time zone names ('z') cannot be parsed.
Some more digging found that DateTimeFormat can deal with Z which
is either the offset or the timezone id, in particular as ZZZ.
http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html
Zone: 'Z' outputs offset without a colon, 'ZZ' outputs the offset
with a colon, 'ZZZ' or more outputs the zone id.
That's why I am asking "how to get the numerical offset printed in the
log files". Logstash can parse the numerical value; otherwise I will
always have a parsing error in Logstash. We could live with this, but
if it is possible I'd like to change this at the Postgres level. I
never thought this would turn out to be so much more complicated than
expected.
The timezone names in Postgres are available from:
select * from pg_timezone_names ;
So in addition to Tom's suggestion, you might try setting the
log_timezone to a name. Examples: Europe/Brussels for CEST, Etc/UTC
for UTC
As far as I understand the log_timezone configuration option, it will
always print the name if I use a name for the time zone, and that is
a no-go for Logstash. So I really need a numerical value that
Logstash can parse.
Michael
Michael Zoet <Michael.Zoet@zoet.de> writes:
Can you explain what '<-0400>+4' exactly means?
It's a POSIX-style zone name specifying the STD abbreviation "-0400",
UTC offset 4 hours west of Greenwich, and no DST behavior.
And why the string
'<+0200>-2' prints the date & time with the correct time and +0200 for
my time zone CEST?
Same thing for 2 hours east of Greenwich. Remember POSIX and ISO have
opposite sign conventions.
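The sign flip can be checked with any tzdata-aware tool; for instance, the Etc/GMT zones follow the same POSIX convention, as Python's zoneinfo shows (a sketch added for illustration, not part of the thread):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# POSIX convention: "GMT+4" means 4 hours *west* of Greenwich,
# so the ISO offset comes out as -04:00.
dt = datetime(2015, 9, 22, 12, 0, tzinfo=ZoneInfo("Etc/GMT+4"))
print(dt.isoformat())  # 2015-09-22T12:00:00-04:00
```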
And how can this automatically be changed if Germany switches from
summer time (CEST with +0200) to winter time (CET +0100)?
Well, you could write <+0200>-2<+0100> but I'm not sure I would recommend
it. That would result in switching on the DST transition days specified
in the "posixrules" timezone database file, which by default will be USA
not European rules. You could replace the posixrules file with some
suitable European zone file, but that would be more invasive than you
might want (especially if the zone database is shared with non-Postgres
applications); and even if that's OK, it's practically certain you'd
forget to re-fix it after some future software update overwrites the zone
files.
The best compromise might be to just use <+0000>+0, ie force it to
print in GMT always.
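In postgresql.conf the UTC-always variant would be (the sample log entry is a sketch; the exact layout depends on log_line_prefix and log_destination):

```
# Log timestamps in UTC, with a numeric "abbreviation" Logstash can parse.
log_timezone = '<+0000>+0'

# A CSV log entry would then begin with something like:
# 2015-09-22 11:02:59.836 +0000,...
```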
regards, tom lane
Hi Tom,
And how can this automatically be changed if Germany switches from
summer time (CEST with +0200) to winter time (CET +0100)?
Well, you could write <+0200>-2<+0100> but I'm not sure I would recommend
it. That would result in switching on the DST transition days specified
in the "posixrules" timezone database file, which by default will be USA
not European rules. You could replace the posixrules file with some
suitable European zone file, but that would be more invasive than you
might want (especially if the zone database is shared with non-Postgres
applications); and even if that's OK, it's practically certain you'd
forget to re-fix it after some future software update overwrites the zone
files.
Yes, and that's why I would like to avoid messing around with the setup
too much.
The best compromise might be to just use <+0000>+0, ie force it to
print in GMT always.
That's it! Having everything in numeric UTC (+0000) seems the easiest
solution. With that I shouldn't have any parsing problems with
Logstash, and I do not need to think about the offset. Great, and
obvious in hindsight :-).
Michael
On 09/22/2015 08:16 AM, Michael Zoet wrote:
Logstash can parse the numerical value. Otherwise I will always have a
parsing error in Logstash.
http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html
From the above link:
Z    time zone offset/id    zone    -0800; -08:00; America/Los_Angeles
So DateTimeFormat does understand names, though not necessarily
abbreviations, which is what z is for. The Logstash match is supposed to
understand what DateTimeFormat parses.
--
Adrian Klaver
adrian.klaver@aklaver.com