logical replication restrictions
One thing that is needed and not solved yet is delayed replication on logical
replication. It would be interesting to document it on the Restrictions page,
right?
regards,
Marcos
On Mon, Sep 20, 2021 at 9:47 PM Marcos Pegoraro <marcos@f10.com.br> wrote:
One thing that is needed and not solved yet is delayed replication on logical replication. It would be interesting to document it on the Restrictions page, right?
What do you mean by delayed replication? Is it that by default we send
the transactions at commit?
--
With Regards,
Amit Kapila.
No, I'm talking about that configuration you can have on standby servers:
recovery_min_apply_delay = '8h'
Best regards,
On Mon, Sep 20, 2021 at 11:44 PM Amit Kapila <amit.kapila16@gmail.com>
wrote:
On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:
No, I'm talking about that configuration you can have on standby servers
recovery_min_apply_delay = '8h'
oh okay, I think this can be useful in some cases where we want to avoid
data loss, similar to its use for physical standby. For example, if the user
has by mistake truncated the table (or deleted some required data) on the
publisher, we can always recover it from the subscriber if we have such a
feature. Having said that, I am not sure if we can call it a restriction. It
is more of a TODO kind of thing. It doesn't sound advisable to me to keep
growing the current Restrictions page [2].
[1]: https://wiki.postgresql.org/wiki/Todo
[2]: https://www.postgresql.org/docs/devel/logical-replication-restrictions.html
--
With Regards,
Amit Kapila.
OK, so could you guide me on where to start with this feature?
regards,
Marcos
On Wed, Sep 22, 2021, at 1:18 AM, Amit Kapila wrote:
Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].
It is a new feature. pglogical supports it, and it is useful for a delayed
secondary server and for cases where, for some business reason, you have to
delay when data becomes available. There might be other use cases, but these
are the ones I regularly hear from customers.
BTW, I have a WIP patch for this feature. I didn't have enough time to post it
because it lacks documentation and tests. I'm planning to do it as soon as this
CF ends.
--
Euler Taveira
EDB https://www.enterprisedb.com/
Fine, let me know if you need any help, testing, for example.
On Wed, Sep 22, 2021 at 10:27 PM Euler Taveira <euler@eulerto.com> wrote:
It is a new feature. pglogical supports it and it is useful for delayed
secondary server and if, for some business reason, you have to delay when data
is available.
What kind of reasons do you see where users prefer to delay except to
avoid data loss in the case where users unintentionally removed some
data from the primary?
--
With Regards,
Amit Kapila.
What kind of reasons do you see where users prefer to delay except to
avoid data loss in the case where users unintentionally removed some
data from the primary?

Debugging. Suppose I have a problem, but that problem occurs once a week
or a month. When this problem occurs again, a monitoring system sends me a
message: hey, that problem occurred again. Then, as I configured my
replica with Delay = '30 min', I have time to connect to it, watch the
records coming in one by one, and see exactly what caused that mistake.
On Wed, Sep 22, 2021 at 6:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].
One could argue that not having delayed apply *is* a restriction
compared to both physical replication and "the original upstream"
pg_logical.
I think therefore it should be mentioned in "Restrictions" so people
considering moving from physical streaming to pg_logical or just
trying to decide whether to use pg_logical are warned.
Also, the Restrictions page starts with " These might be addressed in
future releases." so there is no exclusivity of being either a
restriction or TODO.
[1] - https://wiki.postgresql.org/wiki/Todo
[2] - https://www.postgresql.org/docs/devel/logical-replication-restrictions.html
-----
Hannu Krosing
Google Cloud - We have a long list of planned contributions and we are hiring.
Contact me if interested.
On Wed, Sep 22, 2021, at 1:57 PM, Euler Taveira wrote:
It is a new feature. pglogical supports it and it is useful for delayed
secondary server and if, for some business reason, you have to delay when data
is available. There might be other use cases but these are the ones I regularly
heard from customers.
BTW, I have a WIP patch for this feature. I didn't have enough time to post it
because it lacks documentation and tests. I'm planning to do it as soon as this
CF ends.
Long time, no patch. Here it is. I will provide documentation in the next
version. I would appreciate some feedback.
--
Euler Taveira
EDB https://www.enterprisedb.com/
Attachment: v1-0001-Time-delayed-logical-replication-subscriber.patch
On Tuesday, March 1, 2022 9:19 AM Euler Taveira <euler@eulerto.com> wrote:
Long time, no patch. Here it is. I will provide documentation in the next
version. I would appreciate some feedback.
Hi, thank you for posting the patch !
$ git am v1-0001-Time-delayed-logical-replication-subscriber.patch
Applying: Time-delayed logical replication subscriber
error: patch failed: src/backend/catalog/system_views.sql:1261
error: src/backend/catalog/system_views.sql: patch does not apply
FYI, by a recent commit (7a85073), HEAD redesigned pg_stat_subscription_workers.
Thus, the below change can't be applied. Could you please rebase v1?
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87..1cc0d86f2e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1261,7 +1261,8 @@ REVOKE ALL ON pg_replication_origin_status FROM public;
-- All columns of pg_subscription except subconninfo are publicly readable.
REVOKE ALL ON pg_subscription FROM public;
GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,
- substream, subtwophasestate, subslotname, subsynccommit, subpublications)
+ substream, subtwophasestate, subslotname, subsynccommit,
+ subapplydelay, subpublications)
ON pg_subscription TO public;
CREATE VIEW pg_stat_subscription_workers AS
Best Regards,
Takamichi Osumi
On Tue, Mar 1, 2022, at 3:27 AM, osumi.takamichi@fujitsu.com wrote:
$ git am v1-0001-Time-delayed-logical-replication-subscriber.patch
I generally use -3 to fall back on 3-way merge. Doesn't it work for you?
--
Euler Taveira
EDB https://www.enterprisedb.com/
On Wednesday, March 2, 2022 8:54 AM Euler Taveira <euler@eulerto.com> wrote:
On Tue, Mar 1, 2022, at 3:27 AM, osumi.takamichi@fujitsu.com wrote:
$ git am v1-0001-Time-delayed-logical-replication-subscriber.patch
I generally use -3 to fall back on 3-way merge. Doesn't it work for you?
It did. Excuse me for making noise.
Best Regards,
Takamichi Osumi
On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:
Long time, no patch. Here it is. I will provide documentation in the next
version. I would appreciate some feedback.
This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I
rebased it.
I added documentation that explains how this parameter works. I decided to
rename the parameter from apply_delay to min_apply_delay to use the same
terminology as physical replication. IMO the new name makes it clear that
there is no guarantee that we are always x ms behind the publisher. Indeed,
due to processing/transfer time, the delay might be higher than the specified
interval.
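The arithmetic behind that "minimum" semantics can be sketched as follows. This is a rough illustration with my own function and variable names, not code from the patch: the wait is anchored to the transaction's commit time, so time already spent in decoding and transfer counts toward the delay, and the remaining wait can shrink to zero.

```python
from datetime import datetime, timedelta, timezone

def remaining_delay(commit_time, min_apply_delay, now):
    """How much longer the apply worker should wait before applying a
    transaction committed at commit_time (illustrative only)."""
    # Anchored to the commit time: decoding/transfer time already
    # elapsed is counted toward the delay, so the result can be zero,
    # but the total delay relative to commit is never less than
    # min_apply_delay.
    return max(commit_time + min_apply_delay - now, timedelta(0))

commit = datetime(2022, 3, 1, 12, 0, tzinfo=timezone.utc)
# 10 minutes already elapsed since commit, configured delay of 30 minutes:
print(remaining_delay(commit, timedelta(minutes=30),
                      commit + timedelta(minutes=10)))  # 0:20:00
```

If transfer alone already took longer than the configured value, the worker applies immediately, which is why the actual delay can exceed but never undercut the parameter.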
I refactored the way the delay is applied. The previous patch only covered
regular transactions. This new one also covers prepared transactions. The
current design intercepts the transaction during the first change (at the time
it will start the transaction to apply the changes) and applies the delay
before effectively starting the transaction. The previous patch used
begin_replication_step() as this point. However, to support prepared
transactions I changed the apply_delay signature to accept a timestamp
parameter (because we use another variable to calculate the delay for prepared
transactions -- prepare_time). Hence, apply_delay() moved to other places
-- apply_handle_begin() and apply_handle_begin_prepare().
The new code does not apply the delay in 2 situations:
* STREAM START: streamed transactions might not have commit_time or
prepare_time set. I'm afraid it is not possible to use the referred variables
because at STREAM START time we don't have a transaction commit time. The
protocol could provide a timestamp that indicates when it starts streaming
the transaction, and then we could use it to apply the delay. Unfortunately,
we don't have it. Having said that, this new patch does not apply the delay
for streamed transactions.
* non-transaction messages: the delay could be applied to non-transaction
messages too. They are sent independently of the transaction that contains
them. Since logical replication does not send messages to the subscriber,
this is not an issue. However, consumers that use pgoutput and want to
implement a delay will require it.
I'm still looking for a way to support streamed transactions without much
surgery into the logical replication protocol.
--
Euler Taveira
EDB https://www.enterprisedb.com/
Attachment: v2-0001-Time-delayed-logical-replication-subscriber.patch
On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:
On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:
Long time, no patch. Here it is. I will provide documentation in the next
version. I would appreciate some feedback.
This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I
rebased it.
This fails tests, specifically it seems psql crashes:
https://cirrus-ci.com/task/6592281292570624?logs=cores#L46
Marked as waiting-on-author.
Greetings,
Andres Freund
On Mon, Mar 21, 2022, at 10:04 PM, Andres Freund wrote:
This fails tests, specifically it seems psql crashes:
https://cirrus-ci.com/task/6592281292570624?logs=cores#L46
Yeah. I forgot to test this patch with cassert before sending it. :( I didn't
send a new patch because there is another issue (with int128) that I'm
currently reworking. I'll send another patch soon.
--
Euler Taveira
EDB https://www.enterprisedb.com/
On Mon, Mar 21, 2022, at 10:09 PM, Euler Taveira wrote:
I'll send another patch soon.
Here is another version after rebasing it. In this version I fixed the psql
issue and rewrote interval_to_ms function.
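For illustration, the conversion such a function has to perform can be sketched like this. This is a simplified stand-in: the real interval_to_ms in the patch operates on PostgreSQL's Interval type rather than on strings, and the accepted unit spellings here are my assumption.

```python
import re

# Milliseconds per unit for a few common interval unit spellings
# (hypothetical subset, for illustration only).
UNIT_MS = {
    'ms': 1,
    's': 1000,
    'min': 60 * 1000,
    'h': 60 * 60 * 1000,
    'd': 24 * 60 * 60 * 1000,
}

def interval_to_ms(value: str) -> int:
    """Convert a string such as '8h' or '30 min' to milliseconds.
    A bare integer is taken as milliseconds, mirroring how
    recovery_min_apply_delay treats unitless values."""
    m = re.fullmatch(r'\s*(\d+)\s*([a-z]+)?\s*', value)
    if not m:
        raise ValueError(f'invalid interval: {value!r}')
    amount, unit = int(m.group(1)), m.group(2) or 'ms'
    if unit not in UNIT_MS:
        raise ValueError(f'unknown unit: {unit!r}')
    return amount * UNIT_MS[unit]

print(interval_to_ms('8h'))      # 28800000
print(interval_to_ms('30 min'))  # 1800000
```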
--
Euler Taveira
EDB https://www.enterprisedb.com/
Attachment: v3-0001-Time-delayed-logical-replication-subscriber.patch
On Wed, Mar 23, 2022, at 6:19 PM, Euler Taveira wrote:
Here is another version after rebasing it. In this version I fixed the psql
issue and rewrote interval_to_ms function.
From the previous version, I added support for streamed transactions. For
streamed transactions, the delay is applied during the STREAM COMMIT message.
That's ok as long as we add the delay before applying the spooled messages;
hence, we guarantee that the delay is applied *before* each transaction. The
same logic is applied to prepared transactions: the delay is introduced
before applying the spooled messages in the STREAM PREPARE message.
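The ordering described above — spool stream chunks as they arrive, then wait out the delay once, then replay — can be illustrated with a toy model (class and method names are mine, not the patch's):

```python
import time

class ToySubscriber:
    """Toy model of the STREAM COMMIT handling described above:
    streamed changes are spooled as they arrive, and the delay is
    applied once, before replaying the spooled changes."""

    def __init__(self, min_apply_delay_s: float):
        self.min_apply_delay_s = min_apply_delay_s
        self.spool = []
        self.applied = []

    def handle_stream_change(self, change):
        self.spool.append(change)        # no delay while streaming

    def handle_stream_commit(self, commit_time: float):
        # Wait out whatever is left of the delay, measured from the
        # commit time, *before* applying any spooled change.
        remaining = commit_time + self.min_apply_delay_s - time.time()
        if remaining > 0:
            time.sleep(remaining)
        self.applied.extend(self.spool)  # replay spooled messages
        self.spool.clear()

sub = ToySubscriber(min_apply_delay_s=0.1)
sub.handle_stream_change('INSERT 1')
sub.handle_stream_change('INSERT 2')
sub.handle_stream_commit(commit_time=time.time())
print(sub.applied)  # ['INSERT 1', 'INSERT 2']
```

Because no spooled change is replayed before the wait, every change of the transaction becomes visible only after the full delay, matching the per-transaction guarantee.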
Tests were refactored a bit. A test for streamed transactions was included too.
Version 4 is attached.
--
Euler Taveira
EDB https://www.enterprisedb.com/
Attachment: v4-0001-Time-delayed-logical-replication-subscriber.patch
Here are some review comments for your v4-0001 patch. I hope they are
useful for you.
======
1. General
This thread name "logical replication restrictions" seems quite
unrelated to the patch here. Maybe it's better to start a new thread
otherwise nobody is going to recognise what this thread is really
about.
======
2. Commit message
Similar to physical replication, a time-delayed copy of the data for
logical replication is useful for some scenarios (specially to fix
errors that might cause data loss).
"specially" -> "particularly" ?
~~~
3. Commit message
Maybe take some examples from the regression tests to show usage of
the new parameter
======
4. doc/src/sgml/catalogs.sgml
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>subapplydelay</structfield> <type>int8</type>
+ </para>
+ <para>
+ Delay the application of changes by a specified amount of time.
+ </para></entry>
+ </row>
I think this should say that the units are ms.
======
5. doc/src/sgml/ref/create_subscription.sgml
+ <varlistentry>
+ <term><literal>min_apply_delay</literal> (<type>integer</type>)</term>
+ <listitem>
Is the "integer" type here correct? It might eventually be stored as
an integer, but IIUC (going by the tests) from the user point-of-view
this parameter is really "text" type for representing ms or interval,
right?
~~~
6. doc/src/sgml/ref/create_subscription.sgml
Similar
+ to the physical replication feature
+ (<xref linkend="guc-recovery-min-apply-delay"/>), it may be useful to
+ have a time-delayed copy of data for logical replication.
SUGGESTION
As with the physical replication feature (recovery_min_apply_delay),
it can be useful for logical replication to delay the data
replication.
~~~
7. doc/src/sgml/ref/create_subscription.sgml
Delays in logical
+ decoding and in transfer the transaction may reduce the actual wait
+ time.
SUGGESTION
Time spent in logical decoding and in transferring the transaction may
reduce the actual wait time.
~~~
8. doc/src/sgml/ref/create_subscription.sgml
If the system clocks on publisher and subscriber are not
+ synchronized, this may lead to apply changes earlier than expected.
Why just say "earlier than expected"? If the publisher's time is ahead
of the subscriber then the changes might also be *later* than
expected, right? So, perhaps it is better to just say "other than
expected".
~~~
9. doc/src/sgml/ref/create_subscription.sgml
Should there also be a big warning box about the impact if using
synchronous_commit (like the other streaming replication page has this
warning)?
~~~
10. doc/src/sgml/ref/create_subscription.sgml
I think there should be some examples somewhere showing how to specify
this parameter. Maybe they are better added somewhere in "31.2
Subscription" and xrefed from here.
======
11. src/backend/commands/subscriptioncmds.c - parse_subscription_options
I think there should be a default assignment to 0 (done where all the
other supported option defaults are set)
~~~
12. src/backend/commands/subscriptioncmds.c - parse_subscription_options
+ if (opts->min_apply_delay < 0)
+ ereport(ERROR,
+ errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
+ errmsg("option \"%s\" must not be negative", "min_apply_delay"));
+
I thought this check only needs to be done within the scope of the preceding
if - (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&
strcmp(defel->defname, "min_apply_delay") == 0)
======
13. src/backend/commands/subscriptioncmds.c - AlterSubscription
@@ -1093,6 +1126,17 @@ AlterSubscription(ParseState *pstate,
AlterSubscriptionStmt *stmt,
if (opts.enabled)
ApplyLauncherWakeupAtCommit();
+ /*
+ * If this subscription has been disabled and it has an apply
+ * delay set, wake up the logical replication worker to finish
+ * it as soon as possible.
+ */
+ if (!opts.enabled && sub->applydelay > 0)
I did not really understand the logic of why min_apply_delay should
override enabled=false. It is called a *minimum* delay, so if it
ends up being way over the parameter value (because the subscription
is disabled) then why does that matter?
======
14. src/backend/replication/logical/worker.c
@@ -252,6 +252,7 @@ WalReceiverConn *LogRepWorkerWalRcvConn = NULL;
Subscription *MySubscription = NULL;
static bool MySubscriptionValid = false;
+TimestampTz MySubscriptionMinApplyDelayUntil = 0;
Looking at the only usage of this variable (in apply_delay) and how it
is used, I did not see why this cannot just be a local variable of the
apply_delay function.
~~~
15. src/backend/replication/logical/worker.c - apply_delay
+/*
+ * Apply the informed delay for the transaction.
+ *
+ * A regular transaction uses the commit time to calculate the delay. A
+ * prepared transaction uses the prepare time to calculate the delay.
+ */
+static void
+apply_delay(TimestampTz ts)
I didn't think it needs to mention here about the different kinds of
transactions because where it comes from has nothing really to do with
this function's logic.
~~~
16. src/backend/replication/logical/worker.c - apply_delay
Refer to comment #14 about MySubscriptionMinApplyDelayUntil.
~~~
17. src/backend/replication/logical/worker.c - apply_handle_stream_prepare
@@ -1090,6 +1146,19 @@ apply_handle_stream_prepare(StringInfo s)
elog(DEBUG1, "received prepare for streamed transaction %u",
prepare_data.xid);
+ /*
+ * Should we delay the current prepared transaction?
+ *
+ * Although the delay is applied in BEGIN PREPARE messages, streamed
+ * prepared transactions apply the delay in a STREAM PREPARE message.
+ * That's ok because no changes have been applied yet
+ * (apply_spooled_messages() will do it).
+ * The STREAM START message does not contain a prepare time (it will be
+ * available when the in-progress prepared transaction finishes), hence, it
+ * was not possible to apply a delay at that time.
+ */
+ apply_delay(prepare_data.prepare_time);
+
It seems to rely on the spooling happening at the end. But won't this
cause a problem later when/if the "parallel apply" patch [1] is pushed
and the stream bgworkers are doing stuff on the fly instead of
spooling at the end?
Or are you expecting that the "parallel apply" feature should be
disabled if there is any min_apply_delay parameter specified?
~~~
18. src/backend/replication/logical/worker.c - apply_handle_stream_commit
Ditto comment #17.
======
19. src/bin/psql/tab-complete.c
Let's keep the alphabetical order of the parameters in COMPLETE_WITH, as per [2]
======
20. src/include/catalog/pg_subscription.h
@@ -58,6 +58,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)
BKI_SHARED_RELATION BKI_ROW
XLogRecPtr subskiplsn; /* All changes finished at this LSN are
* skipped */
+ int64 subapplydelay; /* Replication apply delay */
+
IMO the comment should mention the units "(ms)"
======
21. src/test/regress/sql/subscription.sql
There are some test cases for CREATE SUBSCRIPTION but there are no
test cases for ALTER SUBSCRIPTION changing this new parameter.
====
22. src/test/subscription/t/032_apply_delay.pl
I received the following error when trying to run these 'subscription' tests:
t/032_apply_delay.pl ............... No such class log_location at
t/032_apply_delay.pl line 49, near "my log_location"
syntax error at t/032_apply_delay.pl line 49, near "my log_location ="
Global symbol "$log_location" requires explicit package name at
t/032_apply_delay.pl line 103.
Global symbol "$log_location" requires explicit package name at
t/032_apply_delay.pl line 105.
Global symbol "$log_location" requires explicit package name at
t/032_apply_delay.pl line 105.
Global symbol "$log_location" requires explicit package name at
t/032_apply_delay.pl line 107.
Global symbol "$sect" requires explicit package name at
t/032_apply_delay.pl line 108.
Execution of t/032_apply_delay.pl aborted due to compilation errors.
t/032_apply_delay.pl ............... Dubious, test returned 255 (wstat
65280, 0xff00)
No subtests run
t/100_bugs.pl ...................... ok
Test Summary Report
-------------------
t/032_apply_delay.pl (Wstat: 65280 Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
------
[1]: /messages/by-id/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
[2]: /messages/by-id/CAHut+PucvKZgg_eJzUW--iL6DXHg1Jwj6F09tQziE3kUF67uLg@mail.gmail.com
Kind Regards,
Peter Smith.
Fujitsu Australia