pgpool High Availability Issue

Started by a venkatesh · over 6 years ago · 3 messages · general
#1 a venkatesh
venkatesh.sasi@gmail.com

Hi,

I'm working on configuring high availability for pgpool using watchdog.
Initially, I tried with two pgpool nodes (along with a pgmaster and
pgslave). In this scenario, pgpool node 1 was started first and became
the leader. After some time, it lost its connection to pgpool node 2,
and pgpool node 2 then declared itself leader as well.

To handle this kind of scenario, I provisioned an additional pgpool
node, making a cluster of five nodes in total (3 pgpool nodes, 1
pgmaster, and 1 pgslave), assuming the three pgpool nodes would form a
quorum (a majority of two) and prevent such situations. Unfortunately,
the behavior remains the same: on any disconnection between the leader
node and the first standby node, both nodes try to manage the pgmaster
and pgslave simultaneously.

Please help me understand whether this is expected behavior or whether
additional configuration is required so that two pgpool nodes don't
become leader simultaneously. If it is expected behavior, how can we
handle it?

(A point to note: I'm not using an Elastic IP address here. Instead, I
have created a network load balancer in AWS with a target group
containing all three pgpool nodes as targets.)
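
For reference, the kind of watchdog settings involved on each node looks
roughly like the sketch below (hostnames, ports, and values are
placeholders for illustration, assuming Pgpool-II 4.x parameter names;
nodes 2 and 3 mirror it with their own wd_hostname and peer entries):

  # pgpool.conf on pgpool node 1 (sketch, placeholder values)
  use_watchdog = on
  wd_hostname = 'pgpool1'            # this node's own address
  wd_port = 9000

  # the other two pgpool nodes
  other_pgpool_hostname0 = 'pgpool2'
  other_pgpool_port0 = 9999
  other_wd_port0 = 9000
  other_pgpool_hostname1 = 'pgpool3'
  other_pgpool_port1 = 9999
  other_wd_port1 = 9000

  # heartbeat lifecheck between the watchdog nodes
  wd_lifecheck_method = 'heartbeat'
  wd_heartbeat_port = 9694
  heartbeat_destination0 = 'pgpool2'
  heartbeat_destination1 = 'pgpool3'

  # no virtual IP, since the AWS network load balancer fronts the nodes
  delegate_IP = ''

  # act on failover only when a majority (2 of 3) agrees
  failover_when_quorum_exists = on
  failover_require_consensus = on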

Regards,
Venkatesh.

#2 Bruce Momjian
bruce@momjian.us
In reply to: a venkatesh (#1)
Re: pgpool High Availability Issue

The pgpool email lists are the right place to ask this question:

https://www.pgpool.net/mediawiki/index.php/Mailing_lists

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#3 Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: a venkatesh (#1)
Re: pgpool High Availability Issue

> I'm working on configuring high availability for pgpool using watchdog.
> Initially, I tried with two pgpool nodes (along with a pgmaster and
> pgslave). In this scenario, pgpool node 1 was started first and became
> the leader. After some time, it lost its connection to pgpool node 2,
> and pgpool node 2 then declared itself leader as well.
>
> To handle this kind of scenario, I provisioned an additional pgpool
> node, making a cluster of five nodes in total (3 pgpool nodes, 1
> pgmaster, and 1 pgslave), assuming the three pgpool nodes would form a
> quorum (a majority of two) and prevent such situations. Unfortunately,
> the behavior remains the same: on any disconnection between the leader
> node and the first standby node, both nodes try to manage the pgmaster
> and pgslave simultaneously.
>
> Please help me understand whether this is expected behavior or whether
> additional configuration is required so that two pgpool nodes don't
> become leader simultaneously. If it is expected behavior, how can we
> handle it?

It's definitely not expected behavior, unless there's something wrong
with the Pgpool-II version or the Pgpool-II configuration.

To investigate the problem we need:

- the exact Pgpool-II version
- pgpool.conf from all 3 pgpool nodes
- the pgpool log from when one of the pgpool nodes went down
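
It also helps to see the watchdog state from each node's point of view.
Something like the following, run against all three pgpool nodes, shows
which node believes it is the leader (hostname, PCP port, and PCP user
below are placeholders):

  pcp_watchdog_info -h pgpool1 -p 9898 -U pcpadmin -v

If more than one node reports itself as the leader (MASTER in the 4.x
output), that confirms the split-brain.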

> (A point to note: I'm not using an Elastic IP address here. Instead, I
> have created a network load balancer in AWS with a target group
> containing all three pgpool nodes as targets.)

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp