PostgreSQL Service

A Service is an abstraction for database functionality and access.

The concepts of Service and Access are introduced to address the access requirements of HA PostgreSQL clusters.

Personal User

After completing the singleton deployment, port 5432 of the node provides PostgreSQL services, and port 80 provides web UI services.

On the current meta node, running psql with no arguments as the admin user connects directly to the local, pre-defined meta database.

When accessing PG from the host using a client tool, you can connect with a URL:

psql postgres://dbuser_dba:DBUser.DBA@         # dbsu direct connection
psql postgres://dbuser_meta:DBUser.Meta@       # business user direct connect

You can access this database with the admin user specified by pg_admin_username and pg_admin_password, or with a business user (dbuser_meta) pre-defined in the meta database.

When using an HA database cluster deployed with Pigsty, it is not recommended to access the database via direct IP connection; use services instead.


Service

A Service is the form in which a database cluster provides its functionality to the outside world.

In a production env, a replication-based primary-replica database cluster is used. There is one and only one primary in the cluster that can accept writes, while the replicas continuously replay WAL from the primary to keep up with it. Replicas can also serve read-only requests.

In addition, for production envs with short-lived, high-frequency connections, requests are pooled through Pgbouncer to reduce connection-creation overhead. However, in ETL and change-execution scenarios, the connection pool must be bypassed to access the database directly.

In addition, HA clusters have a failover feature: the cluster’s primary can change. HA solutions therefore require that write traffic automatically adapt to changes in the cluster’s primary.

These different access requirements (read/write separation, pooling, and direct connection, failover auto-adaptation) are eventually abstracted into the concept of Service.

In general, a database cluster must provide at least this service:

  • read-write service (primary): can write to the database

For a production database cluster, at least two services should be provided.

  • read-write service (primary): can write to the database

  • read-only service (replica): access to the replica

There may also be other services, for example:

  • offline: For ETL and personal queries.
  • standby: Read-only service with synchronous commit and no replication delay.
  • delayed: Allows access to old data from a fixed time interval ago.
  • default: Service that allows admin users to manage the database directly, bypassing the connection pool.

Default Services

Pigsty provides four services by default: primary, replica, default, and offline.

New services can be defined for global or individual clusters via config files.

service    port    purpose                description
primary    5433    production read/write  connect to primary via connection pool
replica    5434    production read-only   connect to replica via connection pool
default    5436    management             direct connection to primary
offline    5438    ETL/personal user      direct connection to offline instance

Take the meta DB pg-meta as an example:

psql postgres://dbuser_meta:DBUser.Meta@pg-meta:5433/meta   # production read/write
psql postgres://dbuser_meta:DBUser.Meta@pg-meta:5434/meta   # production read-only
psql postgres://dbuser_dba:DBUser.DBA@pg-meta:5436/meta     # Direct connect primary
psql postgres://dbuser_stats:DBUser.Stats@pg-meta:5438/meta # Direct connect offline

These four services are described in detail below.

Primary Service

The Primary service is used for online production read and write access, and it maps the cluster’s port 5433 to the primary connection pool (default 6432) port.

The Primary service selects all instances in the cluster as members, but only the primary can take traffic, because only one instance at a time passes the /primary health check.

# primary service will route {ip|name}:5433 to primary pgbouncer (5433->6432 rw)
- name: primary           # service name {{ pg_cluster }}-primary
  src_ip: "*"
  src_port: 5433
  dst_port: pgbouncer     # 5433 route to pgbouncer
  check_url: /primary     # primary health check, success when instance is primary
  selector: "[]"          # select all instances as primary service candidates

Patroni, the HA component on the primary, returns 200 for the /primary health check; the check also ensures that the cluster cannot have two primaries at once.

When the cluster fails over, the health check turns true for the new primary and false for the old one, so traffic migrates to the new primary. Applications will observe roughly 30 seconds of Primary service unavailability.
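The routing behavior during failover can be sketched as a small function (an illustrative model only, not Pigsty or HAProxy code; the instance names and the pick_primary helper are hypothetical):

```python
# Simplified sketch of how health-check-driven routing steers write
# traffic: each member's Patroni answers GET /primary with 200 only
# while that instance is the primary. `checks` maps instance name to
# the HTTP status code returned by its /primary check.

def pick_primary(checks):
    """Return the members whose /primary health check passed."""
    return [name for name, code in checks.items() if code == 200]

# Before failover: pg-meta-1 is the primary
assert pick_primary({"pg-meta-1": 200, "pg-meta-2": 503}) == ["pg-meta-1"]

# After failover: checks flip, so traffic follows the new primary
assert pick_primary({"pg-meta-1": 503, "pg-meta-2": 200}) == ["pg-meta-2"]
```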

Replica Service

The Replica service is used for online production read-only access, and it maps the cluster’s port 5434 to the replica connection pool (default 6432) port.

The Replica service selects all instances in the cluster as members, but only those passing the /read-only health check can take traffic; that check returns success for every instance able to serve read-only requests.

By default, only replicas carry read-only requests. The Replica service defines selector_backup, a selector that adds the cluster’s primary to the service as a backup instance. The primary only starts taking read-only traffic once all replicas are down.

The offline role is also used as a backup instance. Offline instances are usually dedicated to OLAP/ETL/personal queries and are not suitable for mixing with online queries, so they too only take read-only traffic when all replicas are down.

# replica service will route {ip|name}:5434 to replica pgbouncer (5434->6432 ro)
- name: replica           # service name {{ pg_cluster }}-replica
  src_ip: "*"
  src_port: 5434
  dst_port: pgbouncer
  check_url: /read-only   # read-only health check. (including primary)
  selector: "[]"          # select all instance as replica service candidate
  selector_backup: "[? pg_role == `primary` || pg_role == `offline` ]"

Default Service

The Default service is used for online direct connections to the primary; it maps the cluster’s port 5436 to the primary’s Postgres port (default 5432).

The Default service targets interactive read/write access, including executing admin commands, performing DDL changes, connecting to the primary for DML, and performing CDC. It forwards traffic directly to Postgres, bypassing Pgbouncer.

The Default service is similar to the Primary service, using the same config entry.

# default service will route {ip|name}:5436 to primary postgres (5436->5432 primary)
- name: default           # service's actual name is {{ pg_cluster }}-default
  src_ip: "*"             # service bind ip address, * for all, vip for cluster virtual ip address
  src_port: 5436          # bind port, mandatory
  dst_port: postgres      # target port: postgres|pgbouncer|port_number , pgbouncer(6432) by default
  check_method: http      # health check method: only http is available for now
  check_port: patroni     # health check port:  patroni|pg_exporter|port_number , patroni by default
  check_url: /primary     # health check url path, / as default
  check_code: 200         # health check http code, 200 as default
  selector: "[]"          # instance selector
  haproxy:                # haproxy specific fields
    maxconn: 3000         # default front-end connection
    balance: roundrobin   # load balance algorithm (roundrobin by default)
    default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

Offline Service

The Offline service is used for offline access and personal queries. It maps the cluster’s port 5438 to the offline Postgres port (default 5432).

The Offline service targets interactive read-only access, including ETL, offline analytics queries, and individual user queries. Offline service also forwards traffic directly to Postgres, bypassing Pgbouncer.

Offline instances are those whose pg_role is offline or that are tagged with pg_offline_query. Replicas outside this set act as backup instances for the Offline service, so users can still get service from other replicas when all offline instances are down.

# offline service will route {ip|name}:5438 to offline postgres (5438->5432 offline)
- name: offline           # service name {{ pg_cluster }}-offline
  src_ip: "*"
  src_port: 5438
  dst_port: postgres
  check_url: /replica     # offline MUST be a replica
  selector: "[? pg_role == `offline` || pg_offline_query ]"         # instances with pg_role == 'offline' or instance marked with 'pg_offline_query == true'
  selector_backup: "[? pg_role == `replica` && !pg_offline_query]"  # replica are used as backup server in offline service
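The selector and selector_backup expressions above can be modeled in plain Python (an illustrative re-implementation of the JMESPath filters, not Pigsty’s actual code; the pg-test instance names are hypothetical):

```python
# Sketch of how the offline service's selectors partition cluster
# members. In reality Pigsty evaluates JMESPath expressions against
# the Ansible inventory; this just mirrors their logic.

instances = [
    {"name": "pg-test-1", "pg_role": "primary", "pg_offline_query": False},
    {"name": "pg-test-2", "pg_role": "replica", "pg_offline_query": True},
    {"name": "pg-test-3", "pg_role": "replica", "pg_offline_query": False},
    {"name": "pg-test-4", "pg_role": "offline", "pg_offline_query": False},
]

# selector: "[? pg_role == `offline` || pg_offline_query ]"
members = [i["name"] for i in instances
           if i["pg_role"] == "offline" or i["pg_offline_query"]]

# selector_backup: "[? pg_role == `replica` && !pg_offline_query]"
backups = [i["name"] for i in instances
           if i["pg_role"] == "replica" and not i["pg_offline_query"]]

print(members)  # ['pg-test-2', 'pg-test-4']
print(backups)  # ['pg-test-3']
```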

User-Defined Service

In addition to the default services configured by pg_services above, users can define additional services for the PostgreSQL cluster in the pg_services_extra config entry.

A cluster can define multiple services, each containing any number of cluster members, distinguished by port. The following code defines a new service, standby, which uses port 5435 to provide synchronous-read functionality. This service reads from synchronous standbys (or the primary) in the cluster, ensuring that all reads are served without replication lag.

# standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
- name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
  src_ip: "*"                     # required, service bind ip address, `*` for all ip, `vip` for cluster `vip_address`
  src_port: 5435                  # required, service exposed port (work as kubernetes service node port mode)
  dst_port: postgres              # optional, destination port, postgres|pgbouncer|<port_number>   , pgbouncer(6432) by default
  check_method: http              # optional, health check method: http is the only available method for now
  check_port: patroni             # optional, health check port: patroni|pg_exporter|<port_number> , patroni(8008) by default
  check_url: /read-only?lag=0     # optional, health check url path, / by default
  check_code: 200                 # optional, health check expected http code, 200 by default
  selector: "[]"                  # required, JMESPath to filter instances from the inventory
  selector_backup: "[? pg_role == `primary`]"  # primary used as backup server for standby service
  haproxy:                        # optional, adhoc parameters for haproxy service provider (vip_l4 is another service provider)
    maxconn: 3000                 # optional, max allowed front-end connection
    balance: roundrobin           # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
    default_server_options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'


  • Name (service.name):

    The full name of the service is the database cluster name as prefix and this name as suffix, connected by -. For example, the standby service of the pg-meta cluster is named pg-meta-standby.

  • Port (service.port).

    In Pigsty, services are exposed as NodePort by default, so the port is mandatory. However, if you use an LB service access scheme, you can also differentiate the services in other ways.

  • selector (service.selector).

    The selector specifies the instance members of the service, in the form of JMESPath, filtering variables from all cluster instances. The default [] selector picks all cluster members.


  • backup selector (service.selector_backup):

    The backup selector selects or marks the list of instances for service backup, i.e., the backup instance takes over the service only when all other members of the cluster fail.

  • source_ip (service.src_ip).

    Indicates the IP address the service uses externally. The default is *, meaning all IPs on the host. vip uses the value of the vip_address variable, or you can fill in a specific IP bound to a NIC.

  • Destination port (service.dst_port):

    Indicates which port on the target instance the service’s traffic is directed to. postgres points to the port the database listens on, pgbouncer points to the port the connection pool listens on, or you can fill in a fixed port number.

  • health check method (service.check_method):

    The method the service uses to check instance health. Currently only HTTP is supported.

  • Health check port (service.check_port):

    The port the service checks to obtain the instance’s health status. patroni gets it from Patroni (default 8008), pg_exporter gets it from PG Exporter (default 9630), or you can fill in a custom port.

  • Health check path (service.check_url):

    The URL path used by the service for HTTP health checks; / is used by default. PG Exporter and Patroni provide a variety of health checks that can be used to differentiate primary and replica traffic: /primary returns success only for the primary, /replica only for replicas, and /read-only returns success for any instance that serves read-only traffic (including the primary).

  • health check code (service.check_code):

    The HTTP status code expected by health checks; the default is 200.

  • HAProxy-specific configuration (service.haproxy):

    Config entries specific to the service provider software (HAProxy).
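The health check path semantics described above can be summarized as a small function (an illustrative model of the behavior, not actual Patroni or PG Exporter code):

```python
# Illustrative model of which roles each check_url path reports
# healthy (HTTP 200) for. Not the actual Patroni/PG Exporter logic.

def check_passes(path, role):
    """Return True if an instance with `role` passes the check at `path`."""
    if path == "/primary":
        return role == "primary"          # success only on the primary
    if path == "/replica":
        return role == "replica"          # success only on replicas
    if path == "/read-only":
        return role in ("primary", "replica")  # any readable instance
    return True                           # "/" : plain liveness check

assert check_passes("/primary", "primary") and not check_passes("/primary", "replica")
assert check_passes("/replica", "replica") and not check_passes("/replica", "primary")
assert check_passes("/read-only", "primary") and check_passes("/read-only", "replica")
```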

Service Implementation

Pigsty currently uses an HAProxy-based service implementation by default and also provides a sample implementation based on a Layer 4 load balancer (L4 VIP). For details, please refer to the section on access.
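As an illustration, a service definition like the primary service above could render into an HAProxy configuration along these lines (a hypothetical sketch: the listen block name, server names, and IP addresses are made up, and the template Pigsty actually generates may differ):

```
# Hypothetical HAProxy rendering of the pg-meta primary service
listen pg-meta-primary
    bind *:5433                          # src_ip / src_port
    mode tcp
    balance roundrobin
    option httpchk GET /primary          # check_url via check_method: http
    http-check expect status 200         # check_code
    # dst_port -> pgbouncer (6432), check_port -> patroni (8008)
    server pg-meta-1 10.10.10.10:6432 check port 8008
    server pg-meta-2 10.10.10.11:6432 check port 8008
```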

Last modified 2022-06-04: fii en docs batch 2 (61bf601)