Node

Pigsty uses nodes for deployment. A node can be a bare metal server, a virtual machine, or even a pod.

You can manage more nodes with Pigsty and use them to deploy various databases or your own applications.

Nodes managed by Pigsty are adjusted by the nodes.yml playbook to the state described in Config: NODES, and node monitoring and log collection components are installed, so you can check node status and logs from the monitoring system.

Node Identity

Each node has identity parameters that are configured through <cluster>.hosts and <cluster>.vars.

There are two important node identity parameters: nodename and node_cluster, which will be used as the node's instance identity (ins) and cluster identity (cls) in the monitoring system. nodename and node_cluster are NOT REQUIRED since they have proper default values: the hostname and the constant nodes, respectively.

In addition, Pigsty uses the IP address as a unique node identifier: the inventory_hostname, reflected as the key in the <cluster>.hosts object. A node may have multiple interfaces and IP addresses, but you must explicitly designate one as the PRIMARY IP ADDRESS, which should be an intranet IP used for service access. It is not mandatory to use that same IP address to ssh from the meta node: you can use an ssh tunnel or jump server via Ansible connection parameters.
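For example, if a node is only reachable through a bastion host, you can keep its intranet IP as the inventory key while overriding the ssh target with standard Ansible connection variables. The following is a minimal sketch: ansible_host, ansible_port, and ansible_ssh_common_args are standard Ansible variables, and the hostnames and port shown here are placeholders, not values from Pigsty.

```yaml
node-test:
  hosts:
    10.10.10.11:                        # primary intranet IP, used as the node identifier
      ansible_host: node-1.example.com  # ssh to a different address (placeholder)
      ansible_port: 2222                # non-default ssh port (placeholder)
      # route ssh through a bastion instead of connecting directly (placeholder host)
      ansible_ssh_common_args: '-o ProxyJump=admin@bastion.example.com'
  vars:
    node_cluster: node-test
```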

| Name               | Type   | Level | Attribute | Description       |
|--------------------|--------|-------|-----------|-------------------|
| inventory_hostname | ip     | -     | REQUIRED  | Node IP           |
| nodename           | string | I     | Optional  | Node Name         |
| node_cluster       | string | C     | Optional  | Node Cluster Name |

The following cluster configuration declares a three-node cluster.

```yaml
node-test:
  hosts:
    10.10.10.11: { nodename: node-test-1 }
    10.10.10.12: { pg_hostname: true }   # Borrowed identity pg-test-2
    10.10.10.13: { }                     # Use the original hostname: node-3
  vars:
    node_cluster: node-test
```

| host        | node_cluster | nodename    | instance  |
|-------------|--------------|-------------|-----------|
| 10.10.10.11 | node-test    | node-test-1 | pg-test-1 |
| 10.10.10.12 | node-test    | pg-test-2   | pg-test-2 |
| 10.10.10.13 | node-test    | node-3      | pg-test-3 |

In the monitoring system, the time-series monitoring data are labeled as follows.

```
node_load1{cls="pg-meta", ins="pg-meta-1", ip="10.10.10.10", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-2", ip="10.10.10.12", job="nodes"}
node_load1{cls="pg-test", ins="pg-test-3", ip="10.10.10.13", job="nodes"}
```

Node Services

| Component     | Port | Description                                                 |
|---------------|------|-------------------------------------------------------------|
| Consul Agent  | 8500 | Distributed Configuration Management and Service Discovery  |
| Node Exporter | 9100 | Node Monitoring Metrics Exporter                            |
| Promtail      | 9080 | Collection of Postgres, Pgbouncer, Patroni logs (Optional)  |
| Consul DNS    | 8600 | DNS Service                                                 |
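To illustrate how these components relate to the labels shown earlier, here is a minimal, hand-written Prometheus scrape configuration sketch for the node_exporter on port 9100. Pigsty generates its own scrape targets, so treat this only as an illustration of the labeling scheme (cls, ins, ip, job), not as Pigsty's actual configuration.

```yaml
# prometheus.yml fragment -- an illustrative sketch, not Pigsty's generated configuration
scrape_configs:
  - job_name: nodes                        # matches the job="nodes" label shown above
    static_configs:
      - targets: [ '10.10.10.11:9100' ]    # node_exporter listens on port 9100
        labels: { cls: node-test, ins: node-test-1, ip: 10.10.10.11 }
```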

PGSQL Node

A PGSQL Node is a node with a PGSQL module installed.

Pigsty uses an exclusive deployment policy for PGSQL, which means the node's identity and the PGSQL instance's identity are interchangeable. The pg_hostname parameter is designed to assign the Postgres identity to its underlying node: pg_instance and pg_cluster will be assigned to the node's nodename and node_cluster.
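As a sketch of what this looks like in an inventory, assuming the usual PGSQL identity parameters pg_cluster, pg_seq, and pg_role, setting pg_hostname: true lets the node borrow its name from the Postgres instance defined on it. The cluster and values below are placeholders for illustration.

```yaml
# a hypothetical PGSQL cluster whose node borrows its Postgres identity
pg-test:
  hosts:
    10.10.10.12: { pg_seq: 2, pg_role: replica }   # PGSQL instance identity: pg-test-2
  vars:
    pg_cluster: pg-test
    pg_hostname: true    # borrow the Postgres identity: nodename becomes pg-test-2
```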

In addition to the default node services, the following services are available on PGSQL nodes.

| Component          | Port | Description                                                 |
|--------------------|------|-------------------------------------------------------------|
| Postgres           | 5432 | Pigsty CMDB                                                 |
| Pgbouncer          | 6432 | Pgbouncer Connection Pooling Service                        |
| Patroni            | 8008 | Patroni HA Component                                        |
| Consul             | 8500 | Distributed Configuration Management and Service Discovery  |
| Haproxy Primary    | 5433 | Primary connection pool: Read/Write Service                 |
| Haproxy Replica    | 5434 | Replica connection pool: Read-only Service                  |
| Haproxy Default    | 5436 | Primary Direct Connect Service                              |
| Haproxy Offline    | 5438 | Offline Direct Connect: Offline Read Service                |
| Haproxy Service    | 543x | Customized Services                                         |
| Haproxy Admin      | 9101 | Monitoring metrics and traffic management                   |
| PG Exporter        | 9630 | PG Monitoring Metrics Exporter                              |
| PGBouncer Exporter | 9631 | PGBouncer Monitoring Metrics Exporter                       |
| Node Exporter      | 9100 | Node Monitoring Metrics Exporter                            |
| Promtail           | 9080 | Collection of Postgres, Pgbouncer, Patroni logs (Optional)  |
| Consul DNS         | 8600 | DNS Service                                                 |
| vip-manager        | -    | Bind VIP to the primary                                     |
