ha/trio
A three-node standard high-availability template: any single server can fail without taking the service down.
Three nodes are the minimum footprint for true high availability. The ha/trio template uses the standard three-node HA architecture: the three core modules INFRA, ETCD, and PGSQL are each deployed across three nodes, so the deployment tolerates the loss of any one server.
Configuration Overview
- Config Name: ha/trio
- Node Count: 3 nodes
- Description: Three-node standard HA architecture; tolerates the failure of any single server
- OS Distros: el8, el9, el10, d12, d13, u22, u24
- Architectures: x86_64, aarch64
- Related Configs: ha/dual, ha/full, ha/safe
How to enable:
./configure -c ha/trio [-i <primary_ip>]
After the configuration is generated, replace the placeholder IPs 10.10.10.11 and 10.10.10.12 with the actual node IP addresses.
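For example, a minimal sketch of doing this with sed, assuming the generated inventory lives at pigsty.yml in the pigsty source directory and that the primary placeholder 10.10.10.10 was already rewritten via the -i option; the 192.168.1.x addresses below are purely illustrative:

```bash
# Hypothetical replacement of the two remaining placeholder IPs (adjust to your real nodes)
sed -i 's/10\.10\.10\.11/192.168.1.11/g; s/10\.10\.10\.12/192.168.1.12/g' pigsty.yml

# Verify that no placeholder IPs are left before running the deployment playbooks
grep -n '10\.10\.10\.1[12]' pigsty.yml || echo "placeholders replaced"
```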
Configuration Content
Source file: pigsty/conf/ha/trio.yml
---
#==============================================================#
# File      :   trio.yml
# Desc      :   Pigsty 3-node security enhance template
# Ctime     :   2020-05-23
# Mtime     :   2026-01-20
# Docs      :   https://pigsty.io/docs/conf/trio
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026 Ruohang Feng / Vonng (rh@vonng.com)
#==============================================================#

# 3 infra node, 3 etcd node, 3 pgsql node, and 1 minio node

all: # top level object

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fencing infra

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 } # etcd_seq required
        10.10.10.11: { etcd_seq: 2 } # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 } # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts: # pg-meta-3 is marked as offline readable replica
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars: # cluster level parameters
        pg_cluster: pg-meta
        pg_users: # https://pigsty.io/docs/pgsql/config/user
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_hba_rules: # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab: # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0 # pigsty version string
    admin_ip: 10.10.10.10 # admin node ip address
    region: default # upstream mirror region: default|china|europe
    node_tune: oltp # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    proxy_env: # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # all_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
    infra_portal: # infra services exposed via portal
      home  : { domain: i.pigsty } # default domain name
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18 # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    node_etc_hosts:
      - '${admin_ip} i.pigsty'   # static dns record that point to repo node
      - '${admin_ip} sss.pigsty' # static dns record that point to minio
    pgbackrest_method: minio # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local: # default pgbackrest repo with local posix fs
        path: /pg/backup # local backup directory, `/pg/backup` by default
        retention_full_type: count # retention full backups by count
        retention_full: 2 # keep 2, at most 3 full backup when using local fs repo
      minio: # optional minio repo for pgbackrest
        type: s3 # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1 # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql # minio bucket name, `pgsql` by default
        s3_key: pgbackrest # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup # minio user secret key for pgbackrest
        s3_uri_style: path # use path style uri for minio rather than host style
        path: /pgbackrest # minio backup path, default is `/pgbackrest`
        storage_port: 9000 # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default
        block: y # Enable block incremental backup
        bundle: y # bundle small files into a single file
        bundle_limit: 20MiB # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest # AES encryption password, default is 'pgBackRest'
        retention_full_type: time # retention full backup by time on minio repo
        retention_full: 14 # keep full backup for last 14 days

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...
Configuration Explanation
The ha/trio template is Pigsty's standard high-availability configuration, providing genuine automatic failure recovery.
Architecture:
- Three-node INFRA: Prometheus / Grafana / Nginx deployed across all three nodes
- Three-node ETCD: DCS majority quorum, tolerating the failure of a single node
- Three-node PostgreSQL: one primary and two replicas with automatic failover (a quick status check is sketched after this list)
- Single-node MinIO: can be scaled out to multiple nodes as needed
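Once deployed, the three-node quorum and replication topology can be inspected directly. The commands below are a sketch rather than part of this template: the etcd root credentials reference etcd_root_password above, the Patroni config path /etc/patroni/patroni.yml is Pigsty's usual location but may differ, and depending on your etcd security settings the etcdctl call may also need TLS certificate flags (--cacert/--cert/--key).

```bash
# Expect three etcd members, one per node (10.10.10.10 / 11 / 12)
etcdctl --endpoints=https://10.10.10.10:2379 --user root:Etcd.Root member list

# Expect one leader and two replicas in the pg-meta cluster
patronictl -c /etc/patroni/patroni.yml list pg-meta
```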
High-availability guarantees:
- The three-node ETCD cluster tolerates the loss of one node while keeping its majority quorum
- When the PostgreSQL primary fails, Patroni automatically elects and promotes a new primary
- The L2 VIP follows the primary, so applications need no changes to their connection settings (a switchover drill is sketched after this list)
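A planned switchover is one way to rehearse these guarantees without a real failure. The sketch below assumes SSH access from the admin node and the usual Patroni config path /etc/patroni/patroni.yml; eth1 and 10.10.10.2 are the pg_vip_interface and pg_vip_address defined in this template.

```bash
# Check which node currently holds the cluster VIP
for h in 10.10.10.10 10.10.10.11 10.10.10.12; do
  ssh "$h" "ip addr show eth1 | grep -q '10.10.10.2/'" && echo "$h holds the VIP"
done

# Perform an interactive planned switchover, then re-run the loop above:
# the VIP should now sit on the new primary
patronictl -c /etc/patroni/patroni.yml switchover pg-meta
```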
Use cases:
- Minimal high-availability deployment for production environments
- Critical workloads that require automatic failover
- A foundation for larger-scale deployments
Scaling suggestions: