3-node: trio
Three-node configuration template providing a standard High Availability (HA) architecture, tolerating failure of one out of three nodes.
The 3-node configuration template provides a standard HA architecture that tolerates the failure of one out of three nodes. It is the minimum specification for achieving true high availability: with three nodes, the DCS (etcd) can tolerate the failure of one node.
The INFRA, ETCD, and PGSQL core modules are each deployed across all three nodes, so any single node can fail without loss of service.
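With three etcd members, quorum is two out of three, so the consensus layer stays writable with any single member down. As a quick post-install check, here is a sketch assuming etcdctl can reach the members on the default client port 2379 (adjust auth/TLS options to your deployment):

etcdctl endpoint status -w table --endpoints=10.10.10.10:2379,10.10.10.11:2379,10.10.10.12:2379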
Overview
- Conf Name: trio
- Node Count: 3-node / pigsty/vagrant/spec/trio.rb
- Description: Three-node configuration template providing a standard High Availability (HA) architecture, tolerating failure of one out of three nodes.
- OS Distro: el8, el9, d12, u22, u24
- OS Arch: x86_64
- Related: safe
Usage: configure with -c trio:

./configure -c trio
Note: This is a 3-node template; after generating the configuration, you may need to change the IP addresses of the other two nodes.
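A minimal end-to-end sketch of that flow (vi stands in for any editor; install.yml is Pigsty's standard install playbook, run from the admin node):

./configure -c trio    # generate pigsty.yml from the trio template
vi pigsty.yml          # replace 10.10.10.11 / 10.10.10.12 with your other two node IPs
./install.yml          # run the install playbook on the admin node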
Content
Source: pigsty/conf/trio.yml
all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fence infra nodes
        docker_enabled: true       # install with ./docker.yml

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 } # etcd_seq required
        10.10.10.11: { etcd_seq: 2 } # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 } # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd     # mark etcd cluster name etcd
        etcd_safeguard: false  # safeguard against purging
        etcd_clean: true       # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.View ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0   # pigsty version string
    admin_ip: 10.10.10.10 # admin node ip address
    region: default   # upstream mirror region: default|china|europe
    node_tune: oltp   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env: # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # all_proxy:   # set your proxy here: e.g http://user:pass@proxy.xxx.com
    infra_portal: # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio       : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql # install upstream repo during repo bootstrap
    repo_remove: true              # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local       # install the local module in repo_upstream for all nodes
    node_repo_remove: true         # remove existing node repo for node managed by pigsty
    repo_packages: [               # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [         # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]
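Once installed, the optional L2 VIP (10.10.10.2) defined above follows the pg-meta primary, so one quick sanity check is to connect through it with the demo credentials from pg_users. A sketch, assuming the default port 5432 is reachable from your host and the default HBA rules admit password login:

psql postgres://dbuser_meta:DBUser.Meta@10.10.10.2:5432/meta -c 'SELECT pg_is_in_recovery()'  # returns f on the primary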