demo/debian
Debian/Ubuntu-specific configuration template
The demo/debian configuration template is optimized for the Debian and Ubuntu distributions.
Configuration Overview
- Config Name: demo/debian
- Node Count: single node
- Description: Debian/Ubuntu-specific configuration template
- Supported OS: d12, d13, u22, u24
- Supported Arch: x86_64, aarch64
- Related Configs: meta, demo/el
How to enable:
./configure -c demo/debian [-i <primary_ip>]
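For orientation, here is a minimal end-to-end sketch of the usual Pigsty workflow. It assumes that configure writes the generated inventory to pigsty.yml and that the main install playbook is named install.yml (verify the playbook name against your Pigsty release); replace 10.10.10.10 with your primary node's IP.
# generate pigsty.yml from the demo/debian template, using 10.10.10.10 as the primary IP
./configure -c demo/debian -i 10.10.10.10
# review the generated inventory before applying anything
less pigsty.yml
# apply the configuration with the main install playbook (name assumed; install.yml in recent releases)
./install.yml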
Configuration Content
Source file: pigsty/conf/demo/debian.yml
---
#==============================================================#
# File : debian.yml
# Desc : Default parameters for Debian/Ubuntu in Pigsty
# Ctime : 2020-05-22
# Mtime : 2025-12-27
# Docs : https://doc.pgsty.com/config
# License : Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright : 2018-2026 Ruohang Feng / Vonng (rh@vonng.com)
#==============================================================#
#==============================================================#
# Sandbox (4-node) #
#==============================================================#
# admin user : vagrant (nopass ssh & sudo already set) #
# 1. meta : 10.10.10.10 (2 Core | 4GB) pg-meta #
# 2. node-1 : 10.10.10.11 (1 Core | 1GB) pg-test-1 #
# 3. node-2 : 10.10.10.12 (1 Core | 1GB) pg-test-2 #
# 4. node-3 : 10.10.10.13 (1 Core | 1GB) pg-test-3 #
# (replace these ip if your 4-node env have different ip addr) #
# VIP 2: (l2 vip is available inside same LAN ) #
# pg-meta ---> 10.10.10.2 ---> 10.10.10.10 #
# pg-test ---> 10.10.10.3 ---> 10.10.10.1{1,2,3} #
#==============================================================#
all:
##################################################################
# CLUSTERS #
##################################################################
# meta nodes, nodes, pgsql, redis clusters are defined as
# k:v pairs inside `all.children`, where the key is the cluster name
# and the value is the cluster definition, consisting of two parts:
# `hosts`: cluster members ip and instance level variables
# `vars` : cluster level variables
##################################################################
children: # groups definition
# infra cluster for proxy, monitor, alert, etc..
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
# etcd cluster for ha postgres
etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
# minio cluster, s3 compatible object storage
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }
#----------------------------------#
# pgsql cluster: pg-meta (CMDB) #
#----------------------------------#
pg-meta:
hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
vars:
pg_cluster: pg-meta
# define business databases here: https://doc.pgsty.com/pgsql/db
pg_databases: # define business databases on this cluster, array of database definition
- name: meta # REQUIRED, `name` is the only mandatory field of a database definition
baseline: cmdb.sql # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
schemas: [pigsty] # optional, additional schemas to be created, array of schema names
extensions: # optional, additional extensions to be installed: array of `{name[,schema]}`
- { name: vector } # install pgvector extension on this database by default
comment: pigsty meta database # optional, comment string for this database
#pgbouncer: true # optional, add this database to pgbouncer database list? true by default
#owner: postgres # optional, database owner, postgres by default
#template: template1 # optional, which template to use, template1 by default
#encoding: UTF8 # optional, database encoding, UTF8 by default. (MUST same as template database)
#locale: C # optional, database locale, C by default. (MUST same as template database)
#lc_collate: C # optional, database collate, C by default. (MUST same as template database)
#lc_ctype: C # optional, database ctype, C by default. (MUST same as template database)
#tablespace: pg_default # optional, default tablespace, 'pg_default' by default.
#allowconn: true # optional, allow connection, true by default. false will disable connect at all
#revokeconn: false # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
#register_datasource: true # optional, register this database to grafana datasources? true by default
#connlimit: -1 # optional, database connection limit, default -1 disable limit
#pool_auth_user: dbuser_meta # optional, all connection to this pgbouncer database will be authenticated by this user
#pool_mode: transaction # optional, pgbouncer pool mode at database level, default transaction
#pool_size: 64 # optional, pgbouncer pool size at database level, default 64
#pool_size_reserve: 32 # optional, pgbouncer pool size reserve at database level, default 32
#pool_size_min: 0 # optional, pgbouncer pool size min at database level, default 0
#pool_max_db_conn: 100 # optional, max database connections at database level, default 100
#- { name: grafana ,owner: dbuser_grafana ,revokeconn: true ,comment: grafana primary database }
#- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
#- { name: kong ,owner: dbuser_kong ,revokeconn: true ,comment: kong the api gateway database }
#- { name: gitea ,owner: dbuser_gitea ,revokeconn: true ,comment: gitea meta database }
#- { name: wiki ,owner: dbuser_wiki ,revokeconn: true ,comment: wiki meta database }
# define business users here: https://doc.pgsty.com/pgsql/user
pg_users: # define business users/roles on this cluster, array of user definition
- name: dbuser_meta # REQUIRED, `name` is the only mandatory field of a user definition
password: DBUser.Meta # optional, password, can be a scram-sha-256 hash string or plain text
login: true # optional, can log in, true by default (new biz ROLE should be false)
superuser: false # optional, is superuser? false by default
createdb: false # optional, can create database? false by default
createrole: false # optional, can create role? false by default
inherit: true # optional, can this role use inherited privileges? true by default
replication: false # optional, can this role do replication? false by default
bypassrls: false # optional, can this role bypass row level security? false by default
pgbouncer: true # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
connlimit: -1 # optional, user connection limit, default -1 disable limit
expire_in: 3650 # optional, now + n days when this role is expired (OVERWRITE expire_at)
expire_at: '2030-12-31' # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
comment: pigsty admin user # optional, comment string for this user/role
roles: [dbrole_admin] # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
parameters: {} # optional, role level parameters with `ALTER ROLE SET`
pool_mode: transaction # optional, pgbouncer pool mode at user level, transaction by default
pool_connlimit: -1 # optional, max database connections at user level, default -1 disable limit
- {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
#- {name: dbuser_grafana ,password: DBUser.Grafana ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for grafana database }
#- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database }
#- {name: dbuser_gitea ,password: DBUser.Gitea ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for gitea service }
#- {name: dbuser_wiki ,password: DBUser.Wiki ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for wiki.js service }
# define business service here: https://doc.pgsty.com/pgsql/service
pg_services: # extra services in addition to pg_default_services, array of service definition
# standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
- name: standby # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
port: 5435 # required, service exposed port (work as kubernetes service node port mode)
ip: "*" # optional, service bind ip address, `*` for all ip by default
selector: "[]" # required, service member selector, use JMESPath to filter inventory
dest: default # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
check: /sync # optional, health check url path, / by default
backup: "[? pg_role == `primary`]" # backup server selector
maxconn: 3000 # optional, max allowed front-end connection
balance: roundrobin # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
# define pg extensions: https://doc.pgsty.com/pgsql/extension
pg_libs: 'pg_stat_statements, auto_explain' # add timescaledb to shared_preload_libraries
#pg_extensions: [] # extensions to be installed on this cluster
# define HBA rules here: https://doc.pgsty.com/pgsql/hba
pg_hba_rules:
- {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
pg_vip_enabled: true
pg_vip_address: 10.10.10.2/24
pg_vip_interface: eth1
node_crontab: # make a full backup at 1am every day
- '00 01 * * * postgres /pg/bin/pg-backup full'
#----------------------------------#
# pgsql cluster: pg-test (3 nodes) #
#----------------------------------#
# pg-test ---> 10.10.10.3 ---> 10.10.10.1{1,2,3}
pg-test: # define the new 3-node cluster pg-test
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary } # primary instance, leader of cluster
10.10.10.12: { pg_seq: 2, pg_role: replica } # replica instance, follower of leader
10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
vars:
pg_cluster: pg-test # define pgsql cluster name
pg_users: [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
pg_databases: [{ name: test }] # create a database and user named 'test'
node_tune: tiny
pg_conf: tiny.yml
pg_vip_enabled: true
pg_vip_address: 10.10.10.3/24
pg_vip_interface: eth1
node_crontab: # make a full backup at 1am on Monday, and incremental backups on the other days
- '00 01 * * 1 postgres /pg/bin/pg-backup full'
- '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'
#----------------------------------#
# redis ms, sentinel, native cluster
#----------------------------------#
redis-ms: # redis classic primary & replica
hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }
redis-meta: # redis sentinel x 3
hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
vars:
redis_cluster: redis-meta
redis_password: 'redis.meta'
redis_mode: sentinel
redis_max_memory: 16MB
redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
- { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }
redis-test: # redis native cluster: 3m x 3s
hosts:
10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }
####################################################################
# VARS #
####################################################################
vars: # global variables
#================================================================#
# VARS: INFRA #
#================================================================#
#-----------------------------------------------------------------
# META
#-----------------------------------------------------------------
version: v4.0.0 # pigsty version string
admin_ip: 10.10.10.10 # admin node ip address
region: default # upstream mirror region: default,china,europe
language: en # default language: en, zh
proxy_env: # global proxy env when downloading packages
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
# http_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
# https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
# all_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
#-----------------------------------------------------------------
# CA
#-----------------------------------------------------------------
ca_create: true # create ca if not exists? or just abort
ca_cn: pigsty-ca # ca common name, fixed as pigsty-ca
cert_validity: 7300d # cert validity, 20 years by default
#-----------------------------------------------------------------
# INFRA_IDENTITY
#-----------------------------------------------------------------
#infra_seq: 1 # infra node identity, explicitly required
infra_portal: # infra services exposed via portal
home : { domain: i.pigsty } # default domain name
infra_data: /data/infra # default data path for infrastructure data
#-----------------------------------------------------------------
# REPO
#-----------------------------------------------------------------
repo_enabled: true # create a yum repo on this infra node?
repo_home: /www # repo home dir, `/www` by default
repo_name: pigsty # repo name, pigsty by default
repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
repo_remove: true # remove existing upstream repo
repo_modules: infra,node,pgsql # which repo modules are installed in repo_upstream
repo_upstream: # where to download
- { name: pigsty-local ,description: 'Pigsty Local' ,module: local ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty ./' }}
- { name: pigsty-pgsql ,description: 'Pigsty PgSQL' ,module: pgsql ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/pgsql/${distro_codename} ${distro_codename} main', china: 'https://repo.pigsty.cc/apt/pgsql/${distro_codename} ${distro_codename} main' }}
- { name: pigsty-infra ,description: 'Pigsty Infra' ,module: infra ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/infra/ generic main' ,china: 'https://repo.pigsty.cc/apt/infra/ generic main' }}
- { name: nginx ,description: 'Nginx' ,module: infra ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://nginx.org/packages/${distro_name} ${distro_codename} nginx' }}
- { name: docker-ce ,description: 'Docker' ,module: infra ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable' ,china: 'https://mirrors.aliyun.com/docker-ce/linux/${distro_name} ${distro_codename} stable' }}
- { name: base ,description: 'Debian Basic' ,module: node ,releases: [11,12,13 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename} main non-free-firmware' ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename} main non-free-firmware' }}
- { name: updates ,description: 'Debian Updates' ,module: node ,releases: [11,12,13 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename}-updates main non-free-firmware' ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename}-updates main non-free-firmware' }}
- { name: security ,description: 'Debian Security' ,module: node ,releases: [11,12,13 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://security.debian.org/debian-security ${distro_codename}-security main non-free-firmware' ,china: 'https://mirrors.aliyun.com/debian-security/ ${distro_codename}-security main non-free-firmware' }}
- { name: base ,description: 'Ubuntu Basic' ,module: node ,releases: [ 20,22,24] ,arch: [x86_64 ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename} main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename} main restricted universe multiverse' }}
- { name: updates ,description: 'Ubuntu Updates' ,module: node ,releases: [ 20,22,24] ,arch: [x86_64 ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-updates main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-updates main restricted universe multiverse' }}
- { name: backports ,description: 'Ubuntu Backports' ,module: node ,releases: [ 20,22,24] ,arch: [x86_64 ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' }}
- { name: security ,description: 'Ubuntu Security' ,module: node ,releases: [ 20,22,24] ,arch: [x86_64 ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-security main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-security main restricted universe multiverse' }}
- { name: base ,description: 'Ubuntu Basic' ,module: node ,releases: [ 20,22,24] ,arch: [ aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename} main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename} main restricted universe multiverse' }}
- { name: updates ,description: 'Ubuntu Updates' ,module: node ,releases: [ 20,22,24] ,arch: [ aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-updates main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-updates main restricted universe multiverse' }}
- { name: backports ,description: 'Ubuntu Backports' ,module: node ,releases: [ 20,22,24] ,arch: [ aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' }}
- { name: security ,description: 'Ubuntu Security' ,module: node ,releases: [ 20,22,24] ,arch: [ aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-security main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-security main restricted universe multiverse' }}
- { name: pgdg ,description: 'PGDG' ,module: pgsql ,releases: [11,12,13, 22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg main' }}
- { name: pgdg-beta ,description: 'PGDG Beta' ,module: beta ,releases: [11,12,13, 22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg-testing main 19' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg-testing main 19' }}
- { name: timescaledb ,description: 'TimescaleDB' ,module: extra ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/${distro_name}/ ${distro_codename} main' }}
- { name: citus ,description: 'Citus' ,module: extra ,releases: [11,12, 20,22 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/citusdata/community/${distro_name}/ ${distro_codename} main' } }
- { name: percona ,description: 'Percona TDE' ,module: percona ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/percona ${distro_codename} main' ,china: 'https://repo.pigsty.cc/apt/percona ${distro_codename} main' ,origin: 'http://repo.percona.com/ppg-18.1/apt ${distro_codename} main' }}
- { name: wiltondb ,description: 'WiltonDB' ,module: mssql ,releases: [ 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/mssql/ ${distro_codename} main' ,china: 'https://repo.pigsty.cc/apt/mssql/ ${distro_codename} main' ,origin: 'https://ppa.launchpadcontent.net/wiltondb/wiltondb/ubuntu/ ${distro_codename} main' }}
- { name: groonga ,description: 'Groonga Debian' ,module: groonga ,releases: [11,12,13 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/debian/ ${distro_codename} main' }}
- { name: groonga ,description: 'Groonga Ubuntu' ,module: groonga ,releases: [ 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/groonga/ppa/ubuntu/ ${distro_codename} main' }}
- { name: mysql ,description: 'MySQL' ,module: mysql ,releases: [11,12, 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools' }}
- { name: mongo ,description: 'MongoDB' ,module: mongo ,releases: [11,12, 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse', china: 'https://mirrors.aliyun.com/mongodb/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse' }}
- { name: redis ,description: 'Redis' ,module: redis ,releases: [11,12, 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.redis.io/deb ${distro_codename} main' }}
- { name: llvm ,description: 'LLVM' ,module: llvm ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.llvm.org/${distro_codename}/ llvm-toolchain-${distro_codename} main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/llvm-apt/${distro_codename}/ llvm-toolchain-${distro_codename} main' }}
- { name: haproxyd ,description: 'Haproxy Debian' ,module: haproxy ,releases: [11,12 ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://haproxy.debian.net/ ${distro_codename}-backports-3.1 main' }}
- { name: haproxyu ,description: 'Haproxy Ubuntu' ,module: haproxy ,releases: [ 20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/vbernat/haproxy-3.1/ubuntu/ ${distro_codename} main' }}
- { name: grafana ,description: 'Grafana' ,module: grafana ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.grafana.com stable main' ,china: 'https://mirrors.aliyun.com/grafana/apt/ stable main' }}
- { name: kubernetes ,description: 'Kubernetes' ,module: kube ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /' }}
- { name: gitlab-ee ,description: 'Gitlab EE' ,module: gitlab ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/${distro_name}/ ${distro_codename} main' }}
- { name: gitlab-ce ,description: 'Gitlab CE' ,module: gitlab ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/${distro_name}/ ${distro_codename} main' }}
- { name: clickhouse ,description: 'ClickHouse' ,module: click ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/deb/ stable main', china: 'https://mirrors.aliyun.com/clickhouse/deb/ stable main' }}
repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
repo_extra_packages: [ pgsql-main ]
repo_url_packages: []
#-----------------------------------------------------------------
# INFRA_PACKAGE
#-----------------------------------------------------------------
infra_packages: # packages to be installed on infra nodes
- grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
- node-exporter,blackbox-exporter,nginx-exporter,pg-exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx
infra_packages_pip: '' # pip installed packages for infra nodes
#-----------------------------------------------------------------
# NGINX
#-----------------------------------------------------------------
nginx_enabled: true # enable nginx on this infra node?
nginx_clean: false # clean existing nginx config during init?
nginx_exporter_enabled: true # enable nginx_exporter on this infra node?
nginx_exporter_port: 9113 # nginx_exporter listen port, 9113 by default
nginx_sslmode: enable # nginx ssl mode? disable,enable,enforce
nginx_cert_validity: 397d # nginx self-signed cert validity, 397d by default
nginx_home: /www # nginx content dir, `/www` by default (soft link to nginx_data)
nginx_data: /data/nginx # nginx actual data dir, /data/nginx by default
nginx_users: { admin : pigsty } # nginx basic auth users: name and pass dict
nginx_port: 80 # nginx listen port, 80 by default
nginx_ssl_port: 443 # nginx ssl listen port, 443 by default
certbot_sign: false # sign nginx cert with certbot during setup?
certbot_email: your@email.com # certbot email address, used for free ssl
certbot_options: '' # certbot extra options
#-----------------------------------------------------------------
# DNS
#-----------------------------------------------------------------
dns_enabled: true # setup dnsmasq on this infra node?
dns_port: 53 # dns server listen port, 53 by default
dns_records: # dynamic dns records resolved by dnsmasq
- "${admin_ip} i.pigsty"
- "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"
#-----------------------------------------------------------------
# VICTORIA
#-----------------------------------------------------------------
vmetrics_enabled: true # enable victoria-metrics on this infra node?
vmetrics_clean: false # whether clean existing victoria metrics data during init?
vmetrics_port: 8428 # victoria-metrics listen port, 8428 by default
vmetrics_scrape_interval: 10s # victoria global scrape interval, 10s by default
vmetrics_scrape_timeout: 8s # victoria global scrape timeout, 8s by default
vmetrics_options: >-
-retentionPeriod=15d
-promscrape.fileSDCheckInterval=5s
vlogs_enabled: true # enable victoria-logs on this infra node?
vlogs_clean: false # clean victoria-logs data during init?
vlogs_port: 9428 # victoria-logs listen port, 9428 by default
vlogs_options: >-
-retentionPeriod=15d
-retention.maxDiskSpaceUsageBytes=50GiB
-insert.maxLineSizeBytes=1MB
-search.maxQueryDuration=120s
vtraces_enabled: true # enable victoria-traces on this infra node?
vtraces_clean: false # clean victoria-traces data during init?
vtraces_port: 10428 # victoria-traces listen port, 10428 by default
vtraces_options: >-
-retentionPeriod=15d
-retention.maxDiskSpaceUsageBytes=50GiB
vmalert_enabled: true # enable vmalert on this infra node?
vmalert_port: 8880 # vmalert listen port, 8880 by default
vmalert_options: '' # vmalert extra server options
#-----------------------------------------------------------------
# PROMETHEUS
#-----------------------------------------------------------------
blackbox_enabled: true # setup blackbox_exporter on this infra node?
blackbox_port: 9115 # blackbox_exporter listen port, 9115 by default
blackbox_options: '' # blackbox_exporter extra server options
alertmanager_enabled: true # setup alertmanager on this infra node?
alertmanager_port: 9059 # alertmanager listen port, 9059 by default
alertmanager_options: '' # alertmanager extra server options
exporter_metrics_path: /metrics # exporter metric path, `/metrics` by default
#-----------------------------------------------------------------
# GRAFANA
#-----------------------------------------------------------------
grafana_enabled: true # enable grafana on this infra node?
grafana_port: 3000 # default listen port for grafana
grafana_clean: true # clean grafana data during init?
grafana_admin_username: admin # grafana admin username, `admin` by default
grafana_admin_password: pigsty # grafana admin password, `pigsty` by default
grafana_auth_proxy: false # enable grafana auth proxy?
grafana_pgurl: '' # external postgres database url for grafana if given
grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource
#================================================================#
# VARS: NODE #
#================================================================#
#-----------------------------------------------------------------
# NODE_IDENTITY
#-----------------------------------------------------------------
#nodename: # [INSTANCE] # node instance identity, use hostname if missing, optional
node_cluster: nodes # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
nodename_overwrite: true # overwrite node's hostname with nodename?
nodename_exchange: false # exchange nodename among play hosts?
node_id_from_pg: true # use postgres identity as node identity if applicable?
#-----------------------------------------------------------------
# NODE_DNS
#-----------------------------------------------------------------
node_write_etc_hosts: true # modify `/etc/hosts` on target node?
node_default_etc_hosts: # static dns records in `/etc/hosts`
- "${admin_ip} i.pigsty"
node_etc_hosts: [] # extra static dns records in `/etc/hosts`
node_dns_method: add # how to handle dns servers: add,none,overwrite
node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
node_dns_options: # dns resolv options in `/etc/resolv.conf`
- options single-request-reopen timeout:1
#-----------------------------------------------------------------
# NODE_PACKAGE
#-----------------------------------------------------------------
node_repo_modules: local # upstream repo to be added on node, local by default
node_repo_remove: true # remove existing repo on node?
node_packages: [openssh-server] # packages to be installed current nodes with latest version
node_default_packages: # default packages to be installed on all nodes
- lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
- python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
- zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,node-exporter,openssh-server,openssh-client,vector
#-----------------------------------------------------------------
# NODE_SEC
#-----------------------------------------------------------------
node_selinux_mode: permissive # set selinux mode: enforcing,permissive,disabled
node_firewall_mode: zone # firewall mode: off, none, zone, zone by default
node_firewall_intranet: # which intranet cidr considered as internal network
- 10.0.0.0/8
- 192.168.0.0/16
- 172.16.0.0/12
node_firewall_public_port: # expose these ports to public network in (zone, strict) mode
- 22 # enable ssh access
- 80 # enable http access
- 443 # enable https access
- 5432 # enable postgresql access (think twice before exposing it!)
#-----------------------------------------------------------------
# NODE_TUNE
#-----------------------------------------------------------------
node_disable_numa: false # disable node numa, reboot required
node_disable_swap: false # disable node swap, use with caution
node_static_network: true # preserve dns resolver settings after reboot
node_disk_prefetch: false # setup disk prefetch on HDD to increase performance
node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
node_hugepage_count: 0 # number of 2MB hugepage, take precedence over ratio
node_hugepage_ratio: 0 # node mem hugepage ratio, 0 disable it by default
node_overcommit_ratio: 0 # node mem overcommit ratio, 0 disable it by default
node_tune: oltp # node tuned profile: none,oltp,olap,crit,tiny
node_sysctl_params: { } # sysctl parameters in k:v format in addition to tuned
#-----------------------------------------------------------------
# NODE_ADMIN
#-----------------------------------------------------------------
node_data: /data # node main data directory, `/data` by default
node_admin_enabled: true # create an admin user on target node?
node_admin_uid: 88 # uid and gid for node admin user
node_admin_username: dba # name of node admin user, `dba` by default
node_admin_sudo: nopass # admin sudo privilege, all,nopass. nopass by default
node_admin_ssh_exchange: true # exchange admin ssh key among node cluster
node_admin_pk_current: true # add current user's ssh pk to admin authorized_keys
node_admin_pk_list: [] # ssh public keys to be added to admin user
node_aliases: {} # extra shell aliases to be added, k:v dict
#-----------------------------------------------------------------
# NODE_TIME
#-----------------------------------------------------------------
node_timezone: '' # setup node timezone, empty string to skip
node_ntp_enabled: true # enable chronyd time sync service?
node_ntp_servers: # ntp servers in `/etc/chrony.conf`
- pool pool.ntp.org iburst
node_crontab_overwrite: true # overwrite or append to `/etc/crontab`?
node_crontab: [ ] # crontab entries in `/etc/crontab`
#-----------------------------------------------------------------
# NODE_VIP
#-----------------------------------------------------------------
vip_enabled: false # enable vip on this node cluster?
# vip_address: [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
# vip_vrid: [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
vip_role: backup # optional, `master|backup`, backup by default, use as init role
vip_preempt: false # optional, `true/false`, false by default, enable vip preemption
vip_interface: eth0 # node vip network interface to listen, `eth0` by default
vip_dns_suffix: '' # node vip dns name suffix, empty string by default
vip_exporter_port: 9650 # keepalived exporter listen port, 9650 by default
#-----------------------------------------------------------------
# HAPROXY
#-----------------------------------------------------------------
haproxy_enabled: true # enable haproxy on this node?
haproxy_clean: false # cleanup all existing haproxy config?
haproxy_reload: true # reload haproxy after config?
haproxy_auth_enabled: true # enable authentication for haproxy admin page
haproxy_admin_username: admin # haproxy admin username, `admin` by default
haproxy_admin_password: pigsty # haproxy admin password, `pigsty` by default
haproxy_exporter_port: 9101 # haproxy admin/exporter port, 9101 by default
haproxy_client_timeout: 24h # client side connection timeout, 24h by default
haproxy_server_timeout: 24h # server side connection timeout, 24h by default
haproxy_services: [] # list of haproxy service to be exposed on node
#-----------------------------------------------------------------
# NODE_EXPORTER
#-----------------------------------------------------------------
node_exporter_enabled: true # setup node_exporter on this node?
node_exporter_port: 9100 # node exporter listen port, 9100 by default
node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'
#-----------------------------------------------------------------
# VECTOR
#-----------------------------------------------------------------
vector_enabled: true # enable vector log collector?
vector_clean: false # purge vector data dir during init?
vector_data: /data/vector # vector data dir, /data/vector by default
vector_port: 9598 # vector metrics port, 9598 by default
vector_read_from: beginning # vector read from beginning or end
vector_log_endpoint: [ infra ] # if defined, sending vector log to this endpoint.
#================================================================#
# VARS: DOCKER #
#================================================================#
docker_enabled: false # enable docker on this node?
docker_data: /var/lib/docker # docker data directory, /var/lib/docker by default
docker_storage_driver: overlay2 # docker storage driver, can be zfs, btrfs
docker_cgroups_driver: systemd # docker cgroup fs driver: cgroupfs,systemd
docker_registry_mirrors: [] # docker registry mirror list
docker_exporter_port: 9323 # docker metrics exporter port, 9323 by default
docker_image: [] # docker image to be pulled after bootstrap
docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern
#================================================================#
# VARS: ETCD #
#================================================================#
#etcd_seq: 1 # etcd instance identifier, explicitly required
etcd_cluster: etcd # etcd cluster & group name, etcd by default
etcd_safeguard: false # prevent purging running etcd instance?
etcd_clean: true # purging existing etcd during initialization?
etcd_data: /data/etcd # etcd data directory, /data/etcd by default
etcd_port: 2379 # etcd client port, 2379 by default
etcd_peer_port: 2380 # etcd peer port, 2380 by default
etcd_init: new # etcd initial cluster state, new or existing
etcd_election_timeout: 1000 # etcd election timeout, 1000ms by default
etcd_heartbeat_interval: 100 # etcd heartbeat interval, 100ms by default
etcd_root_password: Etcd.Root # etcd root password for RBAC, change it!
#================================================================#
# VARS: MINIO #
#================================================================#
#minio_seq: 1 # minio instance identifier, REQUIRED
minio_cluster: minio # minio cluster identifier, REQUIRED
minio_clean: false # cleanup minio during init?, false by default
minio_user: minio # minio os user, `minio` by default
minio_https: true # use https for minio, true by default
minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
minio_data: '/data/minio' # minio data dir(s), use {x...y} to specify multi drivers
#minio_volumes: # minio data volumes, override defaults if specified
minio_domain: sss.pigsty # minio external domain name, `sss.pigsty` by default
minio_port: 9000 # minio service port, 9000 by default
minio_admin_port: 9001 # minio console port, 9001 by default
minio_access_key: minioadmin # root access key, `minioadmin` by default
minio_secret_key: S3User.MinIO # root secret key, `S3User.MinIO` by default
minio_extra_vars: '' # extra environment variables
minio_provision: true # run minio provisioning tasks?
minio_alias: sss # alias name for local minio deployment
#minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
minio_buckets: # list of minio bucket to be created
- { name: pgsql }
- { name: meta ,versioning: true }
- { name: data }
minio_users: # list of minio user to be created
- { access_key: pgbackrest ,secret_key: S3User.Backup ,policy: pgsql }
- { access_key: s3user_meta ,secret_key: S3User.Meta ,policy: meta }
- { access_key: s3user_data ,secret_key: S3User.Data ,policy: data }
#================================================================#
# VARS: REDIS #
#================================================================#
#redis_cluster: <CLUSTER> # redis cluster name, required identity parameter
#redis_node: 1 <NODE> # redis node sequence number, node int id required
#redis_instances: {} <NODE> # redis instances definition on this redis node
redis_fs_main: /data # redis main data mountpoint, `/data` by default
redis_exporter_enabled: true # install redis exporter on redis nodes?
redis_exporter_port: 9121 # redis exporter listen port, 9121 by default
redis_exporter_options: '' # cli args and extra options for redis exporter
redis_safeguard: false # prevent purging running redis instance?
redis_clean: true # purging existing redis during init?
redis_rmdata: true # remove redis data when purging redis server?
redis_mode: standalone # redis mode: standalone,cluster,sentinel
redis_conf: redis.conf # redis config template path, except sentinel
redis_bind_address: '0.0.0.0' # redis bind address, empty string will use host ip
redis_max_memory: 1GB # max memory used by each redis instance
redis_mem_policy: allkeys-lru # redis memory eviction policy
redis_password: '' # redis password, empty string will disable password
redis_rdb_save: ['1200 1'] # redis rdb save directives, disable with empty list
redis_aof_enabled: false # enable redis append only file?
redis_rename_commands: {} # rename redis dangerous commands
redis_cluster_replicas: 1 # replica number for one master in redis cluster
redis_sentinel_monitor: [] # sentinel master list, works on sentinel cluster only
#================================================================#
# VARS: PGSQL #
#================================================================#
#-----------------------------------------------------------------
# PG_IDENTITY
#-----------------------------------------------------------------
pg_mode: pgsql #CLUSTER # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
# pg_cluster: #CLUSTER # pgsql cluster name, required identity parameter
# pg_seq: 0 #INSTANCE # pgsql instance seq number, required identity parameter
# pg_role: replica #INSTANCE # pgsql role, required, could be primary,replica,offline
# pg_instances: {} #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
# pg_upstream: #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
# pg_shard: #CLUSTER # pgsql shard name, optional identity for sharding clusters
# pg_group: 0 #CLUSTER # pgsql shard index number, optional identity for sharding clusters
# gp_role: master #CLUSTER # greenplum role of this cluster, could be master or segment
pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance
#-----------------------------------------------------------------
# PG_BUSINESS
#-----------------------------------------------------------------
# postgres business object definition, overwrite in group vars
pg_users: [] # postgres business users
pg_databases: [] # postgres business databases
pg_services: [] # postgres business services
pg_hba_rules: [] # business hba rules for postgres
pgb_hba_rules: [] # business hba rules for pgbouncer
# global credentials, overwrite in global vars
pg_dbsu_password: '' # dbsu password, empty string means no dbsu password by default
pg_replication_username: replicator
pg_replication_password: DBUser.Replicator
pg_admin_username: dbuser_dba
pg_admin_password: DBUser.DBA
pg_monitor_username: dbuser_monitor
pg_monitor_password: DBUser.Monitor
#-----------------------------------------------------------------
# PG_INSTALL
#-----------------------------------------------------------------
pg_dbsu: postgres # os dbsu name, postgres by default, better not change it
pg_dbsu_uid: 543 # os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo: limit # dbsu sudo privilege, none,limit,all,nopass. limit by default
pg_dbsu_home: /var/lib/pgsql # postgresql home directory, `/var/lib/pgsql` by default
pg_dbsu_ssh_exchange: true # exchange postgres dbsu ssh key among same pgsql cluster
pg_version: 18 # postgres major version to be installed, 18 by default
pg_bin_dir: /usr/pgsql/bin # postgres binary dir, `/usr/pgsql/bin` by default
pg_log_dir: /pg/log/postgres # postgres log dir, `/pg/log/postgres` by default
pg_packages: # pg packages to be installed, alias can be used
- pgsql-main pgsql-common
pg_extensions: [] # pg extensions to be installed, alias can be used
#-----------------------------------------------------------------
# PG_BOOTSTRAP
#-----------------------------------------------------------------
pg_data: /pg/data # postgres data directory, `/pg/data` by default
pg_fs_main: /data/postgres # postgres main data directory, `/data/postgres` by default
pg_fs_backup: /data/backups # postgres backup data directory, `/data/backups` by default
pg_storage_type: SSD # storage type for pg main data, SSD,HDD, SSD by default
pg_dummy_filesize: 64MiB # size of `/pg/dummy`, hold 64MB disk space for emergency use
pg_listen: '0.0.0.0' # postgres/pgbouncer listen addresses, comma separated list
pg_port: 5432 # postgres listen port, 5432 by default
pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
patroni_enabled: true # if disabled, no postgres cluster will be created during init
patroni_mode: default # patroni working mode: default,pause,remove
pg_namespace: /pg # top level key namespace in etcd, used by patroni & vip
patroni_port: 8008 # patroni listen port, 8008 by default
patroni_log_dir: /pg/log/patroni # patroni log dir, `/pg/log/patroni` by default
patroni_ssl_enabled: false # secure patroni RestAPI communications with SSL?
patroni_watchdog_mode: off # patroni watchdog mode: automatic,required,off. off by default
patroni_username: postgres # patroni restapi username, `postgres` by default
patroni_password: Patroni.API # patroni restapi password, `Patroni.API` by default
pg_etcd_password: '' # etcd password for this pg cluster, '' to use pg_cluster
pg_primary_db: postgres # primary database name, used by citus,etc... ,postgres by default
pg_parameters: {} # extra parameters in postgresql.auto.conf
pg_files: [] # extra files to be copied to postgres data directory (e.g. license)
pg_conf: oltp.yml # config template: oltp,olap,crit,tiny. `oltp.yml` by default
pg_max_conn: auto # postgres max connections, `auto` will use recommended value
pg_shared_buffer_ratio: 0.25 # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
pg_io_method: worker # io method for postgres, auto,fsync,worker,io_uring, auto by default
pg_rto: 30 # recovery time objective in seconds, `30s` by default
pg_rpo: 1048576 # recovery point objective in bytes, `1MiB` at most by default
pg_libs: 'pg_stat_statements, auto_explain' # preloaded libraries, `pg_stat_statements,auto_explain` by default
pg_delay: 0 # replication apply delay for standby cluster leader
pg_checksum: true # enable data checksum for postgres cluster?
pg_encoding: UTF8 # database cluster encoding, `UTF8` by default
pg_locale: C # database cluster locale, `C` by default
pg_lc_collate: C # database cluster collate, `C` by default
pg_lc_ctype: C # database character type, `C` by default
#pgsodium_key: "" # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
#pgsodium_getkey_script: "" # pgsodium getkey script path, pgsodium_getkey by default
#-----------------------------------------------------------------
# PG_PROVISION
#-----------------------------------------------------------------
pg_provision: true # provision postgres cluster after bootstrap
pg_init: pg-init # provision init script for cluster template, `pg-init` by default
pg_default_roles: # default roles and users in postgres cluster
- { name: dbrole_readonly ,login: false ,comment: role for global read-only access }
- { name: dbrole_offline ,login: false ,comment: role for restricted read-only access }
- { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
- { name: dbrole_admin ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
- { name: postgres ,superuser: true ,comment: system superuser }
- { name: replicator ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
- { name: dbuser_dba ,superuser: true ,roles: [dbrole_admin] ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
- { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
pg_default_privileges: # default privileges when created by admin user
- GRANT USAGE ON SCHEMAS TO dbrole_readonly
- GRANT SELECT ON TABLES TO dbrole_readonly
- GRANT SELECT ON SEQUENCES TO dbrole_readonly
- GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
- GRANT USAGE ON SCHEMAS TO dbrole_offline
- GRANT SELECT ON TABLES TO dbrole_offline
- GRANT SELECT ON SEQUENCES TO dbrole_offline
- GRANT EXECUTE ON FUNCTIONS TO dbrole_offline
- GRANT INSERT ON TABLES TO dbrole_readwrite
- GRANT UPDATE ON TABLES TO dbrole_readwrite
- GRANT DELETE ON TABLES TO dbrole_readwrite
- GRANT USAGE ON SEQUENCES TO dbrole_readwrite
- GRANT UPDATE ON SEQUENCES TO dbrole_readwrite
- GRANT TRUNCATE ON TABLES TO dbrole_admin
- GRANT REFERENCES ON TABLES TO dbrole_admin
- GRANT TRIGGER ON TABLES TO dbrole_admin
- GRANT CREATE ON SCHEMAS TO dbrole_admin
pg_default_schemas: [ monitor ] # default schemas to be created
pg_default_extensions: # default extensions to be created
- { name: pg_stat_statements ,schema: monitor }
- { name: pgstattuple ,schema: monitor }
- { name: pg_buffercache ,schema: monitor }
- { name: pageinspect ,schema: monitor }
- { name: pg_prewarm ,schema: monitor }
- { name: pg_visibility ,schema: monitor }
- { name: pg_freespacemap ,schema: monitor }
- { name: postgres_fdw ,schema: public }
- { name: file_fdw ,schema: public }
- { name: btree_gist ,schema: public }
- { name: btree_gin ,schema: public }
- { name: pg_trgm ,schema: public }
- { name: intagg ,schema: public }
- { name: intarray ,schema: public }
- { name: pg_repack }
pg_reload: true # reload postgres after hba changes
pg_default_hba_rules: # postgres default host-based authentication rules
- {user: '${dbsu}' ,db: all ,addr: local ,auth: ident ,title: 'dbsu access via local os user ident' }
- {user: '${dbsu}' ,db: replication ,addr: local ,auth: ident ,title: 'dbsu replication from local os ident' }
- {user: '${repl}' ,db: replication ,addr: localhost ,auth: pwd ,title: 'replicator replication from localhost'}
- {user: '${repl}' ,db: replication ,addr: intra ,auth: pwd ,title: 'replicator replication from intranet' }
- {user: '${repl}' ,db: postgres ,addr: intra ,auth: pwd ,title: 'replicator postgres db from intranet' }
- {user: '${monitor}' ,db: all ,addr: localhost ,auth: pwd ,title: 'monitor from localhost with password' }
- {user: '${monitor}' ,db: all ,addr: infra ,auth: pwd ,title: 'monitor from infra host with password'}
- {user: '${admin}' ,db: all ,addr: infra ,auth: ssl ,title: 'admin @ infra nodes with pwd & ssl' }
- {user: '${admin}' ,db: all ,addr: world ,auth: ssl ,title: 'admin @ everywhere with ssl & pwd' }
- {user: '+dbrole_readonly',db: all ,addr: localhost ,auth: pwd ,title: 'pgbouncer read/write via local socket'}
- {user: '+dbrole_readonly',db: all ,addr: intra ,auth: pwd ,title: 'read/write biz user via password' }
- {user: '+dbrole_offline' ,db: all ,addr: intra ,auth: pwd ,title: 'allow etl offline tasks from intranet'}
pgb_default_hba_rules: # pgbouncer default host-based authentication rules
- {user: '${dbsu}' ,db: pgbouncer ,addr: local ,auth: peer ,title: 'dbsu local admin access with os ident'}
- {user: 'all' ,db: all ,addr: localhost ,auth: pwd ,title: 'allow all user local access with pwd' }
- {user: '${monitor}' ,db: pgbouncer ,addr: intra ,auth: pwd ,title: 'monitor access via intranet with pwd' }
- {user: '${monitor}' ,db: all ,addr: world ,auth: deny ,title: 'reject all other monitor access addr' }
- {user: '${admin}' ,db: all ,addr: intra ,auth: pwd ,title: 'admin access via intranet with pwd' }
- {user: '${admin}' ,db: all ,addr: world ,auth: deny ,title: 'reject all other admin access addr' }
- {user: 'all' ,db: all ,addr: intra ,auth: pwd ,title: 'allow all user intra access with pwd' }
#-----------------------------------------------------------------
# PG_BACKUP
#-----------------------------------------------------------------
pgbackrest_enabled: true # enable pgbackrest on pgsql host?
pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
pgbackrest_method: local # pgbackrest repo method: local,minio,[user-defined...]
pgbackrest_init_backup: true # take a full backup after pgbackrest is initialized?
pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
local: # default pgbackrest repo with local posix fs
path: /pg/backup # local backup directory, `/pg/backup` by default
retention_full_type: count # retention full backups by count
retention_full: 2 # keep 2, at most 3 full backups when using local fs repo
minio: # optional minio repo for pgbackrest
type: s3 # minio is s3-compatible, so s3 is used
s3_endpoint: sss.pigsty # minio endpoint domain name, `sss.pigsty` by default
s3_region: us-east-1 # minio region, us-east-1 by default, useless for minio
s3_bucket: pgsql # minio bucket name, `pgsql` by default
s3_key: pgbackrest # minio user access key for pgbackrest
s3_key_secret: S3User.Backup # minio user secret key for pgbackrest
s3_uri_style: path # use path style uri for minio rather than host style
path: /pgbackrest # minio backup path, default is `/pgbackrest`
storage_port: 9000 # minio port, 9000 by default
storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default
block: y # Enable block incremental backup
bundle: y # bundle small files into a single file
bundle_limit: 20MiB # Limit for file bundles, 20MiB for object storage
bundle_size: 128MiB # Target size for file bundles, 128MiB for object storage
cipher_type: aes-256-cbc # enable AES encryption for remote backup repo
cipher_pass: pgBackRest # AES encryption password, default is 'pgBackRest'
retention_full_type: time # retention full backup by time on minio repo
retention_full: 14 # keep full backup for the last 14 days
#-----------------------------------------------------------------
# PG_ACCESS
#-----------------------------------------------------------------
pgbouncer_enabled: true # if disabled, pgbouncer will not be launched on pgsql host
pgbouncer_port: 6432 # pgbouncer listen port, 6432 by default
pgbouncer_log_dir: /pg/log/pgbouncer # pgbouncer log dir, `/pg/log/pgbouncer` by default
pgbouncer_auth_query: false # query postgres to retrieve unlisted business users?
pgbouncer_poolmode: transaction # pooling mode: transaction,session,statement, transaction by default
pgbouncer_sslmode: disable # pgbouncer client ssl mode, disable by default
pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
pg_weight: 100 #INSTANCE # relative load balance weight in service, 100 by default, 0-255
pg_service_provider: '' # dedicate haproxy node group name, or empty string for local nodes by default
pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
pg_default_services: # postgres default service definitions
- { name: primary ,port: 5433 ,dest: default ,check: /primary ,selector: "[]" }
- { name: replica ,port: 5434 ,dest: default ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
- { name: default ,port: 5436 ,dest: postgres ,check: /primary ,selector: "[]" }
- { name: offline ,port: 5438 ,dest: postgres ,check: /replica ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
pg_vip_enabled: false # enable a l2 vip for pgsql primary? false by default
pg_vip_address: 127.0.0.1/24 # vip address in `<ipv4>/<mask>` format, require if vip is enabled
pg_vip_interface: eth0 # vip network interface to listen, eth0 by default
pg_dns_suffix: '' # pgsql dns suffix, '' by default
pg_dns_target: auto # auto, primary, vip, none, or ad hoc ip
#-----------------------------------------------------------------
# PG_MONITOR
#-----------------------------------------------------------------
pg_exporter_enabled: true # enable pg_exporter on pgsql hosts?
pg_exporter_config: pg_exporter.yml # pg_exporter configuration file name
pg_exporter_cache_ttls: '1,10,60,300' # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
pg_exporter_port: 9630 # pg_exporter listen port, 9630 by default
pg_exporter_params: 'sslmode=disable' # extra url parameters for pg_exporter dsn
pg_exporter_url: '' # overwrite auto-generate pg dsn if specified
pg_exporter_auto_discovery: true # enable auto database discovery? enabled by default
pg_exporter_exclude_database: 'template0,template1,postgres' # csv of database that WILL NOT be monitored during auto-discovery
pg_exporter_include_database: '' # csv of database that WILL BE monitored during auto-discovery
pg_exporter_connect_timeout: 200 # pg_exporter connect timeout in ms, 200 by default
pg_exporter_options: '' # overwrite extra options for pg_exporter
pgbouncer_exporter_enabled: true # enable pgbouncer_exporter on pgsql hosts?
pgbouncer_exporter_port: 9631 # pgbouncer_exporter listen port, 9631 by default
pgbouncer_exporter_url: '' # overwrite auto-generate pgbouncer dsn if specified
pgbouncer_exporter_options: '' # overwrite extra options for pgbouncer_exporter
pgbackrest_exporter_enabled: true # enable pgbackrest_exporter on pgsql hosts?
pgbackrest_exporter_port: 9854 # pgbackrest_exporter listen port, 9854 by default
pgbackrest_exporter_options: >
--collect.interval=120
--log.level=info
#-----------------------------------------------------------------
# PG_REMOVE
#-----------------------------------------------------------------
pg_safeguard: false # stop pg_remove running if pg_safeguard is enabled, false by default
pg_rm_data: true # remove postgres data during remove? true by default
pg_rm_backup: true # remove pgbackrest backup during primary remove? true by default
pg_rm_pkg: true # uninstall postgres packages during remove? true by default
...
Configuration Explanation
The demo/debian template is a configuration optimized for the Debian and Ubuntu distributions.
Supported distributions (a quick pre-flight check is sketched right after this list):
- Debian 12 (Bookworm)
- Debian 13 (Trixie)
- Ubuntu 22.04 LTS (Jammy)
- Ubuntu 24.04 LTS (Noble)
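A minimal pre-flight check, sketched in bash, assuming the target host exposes /etc/os-release (true for all releases above); it simply confirms the node runs one of the supported releases before you invoke ./configure.
# read distro id, version, and codename from /etc/os-release
. /etc/os-release
echo "$ID $VERSION_ID ($VERSION_CODENAME)"
# expect one of: debian 12 (bookworm), debian 13 (trixie),
# ubuntu 22.04 (jammy), ubuntu 24.04 (noble)
case "${ID}${VERSION_ID}" in
  debian12|debian13|ubuntu22.04|ubuntu24.04) echo "supported by demo/debian" ;;
  *) echo "release not covered by demo/debian" ;;
esac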
Key features (see the placeholder-expansion example after this list):
- Uses the PGDG APT repositories
- Optimized for the APT package manager
- Uses Debian/Ubuntu-specific package names
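To show how the ${distro_name} / ${distro_codename} placeholders in repo_upstream resolve on a given host, here is a bash sketch: it prints the PGDG entry's baseurl as it would expand locally, then queries APT for the PostgreSQL 18 server package. The postgresql-18 package name follows standard PGDG naming, and the apt-cache query only returns candidates after the repositories have been set up (both are assumptions, not part of this template).
# resolve the placeholders used in repo_upstream from /etc/os-release
. /etc/os-release
distro_name=$ID                    # debian or ubuntu
distro_codename=$VERSION_CODENAME  # e.g. bookworm, trixie, jammy, noble
# the PGDG entry of this template, with placeholders substituted
echo "http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main"
# once the repo is configured, PGDG ships distro-style package names, e.g. postgresql-18
apt-cache policy postgresql-18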
Typical use cases:
- Cloud servers (Ubuntu is widely used there)
- Container environments (Debian is a common base image)
- Development and test environments