Infra as Code Toolbox
HashiCorp for Databases! Simple and easy to use: not just powerful but effortless, giving users peace of mind!
- Pigsty adheres to the Infra as Data design philosophy: describe the database you want with a few lines of declarative configuration, then create it with a single run of an idempotent playbook. Just like Kubernetes!
- Pigsty delivers a simple, easy-to-use database toolbox to developers: one command to download, install, and auto-configure; one command to deploy all kinds of open-source databases; one command to migrate, back up, scale out or scale in. It greatly lowers the barrier to managing and using databases, mass-producing DBAs!
- Pigsty simplifies database deployment and delivery and solves the problem of keeping environment configuration consistent: it runs in full on production environments with thousands of database clusters and tens of thousands of cores, as well as on a local 1C/1G laptop; with the Vagrant-based local sandbox and the Terraform-based multi-cloud deployment, it can be pulled up with one command, in the cloud or on premises!
Declarative API, One-Command Provisioning
For example, to create a one-primary-two-replica database cluster pg-test on three machines, a few lines of configuration and a single command pgsql.yml -l pg-test are all it takes to create the highly available database cluster described in the following section.
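As a minimal sketch (the IP addresses below are placeholders, and only the identity parameters pg_cluster, pg_seq and pg_role are strictly required), such a pg-test definition could look like this:

pg-test:                                 # hypothetical three-node cluster
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance
    10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance
    10.10.10.13: { pg_seq: 3, pg_role: replica }   # replica instance
  vars:
    pg_cluster: pg-test                  # required cluster name, used as the namespace of cluster resources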
Customizing a PGSQL database cluster with more parameters
#----------------------------------#
# cluster: pg-meta (on meta node)  #
#----------------------------------#
# pg-meta is the default SINGLE-NODE pgsql cluster deployed on the meta node (10.10.10.10)
# if you have n meta nodes, consider deploying pg-meta as an n-node cluster too
pg-meta:                                 # required, ansible group name / pgsql cluster name, should be unique within the environment
  hosts:                                 # `<cluster>.hosts` holds the instance definitions of this cluster
    10.10.10.10:                         # INSTANCE-LEVEL CONFIG: the ip address is the key, values are instance-level config entries (dict)
      pg_seq: 1                          # required, unique identity parameter (+integer) within pg_cluster
      pg_role: primary                   # required, pg_role is a mandatory identity parameter, primary|replica|offline|delayed
      pg_offline_query: true             # instance with `pg_offline_query: true` will take offline traffic (saga, etl, ...)
      # some variables can be overwritten on instance level, e.g: pg_upstream, pg_weight, etc...
      #---------------
      # mandatory                        # all configuration above (`ip`, `pg_seq`, `pg_role`) and `pg_cluster` are mandatory
      #---------------
  vars:                                  # `<cluster>.vars` holds the CLUSTER-LEVEL CONFIG of this pgsql cluster
    pg_cluster: pg-meta                  # required, pgsql cluster name, unique among clusters, used as namespace of cluster resources
    #---------------
    # optional                           # all configuration below is OPTIONAL for a pgsql cluster (overwrites global defaults)
    #---------------
    pg_version: 14                       # pgsql version to be installed (the global version is used if missing)
    node_tune: tiny                      # node optimization profile: {oltp|olap|crit|tiny}, use tiny for the vm sandbox
    pg_conf: tiny.yml                    # pgsql template: {oltp|olap|crit|tiny}, use tiny for the sandbox
    patroni_mode: default                # whether to enter patroni pause mode after bootstrap {default|pause|remove}
    patroni_watchdog_mode: off           # disable the patroni watchdog on the meta node {off|require|automatic}
    pg_lc_ctype: en_US.UTF8              # use the en_US.UTF8 locale for i18n char support (required by `pg_trgm`)
    #---------------
    # biz databases                      # defining business databases (optional)
    #---------------
    pg_databases:                        # define business databases on this cluster, an array of database definitions
      # define the default `meta` database
      - name: meta                       # required, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql               # optional, database sql baseline path (a relative path within the ansible search path, e.g. files/)
        # owner: postgres                # optional, database owner, postgres by default
        # template: template1            # optional, which template to use, template1 by default
        # encoding: UTF8                 # optional, database encoding, UTF8 by default (MUST be the same as the template database)
        # locale: C                      # optional, database locale, C by default (MUST be the same as the template database)
        # lc_collate: C                  # optional, database collate, C by default (MUST be the same as the template database)
        # lc_ctype: C                    # optional, database ctype, C by default (MUST be the same as the template database)
        # tablespace: pg_default         # optional, default tablespace, 'pg_default' by default
        # allowconn: true                # optional, allow connection, true by default; false will disable connections entirely
        # revokeconn: false              # optional, revoke the public connection privilege, false by default (leaves connect with grant option to the owner)
        # pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
        comment: pigsty meta database    # optional, comment string for this database
        connlimit: -1                    # optional, database connection limit, the default -1 disables the limit
        schemas: [pigsty]                # optional, additional schemas to be created, an array of schema names
        extensions:                      # optional, additional extensions to be installed: an array of extension definitions `{name,schema}`
          - { name: adminpack, schema: pg_catalog }   # install adminpack into pg_catalog
          - { name: postgis, schema: public }         # if schema is omitted, the extension will be installed according to search_path
          - { name: timescaledb }                     # some extensions are not relocatable, so you can simply omit the schema part
      # define additional databases named grafana & prometheus (optional)
      # - { name: grafana,    owner: dbuser_grafana    , revokeconn: true , comment: grafana primary database }
      # - { name: prometheus, owner: dbuser_prometheus , revokeconn: true , comment: prometheus primary database , extensions: [{ name: timescaledb }]}
    #---------------
    # biz users                          # defining business users (optional)
    #---------------
    pg_users:                            # define business users/roles on this cluster, an array of user definitions
      # define the admin user for the meta database (this user is used for pigsty app deployment by default)
      - name: dbuser_meta                # required, `name` is the only mandatory field of a user definition
        password: md5d3d10d8cad606308bdb180148bf663e1  # md5 salted password of 'DBUser.Meta'
        # optional, plain-text and md5 passwords are both acceptable (md5 passwords are prefixed with `md5`)
        login: true                      # optional, can login, true by default (a new biz ROLE should be false)
        superuser: false                 # optional, is superuser? false by default
        createdb: false                  # optional, can create databases? false by default
        createrole: false                # optional, can create roles? false by default
        inherit: true                    # optional, can this role use inherited privileges? true by default
        replication: false               # optional, can this role do replication? false by default
        bypassrls: false                 # optional, can this role bypass row level security? false by default
        pgbouncer: true                  # optional, add this user to the pgbouncer user list? false by default (production users should set this to true explicitly)
        connlimit: -1                    # optional, user connection limit, the default -1 disables the limit
        expire_in: 3650                  # optional, the role expires n days from now (OVERWRITES expire_at)
        expire_at: '2030-12-31'          # optional, YYYY-MM-DD 'timestamp' at which this role expires (OVERWRITTEN by expire_in)
        comment: pigsty admin user       # optional, comment string for this user/role
        roles: [dbrole_admin]            # optional, roles this user belongs to; the default roles are dbrole_{admin,readonly,readwrite,offline}
        parameters: {}                   # optional, role-level parameters set with `ALTER ROLE SET`
        # search_path: public            # key-value config parameters according to the postgresql documentation (e.g: use pigsty as the default search_path)
      - {name: dbuser_view , password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
      # define additional business users for prometheus & grafana (optional)
      - {name: dbuser_grafana    , password: DBUser.Grafana    ,pgbouncer: true ,roles: [dbrole_admin], comment: admin user for grafana database }
      - {name: dbuser_prometheus , password: DBUser.Prometheus ,pgbouncer: true ,roles: [dbrole_admin], comment: admin user for prometheus database , createrole: true }
    #---------------
    # hba rules                          # defining extra HBA rules on this cluster (optional)
    #---------------
    pg_hba_rules_extra:                  # extra HBA rules to be installed on this cluster
      - title: reject grafana non-local access  # required, rule title (used as the hba description & comment string)
        role: common                     # required, which roles does this rule apply to? ('common' applies to all roles)
        rules:                           # required, rule content: an array of hba strings
          - local   grafana   dbuser_grafana                  md5
          - host    grafana   dbuser_grafana   127.0.0.1/32   md5
          - host    grafana   dbuser_grafana   10.10.10.10/32 md5
    vip_mode: l2                         # set up a layer-2 vip for cluster pg-meta
    vip_address: 10.10.10.2              # virtual ip address bound to the primary instance of cluster pg-meta
    vip_cidrmask: 8                      # cidr network mask length
    vip_interface: eth1                  # network interface to which the virtual ip is added
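A cluster defined this way is created just like pg-test above, by running the pgsql.yml playbook limited to its group, i.e. pgsql.yml -l pg-meta. Assuming the default port 5432, the meta database should then be reachable through the layer-2 VIP declared above, for example with a connection string like postgres://dbuser_meta:DBUser.Meta@10.10.10.2:5432/meta (a hypothetical example assembled from the values in this configuration).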
Pigsty 1.3+: Customizing different types of Redis clusters
#----------------------------------#
# redis sentinel example           #
#----------------------------------#
redis-meta:
  hosts:
    10.10.10.10:
      redis_node: 1
      redis_instances: { 6001 : {} , 6002 : {} , 6003 : {} }
  vars:
    redis_cluster: redis-meta
    redis_mode: sentinel
    redis_max_memory: 128MB
#----------------------------------#
# redis cluster example            #
#----------------------------------#
redis-test:
  hosts:
    10.10.10.11:
      redis_node: 1
      redis_instances: { 6501 : {} ,6502 : {} ,6503 : {} ,6504 : {} ,6505 : {} ,6506 : {} }
    10.10.10.12:
      redis_node: 2
      redis_instances: { 6501 : {} ,6502 : {} ,6503 : {} ,6504 : {} ,6505 : {} ,6506 : {} }
  vars:
    redis_cluster: redis-test            # name of this redis 'cluster'
    redis_mode: cluster                  # standalone,cluster,sentinel
    redis_max_memory: 64MB               # max memory used by each redis instance
    redis_mem_policy: allkeys-lru        # memory eviction policy
#----------------------------------#
# redis standalone example         #
#----------------------------------#
redis-common:
  hosts:
    10.10.10.13:
      redis_node: 1
      redis_instances:
        6501: {}
        6502: { replica_of: '10.10.10.13 6501' }
        6503: { replica_of: '10.10.10.13 6501' }
  vars:
    redis_cluster: redis-common          # name of this redis 'cluster'
    redis_mode: standalone               # standalone,cluster,sentinel
    redis_max_memory: 64MB               # max memory used by each redis instance
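These Redis definitions follow the same declarative pattern: once added to the inventory, they are brought up by running Pigsty's Redis playbook against the corresponding group, for example redis.yml -l redis-meta (assuming the redis.yml playbook shipped with Pigsty 1.3+). The redis_mode parameter decides whether the listed instances form a sentinel group, a native Redis cluster, or plain standalone instances with optional replica_of replication.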
Pigsty 1.4+: Installing and monitoring a MatrixDB cluster
#----------------------------------#
# cluster: mx-mdw (gp master)
#----------------------------------#
mx-mdw:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary , nodename: mx-mdw-1 }
  vars:
    gp_role: master                      # this cluster is used as the greenplum master
    pg_shard: mx                         # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-mdw                   # the master cluster is named mx-mdw
    pg_databases:
      - { name: matrixmgr , extensions: [ { name: matrixdbts } ] }
      - { name: meta }
    pg_users:
      - { name: meta , password: DBUser.Meta , pgbouncer: true }
      - { name: dbuser_monitor , password: DBUser.Monitor , roles: [ dbrole_readonly ], superuser: true }
    pgbouncer_enabled: true              # enable pgbouncer for the greenplum master
    pgbouncer_exporter_enabled: false    # whether to enable pgbouncer_exporter for the greenplum master
    pg_exporter_params: 'host=127.0.0.1&sslmode=disable'  # use 127.0.0.1 as the local monitor host
#----------------------------------#
# cluster: mx-sdw (gp segments)
#----------------------------------#
mx-sdw:
  hosts:
    10.10.10.11:
      nodename: mx-sdw-1                 # greenplum segment node
      pg_instances:                      # greenplum segment instances
        6000: { pg_cluster: mx-seg1, pg_seq: 1, pg_role: primary , pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg2, pg_seq: 2, pg_role: replica , pg_exporter_port: 9634 }
    10.10.10.12:
      nodename: mx-sdw-2
      pg_instances:
        6000: { pg_cluster: mx-seg2, pg_seq: 1, pg_role: primary , pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg3, pg_seq: 2, pg_role: replica , pg_exporter_port: 9634 }
    10.10.10.13:
      nodename: mx-sdw-3
      pg_instances:
        6000: { pg_cluster: mx-seg3, pg_seq: 1, pg_role: primary , pg_exporter_port: 9633 }
        6001: { pg_cluster: mx-seg1, pg_seq: 2, pg_role: replica , pg_exporter_port: 9634 }
  vars:
    gp_role: segment                     # these nodes host the gp segments
    pg_shard: mx                         # pgsql sharding name & gpsql deployment name
    pg_cluster: mx-sdw                   # the segment node cluster is named mx-sdw
    pg_preflight_skip: true              # skip the preflight check (since pg_seq, pg_role & pg_cluster are not defined at the host level)
    pg_exporter_config: pg_exporter_basic.yml  # use the basic config to avoid crashing segment servers
    pg_exporter_params: 'options=-c%20gp_role%3Dutility&sslmode=disable'  # use gp_role=utility to connect to segments
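Note that pg_exporter_params is a URL-encoded connection-parameter string: 'options=-c%20gp_role%3Dutility&sslmode=disable' decodes to options=-c gp_role=utility plus sslmode=disable, i.e. the exporter connects to each Greenplum segment in utility mode, as the trailing comment indicates.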