1 - Nginx: Expose Web Service

How to expose, proxy, and forward web services using Nginx?

Pigsty installs Nginx on the INFRA node as a web service proxy.

Nginx is the access entry point for all of Pigsty's WebUI services, and it uses ports 80/443 on the INFRA nodes by default.

Pigsty provides a global parameter infra_portal to configure Nginx proxy rules and corresponding upstream services.

infra_portal:  # domain names and upstream servers
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
  #minio        : { domain: sss.pigsty  ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

If you access Nginx directly through the ip:port, it will route to h.pigsty, which is the Pigsty homepage (/www/ directory, served as software repo).

Because Nginx provides multiple services through the same port, services must be distinguished by domain name (the Host header set by the browser). Therefore, by default, Nginx only exposes services that have the domain parameter configured.

In addition to the home server, Pigsty exposes the grafana, prometheus, and alertmanager services by default.
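For example, you can verify the Host-header-based routing from any machine that can reach the INFRA node (assuming the default admin IP 10.10.10.10 and the default domains above):

curl -I -H 'Host: g.pigsty' http://10.10.10.10/   # routed to Grafana through Nginx
curl -I -H 'Host: p.pigsty' http://10.10.10.10/   # routed to Prometheus through Nginx
curl -I http://10.10.10.10/                       # no Host header: falls back to the home server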


How to configure nginx upstream?

Pigsty has a built-in configuration template, full.yml, which can be used as a reference; it also exposes some additional web services beyond the defaults.

    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      
      minio        : { domain: sss.pigsty  ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }

Each record in infra_portal is a key-value pair, where the key is the name of the service, and the value is a dictionary. Currently, there are four available configuration items in the configuration dictionary:

  • endpoint: REQUIRED, specifies the address of the upstream service, which can be IP:PORT or DOMAIN:PORT.
    • In this parameter, you can use the placeholder ${admin_ip}, and Pigsty will fill in the value of admin_ip.
  • domain: OPTIONAL, specifies the domain name of the proxy. If not filled in, Nginx will not expose this service.
    • For services that need to know the endpoint address but do not want to expose them (such as Loki, Blackbox Exporter), you can leave the domain blank.
  • scheme: OPTIONAL, specifies the protocol (http/https) when forwarding, leave it blank to default to http.
    • For services that require HTTPS access (such as the MinIO management interface), you need to specify scheme: https.
  • websocket: OPTIONAL, specifies whether to enable WebSocket, leave it blank to default to off.
    • Services that require WebSocket (such as Grafana, Jupyter) need to be set to true to work properly.
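For example, a hypothetical record that exposes a web app listening on port 8080 of the admin node under the app.pigsty domain (both the name and the port are made up for illustration) could look like this:

infra_portal:
  # ... keep the default records ...
  webapp       : { domain: app.pigsty ,endpoint: "${admin_ip}:8080" }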

To expose a new web service through Nginx, add the corresponding record to the infra_portal parameter in pigsty.yml, then run the playbook to apply the change:

./infra.yml -t nginx           # Tune nginx into desired state

To avoid service interruption, you can also execute the following tasks separately:

./infra.yml -t nginx_config    # re-generate nginx upstream config in /etc/nginx/conf.d
./infra.yml -t nginx_cert      # re-generate nginx ssl cert to include new domain names
nginx -s reload                # online reload nginx configuration

Or the simple way:

Nginx related configuration parameters are located at: Parameters: INFRA - NGINX


How to access via domain names?

Nginx distinguishes between different services using the domain name in the HOST header set by the browser. Thus, by default, except for the software repository, you need to access services via domain name.

You can directly access these services via IP address + port, but we recommend accessing all components through Nginx on ports 80/443 using domain names.

When accessing the Pigsty WebUI via domain name, you need to configure DNS resolution or modify the local /etc/hosts file for static resolution. There are several typical methods:

  • If your service needs to be publicly accessible, you should resolve the internet domain name via a DNS provider (Cloudflare, Aliyun DNS, etc.). Note that in this case, you usually need to modify the Pigsty infra_portal parameter, as the default *.pigsty is not suitable for public use.
  • If your service needs to be shared within an office network, you should resolve the internal domain name via an internal DNS provider (company internal DNS server) and point it to the IP of the Nginx server. You can request the network administrator to add the appropriate resolution records in the internal DNS server, or ask the system users to manually configure static DNS resolution records.
  • If your service is only for personal use or a few users (e.g., DBA), you can ask these users to use static domain name resolution records: on Linux/macOS, modify the /etc/hosts file (requires sudo); on Windows, modify C:\Windows\System32\drivers\etc\hosts.

We recommend ordinary single-machine users use the third method, adding the following resolution records on the machine used to access the web system via a browser:

<your_public_ip_address>  h.pigsty a.pigsty p.pigsty g.pigsty

The IP address here is the public IP address of the node where Pigsty is installed; you can then access Pigsty subsystems in the browser via domains like http://g.pigsty.

Other web services and custom domains can be added similarly. For example, the following are possible domain resolution records for the Pigsty sandbox demo:

10.10.10.10 h.pigsty a.pigsty p.pigsty g.pigsty
10.10.10.10 api.pigsty ddl.pigsty adm.pigsty cli.pigsty lab.pigsty
10.10.10.10 supa.pigsty noco.pigsty odoo.pigsty dify.pigsty

How to access via HTTPS?

If nginx_sslmode is set to enabled or enforced, you can trust the self-signed CA (files/pki/ca/ca.crt) to use HTTPS in your browser.

Pigsty generates self-signed certs for Nginx. If you wish to access it via HTTPS without a browser warning, here are some options:

  • Apply & add real certs from trusted CA: such as Let’s Encrypt
  • Trust your generated CA crt as root ca in your OS and browser
  • Type thisisunsafe in Chrome will supress the warning
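For the second option, here is a minimal sketch for a Debian/Ubuntu client, assuming you have copied files/pki/ca/ca.crt from the admin node to the current directory (paths and commands differ on other systems and browsers):

sudo cp ca.crt /usr/local/share/ca-certificates/pigsty-ca.crt   # install the self-signed CA
sudo update-ca-certificates                                     # rebuild the system CA bundle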

You can access these web UIs directly via IP + port, but the best practice is to access them through Nginx and distinguish them via domain names. For that, you’ll need to configure DNS records or use local static records in /etc/hosts.


How to access Pigsty Web UI by domain name?

There are several options:

  1. Resolve internet domain names through a DNS service provider, suitable for systems accessible from the public internet.
  2. Configure internal network DNS server resolution records for internal domain name resolution.
  3. Modify the local machine’s /etc/hosts file to add static resolution records (on Windows, the file is located at C:\Windows\System32\drivers\etc\hosts).

We recommend the third method for common users. On the machine (which runs the browser), add the following record into /etc/hosts (sudo required) or C:\Windows\System32\drivers\etc\hosts in Windows:

<your_public_ip_address>  h.pigsty a.pigsty p.pigsty g.pigsty

You have to use the external IP address of the node here.


How to configure server side domain names?

The server-side domain name is configured with Nginx. If you want to replace the default domain name, simply enter the domain you wish to use in the infra_portal parameter. When you access the Grafana monitoring homepage via http://g.pigsty, the request actually goes through the Nginx proxy to Grafana’s WebUI:

http://g.pigsty -> http://10.10.10.10:80 (nginx) -> http://10.10.10.10:3000 (grafana)

2 - Docker: Container Support & Proxy

How to configure container support in Pigsty, and how to configure a proxy for DockerHub?

Pigsty has a DOCKER module, which provides a set of playbooks to install and manage Docker on the target nodes.

This document will guide you through how to enable Docker support in Pigsty, and how to configure a proxy server for DockerHub.


Install Docker

To install docker on specified nodes, you can use the docker.yml playbook:

./docker.yml -l <ip|group|cls>
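For example (the group and cluster names here are illustrative; use whatever exists in your own inventory):

./docker.yml -l 10.10.10.10    # install docker on a single node by IP
./docker.yml -l infra          # install docker on all nodes of the infra group
./docker.yml -l pg-test        # install docker on all nodes of the pg-test cluster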

That’s it.


Proxy 101

Assume you have a working HTTP(S) proxy server and wish to connect to Docker Hub or other registry sites through it.

Your proxy service should provide you with something like:

  • http://<ip|domain>:<port> | https://[user]:[pass]@<ip|domain>:<port>

For example, if you have a proxy server configured like this:

export ALL_PROXY=http://192.168.0.104:8118
export HTTP_PROXY=http://192.168.0.104:8118
export HTTPS_PROXY=http://192.168.0.104:8118
export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"

You can check the proxy server by using the curl command, for example:

curl -x http://192.168.0.104:8118 -I http://www.google.com

How to configure proxy for Docker Daemon?

If you wish to use a proxy server when Docker pulls images, you should specify the proxy_env parameter in the global variables of the pigsty.yml configuration file:

all:
  vars:
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      http_proxy: http://192.168.0.104:8118
      all_proxy: http://192.168.0.104:8118
      https_proxy: http://192.168.0.104:8118

And when the Docker playbook is executed, these configurations will be rendered as proxy configurations in /etc/docker/daemon.json:

{
  "proxies": {
    "http-proxy": "{{ proxy_env['http_proxy'] }}",
    "https-proxy": "{{ proxy_env['http_proxy'] }}",
    "no-proxy": "{{ proxy_env['no_proxy'] }}"
  }
}

Please note that the Docker daemon does not use the all_proxy parameter.

If you wish to manually specify a proxy server, you can directly modify the proxies configuration in /etc/docker/daemon.json;

Or you can modify the service definition in /lib/systemd/system/docker.service (Debian/Ubuntu) or /usr/lib/systemd/system/docker.service (RHEL/CentOS) to add environment variable declarations in the [Service] section:

[Service]
Environment="HTTP_PROXY=http://192.168.0.104:8118"
Environment="HTTPS_PROXY=http://192.168.0.104:8118"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"

Then restart the docker service for the changes to take effect:

systemctl restart docker
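You can verify that the daemon picked up the proxy settings with docker info, which prints the configured HTTP/HTTPS/No proxy values:

docker info | grep -i proxy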

How to use another registry?

You can specify other registry sites in the docker_registry_mirrors parameter.

It may look like this:

[ "https://mirror.ccs.tencentyun.com" ]         # tencent cloud mirror, intranet only
["https://registry.cn-hangzhou.aliyuncs.com"]   # aliyun cloud mirror, login required

You can also log in to other mirror sites, such as quay.io, by executing:

docker login quay.io
username #> # input your username
password #> # input your password

3 - Use PostgreSQL as Ansible Config Inventory CMDB

Use PostgreSQL instead of static YAML config file as Ansible config inventory

You can use PostgreSQL as a configuration source for Pigsty, replacing the static YAML configuration file.

There are some advantages to using CMDB as a dynamic inventory: metadata is presented in a highly structured way in the form of data tables, and consistency is ensured through database constraints. At the same time, using CMDB allows you to use third-party tools to edit and manage Pigsty metadata, making it easier to integrate with external systems.


Ansible Inventory

Pigsty’s default configuration file path is specified in ansible.cfg as inventory = pigsty.yml.

Changing this parameter will change the default configuration file path used by Ansible. If you point it to an executable script file, Ansible will use the dynamic inventory mechanism, execute the script, and expect the script to return a configuration file.

Using CMDB is implemented by editing the ansible.cfg in the Pigsty directory:

---
inventory = pigsty.yml
+++
inventory = inventory.sh

The inventory.sh is a simple script that generates an equivalent YAML/JSON configuration from the records in the PostgreSQL CMDB.
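A minimal sketch of what such a dynamic inventory script could look like is shown below; the pigsty.inventory relation name is an assumption, so check the inventory.sh shipped with your Pigsty version for the real query:

#!/bin/bash
# dump the inventory stored in the CMDB as a single document for Ansible
psql ${METADB_URL} -AXtwc 'SELECT text FROM pigsty.inventory;'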

You can use bin/inventory_cmdb to switch to CMDB inventory, and use bin/inventory_conf to switch back to the local YAML configuration file.

You may also need bin/inventory_load to load the YAML configuration file into the CMDB.


Load Configuration

Pigsty will (optionally) init a CMDB on the default pg-meta cluster’s meta database, under the pigsty schema.

The CMDB is available only after infra.yml has been fully executed on the admin node.

You can load YAML config file into CMDB with bin/inventory_load:

usage: inventory_load [-h] [-p PATH] [-d CMDB_URL]

load config arguments

optional arguments:
  -h, --help            show this help message and exit
  -p PATH, --path PATH  config path, ${PIGSTY_HOME}/pigsty.yml by default
  -d DATA, --data DATA  postgres cmdb pgurl, ${METADB_URL} by default

If you run bin/inventory_load without any arguments, it will load the default pigsty.yml into the default CMDB.

bin/inventory_load
bin/inventory_load -p files/conf/pigsty-demo.yml
bin/inventory_load -p files/conf/pigsty-dcs3.yml -d postgresql://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta

You can switch to dynamic CMDB inventory with:

bin/inventory_cmdb

To switch back to local YAML config file:

bin/inventory_conf

4 - Use PG as Grafana Backend

Use PostgreSQL instead of default SQLite, as the backend storage for Grafana

You can use Postgres as the backend database for Grafana.

In this tutorial, you will learn how to switch Grafana’s backend storage from the default SQLite to PostgreSQL.


TL;DR

vi pigsty.yml   # uncomment user/db definition: dbuser_grafana, grafana
bin/pgsql-user  pg-meta  dbuser_grafana
bin/pgsql-db    pg-meta  grafana

psql postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana -c \
  'CREATE TABLE t(); DROP TABLE t;'    # check pgurl connectivity
  
vi /etc/grafana/grafana.ini            # edit [database] section: type & url
systemctl restart grafana-server

Create Postgres Cluster

We can define a new database grafana on pg-meta. A Grafana-specific database cluster can also be created on a new machine node: pg-grafana.

Define Cluster

To create a new dedicated database cluster pg-grafana on two bare nodes 10.10.10.11, 10.10.10.12, define it in the config file.

pg-grafana: 
  hosts: 
    10.10.10.11: {pg_seq: 1, pg_role: primary}
    10.10.10.12: {pg_seq: 2, pg_role: replica}
  vars:
    pg_cluster: pg-grafana
    pg_databases:
      - name: grafana
        owner: dbuser_grafana
        revokeconn: true
        comment: grafana primary database
    pg_users:
      - name: dbuser_grafana
        password: DBUser.Grafana
        pgbouncer: true
        roles: [dbrole_admin]
        comment: admin user for grafana database

Create Cluster

Complete the creation of the database cluster pg-grafana with the following command:

bin/createpg pg-grafana # Initialize the pg-grafana cluster

This command calls Ansible Playbook pgsql.yml to create the database cluster.

./pgsql.yml -l pg-grafana    # the actual equivalent Ansible playbook command executed

The business users and databases defined in pg_users and pg_databases are created automatically when the cluster is initialized. After creating the cluster with this configuration, you can access the database with the following connection strings.

postgres://dbuser_grafana:DBUser.Grafana@10.10.10.11:5432/grafana # direct connection to the primary
postgres://dbuser_grafana:DBUser.Grafana@10.10.10.11:5436/grafana # default service (read/write)
postgres://dbuser_grafana:DBUser.Grafana@10.10.10.11:5433/grafana # primary service (read/write)

postgres://dbuser_grafana:DBUser.Grafana@10.10.10.12:5432/grafana # direct connection to the replica
postgres://dbuser_grafana:DBUser.Grafana@10.10.10.12:5436/grafana # default service (read/write)
postgres://dbuser_grafana:DBUser.Grafana@10.10.10.12:5433/grafana # primary service (read/write)

By default, Pigsty is installed on a single meta node, so this tutorial creates the required users and databases for Grafana on the existing pg-meta cluster instead of a dedicated pg-grafana cluster.


Create Biz User

The convention for business object management is to create users first and then create the database.

Define User

To create a user dbuser_grafana on a pg-meta cluster, add the following user definition to pg-meta’s cluster definition.

Add location: all.children.pg-meta.vars.pg_users.

- name: dbuser_grafana
  password: DBUser.Grafana
  comment: admin user for grafana database
  pgbouncer: true
  roles: [ dbrole_admin ]

If you have defined a different password here, replace the corresponding parameter with the new password.

Create User

Complete the creation of the dbuser_grafana user with the following command.

bin/pgsql-user pg-meta dbuser_grafana # Create the `dbuser_grafana` user on the pg-meta cluster

This calls the Ansible playbook pgsql-user.yml to create the user:

./pgsql-user.yml -l pg-meta -e pg_user=dbuser_grafana    # the actual Ansible playbook to execute

The dbrole_admin role has the privilege to perform DDL changes in the database, which is precisely what Grafana needs.


Create Biz Database

Define Database

Create business databases in the same way as business users. First, add the definition of the new database grafana to the cluster definition of pg-meta.

Add location: all.children.pg-meta.vars.pg_databases.

- { name: grafana, owner: dbuser_grafana, revokeconn: true }

Create Database

Use the following command to complete the creation of the grafana database.

bin/pgsql-db pg-meta grafana # Create the `grafana` database on the `pg-meta` cluster

This calls the Ansible playbook pgsql-db.yml to create the database:

./pgsql-db.yml -l pg-meta -e pg_database=grafana    # the actual Ansible playbook to execute

Access Database

Check Connectivity

You can access the database using different services or access methods.

postgres://dbuser_grafana:DBUser.Grafana@meta:5432/grafana # Direct connection
postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana # default service
postgres://dbuser_grafana:DBUser.Grafana@meta:5433/grafana # primary service

We will use the default service, which accesses the primary database through the load balancer.

First, check if the connection string is reachable and if you have privileges to execute DDL commands.

psql postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana -c \
  'CREATE TABLE t(); DROP TABLE t;'

Config Grafana

For Grafana to use Postgres as its backend database, you need to edit /etc/grafana/grafana.ini and modify the config entries in the [database] section.

[database]
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;url =

Change the default config entries.

[database]
type = postgres
url = postgres://dbuser_grafana:DBUser.Grafana@meta/grafana

Subsequently, restart Grafana.

systemctl restart grafana-server
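As a quick sanity check (assuming the connection string used above), Grafana should have created its own tables in the new database after the restart:

psql postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana -c '\dt' | head -20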

You can see from the monitoring system that the new grafana database is active, which means Grafana has started using Postgres as its backend database. However, the original dashboards and datasources in Grafana have disappeared, so you need to re-import the dashboards and Postgres datasources.


Manage Dashboard

You can reload the Pigsty monitor dashboards by going to the files/ui dir in the Pigsty home dir as the admin user and executing grafana.py init.

cd ~/pigsty/files/ui
./grafana.py init    # initialize the Grafana monitor dashboards using the dashboards in the current directory

Execution results in:

vagrant@meta:~/pigsty/files/ui
$ ./grafana.py init
Grafana API: admin:pigsty @ http://10.10.10.10:3000
init dashboard : home.json
init folder pgcat
init dashboard: pgcat / pgcat-table.json
init dashboard: pgcat / pgcat-bloat.json
init dashboard: pgcat / pgcat-query.json
init folder pgsql
init dashboard: pgsql / pgsql-replication.json
init dashboard: pgsql / pgsql-table.json
init dashboard: pgsql / pgsql-activity.json
init dashboard: pgsql / pgsql-cluster.json
init dashboard: pgsql / pgsql-node.json
init dashboard: pgsql / pgsql-database.json
init dashboard: pgsql / pgsql-xacts.json
init dashboard: pgsql / pgsql-overview.json
init dashboard: pgsql / pgsql-session.json
init dashboard: pgsql / pgsql-tables.json
init dashboard: pgsql / pgsql-instance.json
init dashboard: pgsql / pgsql-queries.json
init dashboard: pgsql / pgsql-alert.json
init dashboard: pgsql / pgsql-service.json
init dashboard: pgsql / pgsql-persist.json
init dashboard: pgsql / pgsql-proxy.json
init dashboard: pgsql / pgsql-query.json
init folder pglog
init dashboard: pglog / pglog-instance.json
init dashboard: pglog / pglog-analysis.json
init dashboard: pglog / pglog-session.json

This script detects the current environment (defined at ~/pigsty during installation), gets Grafana access information, and replaces the URL connection placeholder domain name (*.pigsty) in the monitor dashboard with the real one in use.

export GRAFANA_ENDPOINT=http://10.10.10.10:3000
export GRAFANA_USERNAME=admin
export GRAFANA_PASSWORD=pigsty

export NGINX_UPSTREAM_YUMREPO=yum.pigsty
export NGINX_UPSTREAM_CONSUL=c.pigsty
export NGINX_UPSTREAM_PROMETHEUS=p.pigsty
export NGINX_UPSTREAM_ALERTMANAGER=a.pigsty
export NGINX_UPSTREAM_GRAFANA=g.pigsty
export NGINX_UPSTREAM_HAPROXY=h.pigsty

As a reminder, using grafana.py clean will clear the target monitor dashboard, and using grafana.py load will load all the monitor dashboards in the current dir. When Pigsty’s monitor dashboard changes, you can use these two commands to upgrade all the monitor dashboards.
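For example:

cd ~/pigsty/files/ui
./grafana.py clean    # wipe the target monitor dashboards
./grafana.py load     # load all monitor dashboards in the current dir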


Manage DataSources

When creating a new PostgreSQL cluster with pgsql.yml or a new business database with pgsql-db.yml, Pigsty will register the new PostgreSQL data source in Grafana, and you can access the target database instance directly through Grafana using the default admin user. Most of the functionality of the application pgcat relies on this.

To register a Postgres database, you can use the register_grafana task in pgsql.yml.

./pgsql.yml -t register_grafana # Re-register all Postgres data sources in the current environment
./pgsql.yml -t register_grafana -l pg-test # Re-register all the databases in the pg-test cluster

Update Grafana Database

You can directly change the backend data source used by Grafana by modifying the Pigsty config file. Edit the grafana_database and grafana_pgurl parameters in pigsty.yml and change them.

grafana_database: postgres
grafana_pgurl: postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana

Then re-execute the grafana task in infra.yml to complete the Grafana upgrade.

./infra.yml -t grafana

5 - Use PG as Prometheus Backend

Persist prometheus metrics with PostgreSQL + TimescaleDB through Promscale

It is not recommended to use PostgreSQL as a backend for Prometheus, but it is a good opportunity to understand the Pigsty deployment system.


Postgres Preparation

vi pigsty.yml # dbuser_prometheus  prometheus

pg_databases:                       # define business databases on this cluster, array of database definition
  - { name: prometheus, owner: dbuser_prometheus , revokeconn: true, comment: prometheus primary database }
pg_users:                           # define business users/roles on this cluster, array of user definition
  - {name: dbuser_prometheus , password: DBUser.Prometheus ,pgbouncer: true , createrole: true,  roles: [dbrole_admin], comment: admin user for prometheus database }

Create the database and user:

bin/pgsql-user  pg-meta  dbuser_prometheus
bin/pgsql-db    pg-meta  prometheus

Check the connection string:

psql postgres://dbuser_prometheus:DBUser.Prometheus@10.10.10.10:5432/prometheus -c 'CREATE EXTENSION timescaledb;'

Promscale Configuration

Install promscale package:

yum install -y promscale 

If the package is not available in the default repository, you can download it directly:

wget https://github.com/timescale/promscale/releases/download/0.6.1/promscale_0.6.1_Linux_x86_64.rpm
sudo rpm -ivh promscale_0.6.1_Linux_x86_64.rpm

Edit the promscale configuration file /etc/sysconfig/promscale.conf

PROMSCALE_DB_HOST="127.0.0.1"
PROMSCALE_DB_NAME="prometheus"
PROMSCALE_DB_PASSWORD="DBUser.Prometheus"
PROMSCALE_DB_PORT="5432"
PROMSCALE_DB_SSL_MODE="disable"
PROMSCALE_DB_USER="dbuser_prometheus"

Launch the promscale service; it will create its schema in the prometheus database.

# launch 
cat /usr/lib/systemd/system/promscale.service
systemctl start promscale && systemctl status promscale
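You can confirm that Promscale has created its schemas in the prometheus database (schema names may vary by Promscale version) by listing them:

psql postgres://dbuser_prometheus:DBUser.Prometheus@10.10.10.10:5432/prometheus -c '\dn'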

Prometheus Configuration

Prometheus can use Remote Write / Remote Read to store metrics in Postgres via Promscale.

Edit the Prometheus configuration file:

vi /etc/prometheus/prometheus.yml

Add the following configuration to the remote_write and remote_read sections:

remote_write:
  - url: "http://127.0.0.1:9201/write"
remote_read:
  - url: "http://127.0.0.1:9201/read"

Metrics are loaded into Postgres after Prometheus is restarted.

systemctl restart prometheus

6 - Bind an L2 VIP to a Node Cluster with Keepalived

You can bind an optional L2 VIP on a node cluster with vip_enabled.

proxy:
  hosts:
    10.10.10.29: { nodename: proxy-1 } 
    10.10.10.30: { nodename: proxy-2 } # , vip_role: master }
  vars:
    node_cluster: proxy
    vip_enabled: true
    vip_vrid: 128
    vip_address: 10.10.10.99
    vip_interface: eth1

./node.yml -l proxy -t node_vip     # enable for the first time
./node.yml -l proxy -t vip_refresh  # refresh vip config (e.g. designated master) 
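Afterwards, you can check that the VIP is bound on the current master node, using the interface and address configured in the example above:

ip addr show eth1 | grep 10.10.10.99    # run on the node holding the VIP
ping -c 2 10.10.10.99                   # the VIP should be reachable within the L2 network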

7 - Bind an L2 VIP to the PostgreSQL Primary with VIP-Manager

You can define an OPTIONAL L2 VIP on a PostgreSQL cluster, provided that all nodes in the cluster are in the same L2 network.

This VIP works in master-backup mode and always points to the node where the primary instance of the database cluster is located.

This VIP is managed by the VIP-Manager, which reads the Leader Key written by Patroni from DCS (etcd) to determine whether it is the master.


Enable VIP

Define pg_vip_enabled parameter as true in the cluster level:

# pgsql 3 node ha cluster: pg-test
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
    10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
    10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
  vars:
    pg_cluster: pg-test           # define pgsql cluster name
    pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
    pg_databases: [{ name: test }]

    # enable L2 VIP
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.3/24
    pg_vip_interface: eth1

Beware that pg_vip_address must be a valid IP address with subnet and available in the current L2 network.

Beware that pg_vip_interface must be a valid network interface name, and it should be the interface that carries the node's IPv4 address listed in the inventory.

If the network interface name is different among cluster members, users should explicitly specify the pg_vip_interface parameter for each instance, for example:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary , pg_vip_interface: eth0  }
    10.10.10.12: { pg_seq: 2, pg_role: replica , pg_vip_interface: eth1  }
    10.10.10.13: { pg_seq: 3, pg_role: replica , pg_vip_interface: ens33 }
  vars:
    pg_cluster: pg-test           # define pgsql cluster name
    pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
    pg_databases: [{ name: test }]

    # enable L2 VIP
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.3/24
    #pg_vip_interface: eth1

To refresh the VIP configuration and restart the VIP-Manager, use the following command:

./pgsql.yml -t pg_vip
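To verify that the VIP follows the primary (using the example address above, and assuming the test user/database defined in the cluster and a permissive pg_hba), you can check:

ip addr show eth1 | grep 10.10.10.3    # run on the primary node: it should hold the VIP
psql -h 10.10.10.3 -U test -d test -c 'SELECT pg_is_in_recovery();'   # should return f through the VIP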