Citus Deployment
Citus is a distributed database extension for PostgreSQL. By default, Pigsty installs Citus but does not enable it. pigsty-citus.yml provides a sample config file for deploying a Citus cluster. To enable Citus, you need to modify the following parameters:

- max_prepared_transactions: set to a value greater than max_connections, e.g. 800.
- pg_libs: must contain citus, and citus must be placed in the first position.
- The business database must include the citus extension (you can also install it manually via CREATE EXTENSION citus;).
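These requirements can be sketched as cluster-level overrides in the Pigsty config inventory. This is only a sketch: pg_libs is a documented Pigsty variable, but where max_prepared_transactions is set (e.g. a custom Patroni/PostgreSQL config template or a parameter variable) depends on your Pigsty version.

```yaml
pg-meta:
  vars:
    pg_cluster: pg-meta
    # citus must be listed first among the shared preload libraries
    pg_libs: 'citus, pg_stat_statements, auto_explain'
    # max_prepared_transactions must exceed max_connections (e.g. 800);
    # set it via your version's postgres parameter mechanism or config template
```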
Citus cluster sample config
#----------------------------------#
# cluster: citus coordinator
#----------------------------------#
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
    vip_address: 10.10.10.2
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus } ] } ]

#----------------------------------#
# cluster: citus data nodes
#----------------------------------#
pg-node1:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-node1
    vip_address: 10.10.10.3
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus } ] } ]

pg-node2:
  hosts:
    10.10.10.12: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-node2
    vip_address: 10.10.10.4
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus } ] } ]

pg-node3:
  hosts:
    10.10.10.13: { pg_seq: 1, pg_role: primary , pg_offline_query: true }
  vars:
    pg_cluster: pg-node3
    vip_address: 10.10.10.5
    pg_users: [ { name: citus , password: citus , pgbouncer: true , roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta , owner: citus , extensions: [ { name: citus } ] } ]
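Once these clusters are defined in the inventory, they are provisioned with Pigsty's standard pgsql.yml playbook; a sketch, assuming it is run from the Pigsty home directory:

```shell
./pgsql.yml -l pg-meta     # create the coordinator cluster
./pgsql.yml -l pg-node1    # create data node 1
./pgsql.yml -l pg-node2    # create data node 2
./pgsql.yml -l pg-node3    # create data node 3
```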
Next, following the Citus Multi-Node Deployment Guide, execute the following commands on the coordinator node to add the data nodes.
sudo su - postgres; psql meta
SELECT * from citus_add_node('10.10.10.11', 5432);
SELECT * from citus_add_node('10.10.10.12', 5432);
SELECT * from citus_add_node('10.10.10.13', 5432);
SELECT * FROM citus_get_active_worker_nodes();
node_name | node_port
-------------+-----------
10.10.10.11 | 5432
10.10.10.13 | 5432
10.10.10.12 | 5432
(3 rows)
After successfully adding the data nodes, you can use the following commands on the coordinator to create a sample table and distribute it across the data nodes.
-- Declare a distributed table
CREATE TABLE github_events
(
event_id bigint,
event_type text,
event_public boolean,
repo_id bigint,
payload jsonb,
repo jsonb,
actor jsonb,
org jsonb,
created_at timestamp
) PARTITION BY RANGE (created_at);
-- Distribute the table across the data nodes by repo_id
SELECT create_distributed_table('github_events', 'repo_id');
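Because github_events is range-partitioned by created_at, at least one partition must exist before rows can be inserted. A minimal usage sketch (partition bounds and sample values are illustrative):

```sql
-- Create one partition to receive rows (illustrative bounds)
CREATE TABLE github_events_2015_01 PARTITION OF github_events
    FOR VALUES FROM ('2015-01-01') TO ('2015-02-01');

-- Insert a sample row; Citus routes it to a shard by repo_id
INSERT INTO github_events (event_id, event_type, repo_id, created_at)
VALUES (1, 'PushEvent', 42, '2015-01-15');

-- A filter on the distribution column is routed to a single worker
SELECT count(*) FROM github_events WHERE repo_id = 42;
```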
For more information about Citus, please refer to the Citus official doc.