Provisioning
Pigsty runs on nodes, which can be bare metal or virtual machines. You can prepare them manually, or use Terraform & Vagrant for provisioning.
Sandbox
Pigsty has a sandbox, which is a 4-node deployment with fixed IP addresses and other identifiers. Check full.yml for details.
The sandbox consists of 4 nodes with fixed IP addresses: 10.10.10.10, 10.10.10.11, 10.10.10.12, and 10.10.10.13.
There's a primary singleton PostgreSQL cluster pg-meta on the meta node, which can be used alone if you don't care about PostgreSQL high availability.
| Node | IP | Cluster | Instance |
|---|---|---|---|
| meta | 10.10.10.10 | pg-meta | pg-meta-1 |
There are 3 additional nodes in the sandbox, forming a 3-instance PostgreSQL HA cluster pg-test.
| Node | IP | Cluster | Instance |
|---|---|---|---|
| node-1 | 10.10.10.11 | pg-test | pg-test-1 |
| node-2 | 10.10.10.12 | pg-test | pg-test-2 |
| node-3 | 10.10.10.13 | pg-test | pg-test-3 |
Two optional L2 VIPs are bound to the primary instances of clusters pg-meta and pg-test:

| VIP | Cluster |
|---|---|
| 10.10.10.2 | pg-meta |
| 10.10.10.3 | pg-test |
There's also a single-instance etcd cluster and a single-instance MinIO cluster on the meta node.
You can run the sandbox on local VMs or cloud VMs. Pigsty offers a local sandbox based on Vagrant (pulling up local VMs with VirtualBox or libvirt), and a cloud sandbox based on Terraform (creating VMs via the cloud vendor's API).
- The local sandbox runs on your Mac/PC for free. Your machine should have at least 4C/8G to run the full 4-node sandbox.
- The cloud sandbox can be easily created and shared, but you will need a cloud account for that. VMs are created on demand and can be destroyed with one command, which is also very cheap for a quick glance.
Vagrant
Vagrant can create local VMs from declarative specs. Check Vagrant Templates Intro for details.
Vagrant uses VirtualBox as the default VM provider; however, libvirt, Docker, Parallels Desktop, and VMware can also be used. We will use VirtualBox in this guide.
Installation
Make sure Vagrant and VirtualBox are installed and available on your OS.
If you are using macOS, you can use Homebrew to install both of them with one command (reboot required). You can also use vagrant-libvirt on Linux.
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install vagrant virtualbox ansible # run on macOS with one command; only works on x86_64 Intel chips
```
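To verify the installation, you can check that each tool is available on your PATH (a quick sanity check; the exact version numbers will vary):

```shell
command -v vagrant && vagrant --version        # e.g. Vagrant 2.x
command -v VBoxManage && VBoxManage --version  # VirtualBox CLI
command -v ansible && ansible --version        # Ansible, used by Pigsty itself
```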
Configuration
Vagrantfile is a Ruby script file describing VM nodes. Here are some default specs of Pigsty.
| Templates | Nodes | Spec | Comment | Alias |
|-----------|-------|------|---------|-------|
| meta.rb | 1 node | 2c4g x 1 | Single Node Meta | Devbox |
| dual.rb | 2 node | 1c2g x 2 | Dual Nodes | |
| trio.rb | 3 node | 1c2g x 3 | Three Nodes | |
| full.rb | 4 node | 2c4g + 1c2g x 3 | Full-Featured 4 Node | Sandbox |
| prod.rb | 36 node | misc | Prod Env Simulation | Simubox |
| build.rb | 5 node | 1c2g x 4 | 4-Node Building Env | Buildbox |
| rpm.rb | 3 node | 1c2g x 3 | 3-Node EL Building Env | |
| deb.rb | 5 node | 1c2g x 5 | 5-Node Deb Building Env | |
| all.rb | 7 node | 1c2g x 7 | 7-Node All Building Env | |
Each spec file contains a Specs variable describing the VM nodes. For example, full.rb contains:
```ruby
# full: pigsty full-featured 4-node sandbox for HA-testing & tutorial & practices
Specs = [
  { "name" => "meta"   , "ip" => "10.10.10.10" , "cpu" => "2" , "mem" => "4096" , "image" => "generic/rocky8" },
  { "name" => "node-1" , "ip" => "10.10.10.11" , "cpu" => "1" , "mem" => "2048" , "image" => "generic/rocky8" },
  { "name" => "node-2" , "ip" => "10.10.10.12" , "cpu" => "1" , "mem" => "2048" , "image" => "generic/rocky8" },
  { "name" => "node-3" , "ip" => "10.10.10.13" , "cpu" => "1" , "mem" => "2048" , "image" => "generic/rocky8" },
]
```
You can use the specs with the config script; it will render the final Vagrantfile according to the spec and other options:
```bash
cd ~/pigsty
vagrant/config [spec] [image] [scale] [provider]

vagrant/config meta               # use the 1-node spec, default el8 image
vagrant/config dual el9           # use the 2-node spec, use el9 image instead
vagrant/config trio d12 2         # use the 3-node spec, use debian12 image, double the cpu/mem resources
vagrant/config full u22 4         # use the 4-node spec, use ubuntu22 image instead, use 4x cpu/mem resources
vagrant/config prod u20 1 libvirt # use the prod spec, use ubuntu20 image, use libvirt as provider instead of virtualbox
```
You can scale the resource unit with the environment variable VM_SCALE; the default value is 1.
For example, using VM_SCALE=2 with vagrant/config meta will double the CPU/mem resources of the meta node:
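A sketch of that invocation, assuming you run it from the pigsty home directory:

```shell
cd ~/pigsty
VM_SCALE=2 vagrant/config meta   # render the Vagrantfile with doubled meta-node resources
```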
```ruby
# pigsty singleton-meta environment (4C8G)
Specs = [
  { "name" => "meta" , "ip" => "10.10.10.10" , "cpu" => "4" , "mem" => "8192" , "image" => "generic/rocky8" },
]
```
Shortcuts
Create pre-configured environments with make shortcuts:
```bash
make meta  # 1-node devbox for quick start, dev, test & playground
make full  # 4-node sandbox for HA-testing & feature demonstration
make prod  # 43-node simubox for production environment simulation

# seldom used templates:
make dual  # 2-node env
make trio  # 3-node env
```
You can use variant aliases to create environments with a different base image:
```bash
make meta9   # create singleton-meta node with generic/rocky9 image
make full22  # create 4-node sandbox with generic/ubuntu22 image
make prod12  # create 43-node production env simubox with generic/debian12 image
...          # available suffixes: 7,8,9,11,12,20,22,24
```
You can also launch the pigsty building env with these aliases; the base images will not be substituted:
```bash
make build # 4-node building environment
make rpm   # 3-node el7/8/9 building env
make deb   # 5-node debian11/12 & ubuntu20/22/24 building env
make all   # 7-node building env with all base images
```
Management
After describing the VM nodes with specs and generating the vagrant/Vagrantfile, you can create the VMs with the vagrant up command.
Pigsty templates will use your ~/.ssh/id_rsa[.pub] as the default ssh key for vagrant provisioning.
Make sure you have a valid ssh key pair before you start; you can generate one with: ssh-keygen -t rsa -b 2048
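A minimal sketch that generates the key pair only if it does not already exist (-N '' sets an empty passphrase; use a passphrase if your policy requires one):

```shell
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
```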
There are some Makefile shortcuts that wrap the vagrant commands; you can use them to manage the VMs.
```bash
make        # = make start
make new    # destroy existing VMs and create new ones
make ssh    # write VM ssh config to ~/.ssh/ (required)
make dns    # write VM DNS records to /etc/hosts (optional)
make start  # launch VMs and write ssh config (up + ssh)
make up     # launch VMs with vagrant up
make halt   # shutdown VMs (down, dw)
make clean  # destroy VMs (clean/del/destroy)
make status # show VM status (st)
make pause  # pause VMs (suspend, pause)
make resume # resume VMs (resume)
make nuke   # destroy all VMs & volumes with virsh (if using libvirt)
```
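Putting these together, a typical sandbox lifecycle might look like the following sketch (the meta hostname comes from the spec and the ssh config written by make ssh):

```shell
make full    # render the 4-node sandbox Vagrantfile
make start   # vagrant up + write ssh config to ~/.ssh/
ssh meta     # log in to the meta node
make clean   # destroy the VMs when done
```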
Caveat
If you are using VirtualBox, you have to add 10.0.0.0/8 to /etc/vbox/networks.conf first, in order to use 10.x.x.x addresses in host-only networks.
```bash
# /etc/vbox/networks.conf
* 10.0.0.0/8
```
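One way to create that file (a sketch; requires root privileges):

```shell
sudo mkdir -p /etc/vbox
echo '* 10.0.0.0/8' | sudo tee -a /etc/vbox/networks.conf
```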
Reference: https://discuss.hashicorp.com/t/vagran-can-not-assign-ip-address-to-virtualbox-machine/30930/3
Terraform
Terraform is an open-source tool for practicing 'Infrastructure as Code': describe the cloud resources you want and create them with one command.
Pigsty has Terraform templates for AWS, Aliyun, and Tencent Cloud; you can use them to create VMs on the cloud for a Pigsty demo.
Terraform can be easily installed with Homebrew, too: brew install terraform. You will have to create a cloud account to obtain AccessKey and AccessSecret credentials to proceed.
Quick Start
```bash
brew install terraform # install via Homebrew
terraform init         # install terraform providers (aliyun, aws); only required the first time
terraform apply        # plan and apply: create VMs, etc.
```
Print the public IP addresses:

```bash
terraform output | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
```
Specifications
- spec/aliyun-meta.tf : Aliyun 1 meta node template for all distro & amd/arm (default)
- spec/aliyun-full.tf : Aliyun 4-node sandbox template for all distro & amd/arm.
- spec/aliyun-oss.tf : Aliyun 5-node building template for all distro & amd/arm.
- spec/aws-cn.tf : AWS 4 node CentOS7 environment
- spec/tencentcloud.tf : QCloud 4 node CentOS7 environment
Aliyun Credential
You can add your Aliyun credentials to an environment file, such as ~/.bash_profile:
```bash
export ALICLOUD_ACCESS_KEY="<your_access_key>"
export ALICLOUD_SECRET_KEY="<your_secret_key>"
export ALICLOUD_REGION="cn-beijing"
```
AWS Credential
You have to set up the AWS config & credentials to use the AWS provider.
```ini
# ~/.aws
# ~/.aws/config
[default]
region = cn-northwest-1

# ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_AWS_ACCESS_KEY>
aws_secret_access_key = <AWS_ACCESS_SECRET>

# ~/.aws/pigsty-key
# ~/.aws/pigsty-key.pub
```
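The pigsty-key pair referenced above can be generated with ssh-keygen (a sketch; the key path matches the comments above, and -N '' sets an empty passphrase):

```shell
mkdir -p ~/.aws
[ -f ~/.aws/pigsty-key ] || ssh-keygen -t rsa -b 2048 -N '' -f ~/.aws/pigsty-key
```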