4-node: minio
4-node HA MinIO MNMD cluster to provide S3-compatible object storage services.
The minio template is based on the full template. It has a four-node x four-drive configuration that defines a 16-drive MinIO cluster. Check the MINIO module docs for more information.
Overview
- Conf Name: minio
- Node Count: 4-node, pigsty/vagrant/spec/minio.rb
- Description: 4-node HA MinIO MNMD cluster to provide S3-compatible object storage services.
- Content: pigsty/conf/minio.yml
- OS Distro: el7, el8, el9, d12, u20, u22, u24
- OS Arch: x86_64, aarch64
- Related: full
To enable: use the -c minio parameter during the configure process:

./configure -c minio

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after running configure.
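For example, here is a minimal sketch of retargeting the template, assuming your other three nodes live at 10.20.0.11-13 (hypothetical addresses) and the generated inventory is pigsty.yml; configure substitutes the current node's IP for the 10.10.10.10 placeholder, so only the remaining three need replacing:

sed -i 's/10.10.10.11/10.20.0.11/g; s/10.10.10.12/10.20.0.12/g; s/10.10.10.13/10.20.0.13/g' pigsty.yml
# if you move to another subnet, also update vip_address and node_etc_hosts (10.10.10.9 / sss.pigsty)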
Content
Source: pigsty/conf/minio.yml
all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # minio cluster with 4 nodes and 4 drives per node
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
        10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
        10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
        10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}'
        minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
        minio_users:
          - { access_key: dba , secret_key: S3User.DBA , policy: consoleAdmin }
          - { access_key: pgbackrest , secret_key: S3User.SomeNewPassWord , policy: readwrite }

        # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
        node_cluster: minio
        vip_enabled: true
        vip_vrid: 128
        vip_address: 10.10.10.9
        vip_interface: eth1

        # expose minio service with haproxy on all nodes
        haproxy_services:
          - name: minio                    # [REQUIRED] service name, unique
            port: 9002                     # [REQUIRED] service port, unique
            balance: leastconn             # [OPTIONAL] load balancer algorithm
            options:                       # [OPTIONAL] minio health check
              - option httpchk
              - option http-keep-alive
              - http-check send meth OPTIONS uri /minio/health/live
              - http-check expect status 200
            servers:
              - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

  vars:
    version: v3.2.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      # domain names to access minio web console via nginx web portal (optional)
      minio        : { domain: m.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio10      : { domain: m10.pigsty ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio11      : { domain: m11.pigsty ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
      minio12      : { domain: m12.pigsty ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
      minio13      : { domain: m13.pigsty ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }
    minio_endpoint: https://sss.pigsty:9002      # explicitly overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.9 sss.pigsty"]    # domain name to access minio from all nodes (required)
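Once the cluster is provisioned, you can sanity-check it from any node. Here is a minimal sketch, assuming the mc (MinIO client) binary is installed and sss.pigsty resolves via the node_etc_hosts entry above; the alias name sss is arbitrary, and --insecure can be dropped once the Pigsty CA is trusted by mc:

# probe the same health endpoint that the haproxy check uses
curl --cacert /etc/pki/ca.crt https://10.10.10.10:9000/minio/health/live

# register the load-balanced endpoint with the dba credentials from minio_users
mc --insecure alias set sss https://sss.pigsty:9002 dba S3User.DBA
mc --insecure admin info sss     # should report 4 online servers, 4 drives each
mc --insecure ls sss             # should list the pre-created buckets: infra, pgsql, redis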