Software & Tools
Software and tools that use PostgreSQL, run as Docker containers managed by the Docker daemon
PostgreSQL is the most popular database in the world, and countless software is built on PostgreSQL, around PostgreSQL, or serves PostgreSQL itself, such as:
- “Application software” that uses PostgreSQL as the preferred database
- “Tooling software” that serves PostgreSQL software development and management
- “Database software” that derives from, wraps, forks, modifies, or extends PostgreSQL
Pigsty provides a series of Docker Compose templates for such software, applications, and databases:
| Name | Website | Type | State | Port | Domain | Description |
|------|---------|------|-------|------|--------|-------------|
| Supabase | Supabase | DB | GA | 8000 | supa.pigsty | OSS Firebase Alternative, Backend as Platform |
| PolarDB | PolarDB | DB | GA | 5532 | | OSS RAC for PostgreSQL |
| FerretDB | FerretDB | DB | GA | 27017 | | OSS Mongo Alternative based on PostgreSQL |
| MinIO | MinIO | DB | GA | 9000 | sss.pigsty | OSS AWS S3 Alternative, Simple Storage Service |
| EdgeDB | EdgeDB | DB | TBD | | | OSS Graph Database based on PostgreSQL |
| NocoDB | NocoDB | APP | GA | 8080 | noco.pigsty | OSS Airtable Alternative over PostgreSQL |
| Odoo | Odoo | APP | GA | 8069 | odoo.pigsty | OSS ERP Software based on PostgreSQL |
| Dify | Dify | APP | GA | 8001 | dify.pigsty | OSS AI Workflow Orchestration & LLMOps Platform |
| Jupyter | Jupyter | APP | GA | | lab.pigsty | OSS AI Python Notebook & Data Analysis IDE |
| Gitea | Gitea | APP | GA | 8889 | git.pigsty | OSS DevOps Git Service |
| Wiki | Wiki.js | APP | GA | 9002 | wiki.pigsty | OSS Wiki Software |
| GitLab | GitLab | APP | TBD | | | OSS GitHub Alternative, Code Management Platform |
| Mastodon | Mastodon | APP | TBD | | | OSS Decentralized Social Network |
| Keycloak | Keycloak | APP | TBD | | | OSS Identity & Access Management Component |
| Harbor | Harbor | APP | TBD | | | OSS Docker/K8S Image Repository |
| Confluence | Confluence | APP | TBD | | | Enterprise Knowledge Management System |
| Jira | Jira | APP | TBD | | | Enterprise Project Management Tools |
| Zabbix | Zabbix 7 | APP | TBD | | | OSS Monitoring Platform for Enterprise |
| Grafana | Grafana | APP | TBD | | | Dashboard, Data Visualization & Monitoring Platform |
| Metabase | Metabase | APP | GA | 9004 | mtbs.pigsty | Fast analysis of data from multiple data sources |
| ByteBase | ByteBase | APP | GA | 8887 | ddl.pigsty | Database Migration Tool for PostgreSQL |
| Kong | Kong | TOOL | GA | 8000 | api.pigsty | OSS API Gateway based on Nginx/OpenResty |
| PostgREST | PostgREST | TOOL | GA | 8884 | api.pigsty | Generate REST API from PostgreSQL Schemas |
| pgAdmin4 | pgAdmin4 | TOOL | GA | 8885 | adm.pigsty | PostgreSQL GUI Admin Tools |
| pgWeb | pgWeb | TOOL | GA | 8886 | cli.pigsty | PostgreSQL Web GUI Client |
| SchemaSpy | SchemaSpy | TOOL | TBD | | | Dump & Visualize PostgreSQL Schema |
| pgBadger | pgBadger | TOOL | TBD | | | PostgreSQL Log Analysis |
| pg_exporter | pg_exporter | TOOL | GA | 9630 | | Expose PostgreSQL & Pgbouncer Metrics for Prometheus |
How to prepare Docker?
To run these Docker Compose templates, you need to install the DOCKER module on the node.
If you don’t have Internet access or run into firewall issues, you may need to configure a DockerHub registry proxy; check the tutorial.
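For reference, a registry-mirror sketch for the Docker daemon; the mirror URL here is a placeholder assumption, substitute your own proxy:
# Sketch: configure a DockerHub registry mirror (URL is a placeholder)
sudo tee /etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://mirror.example.com"] }
EOF
sudo systemctl restart docker   # restart the daemon to apply the mirror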
1 - Dify: AI Workflow and LLMOps
How to use Pigsty to build an AI workflow LLMOps platform, Dify, using external PostgreSQL, pgvector, and Redis as storage?
Dify – The Innovation Engine for GenAI Applications
Dify is an open-source LLM app development platform that orchestrates LLM apps from agents to complex AI workflows, with a RAG engine, and claims to be more production-ready than LangChain.
Of course, a workflow orchestration platform like this needs a database underneath: Dify uses PostgreSQL for metadata storage, as well as Redis for caching and a dedicated vector database.
You can pull the Docker images and play locally, but for production deployment this setup won’t suffice: there’s no HA, backup, PITR, monitoring, and many other things.
Fortunately, Pigsty provides a battery-included, production-grade, highly available PostgreSQL cluster, along with the Redis and S3 (MinIO) capabilities that Dify needs, as well as Nginx to expose the web service, making it the perfect companion for Dify.
Offload the stateful parts to Pigsty, and you only need to pull up the stateless part with a simple docker compose up.
BTW, I have to criticize the design of the Dify template. Since the metadata is already stored in PostgreSQL, why not add pgvector to use it as the vector database? What’s even more baffling is that pgvector is a separate image and container; why not just use a PG image with pgvector included?
Dify “supports” a bunch of flashy vector databases, but since PostgreSQL is already chosen, using pgvector as the default vector database is the natural choice. Similarly, I think the Dify team should consider removing Redis: Celery task queues can use PostgreSQL as backend storage, so having multiple databases is unnecessary. Entities should not be multiplied without necessity.
Therefore, the Pigsty-provided Dify Docker Compose template makes some adjustments to the official example. It removes the db and redis database images, using instances managed by Pigsty instead. The vector database is fixed to pgvector, reusing the same PostgreSQL instance.
In the end, the architecture is simplified to three stateless containers: dify-api, dify-web, and dify-worker, which can be created and destroyed at will. There are also two optional containers, ssrf_proxy and nginx, providing proxying and some security features.
There’s a bit of state management left in file system volumes, storing things like private keys; regular backups are sufficient.
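A minimal backup sketch, assuming the template lives under ~/pigsty/app/dify and keeps its state in a volumes directory; adjust the paths to your layout:
# Sketch: archive the Dify file-system volumes (paths are assumptions)
tar -czf /tmp/dify-volumes-$(date +%F).tgz -C ~/pigsty/app/dify volumes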
Pigsty Preparation
Let’s take the single-node installation of Pigsty as an example. Suppose you have a machine at IP address 10.10.10.10 with Pigsty already installed.
We need to define the required database clusters in the Pigsty configuration file pigsty.yml.
Here, we define a cluster named pg-meta, which includes a superuser named dbuser_dify (the implementation is a bit rough, as the migration script executes CREATE EXTENSION, which requires dbsu privilege for now), and a database named dify with the pgvector extension installed, plus a firewall rule allowing the user to access the database from anywhere with a password (you can also restrict it to a more precise range, such as the Docker subnet 172.0.0.0/8).
Additionally, a standard single-instance Redis cluster redis-dify with the password redis.dify is defined.
pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_users: [ { name: dbuser_dify ,password: DBUser.Dify ,superuser: true ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: dify, owner: dbuser_dify, extensions: [ { name: pgvector } ] } ]
    pg_hba_rules: [ { user: dbuser_dify , db: all ,addr: world ,auth: pwd ,title: 'allow dify user world pwd access' } ]

redis-dify:
  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { } } } }
  vars: { redis_cluster: redis-dify ,redis_password: 'redis.dify' ,redis_max_memory: 64MB }
For demonstration purposes, we use single-instance configurations. You can refer to the Pigsty documentation to deploy high availability PG and Redis clusters. After defining the clusters, use the following commands to create the PG and Redis clusters:
bin/pgsql-add pg-meta # create the dify database cluster
bin/redis-add redis-dify # create redis cluster
Alternatively, you can define a new business user and business database on an existing PostgreSQL cluster, such as pg-meta, and create them with the following commands:
bin/pgsql-user pg-meta dbuser_dify # create dify biz user
bin/pgsql-db pg-meta dify # create dify biz database
You should be able to access PostgreSQL and Redis with the following connection strings, adjusting the connection information as needed:
psql postgres://dbuser_dify:DBUser.Dify@10.10.10.10:5432/dify -c 'SELECT 1'
redis-cli -u redis://redis.dify@10.10.10.10:6379/0 ping
Once you confirm these connection strings are working, you’re all set to start deploying Dify.
For demonstration purposes, we’re using direct IP connections. For a multi-node high availability PG cluster, please refer to the service access section.
The above assumes you are already a Pigsty user familiar with deploying PostgreSQL and Redis clusters. You can skip the next section and proceed to see how to configure Dify.
Starting from Scratch
If you’re already familiar with setting up Pigsty, feel free to skip this section.
Prepare a fresh Linux x86_64 node running a compatible OS, then run the following as a sudo-able user:
curl -fsSL https://repo.pigsty.io/get | bash
It will download the Pigsty source to your home directory; then run configure and install to finish the installation.
cd ~/pigsty # get pigsty source and entering dir
./bootstrap # download bootstrap pkgs & ansible [optional]
./configure # pre-check and config templating [optional]
# change pigsty.yml, adding those cluster definitions above into all.children
./install.yml # install pigsty according to pigsty.yml
You should insert the above PostgreSQL cluster and Redis cluster definitions into the pigsty.yml file, then run install.yml to complete the installation.
Redis Deploy
Pigsty will not deploy Redis in install.yml, so you have to run the redis.yml playbook to install Redis explicitly:
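For example, limiting the playbook to the redis-dify cluster defined above (the -l limit flag follows the standard Ansible convention):
./redis.yml -l redis-dify   # deploy the redis-dify cluster defined in pigsty.yml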
Docker Deploy
Pigsty will not deploy Docker by default, so you need to install Docker with the docker.yml playbook:
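For example, limiting it to the node that will run the Dify containers:
./docker.yml -l 10.10.10.10   # install Docker on the node running Dify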
Dify Configuration
You can configure Dify in the .env file.
All parameters are self-explanatory and filled with default values that work directly in the Pigsty sandbox env. Fill in the database connection information according to your actual conf, consistent with the PG/Redis cluster configuration above.
Changing the SECRET_KEY field is recommended; you can generate a strong key with openssl rand -base64 42:
# meta parameter
DIFY_PORT=8001 # expose dify nginx service with port 8001 by default
LOG_LEVEL=INFO # The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
SECRET_KEY=sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U # A secret key for signing and encryption, gen with `openssl rand -base64 42`
# postgres credential
PG_USERNAME=dbuser_dify
PG_PASSWORD=DBUser.Dify
PG_HOST=10.10.10.10
PG_PORT=5432
PG_DATABASE=dify
# redis credential
REDIS_HOST=10.10.10.10
REDIS_PORT=6379
REDIS_USERNAME=''
REDIS_PASSWORD=redis.dify
# minio/s3 [OPTIONAL] when STORAGE_TYPE=s3
STORAGE_TYPE=local
S3_ENDPOINT='https://sss.pigsty'
S3_BUCKET_NAME='infra'
S3_ACCESS_KEY='dba'
S3_SECRET_KEY='S3User.DBA'
S3_REGION='us-east-1'
Now we can pull up dify with docker compose:
cd pigsty/app/dify && make up
Expose Dify Service via Nginx
Dify exposes web/API via its own Nginx on port 80 by default, while Pigsty uses port 80 for its own Nginx. Therefore, we expose Dify on port 8001 by default, and use Pigsty’s Nginx to forward traffic to this port.
Change infra_portal in pigsty.yml, adding a new dify line:
infra_portal:                     # domain names and upstream servers
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
  dify         : { domain: dify.pigsty ,endpoint: "10.10.10.10:8001", websocket: true }
Then expose the Dify web service via Pigsty’s Nginx server:
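This reuses the same playbook invocation shown in the other sections of this document:
./infra.yml -t nginx   # regenerate and reload the Nginx portal config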
Don’t forget to add dify.pigsty to your DNS, or to your local /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows), to access Dify via the domain name.
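A quick local-resolution sketch, assuming the 10.10.10.10 node used throughout this tutorial:
# Sketch: resolve dify.pigsty locally (adjust the IP to your own node)
echo '10.10.10.10 dify.pigsty' | sudo tee -a /etc/hosts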
2 - Odoo: OSS ERP for Enterprise
How to self-host the open-source ERP, Odoo
Odoo is an open-source enterprise resource planning (ERP) software that provides a full suite of business applications, including CRM, sales, purchasing, inventory, production, accounting, and other management functions. Odoo is a typical web application that uses PostgreSQL as the underlying database.
“All your business on one platform: simple, efficient, yet affordable.”
Check the .env file for configurable environment variables:
# https://hub.docker.com/_/odoo#
PG_HOST=10.10.10.10
PG_PORT=5432
PG_USER=dbuser_odoo
PG_PASS=DBUser.Odoo
ODOO_PORT=8069
Then launch odoo with:
make up # docker compose up
Visit http://odoo.pigsty or http://10.10.10.10:8069
Makefile
make up # pull up odoo with docker compose in minimal mode
make run # launch odoo with docker , local data dir and external PostgreSQL
make view # print odoo access point
make log # tail -f odoo logs
make info # introspect odoo with jq
make stop # stop odoo container
make clean # remove odoo container
make pull # pull latest odoo image
make rmi # remove odoo image
make save # save odoo image to /tmp/docker/odoo.tgz
make load # load odoo image from /tmp/docker/odoo.tgz
Use External PostgreSQL
You can use an external PostgreSQL for Odoo. Odoo will create its own database during setup, so you don’t need to create it yourself:
pg_users: [ { name: dbuser_odoo ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,comment: admin user for odoo database } ]
pg_databases: [ { name: odoo ,owner: dbuser_odoo ,revokeconn: true ,comment: odoo primary database } ]
And create business user & database with:
bin/pgsql-user pg-meta dbuser_odoo
#bin/pgsql-db pg-meta odoo # odoo will create the database during setup
Check connectivity:
psql postgres://dbuser_odoo:DBUser.Odoo@10.10.10.10:5432/odoo
Expose Odoo Service
Expose the Odoo web service via the Nginx portal:
infra_portal:                     # domain names and upstream servers
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
  odoo         : { domain: odoo.pigsty, endpoint: "127.0.0.1:8069", websocket: true } # <------ add this line
./infra.yml -t nginx # setup nginx infra portal
Odoo Addons
There are lots of Odoo modules available in the community; you can install them by downloading and placing them in the addons folder.
volumes:
  - ./addons:/mnt/extra-addons
Mount the ./addons dir to /mnt/extra-addons in the container, then download and unzip addon packages into the addons folder, for example:
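A minimal download-and-unpack sketch (the addon URL is a placeholder assumption):
# Sketch: fetch a community addon and unpack it into ./addons (URL is a placeholder)
cd ~/pigsty/app/odoo
wget -O /tmp/addon.zip https://example.com/some_odoo_addon.zip
unzip /tmp/addon.zip -d ./addons/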
To enable an addon module, first enter Developer mode:
Settings -> Generic Settings -> Developer Tools -> Activate the developer Mode
Then go to Apps -> Update Apps List; you can then find the extra addons and install them from the panel.
Frequently used free addons: Accounting Kit
Demo
Check the public demo: http://odoo.pigsty.cc, username: test@pigsty.cc, password: pigsty
If you want to access Odoo through SSL, you have to trust files/pki/ca/ca.crt in your browser (or use the dirty hack thisisunsafe in Chrome).
3 - Supabase: Self-Hosting OSS Firebase
How to self-host Supabase with an existing managed HA PostgreSQL cluster, and launch the stateless part with docker-compose?
Supabase, The open-source Firebase alternative based on PostgreSQL.
Pigsty allows you to self-host Supabase with an existing managed HA Postgres cluster, launching the stateless part of Supabase with docker-compose.
Notice: Supabase has been GA since 2024.04.15.
Quick Start
To run Supabase with an existing Postgres instance, prepare the database with supa.yml, then launch the stateless part with the docker-compose file:
cd app/supabase; make up # https://supabase.com/docs/guides/self-hosting/docker
Then you can access the Supabase Studio dashboard via http://<admin_ip>:8000 by default; the default dashboard username is supabase and the password is pigsty.
You can also configure infra_portal to expose the WebUI to the public through Nginx and SSL.
Database
Supabase requires certain PostgreSQL extensions, schemas, and roles to work, which can be pre-configured by Pigsty: supa.yml.
The following example configures the default pg-meta cluster as the underlying Postgres for Supabase:
# supabase example cluster: pg-meta, this cluster needs to be migrated with ~/pigsty/app/supabase/migration.sql :
pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_version: 15
    pg_users:
      # supabase roles: anon, authenticated, dashboard_user
      - { name: anon           ,login: false }
      - { name: authenticated  ,login: false }
      - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
      - { name: service_role   ,login: false ,bypassrls: true }
      # supabase users: please use the same password
      - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
      - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,roles: [ authenticated ,anon ,service_role ] }
      - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,createrole: true }
      - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,createrole: true ,roles: [ authenticated ,anon ,service_role ] }
      - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,createrole: true }
      - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true }
      - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles: [ pg_read_all_data ] }
    pg_databases:
      - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] } # the pigsty cmdb, optional
      - name: supa
        baseline: supa.sql   # the init-scripts: https://github.com/supabase/postgres/tree/develop/migrations/db/init-scripts
        owner: supabase_admin
        comment: supabase postgres database
        schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
        extensions:
          - { name: pgcrypto  ,schema: extensions } # 1.3   : cryptographic functions
          - { name: pg_net    ,schema: extensions } # 0.7.1 : async HTTP
          - { name: pgjwt     ,schema: extensions } # 0.2.0 : JSON Web Token API for PostgreSQL
          - { name: uuid-ossp ,schema: extensions } # 1.1   : generate universally unique identifiers (UUIDs)
          - { name: pgsodium       }                # 3.1.8 : pgsodium is a modern cryptography library for Postgres
          - { name: supabase_vault }                # 0.2.8 : Supabase Vault Extension
          - { name: pg_graphql     }                # 1.3.0 : pg_graphql: GraphQL support
    pg_hba_rules:
      - { user: all ,db: supa ,addr: intra       ,auth: pwd ,title: 'allow supa database access from intranet' }
      - { user: all ,db: supa ,addr: 172.0.0.0/8 ,auth: pwd ,title: 'allow supa database access from docker network' }
    pg_extensions: # required extensions
      - pg_repack_${pg_version}* wal2json_${pg_version}* pgvector_${pg_version}* pg_cron_${pg_version}* pgsodium_${pg_version}*
      - vault_${pg_version}* pg_graphql_${pg_version}* pgjwt_${pg_version}* pg_net_${pg_version}* pgsql-http_${pg_version}*
    pg_libs: 'pg_net, pg_stat_statements, auto_explain' # add pg_net to shared_preload_libraries
Beware that the baseline: supa.sql parameter will use files/supa.sql as the database baseline schema, which is gathered from here.
You also have to run the migration script migration.sql after the cluster provisioning; it is gathered from supabase/postgres/migrations/db/migrations in chronological order and slightly modified to fit Pigsty.
You can check the latest migration files and add them to migration.sql; the current script is synced with 20231013070755.
You can run the migration on the provisioned Postgres cluster pg-meta with a simple psql command:
Known issue: ERROR: unrecognized configuration parameter "pgsodium.enable_event_trigger" (https://github.com/Vonng/pigsty/issues/350), fix with:
pg edit-config pg-meta --force -p pgsodium.enable_event_trigger='off' # setup pgsodium event trigger
psql ${PGURL} -c 'SHOW pgsodium.enable_event_trigger;' # should be off or false
pg restart pg-meta # restart pg-meta to enable the new configuration
# connection string
PGURL=postgres://supabase_admin:DBUser.Supa@10.10.10.10:5432/supa
# check connectivity & extensions
psql ${PGURL} -c '\dx'
# perform migration
psql ${PGURL} -v ON_ERROR_STOP=1 --no-psqlrc -f ~/pigsty/app/supabase/migration.sql
The database is now ready for supabase!
Optional: set up the pg_cron extension
# install pg_cron
pg edit-config pg-meta --force -p cron.database_name='supa' # setup pg_cron database_name parameter
psql ${PGURL} -c 'SHOW cron.database_name;' # should be the underlying database name for supabase
psql ${PGURL} -c 'CREATE EXTENSION pg_cron;'; # create pg_cron extension on the database 'supa'
Stateless Part
The Supabase stateless part is managed by docker-compose; the docker-compose file we use here is a simplified version of github.com/supabase/docker/docker-compose.yml.
Everything you need to care about is in the .env file, which contains the important settings for Supabase. It is already configured to use the supa database on pg-meta by default; you have to change that according to your actual deployment.
############
# Secrets - YOU MUST CHANGE THESE BEFORE GOING INTO PRODUCTION
############
# you have to change the JWT_SECRET to a random string with at least 32 characters long
# and issue new ANON_KEY/SERVICE_ROLE_KEY JWT with that new secret, check the tutorial:
# https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
############
# Dashboard - Credentials for the Supabase Studio WebUI
############
DASHBOARD_USERNAME=supabase # change to your own username
DASHBOARD_PASSWORD=pigsty # change to your own password
############
# Database - You can change these to any PostgreSQL database that has logical replication enabled.
############
POSTGRES_HOST=10.10.10.10 # change to Pigsty managed PostgreSQL cluster/instance VIP/IP/Hostname
POSTGRES_PORT=5432 # you can use other service port such as 5433, 5436, 6432, etc...
POSTGRES_DB=supa # change to supabase database name, `supa` by default in pigsty
POSTGRES_PASSWORD=DBUser.Supa # supabase dbsu password (shared by multiple supabase biz users)
Usually you’ll have to change these parameters accordingly. Here we’ll use fixed username, password and IP:Port database connstr for simplicity.
The postgres username is fixed as supabase_admin
and the password is DBUser.Supa
, change that according to your supa.yml
And the supabase studio WebUI credential is managed by DASHBOARD_USERNAME
and DASHBOARD_PASSWORD
, which is supabase
and pigsty
by default.
The official tutorial, Self-Hosting with Docker, has all the details you need.
Hint
You can use the Primary Service of that cluster through DNS/VIP and other service ports, or whatever access method you like.
You can also configure the supabase.storage service to use the MinIO service managed by Pigsty.
Once configured, you can launch the stateless part with docker-compose or the make up shortcut:
cd ~/pigsty/app/supabase; make up # = docker compose up
Expose Service
The Supabase Studio dashboard is exposed on port 8000 by default; you can add this service to infra_portal to expose it to the public through Nginx and SSL.
infra_portal: # domain names and upstream servers
  # ...
  supa : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }
To expose the service, run the infra.yml playbook with the nginx tag:
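This is the same invocation used elsewhere in this document:
./infra.yml -t nginx   # regenerate and reload the Nginx portal config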
Make sure supa.pigsty (or your own domain) resolves to the infra_portal server, and you can access the Supabase Studio dashboard via https://supa.pigsty.
4 - Kong: the Nginx API Gateway
Learn how to deploy Kong, the API gateway, with Docker Compose, using an external PostgreSQL as the backend database
TL;DR
cd app/kong ; docker-compose up -d
make up # pull up kong with docker-compose
make ui # run swagger ui container
make log # tail -f kong logs
make info # introspect kong with jq
make stop # stop kong container
make clean # remove kong container
make rmui # remove swagger ui container
make pull # pull latest kong image
make rmi # remove kong image
make save # save kong image to /tmp/kong.tgz
make load # load kong image from /tmp
Scripts
- Default Port: 8000
- Default SSL Port: 8443
- Default Admin Port: 8001
- Default Postgres Database:
postgres://dbuser_kong:DBUser.Kong@10.10.10.10:5432/kong
# postgres://dbuser_kong:DBUser.Kong@10.10.10.10:5432/kong
- { name: kong, owner: dbuser_kong, revokeconn: true , comment: kong the api gateway database }
- { name: dbuser_kong, password: DBUser.Kong , pgbouncer: true , roles: [ dbrole_admin ] }
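As a usage sketch, once Kong is up you can register a service and a route through the admin API on port 8001; the service name and upstream URL below are illustrative assumptions:
# Sketch: register a service and a route via Kong's admin API (names/URLs are examples)
curl -sX POST http://127.0.0.1:8001/services --data name=meta-api --data url=http://10.10.10.10:8884
curl -sX POST http://127.0.0.1:8001/services/meta-api/routes --data 'paths[]=/meta'
curl -s http://127.0.0.1:8000/meta   # requests on the proxy port are now routed upstream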
5 - Jupyter: AI Notebook & IDE
Run Jupyter Lab in a container, and access the PostgreSQL database
To run the Jupyter notebook with Docker, you have to:
- change the default password in .env: JUPYTER_TOKEN
- create the data dir with proper permissions: make dir, owned by 1000:100
- run make up to pull up Jupyter with docker compose
cd ~/pigsty/app/jupyter ; make dir up
Visit http://lab.pigsty or http://10.10.10.10:8888; the default password is pigsty.
Prepare
Create a data directory /data/jupyter, with the default uid & gid 1000:100:
make dir # mkdir -p /data/jupyter; chown -R 1000:100 /data/jupyter
Connect to Postgres
Use the Jupyter terminal to install the psycopg2-binary & psycopg2 packages.
pip install psycopg2-binary psycopg2
# install with a mirror
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple psycopg2-binary psycopg2
pip install --upgrade pip
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
Or install with conda:
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
then use the driver in your notebook:
import psycopg2

# connect to the Pigsty-managed PostgreSQL instance
conn = psycopg2.connect('postgres://dbuser_dba:DBUser.DBA@10.10.10.10:5432/meta')
cursor = conn.cursor()
cursor.execute('SELECT * FROM pg_stat_activity')  # inspect current backend activity
for i in cursor.fetchall():
    print(i)
Alias
make up # pull up jupyter with docker compose
make dir # create required /data/jupyter and set owner
make run # launch jupyter with docker
make view # print jupyter access point
make log # tail -f jupyter logs
make info # introspect jupyter with jq
make stop # stop jupyter container
make clean # remove jupyter container
make pull # pull latest jupyter image
make rmi # remove jupyter image
make save # save jupyter image to /tmp/docker/jupyter.tgz
make load # load jupyter image from /tmp/docker/jupyter.tgz
6 - Gitea: Simple Self-Hosting Git Service
Launch a self-hosted Git service with Gitea and Pigsty-managed PostgreSQL
Public Demo: http://git.pigsty.cc
TL;DR
cd ~/pigsty/app/gitea; make up
Pigsty uses port 8889 for Gitea by default:
http://git.pigsty or http://10.10.10.10:8889
make up # pull up gitea with docker-compose in minimal mode
make run # launch gitea with docker , local data dir and external PostgreSQL
make view # print gitea access point
make log # tail -f gitea logs
make info # introspect gitea with jq
make stop # stop gitea container
make clean # remove gitea container
make pull # pull latest gitea image
make rmi # remove gitea image
make save # save gitea image to /tmp/gitea.tgz
make load # load gitea image from /tmp
PostgreSQL Preparation
Gitea uses built-in SQLite as the default metadata storage; you can point Gitea at an external PostgreSQL by setting connection-string environment variables, as sketched after the definitions below.
# postgres://dbuser_gitea:DBUser.gitea@10.10.10.10:5432/gitea
db: { name: gitea, owner: dbuser_gitea, comment: gitea primary database }
user: { name: dbuser_gitea , password: DBUser.gitea, roles: [ dbrole_admin ] }
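A minimal sketch using Gitea's GITEA__section__KEY container-environment convention; verify the exact variable names against the Gitea docs:
# Sketch: run the Gitea container against the external PostgreSQL defined above
docker run -d --name gitea -p 8889:3000 \
  -e GITEA__database__DB_TYPE=postgres \
  -e GITEA__database__HOST=10.10.10.10:5432 \
  -e GITEA__database__NAME=gitea \
  -e GITEA__database__USER=dbuser_gitea \
  -e GITEA__database__PASSWD=DBUser.gitea \
  gitea/gitea:latest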
7 - Wiki.js: OSS Wiki Software
How to self-host your own wiki with Wiki.js, using Pigsty-managed PostgreSQL as the backend database
Public Demo: http://wiki.pigsty.cc
TL; DR
cd app/wiki ; docker-compose up -d
Postgres Preparation
# postgres://dbuser_wiki:DBUser.Wiki@10.10.10.10:5432/wiki
- { name: wiki, owner: dbuser_wiki, revokeconn: true , comment: wiki primary database }
- { name: dbuser_wiki, password: DBUser.Wiki , pgbouncer: true , roles: [ dbrole_admin ] }
bin/createuser pg-meta dbuser_wiki
bin/createdb pg-meta wiki
Configuration
version: "3"
services:
  wiki:
    container_name: wiki
    image: requarks/wiki:2
    environment:
      DB_TYPE: postgres
      DB_HOST: 10.10.10.10
      DB_PORT: 5432
      DB_USER: dbuser_wiki
      DB_PASS: DBUser.Wiki
      DB_NAME: wiki
    restart: unless-stopped
    ports:
      - "9002:3000"
Access
- Default Port for wiki: 9002
# add to nginx_upstream
- { name: wiki , domain: wiki.pigsty.cc , endpoint: "127.0.0.1:9002" }
./infra.yml -t nginx_config
ansible all -b -a 'nginx -s reload'
8 - Minio: OSS AWS S3 Alternative
Launch MinIO with Docker for testing & development
Public Demo: http://sss.pigsty.cc
Credentials: admin / pigsty.minio
TL; DR
Launch the MinIO (S3) service on ports 9000 & 9001:
cd ~/pigsty/app/minio ; docker-compose up -d
docker run -p 9000:9000 -p 9001:9001 \
-e "MINIO_ROOT_USER=admin" \
-e "MINIO_ROOT_PASSWORD=pigsty.minio" \
minio/minio server /data --console-address ":9001"
Visit http://10.10.10.10:9000 with user admin and password pigsty.minio
make up # pull up minio with docker-compose
make run # launch minio with docker
make view # print minio access point
make log # tail -f minio logs
make info # introspect minio with jq
make stop # stop minio container
make clean # remove minio container
make pull # pull latest minio image
make rmi # remove minio image
make save # save minio image to /tmp/minio.tgz
make load # load minio image from /tmp
9 - ByteBase: PG Schema Migration
Self-hosting bytebase with PostgreSQL managed by Pigsty
ByteBase
ByteBase is a database schema change management tool. The following command will start ByteBase on the meta node, on port 8887 by default.
mkdir -p /data/bytebase/data;
docker run --init --name bytebase --restart always --detach --publish 8887:8887 --volume /data/bytebase/data:/var/opt/bytebase \
bytebase/bytebase:1.0.4 --data /var/opt/bytebase --host http://ddl.pigsty --port 8887
Visit http://10.10.10.10:8887/ or http://ddl.pigsty to use ByteBase. You need to create a project, environment, instance, and database in turn before you can start making schema changes.
Public Demo: http://ddl.pigsty.cc
Default username & password: admin / pigsty
Bytebase Overview
Schema Migrator for PostgreSQL
Visit http://ddl.pigsty or http://10.10.10.10:8887
make up # pull up bytebase with docker-compose in minimal mode
make run # launch bytebase with docker , local data dir and external PostgreSQL
make view # print bytebase access point
make log # tail -f bytebase logs
make info # introspect bytebase with jq
make stop # stop bytebase container
make clean # remove bytebase container
make pull # pull latest bytebase image
make rmi # remove bytebase image
make save # save bytebase image to /tmp/bytebase.tgz
make load # load bytebase image from /tmp
PostgreSQL Preparation
ByteBase uses its internal PostgreSQL database by default; you can use an external PostgreSQL for higher durability.
# postgres://dbuser_bytebase:DBUser.Bytebase@10.10.10.10:5432/bytebase
db: { name: bytebase, owner: dbuser_bytebase, comment: bytebase primary database }
user: { name: dbuser_bytebase , password: DBUser.Bytebase, roles: [ dbrole_admin ] }
If you wish to use an external PostgreSQL, drop the monitor extensions, views & pg_repack first:
DROP SCHEMA monitor CASCADE;
DROP EXTENSION pg_repack;
After ByteBase is initialized, you can recreate them with /pg/tmp/pg-init-template.sql:
psql bytebase < /pg/tmp/pg-init-template.sql
10 - PGAdmin4: PG Admin GUI Tool
Launch pgAdmin4 with Docker, and load the Pigsty server list into it
pgAdmin4 is a useful PostgreSQL management tool. Execute the following command to launch the pgadmin service on the admin node:
cd ~/pigsty/app/pgadmin ; docker-compose up -d
The default port for pgAdmin is 8885, and you can access it at the following address:
http://adm.pigsty
Demo
Public Demo: http://adm.pigsty.cc
Credentials: admin@pigsty.cc / pigsty
TL; DR
cd ~/pigsty/app/pgadmin # enter docker compose dir
make up # launch pgadmin container
make conf view # load pigsty server list
Shortcuts:
make up # pull up pgadmin with docker-compose
make run # launch pgadmin with docker
make view # print pgadmin access point
make log # tail -f pgadmin logs
make info # introspect pgadmin with jq
make stop # stop pgadmin container
make clean # remove pgadmin container
make conf # provision pgadmin with pigsty pg servers list
make dump # dump servers.json from pgadmin container
make pull # pull latest pgadmin image
make rmi # remove pgadmin image
make save # save pgadmin image to /tmp/pgadmin.tgz
make load # load pgadmin image from /tmp
11 - PostgREST: Generate REST API from Schema
Launch PostgREST to generate a REST API from a PostgreSQL schema automatically
PostgREST is a binary component that automatically generates a REST API based on the PostgreSQL database schema.
For example, the following command will launch PostgREST with Docker (local port 8884, using the default admin user, exposing the Pigsty CMDB schema):
docker run --init --name postgrest --restart always --detach --publish 8884:8081 postgrest/postgrest
Visiting http://10.10.10.10:8884 will show all auto-generated API definitions and automatically expose API documentation using Swagger Editor.
If you wish to perform CRUD operations and design more fine-grained permission control, please refer to Tutorial 1 - The Golden Key to generate a signed JWT.
This is an example of creating the Pigsty CMDB API with PostgREST:
cd ~/pigsty/app/postgrest ; docker-compose up -d
http://10.10.10.10:8884 is the default endpoint for PostgREST
http://10.10.10.10:8883 is the default api docs for PostgREST
make up # pull up postgrest with docker-compose
make run # launch postgrest with docker
make ui # run swagger ui container
make view # print postgrest access point
make log # tail -f postgrest logs
make info # introspect postgrest with jq
make stop # stop postgrest container
make clean # remove postgrest container
make rmui # remove swagger ui container
make pull # pull latest postgrest image
make rmi # remove postgrest image
make save # save postgrest image to /tmp/postgrest.tgz
make load # load postgrest image from /tmp
Swagger UI
Launch a swagger OpenAPI UI and visualize PostgREST API on 8883 with:
docker run --init --name swagger -p 8883:8080 -e API_URL=http://10.10.10.10:8884 swaggerapi/swagger-ui
# docker run -d -e API_URL=http://10.10.10.10:8884 -p 8883:8080 swaggerapi/swagger-editor # swagger editor
Check http://10.10.10.10:8883/
12 - SchemaSPY: Dump & Visualize PG Schema
Dump & visualize PostgreSQL schema with SchemaSPY
Use the following docker command to generate a database schema report, using the CMDB as an example:
docker run -v /www/schema/pg-meta/meta/pigsty:/output andrewjones/schemaspy-postgres:latest -host 10.10.10.10 -port 5432 -u dbuser_dba -p DBUser.DBA -db meta -s pigsty
Then visit http://h.pigsty/schema/pg-meta/meta/pigsty to access the schema report.
13 - PGWeb: Browser-based PG Client
Launch pgweb to access PostgreSQL via a web browser
PGWEB: https://github.com/sosedoff/pgweb
Simple web-based and cross-platform PostgreSQL database explorer.
Public Demo: http://cli.pigsty.cc
TL; DR
cd ~/pigsty/app/pgweb ; make up
Visit http://cli.pigsty or http://10.10.10.10:8886
Try connecting with example URLs:
postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta?sslmode=disable
postgres://test:test@10.10.10.11:5432/test?sslmode=disable
make up # pull up pgweb with docker compose
make run # launch pgweb with docker
make view # print pgweb access point
make log # tail -f pgweb logs
make info # introspect pgweb with jq
make stop # stop pgweb container
make clean # remove pgweb container
make pull # pull latest pgweb image
make rmi # remove pgweb image
make save # save pgweb image to /tmp/docker/pgweb.tgz
make load # load pgweb image from /tmp/docker/pgweb.tgz
14 - Discourse: OSS Forum Software
How to self-host the open-source forum software Discourse, and use external PG managed by Pigsty
To set up the open-source forum Discourse, you need to adjust the default app.yml config, focusing on the SMTP part of the configuration.
Discourse Config Example
templates:
  - "templates/web.china.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  ## Uncomment these two lines if you wish to add Lets Encrypt (https)
  # - "templates/web.ssl.template.yml"
  # - "templates/web.letsencrypt.ssl.template.yml"

expose:
  - "80:80"   # http
  - "443:443" # https

params:
  db_default_text_search_config: "pg_catalog.english"
  db_shared_buffers: "768MB"

env:
  LC_ALL: en_US.UTF-8
  LANG: en_US.UTF-8
  LANGUAGE: en_US.UTF-8
  EMBER_CLI_PROD_ASSETS: 1
  UNICORN_WORKERS: 4
  DISCOURSE_HOSTNAME: forum.pigsty
  DISCOURSE_DEVELOPER_EMAILS: 'fengruohang@outlook.com,rh@vonng.com'
  DISCOURSE_SMTP_ENABLE_START_TLS: false
  DISCOURSE_SMTP_AUTHENTICATION: login
  DISCOURSE_SMTP_OPENSSL_VERIFY_MODE: none
  DISCOURSE_SMTP_ADDRESS: smtpdm.server.address
  DISCOURSE_SMTP_PORT: 80
  DISCOURSE_SMTP_USER_NAME: no_reply@mail.pigsty.cc
  DISCOURSE_SMTP_PASSWORD: "<password>"
  DISCOURSE_SMTP_DOMAIN: mail.pigsty.cc

volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git

run:
  - exec: echo "Beginning of custom commands"
  # - exec: rails r "SiteSetting.notification_email='no_reply@mail.pigsty.cc'"
  - exec: echo "End of custom commands"
Then launch Discourse with the following command:
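A sketch of the usual launcher invocation, assuming the standard /var/discourse checkout of discourse_docker:
cd /var/discourse        # assumed location of the discourse_docker checkout
./launcher rebuild app   # rebuild and start the container defined by app.yml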
15 - GitLab: OSS GitHub Alternative
How to self-host GitLab, and use an external HA PostgreSQL cluster?
Check GitLab Docker Deployment to finish the Docker deployment.
export GITLAB_HOME=/data/gitlab
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 23:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
--shm-size 256m \
gitlab/gitlab-ee:latest
sudo docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
Then change the connection string to point at the Pigsty-managed PostgreSQL cluster, as sketched below.
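A minimal sketch of the omnibus settings for an external database, appended to gitlab.rb; the database name and user here are illustrative assumptions, and the keys should be verified against the GitLab docs:
# Sketch: disable the bundled Postgres and point GitLab at an external cluster
sudo tee -a $GITLAB_HOME/config/gitlab.rb <<'EOF'
postgresql['enable'] = false
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_host'] = '10.10.10.10'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_database'] = 'gitlab'
gitlab_rails['db_username'] = 'dbuser_gitlab'
gitlab_rails['db_password'] = 'DBUser.GitLab'
EOF
sudo docker exec gitlab gitlab-ctl reconfigure   # apply the new settings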