
Module: ETCD

Pigsty has built-in etcd support: a reliable distributed configuration store database that serves as the DCS powering PostgreSQL high availability.

ETCD is a distributed, reliable key-value store for the most critical configuration data of a system.

Pigsty uses etcd as the DCS: Distributed Configuration Store (also known as Distributed Consensus Service). It is essential for PostgreSQL high availability and automatic failover.

Before installing the ETCD module, you must install the NODE module to bring the nodes under management. Moreover, unless you decide to use an external, pre-existing etcd cluster, the ETCD module must be installed before deploying any PGSQL cluster, because patroni and vip-manager rely on etcd for high availability and L2 VIP binding.

1 - Cluster Configuration

Choose an appropriate Etcd cluster size for your scenario, and expose the cluster reliably.

Before deploying Etcd, you need to define an Etcd cluster in the inventory. Typically, you can choose among:

  • Single node: no high availability; suitable for development, testing, demos, or standalone HA-free deployments that rely on external S3 backups for PITR.
  • Three nodes: basic high availability, tolerating one node failure; suitable for small and medium production environments.
  • Five nodes: better high availability, tolerating two node failures; suitable for large production environments.

An even number of Etcd nodes is pointless, and clusters larger than five nodes are uncommon, so the usual sizes are one, three, or five nodes.


Single Node

Defining a singleton Etcd instance in Pigsty is trivial; one line of configuration is enough:

etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

Every single-node configuration template shipped with Pigsty contains such an entry, where the placeholder IP address 10.10.10.10 is replaced with the IP of the current admin node by default.

Besides the IP address, the only required parameters here are etcd_seq and etcd_cluster, which together uniquely identify each Etcd instance.


Three Nodes

A three-node Etcd cluster is the most common setup. It tolerates one node failure and suits small and medium production environments.

For example, Pigsty's three-node templates trio and safe use a three-node Etcd cluster, as shown below:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }  # etcd_seq (instance number) is a required identity parameter
    10.10.10.11: { etcd_seq: 2 }  # instance numbers are positive integers, usually assigned from 0 or 1
    10.10.10.12: { etcd_seq: 3 }  # instance numbers should be immutable for life; once assigned, never recycle them
  vars: # cluster-level parameters
    etcd_cluster: etcd    # the etcd cluster is named etcd by default; don't change it unless you deploy multiple etcd clusters
    etcd_safeguard: false # enable the safeguard against accidental purges? consider enabling it after production initialization
    etcd_clean: true      # forcefully remove existing etcd instances during init? enable for testing so the playbook is truly idempotent

Five Nodes

A five-node Etcd cluster tolerates two node failures and suits large production environments.

For example, Pigsty's production simulation template prod uses a five-node Etcd cluster:

etcd:
  hosts:
    10.10.10.21 : { etcd_seq: 1 }
    10.10.10.22 : { etcd_seq: 2 }
    10.10.10.23 : { etcd_seq: 3 }
    10.10.10.24 : { etcd_seq: 4 }
    10.10.10.25 : { etcd_seq: 5 }
  vars: { etcd_cluster: etcd    }

Services Using Etcd

The following services currently use Etcd:

  • patroni: provides PostgreSQL high availability; the Etcd configuration is written into the Patroni config file.
  • vip-manager: binds an optional L2 VIP to a PostgreSQL cluster, reading the cluster leader information from Etcd.

When the membership of the etcd cluster changes permanently, you should reload the configuration of the dependent services, so they can keep reaching the Etcd cluster correctly.

To update etcd endpoint references on patroni:

./pgsql.yml -t pg_conf                            # regenerate patroni config
ansible all -f 1 -b -a 'systemctl reload patroni' # reload patroni config

To update etcd endpoint references on vip-manager (only needed if you are using a PGSQL L2 VIP):

./pgsql.yml -t pg_vip_config                           # regenerate vip-manager config
ansible all -f 1 -b -a 'systemctl restart vip-manager' # restart vip-manager to pick up the new config



2 - Parameters

The Etcd module provides 10 related parameters for customizing your Etcd cluster.

ETCD is a distributed, reliable key-value store for the most critical data of a system. In Pigsty, etcd serves as the DCS used by the HA agent Patroni, which makes it vital to PostgreSQL high availability.

Pigsty uses a hard-coded default group name etcd for the etcd cluster. It can be an existing external etcd cluster, or (by default) a new etcd cluster deployed by Pigsty with the etcd.yml playbook.


Parameters

The ETCD module has 10 related parameters:

| Parameter | Type | Level | Comment |
|---|---|---|---|
| etcd_seq | int | I | etcd instance identifier, required |
| etcd_cluster | string | C | etcd cluster name, fixed to etcd by default |
| etcd_safeguard | bool | G/C/A | safeguard to prevent purging running etcd instances? |
| etcd_clean | bool | G/C/A | purge existing etcd instances during initialization? |
| etcd_data | path | C | etcd data directory, /data/etcd by default |
| etcd_port | port | C | etcd client port, 2379 by default |
| etcd_peer_port | port | C | etcd peer port, 2380 by default |
| etcd_init | enum | C | etcd initial cluster state, new or existing |
| etcd_election_timeout | int | C | etcd election timeout, 1000ms by default |
| etcd_heartbeat_interval | int | C | etcd heartbeat interval, 100ms by default |

Default Parameters

Default parameters of the Etcd module are defined in roles/etcd/defaults/main.yml:

#-----------------------------------------------------------------
# etcd
#-----------------------------------------------------------------
#etcd_seq: 1                      # etcd instance identifier, explicitly required
etcd_cluster: etcd                # etcd cluster & group name, etcd by default
etcd_safeguard: false             # prevent purging running etcd instance?
etcd_clean: true                  # purging existing etcd during initialization?
etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
etcd_port: 2379                   # etcd client port, 2379 by default
etcd_peer_port: 2380              # etcd peer port, 2380 by default
etcd_init: new                    # etcd initial cluster state, new or existing
etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default

etcd_seq

Parameter: etcd_seq, Type: int, Level: I

etcd instance identifier. This parameter is required: every etcd instance must be assigned a unique number.

Here is an example of a 3-node etcd cluster with the numbers 1 through 3 assigned:

etcd: # dcs service for postgres/patroni ha consensus
  hosts:  # 1 node for testing, 3 or 5 for production
    10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
    10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }  # odd number please
  vars: # cluster level parameter override roles/etcd
    etcd_cluster: etcd  # mark etcd cluster name etcd
    etcd_safeguard: false # safeguard against purging
    etcd_clean: true # purge etcd during init process

etcd_cluster

Parameter: etcd_cluster, Type: string, Level: C

etcd cluster & group name, hard-coded to etcd by default.

When deploying an additional standby etcd cluster, you can change this parameter and use another cluster name.
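For instance, a hypothetical standby cluster could be defined as follows (the group name etcd-backup and the IP addresses are illustrative assumptions, not a shipped template):

etcd-backup:
  hosts:
    10.10.10.13: { etcd_seq: 1 }
    10.10.10.14: { etcd_seq: 2 }
    10.10.10.15: { etcd_seq: 3 }
  vars: { etcd_cluster: etcd-backup }  # override the hard-coded default name

Note that the default etcd.yml playbook targets the hard-coded etcd group, so deploying a second cluster may require adjusting the playbook's host pattern accordingly.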


etcd_safeguard

Parameter: etcd_safeguard, Type: bool, Level: G/C/A

Safeguard parameter that prevents purging running etcd instances. Default: false.

When the safeguard is enabled, the etcd.yml playbook will refuse to purge running etcd instances.
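A sketch of how the safeguard plays out on the command line (the flags are the parameters documented here; the behavior follows the Safeguard section of the playbook docs):

./etcd.yml -e etcd_safeguard=true                      # abort when a running etcd instance is detected
./etcd.yml -e etcd_safeguard=false -e etcd_clean=true  # purge running instances and rebuild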


etcd_clean

Parameter: etcd_clean, Type: bool, Level: G/C/A

Purge existing etcd during initialization? Default: true.

If enabled, the etcd.yml playbook will purge running etcd instances, making it a truly idempotent playbook (it always wipes the existing cluster).

However, if etcd_safeguard is enabled, the playbook will still abort when it encounters running etcd instances, regardless of this parameter, to prevent accidental deletion.


etcd_data

Parameter: etcd_data, Type: path, Level: C

etcd data directory, /data/etcd by default.


etcd_port

Parameter: etcd_port, Type: port, Level: C

etcd client port, 2379 by default.
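As a quick connectivity check against the client port, something like the following may work (a sketch, assuming Pigsty's default TLS layout with the certificate paths shown in the Environment Variables section):

curl --cacert /etc/pki/ca.crt --cert /etc/etcd/server.crt --key /etc/etcd/server.key \
  https://10.10.10.10:2379/health   # expect {"health":"true",...}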


etcd_peer_port

Parameter: etcd_peer_port, Type: port, Level: C

etcd peer port, 2380 by default.


etcd_init

Parameter: etcd_init, Type: enum, Level: C

etcd initial cluster state, either new or existing. Default: new.

The default creates a standalone new etcd cluster; use existing when adding a new member to an existing etcd cluster.
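For example, to join a new instance to a running cluster instead of bootstrapping a fresh one (see Add Member for the full procedure):

./etcd.yml -l <new_ins_ip> -e etcd_init=existing   # join the existing cluster rather than creating a new one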


etcd_election_timeout

Parameter: etcd_election_timeout, Type: int, Level: C

etcd election timeout, 1000 (milliseconds) by default, i.e. 1 second.


etcd_heartbeat_interval

Parameter: etcd_heartbeat_interval, Type: int, Level: C

etcd heartbeat interval, 100 (milliseconds) by default.
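When members span high-latency links, both values are usually raised together; the upstream etcd tuning guide suggests setting the heartbeat close to the peer round-trip time and the election timeout to roughly 10x the heartbeat. An illustrative (not prescriptive) override:

etcd:
  vars:
    etcd_heartbeat_interval: 200   # ~ max peer RTT between members, illustrative value
    etcd_election_timeout: 2000    # ~ 10x the heartbeat interval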




3 - Playbooks

How to manage Etcd clusters with the bundled ansible playbooks, with a cheat sheet of common commands.

The Etcd module provides a default playbook, etcd.yml, for installing the Etcd cluster.


etcd.yml

Playbook source file: etcd.yml

Running this playbook installs and configures the Etcd cluster on the hard-coded group etcd, and starts the etcd service.

The following task subsets are available in etcd.yml:

  • etcd_assert: generate etcd identity
  • etcd_install: install etcd rpm packages
  • etcd_clean: clean up existing etcd instances
    • etcd_check: check whether an etcd instance is running
    • etcd_purge: remove running etcd instances and data
  • etcd_dir: create etcd data and config directories
  • etcd_config: generate etcd configuration
    • etcd_conf: generate the main etcd config
    • etcd_cert: generate etcd ssl certificates
  • etcd_launch: launch the etcd service
  • etcd_register: register etcd with prometheus

The Etcd module has no dedicated uninstall playbook. To uninstall Etcd, use the etcd_clean subtask of the playbook; see Safeguard below for details.


Demo

[asciicast: etcd.yml execution demo]


Cheat Sheet

Etcd playbook and shortcuts:

./etcd.yml                                      # init the etcd cluster
./etcd.yml -t etcd_launch                       # restart the entire etcd cluster
./etcd.yml -t etcd_clean                        # remove the entire cluster; checks for running instances and honors the safeguard
./etcd.yml -t etcd_purge                        # forcefully remove the entire cluster, regardless of the safeguard
./etcd.yml -t etcd_conf                         # refresh /etc/etcd/etcd.conf with the latest state
./etcd.yml -l 10.10.10.12 -e etcd_init=existing # scale out: the existing parameter is required, via CLI or config file
./etcd.yml -l 10.10.10.12 -t etcd_purge         # remove a node

Safeguard

To guard against accidental deletion, Pigsty's ETCD module provides a safeguard controlled by the following parameters:

  • etcd_clean defaults to true: existing instances are cleaned up during initialization by default.
  • etcd_safeguard defaults to false: the safeguard is off by default.

The default configuration lets you reset the state of an etcd cluster with the playbook, which is useful for development, testing, and emergency etcd cluster rebuilds in production.

If the safeguard has been enabled but you still want to clean up existing instances during initialization, explicitly disable it in the config file, or override it at runtime with the command-line arguments -e etcd_safeguard=false -e etcd_clean=true.

If you merely want to clean up existing instances without installing new ones, run the etcd_clean subtask directly:

./etcd.yml -l <cls> -e etcd_clean=true -t etcd_clean

If you are certain you want to destroy the etcd cluster, a simpler and more brutal way is:

./etcd.yml -l <cls> -t etcd_purge

Unless you know exactly what you are doing, we do not recommend cleaning up Etcd clusters this way.




4 - Administration

Etcd cluster administration SOPs: creating, destroying, scaling out, and scaling in, explained in detail.

Here are some common etcd administration SOPs:

  • Create Cluster: how to initialize an etcd cluster?
  • Destroy Cluster: how to destroy an etcd cluster?
  • Environment Variables: how to configure etcd clients to access the etcd server cluster?
  • Reload Config: how to update the etcd server member list used by clients?
  • Add Member: how to add a new member to an existing etcd cluster?
  • Remove Member: how to remove an old member from an etcd cluster?

For more questions, see FAQ: ETCD.


Create Cluster

To create a cluster, first define the etcd cluster in the inventory:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars: { etcd_cluster: etcd }

Then run the etcd.yml playbook:

./etcd.yml  # init the etcd cluster

Note that Pigsty's etcd module has a safeguard against accidental deletion. With the default configuration, etcd_clean is enabled and etcd_safeguard is disabled, so the playbook forcibly removes any live etcd instances it encounters, making etcd.yml truly idempotent. This is useful for development, testing, and emergency forced rebuilds of etcd clusters in production.

For etcd clusters already initialized in production, you can enable the safeguard to avoid deleting existing etcd instances by accident. When the playbook then detects live etcd instances, it aborts proactively, sparing them; you can override this behavior with command-line arguments.


Destroy Cluster

To destroy an etcd cluster, use the etcd_clean subtask of the etcd.yml playbook. Think twice before running these commands!

./etcd.yml -t etcd_clean  # remove the entire cluster; checks for running instances and honors the safeguard
./etcd.yml -t etcd_purge  # forcefully remove the entire cluster, regardless of the safeguard

The etcd_clean subtask respects the etcd_safeguard setting, whereas the etcd_purge subtask wipes the existing etcd cluster unconditionally.


Environment Variables

Pigsty uses the etcd v3 API by default. Here is an example of etcd client environment configuration:

alias e="etcdctl"
alias em="etcdctl member"
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://10.10.10.10:2379
export ETCDCTL_CACERT=/etc/pki/ca.crt
export ETCDCTL_CERT=/etc/etcd/server.crt
export ETCDCTL_KEY=/etc/etcd/server.key

With the client environment variables configured, you can perform etcd CRUD operations with the following commands:

e put a 10 ; e get a; e del a ; # V3 API
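Beyond basic CRUD, the same environment lets you inspect cluster health and membership with standard etcdctl commands (a sketch; table output requires a reasonably recent etcdctl):

e endpoint health            # per-endpoint health check
e endpoint status -w table   # leader, raft term, and DB size per endpoint
em list -w table             # list current cluster members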

Reload Config

If the etcd cluster membership changes, the references to etcd service endpoints must be refreshed. There are currently four places in Pigsty that reference etcd:

  • config files of existing etcd cluster members
  • etcdctl client environment variables (on infra nodes)
  • patroni DCS endpoint configuration (on pgsql nodes)
  • vip-manager DCS endpoint configuration (optional)

To refresh the etcd config file /etc/etcd/etcd.conf on existing etcd members:

./etcd.yml -t etcd_conf                           # refresh /etc/etcd/etcd.conf with the latest state
ansible etcd -f 1 -b -a 'systemctl restart etcd'  # optional: restart etcd

To refresh the etcdctl client environment variables:

$ ./etcd.yml -t etcd_env                          # refresh /etc/profile.d/etcdctl.sh (on admin nodes)

To update etcd endpoint references on patroni:

./pgsql.yml -t pg_conf                            # regenerate patroni config
ansible all -f 1 -b -a 'systemctl reload patroni' # reload patroni config

To update etcd endpoint references on vip-manager (only needed if you are using a PGSQL L2 VIP):

./pgsql.yml -t pg_vip_config                           # regenerate vip-manager config
ansible all -f 1 -b -a 'systemctl restart vip-manager' # restart vip-manager to pick up the new config

Add Member

ETCD reference: Add a member

Adding a new member to an existing etcd cluster usually takes five steps:

Short version

  1. Run the etcdctl member add command to notify the existing cluster that a new member is about to join (in learner mode).
  2. Update the inventory, adding the new instance to the etcd group in the config file.
  3. Initialize the new etcd instance with etcd_init=existing, so it joins the existing cluster instead of bootstrapping a new one (very important).
  4. Promote the new member from learner to follower, making it a full voting member of the cluster.
  5. Reload Config to update the etcd service endpoints used by clients.
etcdctl member add <etcd-?> --learner=true --peer-urls=https://<new_ins_ip>:2380  # announce to the cluster
./etcd.yml -l <new_ins_ip> -e etcd_init=existing                                  # init the new instance
etcdctl member promote <new_ins_server_id>                                        # promote it to follower
Detailed steps: add a member to the etcd cluster

Here are the detailed steps. Let's start from a single-instance etcd cluster:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 } # <--- the original and only instance in the cluster
    10.10.10.11: { etcd_seq: 2 } # <--- add this new member definition to the inventory
  vars: { etcd_cluster: etcd }

Use etcdctl member add to announce the upcoming learner instance etcd-2 to the existing etcd cluster:

$ etcdctl member add etcd-2 --learner=true --peer-urls=https://10.10.10.11:2380
Member 33631ba6ced84cf8 added to cluster 6646fbcf5debc68f

ETCD_NAME="etcd-2"
ETCD_INITIAL_CLUSTER="etcd-2=https://10.10.10.11:2380,etcd-1=https://10.10.10.10:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.10.11:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Checking the member list with etcdctl member list (or em list), we can see an unstarted new member:

33631ba6ced84cf8, unstarted, , https://10.10.10.11:2380, , true       # here is the unstarted new member
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false

Next, initialize the new etcd instance etcd-2 with the etcd.yml playbook. Once it completes, we can see that the new member has started:

$ ./etcd.yml -l 10.10.10.11 -e etcd_init=existing    # the existing parameter is required, via CLI or config file
...
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, true
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false

Once the new member is initialized and running stably, promote it from learner to follower:

$ etcdctl member promote 33631ba6ced84cf8   # promote the learner to follower, using the etcd member ID
Member 33631ba6ced84cf8 promoted in cluster 6646fbcf5debc68f

$ em list                # check again, the new member is started
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, false
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false

The new member is now added. Don't forget to Reload Config so that all clients know about the new member as well.

Repeat the steps above to add more members. Remember to use at least 3 members in production.


Remove Member

Removing a member instance from an etcd cluster usually takes three steps:

  1. Comment out / mask / remove the instance from the inventory and Reload Config, so that clients stop using it.
  2. Kick it out of the cluster with the etcdctl member remove <server_id> command.
  3. Temporarily add the instance back to the inventory, run the playbook to fully decommission it, then remove it from the config permanently.
Detailed steps: remove a member from the etcd cluster

Let's take a 3-node etcd cluster as an example and remove instance number 3 from it.

To refresh the configuration, comment out the member to be removed, then Reload Config, so that all clients stop using this instance.

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }   # <---- comment out this member first, then reload config
  vars: { etcd_cluster: etcd }

Then kick it out of the cluster with the etcdctl member remove command:

$ etcdctl member list 
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, false
93fcf23b220473fb, started, etcd-3, https://10.10.10.12:2380, https://10.10.10.12:2379, false  # <--- remove this one

$ etcdctl member remove 93fcf23b220473fb  # kick it from cluster
Member 93fcf23b220473fb removed from cluster 6646fbcf5debc68f

Finally, temporarily add the member back to the inventory so the decommission task can run, shutting the instance down and removing it completely.

./etcd.yml -t etcd_purge -l 10.10.10.12   # decommission the instance (note: this requires the instance definition to still be in the inventory)

After that, you can remove it from the inventory permanently. Member removal is now complete.

Repeat the steps above to remove more members; combined with adding members, this lets you perform rolling upgrades and migrations of the etcd cluster.

5 - Monitoring

How to monitor Etcd, and which alerting rules deserve attention?

Dashboards

The ETCD module provides one monitoring dashboard: Etcd Overview.

ETCD Overview: etcd cluster overview

This dashboard presents key information about ETCD status. The most noteworthy panel is ETCD Aliveness, which shows the overall service status of the ETCD cluster.

Red bands mark the periods when an instance was unavailable, while the blue-gray band at the bottom marks the periods when the whole cluster was unavailable.

[screenshot: etcd-overview]


Alerting Rules

Pigsty provides the following five built-in alerting rules for Etcd, defined in files/prometheus/rules/etcd.yml:

  • EtcdServerDown: etcd node down, critical
  • EtcdNoLeader: etcd cluster has no leader, critical
  • EtcdQuotaFull: etcd quota usage over 90%, warning
  • EtcdNetworkPeerRTSlow: etcd network latency slow, info
  • EtcdWalFsyncSlow: etcd disk fsync slow, info
#==============================================================#
#                         Aliveness                            #
#==============================================================#
# etcd server instance down
- alert: EtcdServerDown
  expr: etcd_up < 1
  for: 1m
  labels: { level: 0, severity: CRIT, category: etcd }
  annotations:
    summary: "CRIT EtcdServerDown {{ $labels.ins }}@{{ $labels.instance }}"
    description: |
      etcd_up[ins={{ $labels.ins }}, instance={{ $labels.instance }}] = {{ $value }} < 1
      http://g.pigsty/d/etcd-overview      

#==============================================================#
#                         Error                                #
#==============================================================#
# Etcd no Leader triggers a P0 alert immediately
# if dcs_failsafe mode is not enabled, this may lead to global outage
- alert: EtcdNoLeader
  expr: min(etcd_server_has_leader) by (cls) < 1
  for: 15s
  labels: { level: 0, severity: CRIT, category: etcd }
  annotations:
    summary: "CRIT EtcdNoLeader: {{ $labels.cls }} {{ $value }}"
    description: |
      etcd_server_has_leader[cls={{ $labels.cls }}] = {{ $value }} < 1
      http://g.pigsty/d/etcd-overview?from=now-5m&to=now&var-cls={{$labels.cls}}      

#==============================================================#
#                        Saturation                            #
#==============================================================#
- alert: EtcdQuotaFull
  expr: etcd:cls:quota_usage > 0.90
  for: 1m
  labels: { level: 1, severity: WARN, category: etcd }
  annotations:
    summary: "WARN EtcdQuotaFull: {{ $labels.cls }}"
    description: |
      etcd:cls:quota_usage[cls={{ $labels.cls }}] = {{ $value | printf "%.3f" }} > 90%      

#==============================================================#
#                         Latency                              #
#==============================================================#
# etcd network peer rt p95 > 200ms for 1m
- alert: EtcdNetworkPeerRTSlow
  expr: etcd:ins:network_peer_rt_p95_5m > 0.200
  for: 1m
  labels: { level: 2, severity: INFO, category: etcd }
  annotations:
    summary: "INFO EtcdNetworkPeerRTSlow: {{ $labels.cls }} {{ $labels.ins }}"
    description: |
      etcd:ins:network_peer_rt_p95_5m[cls={{ $labels.cls }}, ins={{ $labels.ins }}] = {{ $value }} > 200ms
      http://g.pigsty/d/etcd-instance?from=now-10m&to=now&var-cls={{ $labels.cls }}      

# Etcd wal fsync rt p95 > 50ms
- alert: EtcdWalFsyncSlow
  expr: etcd:ins:wal_fsync_rt_p95_5m > 0.050
  for: 1m
  labels: { level: 2, severity: INFO, category: etcd }
  annotations:
    summary: "INFO EtcdWalFsyncSlow: {{ $labels.cls }} {{ $labels.ins }}"
    description: |
      etcd:ins:wal_fsync_rt_p95_5m[cls={{ $labels.cls }}, ins={{ $labels.ins }}] = {{ $value }} > 50ms
      http://g.pigsty/d/etcd-instance?from=now-10m&to=now&var-cls={{ $labels.cls }}      

6 - Metrics

Complete list of monitoring metrics provided by the Pigsty ETCD module, with descriptions.

The ETCD module provides 177 types of monitoring metrics.

Metric Name Type Labels Description
etcd:ins:backend_commit_rt_p99_5m Unknown cls, ins, instance, job, ip N/A
etcd:ins:disk_fsync_rt_p99_5m Unknown cls, ins, instance, job, ip N/A
etcd:ins:network_peer_rt_p99_1m Unknown cls, To, ins, instance, job, ip N/A
etcd_cluster_version gauge cls, cluster_version, ins, instance, job, ip Which version is running. 1 for ‘cluster_version’ label with current cluster version
etcd_debugging_auth_revision gauge cls, ins, instance, job, ip The current revision of auth store.
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_disk_backend_commit_rebalance_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_disk_backend_commit_spill_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_disk_backend_commit_spill_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_disk_backend_commit_spill_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_disk_backend_commit_write_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_disk_backend_commit_write_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_disk_backend_commit_write_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_lease_granted_total counter cls, ins, instance, job, ip The total number of granted leases.
etcd_debugging_lease_renewed_total counter cls, ins, instance, job, ip The number of renewed leases seen by the leader.
etcd_debugging_lease_revoked_total counter cls, ins, instance, job, ip The total number of revoked leases.
etcd_debugging_lease_ttl_total_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_lease_ttl_total_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_lease_ttl_total_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_compact_revision gauge cls, ins, instance, job, ip The revision of the last compaction in store.
etcd_debugging_mvcc_current_revision gauge cls, ins, instance, job, ip The current revision of store.
etcd_debugging_mvcc_db_compaction_keys_total counter cls, ins, instance, job, ip Total number of db keys compacted.
etcd_debugging_mvcc_db_compaction_last gauge cls, ins, instance, job, ip The unix time of the last db compaction. Resets to 0 on start.
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_events_total counter cls, ins, instance, job, ip Total number of events sent by this member.
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_mvcc_keys_total gauge cls, ins, instance, job, ip Total number of keys.
etcd_debugging_mvcc_pending_events_total gauge cls, ins, instance, job, ip Total number of pending events to be sent.
etcd_debugging_mvcc_range_total counter cls, ins, instance, job, ip Total number of ranges seen by this member.
etcd_debugging_mvcc_slow_watcher_total gauge cls, ins, instance, job, ip Total number of unsynced slow watchers.
etcd_debugging_mvcc_total_put_size_in_bytes gauge cls, ins, instance, job, ip The total size of put kv pairs seen by this member.
etcd_debugging_mvcc_watch_stream_total gauge cls, ins, instance, job, ip Total number of watch streams.
etcd_debugging_mvcc_watcher_total gauge cls, ins, instance, job, ip Total number of watchers.
etcd_debugging_server_lease_expired_total counter cls, ins, instance, job, ip The total number of expired leases.
etcd_debugging_snap_save_marshalling_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_snap_save_marshalling_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_snap_save_marshalling_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_snap_save_total_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_debugging_snap_save_total_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_debugging_snap_save_total_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_debugging_store_expires_total counter cls, ins, instance, job, ip Total number of expired keys.
etcd_debugging_store_reads_total counter cls, action, ins, instance, job, ip Total number of reads action by (get/getRecursive), local to this member.
etcd_debugging_store_watch_requests_total counter cls, ins, instance, job, ip Total number of incoming watch requests (new or reestablished).
etcd_debugging_store_watchers gauge cls, ins, instance, job, ip Count of currently active watchers.
etcd_debugging_store_writes_total counter cls, action, ins, instance, job, ip Total number of writes (e.g. set/compareAndDelete) seen by this member.
etcd_disk_backend_commit_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_disk_backend_commit_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_disk_backend_commit_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_disk_backend_defrag_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_disk_backend_defrag_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_disk_backend_defrag_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_disk_backend_snapshot_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_disk_backend_snapshot_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_disk_backend_snapshot_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_disk_defrag_inflight gauge cls, ins, instance, job, ip Whether or not defrag is active on the member. 1 means active, 0 means not.
etcd_disk_wal_fsync_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_disk_wal_fsync_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_disk_wal_fsync_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_disk_wal_write_bytes_total gauge cls, ins, instance, job, ip Total number of bytes written in WAL.
etcd_grpc_proxy_cache_hits_total gauge cls, ins, instance, job, ip Total number of cache hits
etcd_grpc_proxy_cache_keys_total gauge cls, ins, instance, job, ip Total number of keys/ranges cached
etcd_grpc_proxy_cache_misses_total gauge cls, ins, instance, job, ip Total number of cache misses
etcd_grpc_proxy_events_coalescing_total counter cls, ins, instance, job, ip Total number of events coalescing
etcd_grpc_proxy_watchers_coalescing_total gauge cls, ins, instance, job, ip Total number of current watchers coalescing
etcd_mvcc_db_open_read_transactions gauge cls, ins, instance, job, ip The number of currently open read transactions
etcd_mvcc_db_total_size_in_bytes gauge cls, ins, instance, job, ip Total size of the underlying database physically allocated in bytes.
etcd_mvcc_db_total_size_in_use_in_bytes gauge cls, ins, instance, job, ip Total size of the underlying database logically in use in bytes.
etcd_mvcc_delete_total counter cls, ins, instance, job, ip Total number of deletes seen by this member.
etcd_mvcc_hash_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_mvcc_hash_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_mvcc_hash_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_mvcc_hash_rev_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_mvcc_hash_rev_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_mvcc_hash_rev_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_mvcc_put_total counter cls, ins, instance, job, ip Total number of puts seen by this member.
etcd_mvcc_range_total counter cls, ins, instance, job, ip Total number of ranges seen by this member.
etcd_mvcc_txn_total counter cls, ins, instance, job, ip Total number of txns seen by this member.
etcd_network_active_peers gauge cls, ins, Local, instance, job, ip, Remote The current number of active peer connections.
etcd_network_client_grpc_received_bytes_total counter cls, ins, instance, job, ip The total number of bytes received from grpc clients.
etcd_network_client_grpc_sent_bytes_total counter cls, ins, instance, job, ip The total number of bytes sent to grpc clients.
etcd_network_peer_received_bytes_total counter cls, ins, instance, job, ip, From The total number of bytes received from peers.
etcd_network_peer_round_trip_time_seconds_bucket Unknown cls, To, ins, instance, job, le, ip N/A
etcd_network_peer_round_trip_time_seconds_count Unknown cls, To, ins, instance, job, ip N/A
etcd_network_peer_round_trip_time_seconds_sum Unknown cls, To, ins, instance, job, ip N/A
etcd_network_peer_sent_bytes_total counter cls, To, ins, instance, job, ip The total number of bytes sent to peers.
etcd_server_apply_duration_seconds_bucket Unknown cls, version, ins, instance, job, le, success, ip, op N/A
etcd_server_apply_duration_seconds_count Unknown cls, version, ins, instance, job, success, ip, op N/A
etcd_server_apply_duration_seconds_sum Unknown cls, version, ins, instance, job, success, ip, op N/A
etcd_server_client_requests_total counter client_api_version, cls, ins, instance, type, job, ip The total number of client requests per client version.
etcd_server_go_version gauge cls, ins, instance, job, server_go_version, ip Which Go version server is running with. 1 for ‘server_go_version’ label with current version.
etcd_server_has_leader gauge cls, ins, instance, job, ip Whether or not a leader exists. 1 is existence, 0 is not.
etcd_server_health_failures counter cls, ins, instance, job, ip The total number of failed health checks
etcd_server_health_success counter cls, ins, instance, job, ip The total number of successful health checks
etcd_server_heartbeat_send_failures_total counter cls, ins, instance, job, ip The total number of leader heartbeat send failures (likely overloaded from slow disk).
etcd_server_id gauge cls, ins, instance, job, server_id, ip Server or member ID in hexadecimal format. 1 for ‘server_id’ label with current ID.
etcd_server_is_leader gauge cls, ins, instance, job, ip Whether or not this member is a leader. 1 if is, 0 otherwise.
etcd_server_is_learner gauge cls, ins, instance, job, ip Whether or not this member is a learner. 1 if is, 0 otherwise.
etcd_server_leader_changes_seen_total counter cls, ins, instance, job, ip The number of leader changes seen.
etcd_server_learner_promote_successes counter cls, ins, instance, job, ip The total number of successful learner promotions while this member is leader.
etcd_server_proposals_applied_total gauge cls, ins, instance, job, ip The total number of consensus proposals applied.
etcd_server_proposals_committed_total gauge cls, ins, instance, job, ip The total number of consensus proposals committed.
etcd_server_proposals_failed_total counter cls, ins, instance, job, ip The total number of failed proposals seen.
etcd_server_proposals_pending gauge cls, ins, instance, job, ip The current number of pending proposals to commit.
etcd_server_quota_backend_bytes gauge cls, ins, instance, job, ip Current backend storage quota size in bytes.
etcd_server_read_indexes_failed_total counter cls, ins, instance, job, ip The total number of failed read indexes seen.
etcd_server_slow_apply_total counter cls, ins, instance, job, ip The total number of slow apply requests (likely overloaded from slow disk).
etcd_server_slow_read_indexes_total counter cls, ins, instance, job, ip The total number of pending read indexes not in sync with leader’s or timed out read index requests.
etcd_server_snapshot_apply_in_progress_total gauge cls, ins, instance, job, ip 1 if the server is applying the incoming snapshot. 0 if none.
etcd_server_version gauge cls, server_version, ins, instance, job, ip Which version is running. 1 for ‘server_version’ label with current version.
etcd_snap_db_fsync_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_snap_db_fsync_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_snap_db_fsync_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_snap_db_save_total_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_snap_db_save_total_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_snap_db_save_total_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_snap_fsync_duration_seconds_bucket Unknown cls, ins, instance, job, le, ip N/A
etcd_snap_fsync_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
etcd_snap_fsync_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
etcd_up Unknown cls, ins, instance, job, ip N/A
go_gc_duration_seconds summary cls, ins, instance, quantile, job, ip A summary of the pause duration of garbage collection cycles.
go_gc_duration_seconds_count Unknown cls, ins, instance, job, ip N/A
go_gc_duration_seconds_sum Unknown cls, ins, instance, job, ip N/A
go_goroutines gauge cls, ins, instance, job, ip Number of goroutines that currently exist.
go_info gauge cls, version, ins, instance, job, ip Information about the Go environment.
go_memstats_alloc_bytes gauge cls, ins, instance, job, ip Number of bytes allocated and still in use.
go_memstats_alloc_bytes_total counter cls, ins, instance, job, ip Total number of bytes allocated, even if freed.
go_memstats_buck_hash_sys_bytes gauge cls, ins, instance, job, ip Number of bytes used by the profiling bucket hash table.
go_memstats_frees_total counter cls, ins, instance, job, ip Total number of frees.
go_memstats_gc_cpu_fraction gauge cls, ins, instance, job, ip The fraction of this program’s available CPU time used by the GC since the program started.
go_memstats_gc_sys_bytes gauge cls, ins, instance, job, ip Number of bytes used for garbage collection system metadata.
go_memstats_heap_alloc_bytes gauge cls, ins, instance, job, ip Number of heap bytes allocated and still in use.
go_memstats_heap_idle_bytes gauge cls, ins, instance, job, ip Number of heap bytes waiting to be used.
go_memstats_heap_inuse_bytes gauge cls, ins, instance, job, ip Number of heap bytes that are in use.
go_memstats_heap_objects gauge cls, ins, instance, job, ip Number of allocated objects.
go_memstats_heap_released_bytes gauge cls, ins, instance, job, ip Number of heap bytes released to OS.
go_memstats_heap_sys_bytes gauge cls, ins, instance, job, ip Number of heap bytes obtained from system.
go_memstats_last_gc_time_seconds gauge cls, ins, instance, job, ip Number of seconds since 1970 of last garbage collection.
go_memstats_lookups_total counter cls, ins, instance, job, ip Total number of pointer lookups.
go_memstats_mallocs_total counter cls, ins, instance, job, ip Total number of mallocs.
go_memstats_mcache_inuse_bytes gauge cls, ins, instance, job, ip Number of bytes in use by mcache structures.
go_memstats_mcache_sys_bytes gauge cls, ins, instance, job, ip Number of bytes used for mcache structures obtained from system.
go_memstats_mspan_inuse_bytes gauge cls, ins, instance, job, ip Number of bytes in use by mspan structures.
go_memstats_mspan_sys_bytes gauge cls, ins, instance, job, ip Number of bytes used for mspan structures obtained from system.
go_memstats_next_gc_bytes gauge cls, ins, instance, job, ip Number of heap bytes when next garbage collection will take place.
go_memstats_other_sys_bytes gauge cls, ins, instance, job, ip Number of bytes used for other system allocations.
go_memstats_stack_inuse_bytes gauge cls, ins, instance, job, ip Number of bytes in use by the stack allocator.
go_memstats_stack_sys_bytes gauge cls, ins, instance, job, ip Number of bytes obtained from system for stack allocator.
go_memstats_sys_bytes gauge cls, ins, instance, job, ip Number of bytes obtained from system.
go_threads gauge cls, ins, instance, job, ip Number of OS threads created.
grpc_server_handled_total counter cls, ins, instance, grpc_code, job, grpc_method, grpc_type, ip, grpc_service Total number of RPCs completed on the server, regardless of success or failure.
grpc_server_msg_received_total counter cls, ins, instance, job, grpc_type, grpc_method, ip, grpc_service Total number of RPC stream messages received on the server.
grpc_server_msg_sent_total counter cls, ins, instance, job, grpc_type, grpc_method, ip, grpc_service Total number of gRPC stream messages sent by the server.
grpc_server_started_total counter cls, ins, instance, job, grpc_type, grpc_method, ip, grpc_service Total number of RPCs started on the server.
os_fd_limit gauge cls, ins, instance, job, ip The file descriptor limit.
os_fd_used gauge cls, ins, instance, job, ip The number of used file descriptors.
process_cpu_seconds_total counter cls, ins, instance, job, ip Total user and system CPU time spent in seconds.
process_max_fds gauge cls, ins, instance, job, ip Maximum number of open file descriptors.
process_open_fds gauge cls, ins, instance, job, ip Number of open file descriptors.
process_resident_memory_bytes gauge cls, ins, instance, job, ip Resident memory size in bytes.
process_start_time_seconds gauge cls, ins, instance, job, ip Start time of the process since unix epoch in seconds.
process_virtual_memory_bytes gauge cls, ins, instance, job, ip Virtual memory size in bytes.
process_virtual_memory_max_bytes gauge cls, ins, instance, job, ip Maximum amount of virtual memory available in bytes.
promhttp_metric_handler_requests_in_flight gauge cls, ins, instance, job, ip Current number of scrapes being served.
promhttp_metric_handler_requests_total counter cls, ins, instance, job, ip, code Total number of scrapes by HTTP status code.
scrape_duration_seconds Unknown cls, ins, instance, job, ip N/A
scrape_samples_post_metric_relabeling Unknown cls, ins, instance, job, ip N/A
scrape_samples_scraped Unknown cls, ins, instance, job, ip N/A
scrape_series_added Unknown cls, ins, instance, job, ip N/A
up Unknown cls, ins, instance, job, ip N/A

7 - FAQ

Frequently asked questions about the Pigsty etcd module.

What does the etcd cluster do?

etcd is a distributed, reliable key-value store for the most critical data of a system. Pigsty uses etcd as the DCS (Distributed Configuration Store) service for Patroni, which stores the high-availability state of PostgreSQL clusters.

Through etcd, Patroni implements cluster failure detection, automatic failover, switchover, and cluster configuration management.

etcd is essential to PostgreSQL high availability, and the availability and fault tolerance of etcd itself is ensured by running multiple distributed nodes.


How large should an etcd cluster be?

If half or more of the etcd members are unavailable, the etcd cluster becomes unavailable and refuses to serve.

For example: a 3-node etcd cluster tolerates at most one node failure, with the other two nodes still working, while a 5-node etcd cluster tolerates two node failures.
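The arithmetic behind this: an n-member cluster needs a quorum of floor(n/2) + 1 votes, so it tolerates n - quorum failures:

n = 1  ->  quorum = 1, tolerates 0 failures
n = 3  ->  quorum = 2, tolerates 1 failure
n = 5  ->  quorum = 3, tolerates 2 failures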

Note that learner instances in an etcd cluster do not count toward the member total, so in a 3-node etcd cluster with one learner, the effective member count is 2, and no node failure can be tolerated.

We recommend an odd number of etcd instances; for production, use a 3-node or 5-node etcd deployment to ensure sufficient reliability.


What is the impact if the etcd cluster becomes unavailable?

If the etcd cluster is unavailable, the PostgreSQL control plane is affected, but not the data plane: existing PostgreSQL clusters keep running, while management operations through Patroni stop working.

During an etcd outage, PostgreSQL high availability cannot perform automatic failover, and you cannot use patronictl to manage PostgreSQL clusters (changing configuration, manual failover, and so on). Management commands issued via Ansible are unaffected by etcd failure: you can still create databases, create users, and refresh HBA and Service configs, since these operate directly on the PostgreSQL cluster.

Note that the behavior above only applies to newer Patroni versions (>= 3.0, i.e. Pigsty >= 2.0). With older Patroni (< 3.0, Pigsty 1.x), an etcd/consul failure has a severe global impact: all PostgreSQL clusters are demoted, primaries step down to replicas and reject writes, amplifying the etcd outage into a global PostgreSQL outage. Since Patroni 3.0 introduced the DCS Failsafe feature, this situation has improved significantly.


What data is stored in the etcd cluster?

In Pigsty, etcd is used only for PostgreSQL high availability; it does not store any other configuration or state data.

The PG HA agent Patroni generates and manages the data in etcd automatically, and rebuilds it if it is lost from etcd.

Therefore, by default, etcd in Pigsty can be treated as a "stateless service" that can be destroyed and rebuilt, which makes maintenance much easier.

If you use etcd for other purposes, such as Kubernetes metadata storage or storing your own data, you need to back up the etcd data yourself and restore it after the etcd cluster recovers.


How to recover from etcd failures?

Since etcd in Pigsty is used only for PostgreSQL high availability and is essentially a destroyable, rebuildable "stateless service", you can stop the bleeding quickly with a "restart" or "reset" in case of failure.

To restart the etcd cluster, use the following Ansible command:

./etcd.yml -t etcd_launch

To reset the etcd cluster, just run the playbook to redeploy it in place, wiping and overwriting everything:

./etcd.yml

If you stored other data in etcd yourself, you will usually need to back it up and restore it after the etcd cluster recovers.
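A minimal sketch of such a backup with stock etcd tooling (paths are illustrative; etcdutl ships with etcd v3.5+, older releases use the deprecated etcdctl snapshot restore):

etcdctl snapshot save /tmp/etcd-backup.db                               # take a point-in-time snapshot
etcdutl snapshot restore /tmp/etcd-backup.db --data-dir /data/etcd.new  # rebuild a data dir from the snapshot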


What should I watch out for when maintaining etcd?

The short version: don't let writes fill up etcd.

etcd has a default database size quota of 2GB. If your etcd database exceeds this limit, etcd rejects write requests, which can break the PostgreSQL HA mechanisms that depend on etcd. Meanwhile, etcd's data model creates a new version for every write, so a frequently written etcd cluster, even with just a handful of keys, can keep growing until it hits the quota and fails.

You can avoid this through auto compaction, manual compaction, defragmentation, and raising the quota; see the etcd maintenance guide in the official docs for details.

Pigsty enables etcd auto compaction by default since v2.6, so you usually don't need to worry about filling up etcd. For versions before v2.6, we strongly recommend enabling etcd auto compaction in production.
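For reference, manual compaction and defragmentation use standard etcdctl commands; a sketch (extracting the current revision here assumes jq is available):

rev=$(etcdctl endpoint status -w json | jq -r '.[0].Status.header.revision')
etcdctl compact $rev     # discard history older than this revision
etcdctl defrag           # reclaim the freed space on this member
etcdctl alarm disarm     # clear a NOSPACE alarm after cleanup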


How to enable etcd auto compaction?

If you are using an earlier version of Pigsty (v2.0 - v2.5), we strongly recommend enabling etcd auto compaction in production with the following steps, to avoid etcd outages caused by a full capacity quota.

In the Pigsty source directory, edit the etcd config template roles/etcd/templates/etcd.conf.j2 and add the following three options:

auto-compaction-mode: periodic
auto-compaction-retention: "24h"
quota-backend-bytes: 17179869184

Then put all related PostgreSQL clusters into maintenance mode, and redeploy the etcd cluster with ./etcd.yml.

This raises etcd's default capacity quota from 2 GiB to 16 GiB and retains only the last 24 hours of write history, preventing unbounded database growth.


Where is the PostgreSQL HA data stored in etcd?

By default, Patroni prefixes all metadata keys with the pg_namespace setting (default /pg), followed by the PostgreSQL cluster name. For example, a PG cluster named pg-meta stores its metadata under /pg/pg-meta:

etcdctl get /pg/pg-meta --prefix

A sample of the data looks like this:

/pg/pg-meta/config
{"ttl":30,"loop_wait":10,"retry_timeout":10,"primary_start_timeout":10,"maximum_lag_on_failover":1048576,"maximum_lag_on_syncnode":-1,"primary_stop_timeout":30,"synchronous_mode":false,"synchronous_mode_strict":false,"failsafe_mode":true,"pg_version":16,"pg_cluster":"pg-meta","pg_shard":"pg-meta","pg_group":0,"postgresql":{"use_slots":true,"use_pg_rewind":true,"remove_data_directory_on_rewind_failure":true,"parameters":{"max_connections":100,"superuser_reserved_connections":10,"max_locks_per_transaction":200,"max_prepared_transactions":0,"track_commit_timestamp":"on","wal_level":"logical","wal_log_hints":"on","max_worker_processes":16,"max_wal_senders":50,"max_replication_slots":50,"password_encryption":"scram-sha-256","ssl":"on","ssl_cert_file":"/pg/cert/server.crt","ssl_key_file":"/pg/cert/server.key","ssl_ca_file":"/pg/cert/ca.crt","shared_buffers":"7969MB","maintenance_work_mem":"1993MB","work_mem":"79MB","max_parallel_workers":8,"max_parallel_maintenance_workers":2,"max_parallel_workers_per_gather":0,"hash_mem_multiplier":8.0,"huge_pages":"try","temp_file_limit":"7GB","vacuum_cost_delay":"20ms","vacuum_cost_limit":2000,"bgwriter_delay":"10ms","bgwriter_lru_maxpages":800,"bgwriter_lru_multiplier":5.0,"min_wal_size":"7GB","max_wal_size":"28GB","max_slot_wal_keep_size":"42GB","wal_buffers":"16MB","wal_writer_delay":"20ms","wal_writer_flush_after":"1MB","commit_delay":20,"commit_siblings":10,"checkpoint_timeout":"15min","checkpoint_completion_target":0.8,"archive_mode":"on","archive_timeout":300,"archive_command":"pgbackrest --stanza=pg-meta archive-push %p","max_standby_archive_delay":"10min","max_standby_streaming_delay":"3min","wal_receiver_status_interval":"1s","hot_standby_feedback":"on","wal_receiver_timeout":"60s","max_logical_replication_workers":8,"max_sync_workers_per_subscription":6,"random_page_cost":1.1,"effective_io_concurrency":1000,"effective_cache_size":"23907MB","default_statistics_target":200,"log_destination":"csvlog","logging_collector":"on","log_directory":"/pg/log/postgres","log_filename":"postgresql-%Y-%m-%d.log","log_checkpoints":"on","log_lock_waits":"on","log_replication_commands":"on","log_statement":"ddl","log_min_duration_statement":100,"track_io_timing":"on","track_functions":"all","track_activity_query_size":8192,"log_autovacuum_min_duration":"1s","autovacuum_max_workers":2,"autovacuum_naptime":"1min","autovacuum_vacuum_cost_delay":-1,"autovacuum_vacuum_cost_limit":-1,"autovacuum_freeze_max_age":1000000000,"deadlock_timeout":"50ms","idle_in_transaction_session_timeout":"10min","shared_preload_libraries":"timescaledb, pg_stat_statements, auto_explain","auto_explain.log_min_duration":"1s","auto_explain.log_analyze":"on","auto_explain.log_verbose":"on","auto_explain.log_timing":"on","auto_explain.log_nested_statements":true,"pg_stat_statements.max":5000,"pg_stat_statements.track":"all","pg_stat_statements.track_utility":"off","pg_stat_statements.track_planning":"off","timescaledb.telemetry_level":"off","timescaledb.max_background_workers":8,"citus.node_conninfo":"sslm
ode=prefer"}}}
/pg/pg-meta/failsafe
{"pg-meta-2":"http://10.10.10.11:8008/patroni","pg-meta-1":"http://10.10.10.10:8008/patroni"}
/pg/pg-meta/initialize
7418384210787662172
/pg/pg-meta/leader
pg-meta-1
/pg/pg-meta/members/pg-meta-1
{"conn_url":"postgres://10.10.10.10:5432/postgres","api_url":"http://10.10.10.10:8008/patroni","state":"running","role":"primary","version":"4.0.1","tags":{"clonefrom":true,"version":"16","spec":"8C.32G.125G","conf":"tiny.yml"},"xlog_location":184549376,"timeline":1}
/pg/pg-meta/members/pg-meta-2
{"conn_url":"postgres://10.10.10.11:5432/postgres","api_url":"http://10.10.10.11:8008/patroni","state":"running","role":"replica","version":"4.0.1","tags":{"clonefrom":true,"version":"16","spec":"8C.32G.125G","conf":"tiny.yml"},"xlog_location":184549376,"replication_state":"streaming","timeline":1}
/pg/pg-meta/status
{"optime":184549376,"slots":{"pg_meta_2":184549376,"pg_meta_1":184549376},"retain_slots":["pg_meta_1","pg_meta_2"]}

How to use an external, existing etcd cluster?

The inventory hard-codes the group name etcd, and the members of this group are used as DCS servers for PGSQL. You can initialize them with etcd.yml, or just treat the group as an already existing external etcd cluster.

To use an existing external etcd cluster, define it as usual and skip running the etcd.yml playbook, since the cluster already exists and needs no deployment.

However, you must ensure that the certificates of the existing etcd cluster are issued by the same CA that Pigsty uses. Otherwise, clients cannot access the external etcd cluster with certificates issued by Pigsty's self-signed CA.
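One way to sanity-check the CA chain before pointing Pigsty at an external cluster (a sketch; the server certificate path on the external member is an assumption):

openssl verify -CAfile /etc/pki/ca.crt /path/to/external/etcd/server.crt  # should print: OK
etcdctl endpoint health   # with the ETCDCTL_* variables pointed at the external endpoints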


How to add a new member to an existing etcd cluster?

For the detailed procedure, see Add a member to the etcd cluster.

etcdctl member add <etcd-?> --learner=true --peer-urls=https://<new_ins_ip>:2380 # announce the new member on the admin node
./etcd.yml -l <new_ins_ip> -e etcd_init=existing                                 # actually initialize the new etcd member
etcdctl member promote <new_ins_server_id>                                       # promote the new member to a voting member on the admin node

How to remove a member from an existing etcd cluster?

For the detailed procedure, see Remove a member from the etcd cluster.

etcdctl member remove <etcd_server_id>   # kick the member out of the cluster on the admin node
./etcd.yml -l <ins_ip> -t etcd_purge     # actually purge and decommission the etcd instance