
Debezium connector for MySQL

MySQL has a binary log (binlog) that records all operations in the order in which they are committed to the database. This includes changes to table schemas as well as changes to the data in tables. MySQL uses the binlog for replication and recovery.

The Debezium MySQL connector reads the binlog, produces change events for row-level INSERT, UPDATE, and DELETE operations, and emits the change events to Kafka topics. Client applications read those Kafka topics.

As MySQL is typically set up to purge binlogs after a specified period of time, the MySQL connector performs an initial consistent snapshot of each of your databases. The MySQL connector reads the binlog from the point at which the snapshot was made.

For information about the MySQL Database versions that are compatible with this connector, see the Debezium release overview.

How the connector works

An overview of the MySQL topologies that the connector supports is useful for planning your application. To optimally configure and run a Debezium MySQL connector, it is helpful to understand how the connector tracks the structure of tables, exposes schema changes, performs snapshots, and determines Kafka topic names.

The Debezium MySQL connector has yet to be tested with MariaDB, but multiple reports from the community indicate successful usage of the connector with this database. Official support for MariaDB is planned for a future Debezium version.

Supported MySQL topologies

The Debezium MySQL connector supports the following MySQL topologies:

  • Standalone

    When a single MySQL server is used, the server must have the binlog enabled (and optionally GTIDs enabled) so the Debezium MySQL connector can monitor the server. This is often acceptable, since the binary log can also be used as an incremental backup. In this case, the MySQL connector always connects to and follows this standalone MySQL server instance. (A minimal sketch of the required server settings appears after this list.)

  • Primary and replica

    The Debezium MySQL connector can follow one of the primary servers or one of the replicas (if that replica has its binlog enabled), but the connector sees changes in only the cluster that is visible to that server. Generally, this is not a problem except for the multi-primary topologies. The connector records its position in the server's binlog, which is different on each server in the cluster. Therefore, the connector must follow just one MySQL server instance. If that server fails, that server must be restarted or recovered before the connector can continue.

  • High availability clusters

    A variety of high availability solutions exist for MySQL, and they make it significantly easier to tolerate and almost immediately recover from problems and failures. Most HA MySQL clusters use GTIDs so that replicas are able to keep track of all changes on any of the primary servers.

  • Multi-primary

    Network Database (NDB) cluster replication uses one or more MySQL replica nodes that each replicate from multiple primary servers. This is a powerful way to aggregate the replication of multiple MySQL clusters. This topology requires the use of GTIDs. A Debezium MySQL connector can use these multi-primary MySQL replicas as sources, and can fail over to different multi-primary MySQL replicas as long as the new replica is caught up to the old replica. That is, the new replica has all transactions that were seen on the first replica. This works even if the connector is using only a subset of databases and/or tables, as the connector can be configured to include or exclude specific GTID sources when attempting to reconnect to a new multi-primary MySQL replica and find the correct position in the binlog.

  • Hosted

    There is support for the Debezium MySQL connector to use hosted options such as Amazon RDS and Amazon Aurora. Because these hosted options do not allow a global read lock, table-level locks are used to create the consistent snapshot.
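As noted for the standalone topology, the server must have row-based binary logging enabled before the connector can capture changes. The following my.cnf fragment is only a minimal sketch, not a complete configuration: the server-id, retention, and GTID settings are illustrative placeholders that you would adapt to your environment.

[mysqld]
server-id                = 223344     # any unique, non-zero identifier (placeholder)
log_bin                  = mysql-bin  # enable the binary log
binlog_format            = ROW        # Debezium requires row-based logging
binlog_row_image         = FULL       # record complete before/after row images
expire_logs_days         = 10         # binlog retention; tune for your environment
# Optional, but needed for GTID-based topologies and read-only incremental snapshots:
gtid_mode                = ON
enforce_gtid_consistency = ON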

Schema history topic

When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot just use the current schema because the connector might be processing events that are relatively old that were recorded before the tables’ schemas were changed.

To ensure correct processing of changes that occur after a schema change, MySQL includes in the binlog not only the row-level changes to the data, but also the DDL statements that are applied to the database. As the connector reads the binlog and comes across these DDL statements, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event. In a separate database history Kafka topic, the connector records all DDL statements along with the position in the binlog where each DDL statement appeared.

When the connector restarts after having crashed or been stopped gracefully, the connector starts reading the binlog from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database history Kafka topic and parsing all DDL statements up to the point in the binlog where the connector is starting.

This database history topic is for connector use only. The connector can optionally emit schema change events to a different topic that is intended for consumer applications.

When the MySQL connector captures changes in a table to which a schema change tool such as gh-ost or pt-online-schema-change is applied, there are helper tables created during the migration process. The connector needs to be configured to capture changes to these helper tables. If consumers do not need the records generated for helper tables, then a single message transform can be applied to filter them out.

See default names for topics that receive Debezium event records.

Schema change topic

You can configure a Debezium MySQL connector to produce schema change events that describe schema changes that are applied to captured tables in the database. The connector writes schema change events to a Kafka topic named *<serverName>*, where *serverName* is the logical server name that is specified in the database.server.name connector configuration property. Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
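For orientation, the following connector registration fragment shows where that logical server name is set. The connector and host names are illustrative only; include.schema.changes, which defaults to true, controls whether the connector emits schema change events to the *<serverName>* topic.

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.server.name": "dbserver1",
    "include.schema.changes": "true"
  }
}

With this sketch, schema change events are written to the dbserver1 topic, while row-level change events go to topics such as dbserver1.inventory.customers.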

The payload of a schema change event message includes the following elements:

  • ddl

    Provides the SQL CREATE, ALTER, or DROP statement that results in the schema change.

  • databaseName

    The name of the database to which the DDL statements are applied. The value of databaseName serves as the message key.

  • pos

    The position in the binlog where the statements appear.

  • tableChanges

    A structured representation of the entire table schema after the schema change. The tableChanges field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.

For a table that is in capture mode, the connector not only stores the history of schema changes in the schema change topic, but also in an internal database history topic. The internal database history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
Never partition the database history topic. For the database history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it. To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods:

  • If you create the database history topic manually, specify a partition count of 1.
  • If you use the Apache Kafka broker to create the database history topic automatically, set the value of the Kafka num.partitions configuration option to 1.
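If you create the database history topic manually, a command along the following lines keeps it to a single partition. The topic name and broker address are placeholders; the name must match the connector's database.history.kafka.topic setting.

bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic dbhistory.inventory \
  --partitions 1 \
  --replication-factor 1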
The format of the messages that a connector emits to its schema change topic is in an incubating state and is subject to change without notice.

Example: Message emitted to the MySQL connector schema change topic

The following example shows a typical schema change message in JSON format. The message contains a logical representation of the table schema.

{
  "schema": {
  ...
  },
  "payload": {
        "source": {  // (1)
        "version": "1.9.8.Final",
        "connector": "mysql",
        "name": "dbserver1",
        "ts_ms": 0,
        "snapshot": "false",
        "db": "inventory",
        "sequence": null,
        "table": "customers",
        "server_id": 0,
        "gtid": null,
        "file": "mysql-bin.000003",
        "pos": 219,
        "row": 0,
        "thread": null,
        "query": null
    },
    "databaseName": "inventory", // (2)
    "schemaName": null,
    "ddl": "ALTER TABLE customers ADD COLUMN middle_name VARCHAR(2000)", // (3)
    "tableChanges": [ // (4)
        {
        "type": "ALTER", // (5)
        "id": "\"inventory\".\"customers\"",  // (6)
        "table": { // (7)
            "defaultCharsetName": "latin1",
            "primaryKeyColumnNames": [  // (8)
                "id"
            ],
            "columns": [ // (9)
                {
                "name": "id",
                "jdbcType": 4,
                "nativeType": null,
                "typeName": "INT",
                "typeExpression": "INT",
                "charsetName": null,
                "length": 11,
                "scale": null,
                "position": 1,
                "optional": false,
                "autoIncremented": true,
                "generated": true
            },
            {
                "name": "first_name",
                "jdbcType": 12,
                "nativeType": null,
                "typeName": "VARCHAR",
                "typeExpression": "VARCHAR",
                "charsetName": "latin1",
                "length": 255,
                "scale": null,
                "position": 2,
                "optional": false,
                "autoIncremented": false,
                "generated": false
            },
            {
                "name": "last_name",
                "jdbcType": 12,
                "nativeType": null,
                "typeName": "VARCHAR",
                "typeExpression": "VARCHAR",
                "charsetName": "latin1",
                "length": 255,
                "scale": null,
                "position": 3,
                "optional": false,
                "autoIncremented": false,
                "generated": false
            },
            {
                "name": "email",
                "jdbcType": 12,
                "nativeType": null,
                "typeName": "VARCHAR",
                "typeExpression": "VARCHAR",
                "charsetName": "latin1",
                "length": 255,
                "scale": null,
                "position": 4,
                "optional": false,
                "autoIncremented": false,
                "generated": false
            },
            {
                "name": "middle_name",
                "jdbcType": 12,
                "nativeType": null,
                "typeName": "VARCHAR",
                "typeExpression": "VARCHAR",
                "charsetName": "latin1",
                "length": 2000,
                "scale": null,
                "position": 5,
                "optional": true,
                "autoIncremented": false,
                "generated": false
            }
          ]
        }
      }
    ]
  }
}
1. source: The source field is structured exactly as standard data change events that the connector writes to table-specific topics. This field is useful to correlate events on different topics.
2. databaseName, schemaName: Identifies the database and the schema that contains the change. The value of the databaseName field is used as the message key for the record.
3. ddl: This field contains the DDL that is responsible for the schema change. The ddl field can contain multiple DDL statements. Each statement applies to the database in the databaseName field. Multiple DDL statements appear in the order in which they were applied to the database. Clients can submit multiple DDL statements that apply to multiple databases. If MySQL applies them atomically, the connector takes the DDL statements in order, groups them by database, and creates a schema change event for each group. If MySQL applies them individually, the connector creates a separate schema change event for each statement.
4. tableChanges: An array of one or more items that contain the schema changes generated by a DDL command.
5. type: Describes the kind of change. The value is one of the following: CREATE (table created), ALTER (table modified), DROP (table deleted).
6. id: Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this identifier is a concatenation of *<old>*,*<new>* table names.
7. table: Represents table metadata after the applied change.
8. primaryKeyColumnNames: List of columns that compose the table's primary key.
9. columns: Metadata for each column in the changed table.

See also: schema history topic.

Snapshots

When a Debezium MySQL connector is first started, it performs an initial consistent snapshot of your database. The following flow describes how the connector creates this snapshot. This flow is for the default snapshot mode, which is initial. For information about other snapshot modes, see the MySQL connector snapshot.mode configuration property.
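The snapshot mode is set through the connector configuration. The following fragment simply makes the default explicit; other modes, such as when_needed, never, schema_only, and schema_only_recovery, are described in the connector property reference.

"snapshot.mode": "initial"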

1. Grabs a global read lock that blocks writes by other database clients. The snapshot itself does not prevent other clients from applying DDL that might interfere with the connector's attempt to read the binlog position and table schemas. The connector keeps the global read lock while it reads the binlog position, and releases the lock as described in a later step.
2. Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot.
3. Reads the current binlog position.
4. Reads the schema of the databases and tables for which the connector is configured to capture changes.
5. Releases the global read lock. Other database clients can now write to the database.
6. If applicable, writes the DDL changes to the schema change topic, including all necessary DROP… and CREATE… DDL statements.
7. Scans the database tables. For each row, the connector emits CREATE events to the relevant table-specific Kafka topics.
8. Commits the transaction.
9. Records the completed snapshot in the connector offsets.
  • Connector restarts

    If the connector fails, stops, or is rebalanced while performing the initial snapshot, then after the connector restarts, it performs a new snapshot. After that initial snapshot is completed, the Debezium MySQL connector restarts from the same position in the binlog so it does not miss any updates. If the connector stops for long enough, MySQL could purge old binlog files and the connector's position would be lost. If the position is lost, the connector reverts to the initial snapshot for its starting position. For more tips on troubleshooting the Debezium MySQL connector, see behavior when things go wrong.

  • Global read locks not allowed

    Some environments do not allow global read locks. If the Debezium MySQL connector detects that global read locks are not permitted, the connector uses table-level locks instead and performs a snapshot with this method. This requires the database user for the Debezium connector to have LOCK TABLES privileges.

    Table 3. Workflow for performing an initial snapshot with table-level locks

    1. Obtains table-level locks.
    2. Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot.
    3. Reads and filters the names of the databases and tables.
    4. Reads the current binlog position.
    5. Reads the schema of the databases and tables for which the connector is configured to capture changes.
    6. If applicable, writes the DDL changes to the schema change topic, including all necessary DROP… and CREATE… DDL statements.
    7. Scans the database tables. For each row, the connector emits CREATE events to the relevant table-specific Kafka topics.
    8. Commits the transaction.
    9. Releases the table-level locks.
    10. Records the completed snapshot in the connector offsets.

Ad hoc snapshots

By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.

However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. The following changes in a database might be cause for performing an ad hoc snapshot:

  • The connector configuration is modified to capture a different set of tables.
  • Kafka topics are deleted and must be rebuilt.
  • Data corruption occurs due to a configuration error or some other problem.

You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
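Signaling assumes that a signaling table exists on the source database and that the connector is configured to watch it through the signal.data.collection property. The following statement is a minimal sketch; the schema and table names are examples, and the column sizes follow the structure suggested by the Debezium documentation.

CREATE TABLE myschema.debezium_signal (
  id   VARCHAR(42) PRIMARY KEY,   -- arbitrary identifier assigned to the signal request
  type VARCHAR(32) NOT NULL,      -- signal type, for example execute-snapshot
  data VARCHAR(2048) NULL         -- JSON payload that carries the signal parameters
);

The connector configuration would then include "signal.data.collection": "myschema.debezium_signal".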

When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.

Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database.

You specify the tables to capture by sending an execute-snapshot message to the signaling table. Set the type of the execute-snapshot signal to incremental, and provide the names of the tables to include in the snapshot, as described in the following table:

  • type (default: incremental): Specifies the type of snapshot that you want to run. Setting the type is optional. Currently, you can request only incremental snapshots.
  • data-collections (no default): An array that contains the fully-qualified names of the tables to be snapshotted. The format of the names is the same as for the signal.data.collection configuration option.

Triggering an ad hoc snapshot

You initiate an ad hoc snapshot by adding an entry with the execute-snapshot signal type to the signaling table. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.

Currently, the execute-snapshot action type triggers incremental snapshots only. For more information, see Incremental snapshots.

Incremental snapshots

To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector. Incremental snapshots are based on the DDD-3 design document.

In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1 KB.
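The chunk size is tunable through the incremental.snapshot.chunk.size connector property. The following fragment is a sketch that raises the value above the default; the number shown is purely illustrative.

"incremental.snapshot.chunk.size": "4096"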

As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:

  • You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
  • If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
  • You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its table.include.list property.

Incremental snapshot process

When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ event. That event represents the value of the row when the snapshot for the chunk began.

As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT, UPDATE, or DELETE operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.

How Debezium resolves collisions among records with the same primary key

In some cases, the UPDATE or DELETE events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ event for that row. When the snapshot eventually emits the corresponding READ event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.

Snapshot window

To assist in resolving collisions between late-arriving READ events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.

For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE or DELETE operations for each change.

As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ events for which no related transaction log events exist. Debezium emits these remaining READ events to the table's Kafka topic.

The connector repeats the process for each snapshot chunk.

Triggering an incremental snapshot

Currently, the only way to initiate an incremental snapshot is to send an ad hoc snapshot signal to the signaling table on the source database. You submit a signal to the signaling table as SQL INSERT queries.

After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.

The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the kind of snapshot operation. Currently, the only valid option for snapshot operations is the default value, incremental.

To specify the tables to include in the snapshot, provide a data-collections array that lists the tables or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}

The data-collections array for an incremental snapshot signal has no default value. If the data-collections array is empty, Debezium detects that no action is required and does not perform a snapshot.

If the name of a table that you want to include in a snapshot contains a dot (.) in the name of the database, schema, or table, to add the table to the data-collections array, you must escape each part of the name in double quotes. For example, to include a table that exists in the **public** schema and that has the name **My.Table**, use the following format: **"public"."My.Table"**.
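For example, within the raw JSON of the signal's data field, such an escaped name might appear as follows (the schema and table names are illustrative):

{"data-collections": ["\"public\".\"My.Table\""]}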

Prerequisites

Procedure

  1. Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:

    INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<tableName>","<tableName>"],"type":"<snapshotType>","additional-condition":"<additional-condition>"}');
    

    For example,

    INSERT INTO myschema.debezium_signal (id, type, data) 
    values ('ad-hoc-1',   
        'execute-snapshot',  
        '{"data-collections": ["schema1.table1", "schema2.table2"], 
        "type":"incremental"}, 
        "additional-condition":"color=blue"}'); 
    

    The values of the id, type, and data parameters in the command correspond to the fields of the signaling table.

    The following table describes the parameters in the example:

    1. myschema.debezium_signal: Specifies the fully-qualified name of the signaling table on the source database.
    2. ad-hoc-1: The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own id string as a watermarking signal.
    3. execute-snapshot: The type parameter specifies the operation that the signal is intended to trigger.
    4. data-collections: A required component of the data field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property.
    5. incremental: An optional type component of the data field of a signal that specifies the kind of snapshot operation to run. Currently, the only valid option is the default value, incremental. If you do not specify a value, the connector runs an incremental snapshot.
    6. additional-condition: An optional string, which specifies a condition based on the column(s) of the table(s), to capture a subset of the contents of the tables. For more information about the additional-condition parameter, see Ad hoc incremental snapshots with additional-condition.

Ad hoc incremental snapshots with additional-condition

If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-condition parameter to the snapshot signal.

The SQL query for a typical snapshot takes the following form:

SELECT * FROM <tableName> ....

By adding an additional-condition parameter, you append a WHERE condition to the SQL query, as in the following example:

SELECT * FROM <tableName> WHERE <additional-condition> ....

The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:

INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<tableName>","<tableName>"],"type":"<snapshotType>","additional-condition":"<additional-condition>"}');

For example, suppose you have a products table that contains the following columns:

  • id (primary key)
  • color
  • quantity

If you want an incremental snapshot of the products table to include only the data items where color=blue, you can use the following SQL statement to trigger the snapshot:

INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue"}');

The additional-condition parameter also enables you to pass conditions that are based on more than one column. For example, using the products table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue and quantity>10:

INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-condition":"color=blue AND quantity>10"}');

The following example shows the JSON for an incremental snapshot event that is captured by a connector.

Example: Incremental snapshot event message

{
    "before":null,
    "after": {
        "pk":"1",
        "value":"New data"
    },
    "source": {
        ...
        "snapshot":"incremental" 
    },
    "op":"r", 
    "ts_ms":"1620393591654",
    "transaction":null
}
1. snapshot: Specifies the type of snapshot operation to run. Currently, the only valid option is the default value, incremental. Specifying a type value in the SQL query that you submit to the signaling table is optional. If you do not specify a value, the connector runs an incremental snapshot.
2. op: Specifies the event type. The value for snapshot events is r, signifying a READ operation.

Stopping an incremental snapshot

You can also stop an incremental snapshot by sending a signal to the table on the source database. You submit a stop snapshot signal to the table by sending a SQL INSERT query. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.

The query that you submit specifies the snapshot operation of incremental, and, optionally, the tables of the current running snapshot to be removed.

Prerequisites

Procedure

  1. Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:

    INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<tableName>","<tableName>"],"type":"incremental"}');
    

    For example,

    INSERT INTO myschema.debezium_signal (id, type, data) 
    values ('ad-hoc-1',   
        'stop-snapshot',  
        '{"data-collections": ["schema1.table1", "schema2.table2"], 
        "type":"incremental"}'); 
    

    The values of the id, type, and data parameters in the signal command correspond to the fields of the signaling table.

    The following table describes the parameters in the example:

    1. myschema.debezium_signal: Specifies the fully-qualified name of the signaling table on the source database.
    2. ad-hoc-1: The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string.
    3. stop-snapshot: The type parameter specifies the operation that the signal is intended to trigger.
    4. data-collections: An optional component of the data field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions which match tables by their fully-qualified names, using the same format as you use to specify the name of the connector's signaling table in the signal.data.collection configuration property. If this component of the data field is omitted, the signal stops the entire incremental snapshot that is in progress.
    5. incremental: A required component of the data field of a signal that specifies the kind of snapshot operation that is to be stopped. Currently, the only valid option is incremental. If you do not specify a type value, the signal fails to stop the incremental snapshot.

Read-only incremental snapshots

The MySQL connector allows for running incremental snapshots with a read-only connection to the database. To run an incremental snapshot with read-only access, the connector uses the executed global transaction IDs (GTID) set as high and low watermarks. The state of a chunk’s window is updated by comparing the GTIDs of binary log (binlog) events or the server’s heartbeats against low and high watermarks.

To switch to a read-only implementation, set the value of the read.only property to true.
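A minimal configuration fragment for this mode might look like the following sketch. The Kafka topic and broker values are placeholders; the signaling Kafka topic itself is described in the next section.

"read.only": "true",
"signal.kafka.topic": "dbz-signals",
"signal.kafka.bootstrap.servers": "kafka:9092"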

Prerequisites

  • Enable MySQL GTIDs.
  • If the connector reads from a multi-threaded replica (that is, a replica for which the value of replica_parallel_workers is greater than 0) you must set one of the following options:
    • replica_preserve_commit_order=ON
    • slave_preserve_commit_order=ON
Ad hoc read-only incremental snapshots

When the MySQL connection is read-only, the signaling table mechanism can also run a snapshot by sending a message to the Kafka topic that is specified in the signal.kafka.topic property.

The key of the Kafka message must match the value of the database.server.name connector configuration option.

The value is a JSON object with type and data fields.

The signal type is execute-snapshot and the data field must have the following fields:

  • type (default: incremental): The type of the snapshot to be executed. Currently only incremental is supported. See the next section for more details.
  • data-collections (no default): An array of fully-qualified names of the tables to be snapshotted. The format of the names is the same as for the signal.data.collection configuration option.

An example of the execute-snapshot Kafka message:

Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`

Operation type of snapshot events

The MySQL connector emits snapshot events as READ operations ("op" : "r"). If you prefer that the connector emits snapshot events as CREATE (c) events, configure the Debezium ReadToInsertEvent single message transform (SMT) to modify the event type.

The following example shows how to configure the SMT:

Example: Using the ReadToInsertEvent SMT to change the type of snapshot events

transforms=snapshotasinsert,...
transforms.snapshotasinsert.type=io.debezium.connector.mysql.transforms.ReadToInsertEvent

Topic names

By default, the MySQL connector writes change events for all of the INSERT, UPDATE, and DELETE operations that occur in a table to a single Apache Kafka topic that is specific to that table.

The connector uses the following convention to name change event topics:

serverName.databaseName.tableName

Suppose that fulfillment is the server name, inventory is the database name, and the database contains tables named orders, customers, and products. The Debezium MySQL connector emits events to three Kafka topics, one for each table in the database:

fulfillment.inventory.orders
fulfillment.inventory.customers
fulfillment.inventory.products

The following list provides definitions for the components of the default name:

  • serverName

    The logical name of the server as specified by the database.server.name connector configuration property.

  • databaseName

    The name of the database in which the operation occurred.

  • tableName

    The name of the table in which the operation occurred.

The connector applies similar naming conventions to label its internal database history topics, schema change topics, and transaction metadata topics.

If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
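As a sketch, a routing configuration using the logical topic routing SMT might look like the following; the regular expression and replacement shown are illustrative placeholders taken from a sharded-table scenario.

transforms=Reroute
transforms.Reroute.type=io.debezium.transforms.ByLogicalTableRouter
transforms.Reroute.topic.regex=(.*)customers_shard(.*)
transforms.Reroute.topic.replacement=$1customers_all_shards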

Transaction metadata

Debezium can generate events that represent transaction boundaries and that enrich data change event messages.

Limits on when Debezium receives transaction metadata: Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
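Transaction metadata events are not produced by default. The following configuration fragment is a minimal sketch that enables them:

"provide.transaction.metadata": "true"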

Debezium generates transaction boundary events for the BEGIN and END delimiters in every transaction. Transaction boundary events contain the following fields:

  • status

    BEGIN or END.

  • id

    String representation of the unique transaction identifier.

  • event_count (for END events)

    Total number of events emitted by the transaction.

  • data_collections (for END events)

    An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection.

Example

{
  "status": "BEGIN",
  "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10",
  "event_count": null,
  "data_collections": null
}

{
  "status": "END",
  "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10",
  "event_count": 2,
  "data_collections": [
    {
      "data_collection": "s1.a",
      "event_count": 1
    },
    {
      "data_collection": "s2.a",
      "event_count": 1
    }
  ]
}

Unless overridden via the transaction.topic option, the connector emits transaction events to the *<database.server.name>*.transaction topic.

Change data event enrichment

When transaction metadata is enabled, the data message Envelope is enriched with a new transaction field. This field provides information about every event in the form of a composite of fields:

  • id - string representation of unique transaction identifier
  • total_order - absolute position of the event among all events generated by the transaction
  • data_collection_order - the per-data collection position of the event among all events that were emitted by the transaction

Following is an example of a message:

{
  "before": null,
  "after": {
    "pk": "2",
    "aa": "1"
  },
  "source": {
...
  },
  "op": "c",
  "ts_ms": "1580390884335",
  "transaction": {
    "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10",
    "total_order": "1",
    "data_collection_order": "1"
  }
}

For systems which don’t have GTID enabled, the transaction identifier is constructed using the combination of binlog filename and binlog position. For example, if the binlog filename and position corresponding to the transaction BEGIN event are mysql-bin.000002 and 1913 respectively then the Debezium constructed transaction identifier would be file=mysql-bin.000002,pos=1913.

Data change events

The Debezium MySQL connector generates a data change event for each row-level INSERT, UPDATE, and DELETE operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.

Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
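Whether the schema is embedded in each event depends on the converter configuration. For example, the following Kafka Connect worker settings, shown here only as a sketch, use the JSON converter with schemas enabled for both keys and values, so every message embeds its schema; setting schemas.enable to false omits the schema portions and leaves only the payloads.

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true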

The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:

{
 "schema": { 
   ...
  },
 "payload": { 
   ...
 },
 "schema": { 
   ...
 },
 "payload": { 
   ...
 }
}
1. schema: The first schema field is part of the event key. It specifies a Kafka Connect schema that describes what is in the event key's payload portion. In other words, the first schema field describes the structure of the primary key, or the unique key if the table does not have a primary key, for the table that was changed. It is possible to override the table's primary key by setting the message.key.columns connector configuration property. In this case, the first schema field describes the structure of the key identified by that property.
2. payload: The first payload field is part of the event key. It has the structure described by the previous schema field and it contains the key for the row that was changed.
3. schema: The second schema field is part of the event value. It specifies the Kafka Connect schema that describes what is in the event value's payload portion. In other words, the second schema describes the structure of the row that was changed. Typically, this schema contains nested schemas.
4. payload: The second payload field is part of the event value. It has the structure described by the previous schema field and it contains the actual data for the row that was changed.

By default, the connector streams change event records to topics with names that are the same as the event’s originating table. See topic names.

The MySQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character, it is replaced with an underscore character. This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.

Change event keys

A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s PRIMARY KEY (or unique constraint) at the time the connector created the event.

Consider the following customers table, which is followed by an example of a change event key for this table.

CREATE TABLE customers (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE KEY
) AUTO_INCREMENT=1001;

Every change event that captures a change to the customers table has the same event key schema. For as long as the customers table has the previous definition, every change event that captures a change to the customers table has the following key structure. In JSON, it looks like this:

{
 "schema": { 
    "type": "struct",
    "name": "mysql-server-1.inventory.customers.Key", 
    "optional": false, 
    "fields": [ 
      {
        "field": "id",
        "type": "int32",
        "optional": false
      }
    ]
  },
 "payload": { 
    "id": 1001
  }
}
1. schema: The schema portion of the key specifies a Kafka Connect schema that describes what is in the key's payload portion.
2. mysql-server-1.inventory.customers.Key: Name of the schema that defines the structure of the key's payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.Key. In this example: mysql-server-1 is the name of the connector that generated this event, inventory is the database that contains the table that was changed, and customers is the table that was updated.
3. optional: Indicates whether the event key must contain a value in its payload field. In this example, a value in the key's payload is required. A value in the key's payload field is optional when a table does not have a primary key.
4. fields: Specifies each field that is expected in the payload, including each field's name, type, and whether it is required.
5. payload: Contains the key for the row for which this change event was generated. In this example, the key contains a single id field whose value is 1001.

Change event values

The value in a change event is a bit more complicated than the key. Like the key, the value has a schema section and a payload section. The schema section contains the schema that describes the Envelope structure of the payload section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.

Consider the same sample table that was used to show an example of a change event key:

CREATE TABLE customers (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE KEY
) AUTO_INCREMENT=1001;

The value portion of a change event for a change to this table is described for:

create events

The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers table:

{
  "schema": { 
    "type": "struct",
    "fields": [
      {
        "type": "struct",
        "fields": [
          {
            "type": "int32",
            "optional": false,
            "field": "id"
          },
          {
            "type": "string",
            "optional": false,
            "field": "first_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "last_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "email"
          }
        ],
        "optional": true,
        "name": "mysql-server-1.inventory.customers.Value", 
        "field": "before"
      },
      {
        "type": "struct",
        "fields": [
          {
            "type": "int32",
            "optional": false,
            "field": "id"
          },
          {
            "type": "string",
            "optional": false,
            "field": "first_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "last_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "email"
          }
        ],
        "optional": true,
        "name": "mysql-server-1.inventory.customers.Value",
        "field": "after"
      },
      {
        "type": "struct",
        "fields": [
          {
            "type": "string",
            "optional": false,
            "field": "version"
          },
          {
            "type": "string",
            "optional": false,
            "field": "connector"
          },
          {
            "type": "string",
            "optional": false,
            "field": "name"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "ts_ms"
          },
          {
            "type": "boolean",
            "optional": true,
            "default": false,
            "field": "snapshot"
          },
          {
            "type": "string",
            "optional": false,
            "field": "db"
          },
          {
            "type": "string",
            "optional": true,
            "field": "table"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "server_id"
          },
          {
            "type": "string",
            "optional": true,
            "field": "gtid"
          },
          {
            "type": "string",
            "optional": false,
            "field": "file"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "pos"
          },
          {
            "type": "int32",
            "optional": false,
            "field": "row"
          },
          {
            "type": "int64",
            "optional": true,
            "field": "thread"
          },
          {
            "type": "string",
            "optional": true,
            "field": "query"
          }
        ],
        "optional": false,
        "name": "io.debezium.connector.mysql.Source", 
        "field": "source"
      },
      {
        "type": "string",
        "optional": false,
        "field": "op"
      },
      {
        "type": "int64",
        "optional": true,
        "field": "ts_ms"
      }
    ],
    "optional": false,
    "name": "mysql-server-1.inventory.customers.Envelope" 
  },
  "payload": { 
    "op": "c", 
    "ts_ms": 1465491411815, 
    "before": null, 
    "after": { 
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "[email protected]"
    },
    "source": { 
      "version": "1.9.8.Final",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 0,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 0,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 154,
      "row": 0,
      "thread": 7,
      "query": "INSERT INTO customers (first_name, last_name, email) VALUES ('Anne', 'Kretchmar', '[email protected]')"
    }
  }
}
1. schema: The value's schema, which describes the structure of the value's payload. A change event's value schema is the same in every change event that the connector generates for a particular table.
2. name: In the schema section, each name field specifies the schema for a field in the value's payload. mysql-server-1.inventory.customers.Value is the schema for the payload's before and after fields. This schema is specific to the customers table. Names of schemas for before and after fields are of the form *logicalName*.*tableName*.Value, which ensures that the schema name is unique in the database. This means that when using the Avro converter, the resulting Avro schema for each table in each logical source has its own evolution and history.
3. name: io.debezium.connector.mysql.Source is the schema for the payload's source field. This schema is specific to the MySQL connector. The connector uses it for all events that it generates.
4. name: mysql-server-1.inventory.customers.Envelope is the schema for the overall structure of the payload, where mysql-server-1 is the connector name, inventory is the database, and customers is the table.
5. payload: The value's actual data. This is the information that the change event is providing. It may appear that the JSON representations of the events are much larger than the rows they describe. This is because the JSON representation must include the schema and the payload portions of the message. However, by using the Avro converter, you can significantly decrease the size of the messages that the connector streams to Kafka topics.
6. op: Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, c indicates that the operation created a row. Valid values are: c = create, u = update, d = delete, r = read (applies to only snapshots).
7. ts_ms: Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium.
8. before: An optional field that specifies the state of the row before the event occurred. When the op field is c for create, as it is in this example, the before field is null since this change event is for new content.
9. after: An optional field that specifies the state of the row after the event occurred. In this example, the after field contains the values of the new row's id, first_name, last_name, and email columns.
10. source: Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes: the Debezium version, the connector name, the binlog name where the event was recorded, the binlog position, the row within the event, whether the event was part of a snapshot, the name of the database and table that contain the new row, the ID of the MySQL thread that created the event (non-snapshot only), the MySQL server ID (if available), and the timestamp for when the change was made in the database. If the binlog_rows_query_log_events MySQL configuration option is enabled and the connector configuration include.query property is enabled, the source field also provides the query field, which contains the original SQL statement that caused the change event.

Update events

The value of a change event for an update in the sample customers table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers table:

{
  "schema": { ... },
  "payload": {
    "before": { 
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "[email protected]"
    },
    "after": { 
      "id": 1004,
      "first_name": "Anne Marie",
      "last_name": "Kretchmar",
      "email": "[email protected]"
    },
    "source": { 
      "version": "1.9.8.Final",
      "name": "mysql-server-1",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 1465581029100,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 223344,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 484,
      "row": 0,
      "thread": 7,
      "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004"
    },
    "op": "u", 
    "ts_ms": 1465581029523 
  }
}
Item | Field name | Description
1 | before | An optional field that specifies the state of the row before the event occurred. In an update event value, the before field contains a field for each table column and the value that was in that column before the database commit. In this example, the first_name value is Anne.
2 | after | An optional field that specifies the state of the row after the event occurred. You can compare the before and after structures to determine what the update to this row was. In the example, the first_name value is now Anne Marie.
3 | source | Mandatory field that describes the source metadata for the event. The source field structure has the same fields as in a create event, but some values are different; for example, the sample update event is from a different position in the binlog. The source metadata includes: the Debezium version, the connector name, the binlog name where the event was recorded, the binlog position, the row within the event, whether the event was part of a snapshot, the name of the database and table that contain the updated row, the ID of the MySQL thread that created the event (non-snapshot only), the MySQL server ID (if available), and the timestamp for when the change was made in the database. If the binlog_rows_query_log_events MySQL configuration option is enabled and the connector configuration include.query property is enabled, the source field also provides the query field, which contains the original SQL statement that caused the change event.
4 | op | Mandatory string that describes the type of operation. In an update event value, the op field value is u, signifying that this row changed because of an update.
5 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. A worked example follows this table.
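
For example, in the update event shown above, payload.source.ts_ms is 1465581029100 and payload.ts_ms is 1465581029523, so Debezium processed this change 1465581029523 - 1465581029100 = 423 milliseconds after it was committed to the database.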
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.

Primary key updates

An UPDATE operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of an UPDATE event record, the connector emits a DELETE event record for the old key and a CREATE event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:

  • The DELETE event record has __debezium.newkey as a message header. The value of this header is the new primary key for the updated row.
  • The CREATE event record has __debezium.oldkey as a message header. The value of this header is the previous (old) primary key that the updated row had. A conceptual sketch of this event pair follows this list.
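
The following is a conceptual sketch, not the literal Kafka record layout, of the event pair produced by a hypothetical key change of the sample row from id 1004 to id 2004; the record values are abbreviated, and only the headers named above are shown:

[
  {
    "key": { "id": 1004 },
    "headers": { "__debezium.newkey": { "id": 2004 } },
    "value": { "op": "d", "before": { "id": 1004 }, "after": null }
  },
  {
    "key": { "id": 2004 },
    "headers": { "__debezium.oldkey": { "id": 1004 } },
    "value": { "op": "c", "before": null, "after": { "id": 2004 } }
  }
]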

Delete events

The value in a delete change event has the same schema portion as create and update events for the same table. The payload portion in a delete event for the sample customers table looks like this:

{
  "schema": { ... },
  "payload": {
    "before": { 
      "id": 1004,
      "first_name": "Anne Marie",
      "last_name": "Kretchmar",
      "email": "[email protected]"
    },
    "after": null, 
    "source": { 
      "version": "1.9.8.Final",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 1465581902300,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 223344,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 805,
      "row": 0,
      "thread": 7,
      "query": "DELETE FROM customers WHERE id=1004"
    },
    "op": "d", 
    "ts_ms": 1465581902461 
  }
}
Item | Field name | Description
1 | before | Optional field that specifies the state of the row before the event occurred. In a delete event value, the before field contains the values that were in the row before it was deleted with the database commit.
2 | after | Optional field that specifies the state of the row after the event occurred. In a delete event value, the after field is null, signifying that the row no longer exists.
3 | source | Mandatory field that describes the source metadata for the event. In a delete event value, the source field structure is the same as for create and update events for the same table. Many source field values are also the same. In a delete event value, the ts_ms and pos field values, as well as other values, might have changed. But the source field in a delete event value provides the same metadata: the Debezium version, the connector name, the binlog name where the event was recorded, the binlog position, the row within the event, whether the event was part of a snapshot, the name of the database and table that contain the deleted row, the ID of the MySQL thread that created the event (non-snapshot only), the MySQL server ID (if available), and the timestamp for when the change was made in the database. If the binlog_rows_query_log_events MySQL configuration option is enabled and the connector configuration include.query property is enabled, the source field also provides the query field, which contains the original SQL statement that caused the change event.
4 | op | Mandatory string that describes the type of operation. The op field value is d, signifying that this row was deleted.
5 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium.

A delete change event record provides a consumer with the information it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.

MySQL connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.

Tombstone events

When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null. To make this possible, after Debezium’s MySQL connector emits a delete event, the connector emits a special tombstone event that has the same key but a null value.
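
As an abbreviated sketch (the actual delete event value also carries the full before and source fields), the sequence a consumer sees for the deleted sample row is a delete event followed by a tombstone with the same key and a null value, which allows compaction to eventually discard every record for that key:

[
  { "key": { "id": 1004 }, "value": { "op": "d", "before": { "id": 1004 }, "after": null } },
  { "key": { "id": 1004 }, "value": null }
]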

Data type mappings

The Debezium MySQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. The MySQL data type of that column dictates how Debezium represents the value in the event.

Columns that store strings are defined in MySQL with a character set and collation. The MySQL connector uses the column’s character set when reading the binary representation of the column values in the binlog events.

The connector can map MySQL data types to both literal and semantic types.

  • Literal type: how the value is represented using Kafka Connect schema types.
  • Semantic type: how the Kafka Connect schema captures the meaning of the field (schema name).

If the default data type conversions do not meet your needs, you can create a custom converter for the connector.

Basic types

The following table shows how the connector maps basic MySQL data types.

MySQL type | Literal type | Semantic type
BOOLEAN, BOOL | BOOLEAN | n/a
BIT(1) | BOOLEAN | n/a
BIT(>1) | BYTES | io.debezium.data.Bits The length schema parameter contains an integer that represents the number of bits. The byte[] contains the bits in little-endian form and is sized to contain the specified number of bits. For example, where n is bits: numBytes = n/8 + (n % 8 == 0 ? 0 : 1). A worked example of this encoding follows the table.
TINYINT | INT16 | n/a
SMALLINT[(M)] | INT16 | n/a
MEDIUMINT[(M)] | INT32 | n/a
INT, INTEGER[(M)] | INT32 | n/a
BIGINT[(M)] | INT64 | n/a
REAL[(M,D)] | FLOAT32 | n/a
FLOAT[(M,D)] | FLOAT64 | n/a
DOUBLE[(M,D)] | FLOAT64 | n/a
CHAR(M) | STRING | n/a
VARCHAR(M) | STRING | n/a
BINARY(M) | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting.
VARBINARY(M) | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting.
TINYBLOB | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting.
TINYTEXT | STRING | n/a
BLOB | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting. Only values with a size of up to 2GB are supported. It is recommended to externalize large column values, using the claim check pattern.
TEXT | STRING | n/a Only values with a size of up to 2GB are supported. It is recommended to externalize large column values, using the claim check pattern.
MEDIUMBLOB | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting.
MEDIUMTEXT | STRING | n/a
LONGBLOB | BYTES or STRING | n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary.handling.mode connector configuration property setting. Only values with a size of up to 2GB are supported. It is recommended to externalize large column values, using the claim check pattern.
LONGTEXT | STRING | n/a Only values with a size of up to 2GB are supported. It is recommended to externalize large column values, using the claim check pattern.
JSON | STRING | io.debezium.data.Json Contains the string representation of a JSON document, array, or scalar.
ENUM | STRING | io.debezium.data.Enum The allowed schema parameter contains the comma-separated list of allowed values.
SET | STRING | io.debezium.data.EnumSet The allowed schema parameter contains the comma-separated list of allowed values.
YEAR[(2|4)] | INT32 | io.debezium.time.Year
TIMESTAMP[(M)] | STRING | io.debezium.time.ZonedTimestamp In ISO 8601 format with microsecond precision. MySQL allows M to be in the range of 0-6.
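
As an illustration of the io.debezium.data.Bits encoding (an assumed example; the column name flags is hypothetical), a BIT(10) column holding the value b'1000000011' (decimal 515) needs numBytes = 10/8 + 1 = 2 bytes. In little-endian order those bytes are 0x03, 0x02, which the JSON converter renders as the base64 string "AwI=". Shown side by side for illustration, the field schema and encoded value would look roughly like this:

{
  "schema": {
    "type": "bytes",
    "optional": true,
    "name": "io.debezium.data.Bits",
    "parameters": { "length": "10" },
    "field": "flags"
  },
  "value": "AwI="
}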

Temporal types

Excluding the TIMESTAMP data type, MySQL temporal types depend on the value of the time.precision.mode connector configuration property. For TIMESTAMP columns whose default value is specified as CURRENT_TIMESTAMP or NOW, the value 1970-01-01 00:00:00 is used as the default value in the Kafka Connect schema.

MySQL allows zero-values for DATE, DATETIME, and TIMESTAMP columns because zero-values are sometimes preferred over null values. The MySQL connector represents zero-values as null values when the column definition allows null values, or as the epoch day when the column does not allow null values.

Temporal values without time zones

The DATETIME type represents a local date and time such as “2018-01-13 09:48:27”. As you can see, there is no time zone information. Such columns are converted into epoch milliseconds or microseconds based on the column’s precision by using UTC. The TIMESTAMP type represents a timestamp without time zone information. It is converted by MySQL from the server (or session’s) current time zone into UTC when writing and from UTC into the server (or session’s) current time zone when reading back the value. For example:

  • DATETIME with a value of 2018-06-20 06:37:03 becomes 1529476623000.
  • TIMESTAMP with a value of 2018-06-20 06:37:03 becomes 2018-06-20T13:37:03Z.

Such columns are converted into an equivalent io.debezium.time.ZonedTimestamp in UTC based on the server (or session’s) current time zone. The time zone will be queried from the server by default. If this fails, it must be specified explicitly by the database connectionTimeZone MySQL configuration option. For example, if the database’s time zone (either globally or configured for the connector by means of the connectionTimeZone option) is “America/Los_Angeles”, the TIMESTAMP value “2018-06-20 06:37:03” is represented by a ZonedTimestamp with the value “2018-06-20T13:37:03Z”.

The time zone of the JVM running Kafka Connect and Debezium does not affect these conversions.

More details about properties related to temporal values are in the documentation for MySQL connector configuration properties.

  • time.precision.mode=adaptive_time_microseconds (default)

    The MySQL connector determines the literal type and semantic type based on the column's data type definition so that events represent exactly the values in the database. All time fields are in microseconds. Only positive TIME field values in the range of 00:00:00.000000 to 23:59:59.999999 can be captured correctly. A worked comparison of the two modes follows this list.

    Table 14. Mappings when time.precision.mode=adaptive_time_microseconds
    MySQL type | Literal type | Semantic type
    DATE | INT32 | io.debezium.time.Date Represents the number of days since the epoch.
    TIME[(M)] | INT64 | io.debezium.time.MicroTime Represents the time value in microseconds and does not include time zone information. MySQL allows M to be in the range of 0-6.
    DATETIME, DATETIME(0), DATETIME(1), DATETIME(2), DATETIME(3) | INT64 | io.debezium.time.Timestamp Represents the number of milliseconds past the epoch and does not include time zone information.
    DATETIME(4), DATETIME(5), DATETIME(6) | INT64 | io.debezium.time.MicroTimestamp Represents the number of microseconds past the epoch and does not include time zone information.

  • time.precision.mode=connect

    The MySQL connector uses defined Kafka Connect logical types. This approach is less precise than the default approach and the events could be less precise if the database column has a fractional second precision value of greater than 3. Values in only the range of 00:00:00.000 to 23:59:59.999 can be handled. Set time.precision.mode=connect only if you can ensure that the TIME values in your tables never exceed the supported ranges. The connect setting is expected to be removed in a future version of Debezium.

    Table 15. Mappings when time.precision.mode=connect
    MySQL type | Literal type | Semantic type
    DATE | INT32 | org.apache.kafka.connect.data.Date Represents the number of days since the epoch.
    TIME[(M)] | INT64 | org.apache.kafka.connect.data.Time Represents the time value in microseconds since midnight and does not include time zone information.
    DATETIME[(M)] | INT64 | org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the epoch, and does not include time zone information.
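
To illustrate the difference between the two modes, assume a hypothetical DATETIME(6) column containing 2018-06-20 06:37:03.123456 (extending the epoch example above). The field schema name and emitted value under each mode would look roughly as follows; note that in connect mode the precision beyond milliseconds is lost:

{
  "adaptive_time_microseconds": {
    "schema": { "type": "int64", "name": "io.debezium.time.MicroTimestamp" },
    "value": 1529476623123456
  },
  "connect": {
    "schema": { "type": "int64", "name": "org.apache.kafka.connect.data.Timestamp" },
    "value": 1529476623123
  }
}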

Decimal types

Debezium connectors handle decimals according to the setting of the decimal.handling.mode connector configuration property.

  • decimal.handling.mode=precise

    Table 16. Mappings when decimal.handling.mode=precise
    MySQL type | Literal type | Semantic type
    NUMERIC[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted.
    DECIMAL[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted.

  • decimal.handling.mode=double

    Table 17. Mappings when decimal.handling.mode=double
    MySQL type | Literal type | Semantic type
    NUMERIC[(M[,D])] | FLOAT64 | n/a
    DECIMAL[(M[,D])] | FLOAT64 | n/a

  • decimal.handling.mode=string

    Table 18. Mappings when decimal.handling.mode=string
    MySQL type | Literal type | Semantic type
    NUMERIC[(M[,D])] | STRING | n/a
    DECIMAL[(M[,D])] | STRING | n/a

    A worked example covering all three modes follows this list.
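
For instance, assuming a DECIMAL(10,2) column that stores the value 12.34 (an illustrative column, not part of the sample schema), the three modes would emit the field roughly as follows; in precise mode the value is the base64-encoded unscaled integer 1234 together with a scale parameter of 2:

{
  "precise": {
    "schema": { "type": "bytes", "name": "org.apache.kafka.connect.data.Decimal", "parameters": { "scale": "2" } },
    "value": "BNI="
  },
  "double": { "schema": { "type": "float64" }, "value": 12.34 },
  "string": { "schema": { "type": "string" }, "value": "12.34" }
}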

Boolean values

MySQL handles the BOOLEAN value internally in a specific way: a BOOLEAN column is internally mapped to the TINYINT(1) data type. When a table is created during streaming, the connector uses the proper BOOLEAN mapping because Debezium receives the original DDL. During snapshots, however, Debezium executes SHOW CREATE TABLE to obtain table definitions, which return TINYINT(1) for both BOOLEAN and TINYINT(1) columns. Debezium then has no way to obtain the original type mapping, so it maps such columns to TINYINT(1).

To enable you to convert source columns to Boolean data types, Debezium provides a TinyIntOneToBooleanConverter custom converter that you can use in one of the following ways:

  • Map all TINYINT(1) or TINYINT(1) UNSIGNED columns to BOOLEAN types.

  • Enumerate a subset of columns by using a comma-separated list of regular expressions.
    To use this type of conversion, you must set the converters configuration property with the selector parameter, as shown in the following example:

    converters=boolean
    boolean.type=io.debezium.connector.mysql.converters.TinyIntOneToBooleanConverter
    boolean.selector=db1.table1.*, db1.table2.column1
    
  • NOTE: MySQL 8 does not show the length of the TINYINT UNSIGNED type when a snapshot executes SHOW CREATE TABLE, which means this converter does not work for such columns. The length.checker option, whose default value is true, can address this issue. Disable length.checker and explicitly list the columns to convert in the selector property, instead of converting all columns based on type, as shown in the following example:

    converters=boolean
    boolean.type=io.debezium.connector.mysql.converters.TinyIntOneToBooleanConverter
    boolean.length.checker=false
    boolean.selector=db1.table1.*, db1.table2.column1
    

Spatial types

Currently, the Debezium MySQL connector supports the following spatial data types.

MySQL type | Literal type | Semantic type
GEOMETRY, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, GEOMETRYCOLLECTION | STRUCT | io.debezium.data.geometry.Geometry Contains a structure with two fields: srid (INT32): spatial reference system ID that defines the type of geometry object stored in the structure; wkb (BYTES): binary representation of the geometry object encoded in the Well-Known-Binary (wkb) format. See the Open Geospatial Consortium for more details. An illustrative value follows this table.
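
To illustrate, a single geometry value is emitted as a structure like the following sketch, where 4326 is an assumed SRID and the wkb value, elided here, would be the base64-encoded Well-Known-Binary byte sequence of the geometry:

{
  "srid": 4326,
  "wkb": "<base64-encoded WKB bytes>"
}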

Setting up MySQL

Some MySQL setup tasks are required before you can install and run a Debezium connector.

Creating a user

A Debezium MySQL connector requires a MySQL user account. This MySQL user must have appropriate permissions on all databases for which the Debezium MySQL connector captures changes.

Prerequisites

  • A MySQL server.
  • Basic knowledge of SQL commands.

Procedure

  1. Create the MySQL user:

    mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
    
  2. Grant the required permissions to the user:

    mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
    

    The table below describes the permissions.

    If using a hosted option such as Amazon RDS or Amazon Aurora that does not allow a global read lock, table-level locks are used to create the consistent snapshot. In this case, you need to also grant LOCK TABLES permissions to the user that you create. See snapshots for more details.
  3. Finalize the user’s permissions:

    mysql> FLUSH PRIVILEGES;
    
Keyword | Description
SELECT | Enables the connector to select rows from tables in databases. This is used only when performing a snapshot.
RELOAD | Enables the connector to use the FLUSH statement to clear or reload internal caches, flush tables, or acquire locks. This is used only when performing a snapshot.
SHOW DATABASES | Enables the connector to see database names by issuing the SHOW DATABASES statement. This is used only when performing a snapshot.
REPLICATION SLAVE | Enables the connector to connect to and read the MySQL server binlog.
REPLICATION CLIENT | Enables the connector to use the following statements: SHOW MASTER STATUS, SHOW SLAVE STATUS, SHOW BINARY LOGS. The connector always requires this.
ON | Identifies the database to which the permissions apply.
TO 'user' | Specifies the user to grant the permissions to.
IDENTIFIED BY 'password' | Specifies the user's MySQL password.

Enabling the binlog

You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.

Prerequisites

  • A MySQL server.
  • Appropriate MySQL user privileges.

Procedure

  1. Check whether the log-bin option is already on:

    // for MySql 5.x
    mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
    FROM information_schema.global_variables WHERE variable_name='log_bin';
    // for MySql 8.x
    mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
    FROM performance_schema.global_variables WHERE variable_name='log_bin';
    
  2. If it is OFF, configure your MySQL server configuration file with the following properties, which are described in the table below:

    server-id         = 223344
    log_bin           = mysql-bin
    binlog_format     = ROW
    binlog_row_image  = FULL
    expire_logs_days  = 10
    
  3. Confirm your changes by checking the binlog status once more:

    // for MySql 5.x
    mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
    FROM information_schema.global_variables WHERE variable_name='log_bin';
    // for MySql 8.x
    mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
    FROM performance_schema.global_variables WHERE variable_name='log_bin';
    
Property | Description
server-id | The value for the server-id must be unique for each server and replication client in the MySQL cluster. During MySQL connector setup, Debezium assigns a unique server ID to the connector.
log_bin | The value of log_bin is the base name of the sequence of binlog files.
binlog_format | The binlog_format must be set to ROW or row.
binlog_row_image | The binlog_row_image must be set to FULL or full.
expire_logs_days | This is the number of days for automatic binlog file removal. The default is 0, which means no automatic removal. Set the value to match the needs of your environment. See MySQL purges binlog files.

Enabling GTIDs

Global transaction identifiers (GTIDs) uniquely identify transactions that occur on a server within a cluster. Though not required for a Debezium MySQL connector, using GTIDs simplifies replication and enables you to more easily confirm if primary and replica servers are consistent.

GTIDs are available in MySQL 5.6.5 and later. See the MySQL documentation for more details.

Prerequisites

  • A MySQL server.
  • Basic knowledge of SQL commands.
  • Access to the MySQL configuration file.

Procedure

  1. Enable gtid_mode:

    mysql> gtid_mode=ON
    
  2. Enable enforce_gtid_consistency:

    mysql> enforce_gtid_consistency=ON
    
  3. Confirm the changes:

    mysql> show global variables like '%GTID%';
    

Result

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| enforce_gtid_consistency | ON    |
| gtid_mode                | ON    |
+--------------------------+-------+
Option | Description
gtid_mode | Boolean that specifies whether GTID mode of the MySQL server is enabled or not. ON = enabled; OFF = disabled.
enforce_gtid_consistency | Boolean that specifies whether the server enforces GTID consistency by allowing the execution of statements that can be logged in a transactionally safe manner. Required when using GTIDs. ON = enabled; OFF = disabled.

Configuring session timeouts

When an initial consistent snapshot is made for large databases, your established connection could time out while the tables are being read. You can prevent this behavior by configuring interactive_timeout and wait_timeout in your MySQL configuration file.

Prerequisites

  • A MySQL server.
  • Basic knowledge of SQL commands.
  • Access to the MySQL configuration file.

Procedure

  1. Configure interactive_timeout:

    mysql> interactive_timeout=<duration-in-seconds>
    
  2. Configure wait_timeout:

    mysql> wait_timeout=<duration-in-seconds>
    
Option | Description
interactive_timeout | The number of seconds the server waits for activity on an interactive connection before closing it. See MySQL's documentation for more details.
wait_timeout | The number of seconds the server waits for activity on a non-interactive connection before closing it. See MySQL's documentation for more details.

Enabling query log events

You might want to see the original SQL statement for each binlog event. Enabling the binlog_rows_query_log_events option in the MySQL configuration file allows you to do this.

This option is available in MySQL 5.6 and later.

Prerequisites

  • A MySQL server.
  • Basic knowledge of SQL commands.
  • Access to the MySQL configuration file.

Procedure

  • Enable binlog_rows_query_log_events:

    mysql> binlog_rows_query_log_events=ON
    

    binlog_rows_query_log_events is set to a value that enables/disables support for including the original SQL statement in the binlog entry.

    • ON = enabled
    • OFF = disabled

Deployment

To deploy a Debezium MySQL connector, you install the Debezium MySQL connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect.

Prerequisites

Procedure

  1. Download the Debezium MySQL connector plug-in.
  2. Extract the files into your Kafka Connect environment.
  3. Add the directory with the JAR files to Kafka Connect’s plugin.path.
  4. Configure the connector and add the configuration to your Kafka Connect cluster.
  5. Restart your Kafka Connect process to pick up the new JAR files.

If you are working with immutable containers, see Debezium’s Container images for Apache Zookeeper, Apache Kafka, MySQL, and Kafka Connect with the MySQL connector already installed and ready to run.

You can also run Debezium on Kubernetes and OpenShift.

MySQL connector configuration example

Following is an example of the configuration for a connector instance that captures data from a MySQL server on port 3306 at 192.168.99.100, which we logically name fullfillment. Typically, you configure the Debezium MySQL connector in a JSON file by setting the configuration properties that are available for the connector.

You can choose to produce events for a subset of the schemas and tables in a database. Optionally, you can ignore, mask, or truncate columns that contain sensitive data, that are larger than a specified size, or that you do not need.

{
    "name": "inventory-connector", 
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector", 
        "database.hostname": "192.168.99.100", 
        "database.port": "3306", 
        "database.user": "debezium-user", 
        "database.password": "debezium-user-pw", 
        "database.server.id": "184054", 
        "database.server.name": "fullfillment", 
        "database.include.list": "inventory", 
        "database.history.kafka.bootstrap.servers": "kafka:9092", 
        "database.history.kafka.topic": "dbhistory.fullfillment", 
        "include.schema.changes": "true" 
    }
}
name: Connector's name when registered with the Kafka Connect service.
connector.class: Connector's class name.
database.hostname: MySQL server address.
database.port: MySQL server port number.
database.user: MySQL user with the appropriate privileges.
database.password: MySQL user's password.
database.server.id: Unique ID of the connector.
database.server.name: Logical name of the MySQL server or cluster.
database.include.list: List of databases hosted by the specified server.
database.history.kafka.bootstrap.servers: List of Kafka brokers that the connector uses to write and recover DDL statements to the database history topic.
database.history.kafka.topic: Name of the database history topic. This topic is for internal use only and should not be used by consumers.
include.schema.changes: Flag that specifies if the connector should generate events for DDL changes and emit them to the fullfillment schema change topic for use by consumers.

For the complete list of the configuration properties that you can set for the Debezium MySQL connector, see MySQL connector configuration properties.

You can send this configuration with a POST command to a running Kafka Connect service. The service records the configuration and starts one connector task that performs the following actions:

  • Connects to the MySQL database.
  • Reads change-data tables for tables in capture mode.
  • Streams change event records to Kafka topics.

Adding connector configuration

To start running a MySQL connector, create a connector configuration and add the configuration to your Kafka Connect cluster.

Prerequisites

Procedure

  1. Create a configuration for the MySQL connector.
  2. Use the Kafka Connect REST API to add that connector configuration to your Kafka Connect cluster.

Results

After the connector starts, it performs a consistent snapshot of the MySQL databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.

Connector properties

The Debezium MySQL connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. The properties are organized into required configuration properties and advanced configuration properties.

The following configuration properties are required unless a default value is available.

Required Debezium MySQL connector configuration properties
Property | Default | Description
nameNo defaultUnique name for the connector. Attempting to register again with the same name fails. This property is required by all Kafka Connect connectors.
connector.classNo defaultThe name of the Java class for the connector. Always specify io.debezium.connector.mysql.MySqlConnector for the MySQL connector.
tasks.max1The maximum number of tasks that should be created for this connector. The MySQL connector always uses a single task and therefore does not use this value, so the default is always acceptable.
database.hostnameNo defaultIP address or host name of the MySQL database server.
database.port3306Integer port number of the MySQL database server.
database.userNo defaultName of the MySQL user to use when connecting to the MySQL database server.
database.passwordNo defaultPassword to use when connecting to the MySQL database server.
database.server.nameNo defaultLogical name that identifies and provides a namespace for the particular MySQL database server/cluster in which Debezium is capturing changes. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores must be used in the database server logical name. Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database history topic.
database.server.idrandomA numeric ID of this database client, which must be unique across all currently-running database processes in the MySQL cluster. This connector joins the MySQL database cluster as another server (with this unique ID) so it can read the binlog. By default, a random number between 5400 and 6400 is generated, though the recommendation is to explicitly set a value.
database.include.listempty stringAn optional, comma-separated list of regular expressions that match the names of the databases for which to capture changes. The connector does not capture changes in any database whose name is not in database.include.list. By default, the connector captures changes in all databases. Do not also set the database.exclude.list connector configuration property.
database.exclude.listempty stringAn optional, comma-separated list of regular expressions that match the names of databases for which you do not want to capture changes. The connector captures changes in any database whose name is not in the database.exclude.list. Do not also set the database.include.list connector configuration property.
table.include.listempty stringAn optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. The connector does not capture changes in any table not included in table.include.list. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in every non-system table in each database whose changes are being captured. Do not also specify the table.exclude.list connector configuration property.
table.exclude.listempty stringAn optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. The connector captures changes in any table not included in table.exclude.list. Each identifier is of the form databaseName.tableName. Do not also specify the table.include.list connector configuration property.
column.exclude.listempty stringAn optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
column.include.listempty stringAn optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
column.truncate.to._length_.charsn/aAn optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event record values if the field values are longer than the specified number of characters. You can configure multiple properties with different lengths in a single configuration. The length must be a positive integer. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
column.mask.with._length_.charsn/aAn optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk (*) characters. You can configure multiple properties with different lengths in a single configuration. Each length must be a positive integer or zero. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
column.mask.hash.*hashAlgorithm*.with.salt.*salt*; column.mask.hash.v2.*hashAlgorithm*.with.salt.*salt*n/aAn optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form *<databaseName>*.*<tableName>*.*<columnName>*. In the resulting change event record, the values for the specified columns are replaced with pseudonyms. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. In the following example, CzQMA0cB5K is a randomly selected salt. column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerNameIf necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked. Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems.
column.propagate.source.typen/aAn optional, comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change event records. The schema parameters __debezium.source.column.type, __debezium.source.column.length, and __debezium.source.column.scale are used to propagate the original type name and length for variable-width types, respectively. This is useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of one of these forms: databaseName.tableName.columnName or databaseName.schemaName.tableName.columnName.
datatype.propagate.source.typen/aAn optional, comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change event records. The schema parameters __debezium.source.column.type, __debezium.source.column.length, and __debezium.source.column.scale are used to propagate the original type name and length for variable-width types, respectively. This is useful to properly size corresponding columns in sink databases. Fully-qualified data type names are of one of these forms: databaseName.tableName.typeName or databaseName.schemaName.tableName.typeName. See how MySQL connectors map data types for the list of MySQL-specific data type names.
time.precision.modeadaptive_time_microsecondsTime, date, and timestamps can be represented with different kinds of precision, including: adaptive_time_microseconds (the default) captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type, with the exception of TIME type fields, which are always captured as microseconds. adaptive (deprecated) captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type. connect always represents time and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns’ precision.
decimal.handling.modepreciseSpecifies how the connector should handle values for DECIMAL and NUMERIC columns: precise (the default) represents them precisely using java.math.BigDecimal values represented in change events in a binary form. double represents them using double values, which may result in a loss of precision but is easier to use. string encodes values as formatted strings, which is easy to consume but semantic information about the real type is lost.
bigint.unsigned.handling.modelongSpecifies how BIGINT UNSIGNED columns should be represented in change events. Possible settings are: long represents values by using Java’s long, which might not offer the precision but which is easy to use in consumers. long is usually the preferred setting. precise uses java.math.BigDecimal to represent values, which are encoded in the change events by using a binary representation and Kafka Connect’s org.apache.kafka.connect.data.Decimal type. Use this setting when working with values larger than 2^63, because these values cannot be conveyed by using long.
include.schema.changestrueBoolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change is recorded by using a key that contains the database name and whose value includes the DDL statement(s). This is independent of how the connector internally records database history.
include.schema.comments | false | Boolean value that specifies whether the connector should parse and publish table and column comments on metadata objects. Enabling this option has implications for memory usage. The number and size of logical schema objects largely determines how much memory the Debezium connectors consume, and adding potentially large string data to each of them can be quite expensive.
include.query | false | Boolean value that specifies whether the connector should include the original SQL query that generated the change event. If you set this option to true, you must also configure MySQL with the binlog_rows_query_log_events option set to ON. When include.query is true, the query is not present for events that the snapshot process generates. Setting include.query to true might expose tables or fields that are explicitly excluded or masked by including the original SQL statement in the change event. For this reason, the default setting is false.
event.deserialization.failure.handling.mode | fail | Specifies how the connector should react to exceptions during deserialization of binlog events. fail propagates the exception, which indicates the problematic event and its binlog offset, and causes the connector to stop. warn logs the problematic event and its binlog offset and then skips the event. ignore passes over the problematic event and does not log anything.
inconsistent.schema.handling.mode | fail | Specifies how the connector should react to binlog events that relate to tables that are not present in the internal schema representation. That is, the internal representation is not consistent with the database. fail throws an exception that indicates the problematic event and its binlog offset, and causes the connector to stop. warn logs the problematic event and its binlog offset and skips the event. skip passes over the problematic event and does not log anything.
max.batch.size | 2048 | Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.
max.queue.size | 8192 | Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of max.queue.size to be larger than the value of max.batch.size.
max.queue.size.in.bytes | 0 | A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. If max.queue.size is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set max.queue.size=1000 and max.queue.size.in.bytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
poll.interval.ms | 1000 | Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 1000 milliseconds, or 1 second.
connect.timeout.ms | 30000 | A positive integer value that specifies the maximum time in milliseconds this connector should wait after trying to connect to the MySQL database server before timing out. Defaults to 30 seconds.
gtid.source.includes | No default | A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources that match one of these include patterns are used. Do not also specify a setting for gtid.source.excludes.
gtid.source.excludes | No default | A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources that do not match any of these exclude patterns are used. Do not also specify a value for gtid.source.includes.
gtid.new.channel.position (deprecated and scheduled for removal) | earliest | When set to latest, when the connector sees a new GTID channel, it starts consuming from the last executed transaction in that GTID channel. If set to earliest (the default), the connector starts reading that channel from the first available (not purged) GTID position. earliest is useful when you have an active-passive MySQL setup where Debezium is connected to the primary server. In this case, during failover, the replica with the new UUID (and GTID channel) starts receiving writes before Debezium is connected. These writes would be lost when using latest.
tombstones.on.delete | true | Controls whether a delete event is followed by a tombstone event. true - a delete operation is represented by a delete event and a subsequent tombstone event. false - only a delete event is emitted. After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic.
message.key.columns | n/a | A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fully-qualified_tableName>:<keyColumn>,<keyColumn>. To base a table key on multiple column names, insert commas between the column names. Each fully-qualified table name is a regular expression in the following format: <databaseName>.<tableName>. The property can include entries for multiple tables. Use a semicolon to separate table entries in the list. The following example sets the message key for the tables inventory.customers and purchase.orders: inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4. For the table inventory.customers, the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any database, the columns pk3 and pk4 serve as the message key. There is no limit to the number of columns that you use to create custom message keys. However, it is best to use the minimum number required to specify a unique key.
binary.handling.mode | bytes | Specifies how binary columns, for example, blob, binary, varbinary, should be represented in change events. Possible settings: bytes represents binary data as a byte array. base64 represents binary data as a base64-encoded String. hex represents binary data as a hex-encoded (base16) String. A worked example follows this table.
schema.name.adjustment.mode | avro | Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings: avro replaces the characters that cannot be used in the Avro type name with underscore. none does not apply any adjustment.
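
As a worked illustration of binary.handling.mode, assume a hypothetical BINARY(4) column containing the bytes 0xDE 0xAD 0xBE 0xEF. The same value would be emitted under each mode roughly as follows; in bytes mode the schema type is BYTES and the JSON converter itself base64-encodes the byte array, so the rendered JSON looks like the base64 mode even though the Kafka Connect schema differs:

{
  "bytes": { "schema": { "type": "bytes" }, "value": "3q2+7w==" },
  "base64": { "schema": { "type": "string" }, "value": "3q2+7w==" },
  "hex": { "schema": { "type": "string" }, "value": "deadbeef" }
}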
Advanced MySQL connector configuration properties

The following table describes advanced MySQL connector properties. The default values for these properties rarely need to be changed. Therefore, you do not need to specify them in the connector configuration.

Property 财产Default 默认Description 描述
connect.keep.alivetrue A Boolean value that specifies whether a separate thread should be used to ensure that the connection to the MySQL server/cluster is kept alive. 一个布尔值,指定是否应该使用单独的线程来确保与MySQL服务器/集群的连接保持活动。
converters 转换器No default 没有默认Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. 枚举连接器可以使用的自定义转换器实例的符号名称的逗号分隔列表。 For example, boolean. 例如,布尔值。 This property is required to enable the connector to use a custom converter. 此属性是使连接器能够使用自定义转换器所必需的。For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualifed name of the class that implements the converter interface. The .type property uses the following format: 对于为连接器配置的每个转换器,还必须添加.type属性,该属性指定实现转换器接口的类的完全限定名称。.type属性使用以下格式: *<converterSymbolicName>*.type *<converterSymbolicName>*.类型 For example, 比如说, boolean.type: io.debezium.connector.mysql.converters.TinyIntOneToBooleanConverterIf you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate these additional configuration parameter with a converter, prefix the paraemeter name with the symbolic name of the converter. 如果要进一步控制已配置的转换器的行为,可以添加一个或多个配置参数以向转换器传递值。 要将这些附加配置参数与转换器相关联,请在参数名称前加上转换器的符号名称。 For example, to define a selector parameter that specifies the subset of columns that the boolean converter processes, add the following property: 例如,若要定义指定布尔转换器处理的列子集的选择器参数,请添加以下属性: boolean.selector=db1.table1.*, db1.table2.column1
table.ignore.builtintrue A Boolean value that specifies whether built-in system tables should be ignored. This applies regardless of the table include and exclude lists. By default, system tables are excluded from having their changes captured, and no events are generated when changes are made to any system tables. 一个布尔值,指定是否应忽略内置系统表。无论表的包含和排除列表如何,这都适用。默认情况下,系统表不会被捕获其更改,并且在对任何系统表进行更改时都不会生成事件。
database.ssl.modedisabled 残疾Specifies whether to use an encrypted connection. Possible settings are: 指定是否使用加密连接。可能的设置包括: disabled specifies the use of an unencrypted connection. disabled指定使用未加密的连接。 preferred establishes an encrypted connection if the server supports secure connections. If the server does not support secure connections, falls back to an unencrypted connection. 如果服务器支持安全连接,则preferred建立加密连接。如果服务器不支持安全连接,则福尔斯会退回到未加密的连接。 required establishes an encrypted connection or fails if one cannot be made for any reason. required会建立加密连接,如果出于任何原因无法建立连接,则会失败。 verify_ca behaves like required but additionally it verifies the server TLS certificate against the configured Certificate Authority (CA) certificates and fails if the server TLS certificate does not match any valid CA certificates. verify_ca的行为与必需的一样,但它还根据配置的证书颁发机构(CA)证书验证服务器TLS证书,如果服务器TLS证书与任何有效的CA证书不匹配,则会失败。 verify_identity behaves like verify_ca but additionally verifies that the server certificate matches the host of the remote connection. verify_identity的行为与verify_ca类似,但还验证服务器证书是否与远程连接的主机匹配。
binlog.buffer.size0The size of a look-ahead buffer used by the binlog reader. The default setting of 0 disables buffering. binlog读取器使用的前瞻缓冲区的大小。默认设置0禁用缓冲。 Under specific conditions, it is possible that the MySQL binlog contains uncommitted data finished by a ROLLBACK statement. Typical examples are using savepoints or mixing temporary and regular table changes in a single transaction. 在特定条件下,MySQL binlog可能包含由ROLLBACK语句结束的未提交数据。典型的例子是在单个事务中使用保存点或混合临时和常规表更改。 When the beginning of a transaction is detected, Debezium tries to roll forward the binlog position and find either COMMIT or ROLLBACK so it can determine whether to stream the changes from the transaction. The size of the binlog buffer defines the maximum number of changes in the transaction that Debezium can buffer while searching for transaction boundaries. If the size of the transaction is larger than the buffer, then Debezium must rewind and re-read the events that have not fit into the buffer while streaming. 当检测到事务的开始时,Debezium会尝试前滚binlog位置,并找到COMMIT或ROLLBACK,以便确定是否从事务流式传输更改。binlog缓冲区的大小定义了Debezium在搜索事务边界时可以缓冲的最大事务更改数。如果事务的大小大于缓冲区,那么Debezium必须在流式传输时回退并重新读取缓冲区中容纳不下的事件。 NOTE: This feature is incubating. Feedback is encouraged. It is expected that this feature is not completely polished. 注:此功能正在孵化中,欢迎反馈。该功能尚未完全完善。
snapshot.modeinitial 初始Specifies the criteria for running a snapshot when the connector starts. Possible settings are: 指定连接器启动时运行快照的条件。可能的设置包括: initial - the connector runs a snapshot only when no offsets have been recorded for the logical server name. initial-仅当没有记录逻辑服务器名称的偏移量时,连接器才运行快照。 initial_only - the connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e. it will not read change events from the binlog. initial_only-连接器仅在没有记录逻辑服务器名称的偏移量时运行快照,然后停止;即,它不会从binlog读取更改事件。 when_needed - the connector runs a snapshot upon startup whenever it deems it necessary. That is, when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server. when_needed-只要连接器认为有必要,它就会在启动时运行快照。也就是说,当没有可用的偏移量时,或者当先前记录的偏移量指定了服务器中不可用的binlog位置或GTID时。 never - the connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care. It is valid only when the binlog is guaranteed to contain the entire history of the database. never-连接器从不使用快照。第一次使用逻辑服务器名启动时,连接器从binlog的开头读取。请小心配置此行为。只有当binlog保证包含数据库的整个历史记录时,它才有效。 schema_only - the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started. schema_only-连接器运行模式的快照,而不是数据。当您不需要主题包含一致的数据快照,但需要主题仅包含自连接器启动以来的更改时,此设置非常有用。 schema_only_recovery - this is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database history topic. You might set it periodically to “clean up” a database history topic that has been growing unexpectedly. Database history topics require infinite retention. schema_only_recovery-这是一个已经捕获更改的连接器的恢复设置。重新启动连接器时,此设置将启用损坏或丢失的数据库历史记录主题的恢复。您可以定期将其设置为“清理”意外增长的数据库历史主题。数据库历史主题需要无限保留。
snapshot.locking.modeminimal 最小Controls whether and how long the connector holds the global MySQL read lock, which prevents any updates to the database, while the connector is performing a snapshot. Possible settings are: 控制连接器是否持有全局MySQL读锁以及持有多长时间,这将防止连接器执行快照时对数据库进行任何更新。可能的设置包括: minimal - the connector holds the global read lock for only the initial portion of the snapshot during which the connector reads the database schemas and other metadata. The remaining work in a snapshot involves selecting all rows from each table. The connector can do this in a consistent fashion by using a REPEATABLE READ transaction. This is the case even when the global read lock is no longer held and other MySQL clients are updating the database. minimal-连接器仅为快照的初始部分保留全局读锁,在此期间连接器读取数据库架构和其他元数据。快照中的其余工作涉及从每个表中选择所有行。连接器可以通过使用REPEATABLE READ事务以一致的方式执行此操作。即使全局读锁不再持有,并且其他MySQL客户端正在更新数据库,情况也是如此。 minimal_percona - the connector holds the global backup lock for only the initial portion of the snapshot during which the connector reads the database schemas and other metadata. The remaining work in a snapshot involves selecting all rows from each table. The connector can do this in a consistent fashion by using a REPEATABLE READ transaction. This is the case even when the global backup lock is no longer held and other MySQL clients are updating the database. This mode does not flush tables to disk, is not blocked by long-running reads, and is available only in Percona Server. minimal_percona-连接器仅为快照的初始部分保留全局备份锁,在此期间连接器读取数据库模式和其他元数据。快照中的其余工作涉及从每个表中选择所有行。连接器可以通过使用REPEATABLE READ事务以一致的方式执行此操作。即使全局备份锁不再持有,并且其他MySQL客户端正在更新数据库,情况也是如此。此模式不会将表刷新到磁盘,不会被长时间运行的读取阻塞,并且仅在Percona Server中可用。 extended - blocks all writes for the duration of the snapshot. Use this setting if there are clients that are submitting operations that MySQL excludes from REPEATABLE READ semantics. 扩展-在快照期间阻止所有写入。如果有客户端正在提交MySQL从REPEATABLE READ语义中排除的操作,请使用此设置。 none - prevents the connector from acquiring any table locks during the snapshot. While this setting is allowed with all snapshot modes, it is safe to use if and only if no schema changes are happening while the snapshot is running. For tables defined with the MyISAM engine, the tables would still be locked despite this property being set, because MyISAM acquires a table lock. This behavior is unlike the InnoDB engine, which acquires row level locks. none-防止连接器在快照期间获取任何表锁。虽然所有快照模式都允许使用此设置,但当且仅当快照运行期间没有发生架构更改时,使用此设置才是安全的。对于使用MyISAM引擎定义的表,即使设置了该属性,由于MyISAM会获取表锁,表仍将被锁定。这种行为与InnoDB引擎不同,InnoDB引擎获取行级锁。
snapshot.include.collection.listAll tables specified in table.include.listtable.include.list中指定的所有表An optional, comma-separated list of regular expressions that match the fully-qualified names (*<databaseName>.<tableName>*) of the tables to include in a snapshot. The specified items must be named in the connector’s table.include.list property. This property takes effect only if the connector’s snapshot.mode property is set to a value other than never. 一个可选的、以逗号分隔的正则表达式列表,用于匹配要包含在快照中的表的完全限定名称(*<databaseName>.<tableName>*)。指定的项必须在连接器的table.include.list属性中命名。仅当连接器的snapshot.mode属性设置为never以外的值时,此属性才生效。 This property does not affect the behavior of incremental snapshots. 此属性不影响增量快照的行为。
snapshot.select.statement.overridesNo default 没有默认Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log. 指定要包含在快照中的表行。如果希望快照仅包括表中行的子集,请使用此属性。此属性仅影响快照。它不适用于连接器从日志中读取的事件。The property contains a comma-separated list of fully-qualified table names in the form *<databaseName>.<tableName>*. For example, 该属性包含一个以逗号分隔的完全限定表名列表,格式为*<databaseName>.<tableName>*。比如说, "snapshot.select.statement.overrides": "inventory.products,customers.orders" For each table in the list, add a further configuration property that specifies the SELECT statement for the connector to run on the table when it takes a snapshot. The specified SELECT statement determines the subset of table rows to include in the snapshot. Use the following format to specify the name of this SELECT statement property: 对于列表中的每个表,添加进一步的配置属性,该属性指定连接器在获取快照时在表上运行的SELECT语句。指定的SELECT语句确定要包括在快照中的表行的子集。使用以下格式指定此SELECT语句属性的名称: snapshot.select.statement.overrides.*<databaseName>*.*<tableName>*. For example, snapshot.select.statement.overrides.customers.orders. snapshot.select.statement.overrides.*<databaseName>*.*<tableName>*。比如说, snapshot.select.statement.overrides.customers.orders。 Example: 范例:From a customers.orders table that includes the soft-delete column, delete_flag, add the following properties if you want a snapshot to include only those records that are not soft-deleted: 对于包含软删除列delete_flag的customers.orders表,如果希望快照仅包括未软删除的记录,请添加以下属性:"snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC" In the resulting snapshot, the connector includes only the records for which delete_flag = 0. 在生成的快照中,连接器仅包括delete_flag = 0的记录。
min.row.count.to.stream.results1000During a snapshot, the connector queries each table for which the connector is configured to capture changes. The connector uses each query result to produce a read event that contains data for all rows in that table. This property determines whether the MySQL connector puts results for a table into memory, which is fast but requires large amounts of memory, or streams the results, which can be slower but works for very large tables. The setting of this property specifies the minimum number of rows a table must contain before the connector streams results. 在快照期间,连接器会查询其配置为捕获更改的每个表。连接器使用每个查询结果生成一个包含该表中所有行的数据的读事件。此属性决定MySQL连接器是将表的结果放入内存中(速度快但需要大量内存),还是流式传输结果(可能较慢但适用于非常大的表)。此属性的设置指定在连接器流式传输结果之前表必须包含的最小行数。 To skip all table size checks and always stream all results during a snapshot, set this property to 0. 若要跳过所有表大小检查并始终在快照期间流式传输所有结果,请将此属性设置为0。
heartbeat.interval.ms0Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. 控制连接器向Kafka主题发送心跳消息的频率。默认行为是连接器不发送检测信号消息。 Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages. 检测信号消息对于监视连接器是否从数据库接收更改事件非常有用。心跳消息可能有助于减少连接器重新启动时需要重新发送的更改事件的数量。若要发送检测信号消息,请将此属性设置为正整数,该整数指示检测信号消息之间的毫秒数。
heartbeat.topics.prefix__debezium-heartbeatControls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: 控制连接器向其发送检测信号消息的主题的名称。主题名称具有以下模式: heartbeat.topics.prefix.server.name heartbeat.topics.prefix. server.name For example, if the database server name is fulfillment, the default topic name is __debezium-heartbeat.fulfillment. 例如,如果数据库服务器名称为fulfillment,则默认主题名称为 __debezium-heartbeat.fulfillment
heartbeat.action.queryNo default 没有默认Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. 指定当连接器发送检测信号消息时连接器对源数据库执行的查询。 For example, this can be used to periodically capture the state of the executed GTID set in the source database. 例如,这可以用于定期捕获源数据库中已执行GTID集的状态。 INSERT INTO gtid_history_table (select * from mysql.gtid_executed)
database.initial.statementsNo default 没有默认A semicolon-separated list of SQL statements to be executed when a JDBC connection, not the connection that is reading the transaction log, to the database is established. To specify a semicolon as a character in a SQL statement and not as a delimiter, use two semicolons, (;;). 当建立到数据库的JDBC连接(而不是读取事务日志的连接)时,要执行的SQL语句的以分号分隔的列表。 若要在SQL语句中将分号指定为字符而不是分隔符,请使用两个分号(;;)。 The connector might establish JDBC connections at its own discretion, so this property is only for configuring session parameters. It is not for executing DML statements. 连接器可以自行建立JDBC连接,因此此属性仅用于配置会话参数。它不是用来执行DML语句的。
snapshot.delay.msNo default 没有默认An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors. 当连接器启动时,连接器在执行快照之前应等待的时间间隔(以毫秒为单位)。如果要启动群集中的多个连接器,则此属性对于避免快照中断非常有用,快照中断可能会导致连接器的重新平衡。
snapshot.fetch.sizeNo default 没有默认During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch. 在快照期间,连接器成批读取表内容。此属性指定批处理中的最大行数。
snapshot.lock.timeout.ms10000Positive integer that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. See how MySQL connectors perform database snapshots. 正整数,指定执行快照时等待获取表锁的最长时间(毫秒)。如果连接器在此时间间隔内无法获取表锁,则快照将失败。了解MySQL连接器如何执行数据库快照
enable.time.adjustertrue Boolean value that indicates whether the connector converts a 2-digit year specification to 4 digits. Set to false when conversion is fully delegated to the database. 一个布尔值,指示连接器是否将2位数的年份规格转换为4位数。将转换完全委托给数据库时设置为false。 MySQL allows users to insert year values with either 2-digits or 4-digits. For 2-digit values, the value gets mapped to a year in the range 1970 - 2069. The default behavior is that the connector does the conversion. MySQL允许用户插入2位或4位的年份值。对于2位数的值,该值将映射到1970 - 2069范围内的年份。默认行为是由连接器执行转换。
source.struct.versionv2Schema version for the source block in Debezium events. Debezium 0.10 introduced a few breaking changes to the structure of the source block in order to unify the exposed structure across all the connectors. Debezium事件中source块的架构版本。 Debezium 0.10对source块的结构进行了一些破坏性的更改,以便统一所有连接器的公开结构。 By setting this option to v1, the structure used in earlier versions can be produced. However, this setting is not recommended and is planned for removal in a future Debezium version. 通过将此选项设置为v1,可以生成早期版本中使用的结构。但是,不建议使用此设置,并计划在未来的Debezium版本中删除此设置。
sanitize.field.namestrue if connector configuration sets the key.converter or value.converter property to the Avro converter. 如果连接器配置将key.converter或value.converter属性设置为Avro转换器,则为true。 false if not. 否则为false。Indicates whether field names are sanitized to adhere to Avro naming requirements. 指示字段名称是否经过清理以遵守Avro命名要求。
skipped.operationsNo default 没有默认Comma-separated list of operation types to skip during streaming. The following values are possible: c for inserts/create, u for updates, d for deletes. By default, no operations are skipped. 以逗号分隔的操作类型列表,以在流式传输期间跳过。可以使用以下值:c表示插入/创建,u表示更新,d表示删除。默认情况下,不跳过任何操作。
signal.data.collectionNo default value 没有默认值Fully-qualified name of the data collection that is used to send signals to the connector. 用于向连接器发送信号的数据集合的完全限定名。 Use the following format to specify the collection name: 使用以下格式指定集合名称: *<databaseName>*.*<tableName>*
incremental.snapshot.allow.schema.changesfalse Allow schema changes during an incremental snapshot. When enabled the connector will detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. 允许在增量快照期间更改架构。启用后,连接器将在增量快照期间检测模式更改,并重新选择当前块以避免锁定DDL。 Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. Another limitation is that if a schema change affects only columns’ default values, then the change won’t be detected until the DDL is processed from the binlog stream. This doesn’t affect the snapshot events’ values, but the schema of snapshot events may have outdated defaults. 请注意,不支持对主键的更改,如果在增量快照期间执行更改,可能会导致错误的结果。另一个限制是,如果模式更改只影响列的默认值,那么只有在从binlog流中处理了该DDL之后才能检测到更改。这不会影响快照事件的值,但快照事件的架构可能具有过时的默认值。
incremental.snapshot.chunk.size1024The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. 连接器在增量快照区块期间提取并读入内存的最大行数。增加块大小可以提高效率,因为快照运行的快照查询较少,但大小较大。但是,较大的块大小也需要更多的内存来缓冲快照数据。将区块大小调整为在您的环境中提供最佳性能的值。
read.onlyfalse Switch to alternative incremental snapshot watermarks implementation to avoid writes to signal data collection 切换到替代增量快照水印实现,以避免写入信号数据收集
provide.transaction.metadatafalse Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. See Transaction metadata for details. 确定连接器是否生成具有事务边界的事件,并使用事务元数据丰富更改事件信封。如果希望连接器执行此操作,请指定true。有关详细信息,请参阅交易元数据
transaction.topic${database.server.name}.transactionControls the name of the topic to which the connector sends transaction metadata messages. The placeholder ${database.server.name} can be used for referring to the connector’s logical name; defaults to ${database.server.name}.transaction, for example dbserver1.transaction. 控制连接器将事务元数据消息发送到的主题的名称。占位符${database.server.name}可用于引用连接器的逻辑名称;默认为 ${database.server.name}.transaction ,例如dbserver1.transaction
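
The following fragment is a minimal sketch of how several of the advanced properties above might be combined in a connector configuration. The property names come from the table; the database, table, and query values (inventory.orders, delete_flag, and so on) are hypothetical placeholders, not values taken from this document.

"snapshot.mode": "when_needed",
"snapshot.locking.mode": "minimal",
"snapshot.select.statement.overrides": "inventory.orders",
"snapshot.select.statement.overrides.inventory.orders": "SELECT * FROM orders WHERE delete_flag = 0",
"min.row.count.to.stream.results": "0",
"heartbeat.interval.ms": "10000",
"provide.transaction.metadata": "true"

In this sketch, setting min.row.count.to.stream.results to 0 follows the table above: the connector skips table size checks and always streams snapshot results.
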
Debezium connector database history configuration properties Debezium连接器数据库历史记录配置属性

Debezium provides a set of database.history.* properties that control how the connector interacts with the schema history topic.
Debezium提供了一组database.history.*属性,用于控制连接器如何与架构历史主题交互。

The following table describes the database.history properties for configuring the Debezium connector.
下表描述了用于配置Debezium连接器的database.history属性。

Property 财产Default 默认Description 描述
database.history.kafka.topicThe full name of the Kafka topic where the connector stores the database schema history. 连接器存储数据库模式历史的Kafka主题的全名。
database.history.kafka.bootstrap.serversA list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. 连接器用于建立到Kafka群集的初始连接的主机/端口对的列表。此连接用于检索连接器先前存储的数据库模式历史记录,并用于写入从源数据库读取的每个DDL语句。每对都应指向Kafka Connect进程使用的相同Kafka集群。
database.history.kafka.recovery.poll.interval.ms100An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. 一个整数值,指定连接器在启动/恢复期间轮询持久化数据时应等待的最大毫秒数。默认值为100 ms。
database.history.kafka.query.timeout.ms3000An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using Kafka admin client. 一个整数值,指定连接器在使用Kafka管理客户端获取群集信息时应等待的最大毫秒数。
database.history.kafka.recovery.attempts4The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is recovery.attempts x recovery.poll.interval.ms. 在连接器复原因错误而失败之前,连接器应尝试读取保存的历程记录数据的最大次数。没有收到数据后等待的最长时间是recovery.attemptsxrecovery.poll.interval.ms
database.history.skip.unparseable.ddlfalse A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is false. Skipping should be used only with care as it can lead to data loss or mangling when the binlog is being processed. 一个布尔值,指定连接器是否应忽略格式错误或未知的数据库语句,或停止处理以便人工修复问题。安全的预设值为false。跳过应该小心使用,因为它可能会导致数据丢失或在处理binlog时损坏。
database.history.store.only.monitored.tables.ddl Deprecated and scheduled for removal in a future release; use database.history.store.only.captured.tables.ddl instead. 已弃用,并计划在未来版本中删除;请改用 database.history.store.only.captured.tables.ddl 。 false A Boolean value that specifies whether the connector should record all DDL statements. 一个布尔值,指定连接器是否应记录所有DDL语句。 true records only those DDL statements that are relevant to tables whose changes are being captured by Debezium. Set to true with care because missing data might become necessary if you change which tables have their changes captured. true只记录那些与Debezium捕获其更改的表相关的DDL语句。请谨慎设置为true,因为如果以后更改了要捕获其更改的表,可能会需要这些缺失的数据。 The safe default is false. 安全默认值为false。
database.history.store.only.captured.tables.ddlfalse A Boolean value that specifies whether the connector should record all DDL statements. 一个布尔值,指定连接器是否应记录所有DDL语句。 true records only those DDL statements that are relevant to tables whose changes are being captured by Debezium. Set to true with care because missing data might become necessary if you change which tables have their changes captured. true只记录那些与Debezium捕获其更改的表相关的DDL语句。请谨慎设置为true,因为如果以后更改了要捕获其更改的表,可能会需要这些缺失的数据。 The safe default is false. 安全默认值为false。
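
As a sketch of how the schema history properties are typically set, the fragment below stores the history in a dedicated Kafka topic. The topic name and broker addresses are placeholders chosen for illustration, not values from this document.

"database.history.kafka.bootstrap.servers": "kafka-1:9092,kafka-2:9092",
"database.history.kafka.topic": "schema-changes.inventory",
"database.history.skip.unparseable.ddl": "false",
"database.history.store.only.captured.tables.ddl": "false"

Because the history topic must retain the complete schema history, it should be configured with unlimited retention, as noted in the snapshot.mode description earlier in this section.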

Pass-through database history properties for configuring producer and consumer clients
用于配置生产者和消费者客户端的传递数据库历史记录属性

Debezium relies on a Kafka producer to write schema changes to database history topics. Similarly, it relies on a Kafka consumer to read from database history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the database.history.producer.* and database.history.consumer.* prefixes. The pass-through producer and consumer database history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
Debezium依赖于一个Kafka生产者来将模式更改写入数据库历史主题。类似地,它依赖于Kafka消费者在连接器启动时从数据库历史主题中读取。您可以通过为一组以database.history.producer.*和database.history.consumer.*前缀开头的直通配置属性赋值,来定义Kafka生产者和消费者客户端的配置。直通生产者和消费者数据库历史属性控制一系列行为,例如这些客户端如何保护与Kafka代理的连接,如以下示例所示:

database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234

database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234

Debezium strips the prefix from the property name before it passes the property to the Kafka client.
Debezium会在将属性传递给Kafka客户端之前从属性名称中去掉前缀。

See the Kafka documentation for more details about Kafka producer configuration properties and Kafka consumer configuration properties.
有关Kafka生产者配置属性Kafka消费者配置属性的更多详细信息,请参阅Kafka文档。

Debezium connector Kafka signals configuration properties Debezium连接器Kafka信号配置属性

When the MySQL connector is configured as read-only, the alternative for the signaling table is the signals Kafka topic.
当MySQL连接器配置为只读时,信令表的替代方案是信号Kafka主题。

Debezium provides a set of signal.* properties that control how the connector interacts with the Kafka signals topic.
Debezium提供了一组signal.*属性,用于控制连接器如何与Kafka信号主题交互。

The following table describes the signal properties.
下表描述了信号属性。

Property 财产Default 默认Description 描述
signal.kafka.topicThe name of the Kafka topic that the connector monitors for ad hoc signals. 连接器监视即席信号的Kafka主题的名称。
signal.kafka.bootstrap.serversA list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. Each pair should point to the same Kafka cluster used by the Kafka Connect process. 连接器用于建立到Kafka群集的初始连接的主机/端口对的列表。每对都应该指向Kafka Connect进程使用的同一个Kafka集群。
signal.kafka.poll.timeout.ms100An integer value that specifies the maximum number of milliseconds the connector should wait when polling signals. The default is 100ms. 一个整数值,指定连接器在轮询信号时应等待的最大毫秒数。默认值为100 ms。
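
For example, a read-only connector that accepts ad hoc snapshot signals over Kafka might combine these properties with the read.only property described earlier. This is only a sketch; the topic name and broker list are hypothetical placeholders.

"read.only": "true",
"signal.kafka.topic": "debezium-signals.inventory",
"signal.kafka.bootstrap.servers": "kafka-1:9092,kafka-2:9092",
"signal.kafka.poll.timeout.ms": "100"
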
Debezium connector pass-through signals Kafka consumer client configuration properties Debezium连接器直通信号Kafka消费者客户端配置属性

The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*. For example, the connector passes properties such as signal.consumer.security.protocol=SSL to the Kafka consumer.
Debezium连接器提供信号Kafka消费者的直通配置。直通信号属性以前缀signal.consumer.*开头。例如,连接器将signal.consumer.security.protocol=SSL等属性传递给Kafka消费者。

As is the case with the pass-through properties for database history clients, Debezium strips the prefixes from the properties before it passes them to the Kafka signals consumer.
与数据库历史客户端的直通属性一样,Debezium在将属性传递给Kafka信号消费者之前,会从属性中剥离前缀。
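
A minimal sketch of such pass-through properties, assuming the signals consumer connects to Kafka over SSL (the truststore path and password are placeholders, modeled on the database history example above):

"signal.consumer.security.protocol": "SSL",
"signal.consumer.ssl.truststore.location": "/var/private/ssl/kafka.client.truststore.jks",
"signal.consumer.ssl.truststore.password": "test1234"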

Debezium connector pass-through database driver configuration properties Debezium连接器直通数据库驱动程序配置属性

The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix database.*. For example, the connector passes properties such as database.foobar=false to the JDBC URL.
Debezium连接器提供数据库驱动程序的直通配置。直通数据库属性以前缀database.*开头。例如,连接器将database.foobar=false这样的属性传递给JDBC URL。

As is the case with the pass-through properties for database history clients, Debezium strips the prefixes from the properties before it passes them to the database driver.
与数据库历史记录客户端的传递属性一样,Debezium在将属性传递给数据库驱动程序之前会从属性中剥离前缀。
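
As a sketch, the fragment below passes a driver-level option through to the JDBC URL. connectTimeout is a standard MySQL Connector/J property; whether it is appropriate, and what value to use, depend on your environment.

"database.connectTimeout": "30000"

Because Debezium removes the database. prefix, the driver receives connectTimeout=30000.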

Monitoring 监测

The Debezium MySQL connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
Debezium MySQL连接器提供了三种类型的指标,这些指标是Zookeeper、Kafka和Kafka Connect提供的对JMX指标的内置支持的补充。

  • Snapshot metrics provide information about connector operation while performing a snapshot.
    快照度量提供有关执行快照时连接器操作的信息。
  • Streaming metrics provide information about connector operation when the connector is reading the binlog.
    流式度量提供有关连接器读取binlog时的操作信息。
  • Schema history metrics provide information about the status of the connector’s schema history.
    架构历史记录度量提供有关连接器架构历史记录状态的信息。

Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
Debezium监控文档提供了如何使用JMX公开这些指标的详细信息。

Snapshot metrics 快照指标

The MBean is debezium.mysql:type=connector-metrics,context=snapshot,server=*<mysql.server.name>*.
MBeandebezium.mysql:type=connector-metrics,context=snapshot,server=*<mysql.server.name>*

Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
除非快照操作处于活动状态,或者自上次连接器启动以来已发生快照,否则不会公开快照指标。

The following table lists the snapshot metrics that are available.
下表列出了可用的快照指标。

Attributes 属性Type 类型Description 描述
LastEvent 最后活动string 字符串The last snapshot event that the connector has read. 连接器读取的最后一个快照事件。
MilliSecondsSinceLastEvent 上次事件后的毫秒long The number of milliseconds since the connector has read and processed the most recent event. 自连接器读取并处理最近事件以来的毫秒数。
TotalNumberOfEventsSeen 事件总数long The total number of events that this connector has seen since last started or reset. 自上次启动或重置以来此连接器看到的事件总数。
NumberOfEventsFiltered 已过滤事件数long The number of events that have been filtered by include/exclude list filtering rules configured on the connector. 已由连接器上配置的包含/排除列表筛选规则筛选的事件数。
CapturedTablesstring[] 字符串[]The list of tables that are captured by the connector. 连接器捕获的表的列表。
QueueTotalCapacityintThe length of the queue used to pass events between the snapshotter and the main Kafka Connect loop.
QueueRemainingCapacityintThe free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. 用于在快照器和主Kafka Connect循环之间传递事件的队列的可用容量。
TotalTableCountintThe total number of tables that are being included in the snapshot. 快照中包含的表总数。
RemainingTableCount 剩余表计数intThe number of tables that the snapshot has yet to copy. 快照尚未复制的表数。
SnapshotRunning 快照运行boolean 布尔Whether the snapshot was started. 快照是否已启动。
SnapshotAbortedbooleanWhether the snapshot was aborted.
SnapshotCompletedbooleanWhether the snapshot completed.
SnapshotDurationInSecondslongThe total number of seconds that the snapshot has taken so far, even if not complete.
RowsScanned 行扫描Map<String, Long> 映射<String,长>Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.
MaxQueueSizeInBytes 最大尺寸long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.
CurrentQueueSizeInByteslong The current volume, in bytes, of records in the queue.

The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:

AttributesTypeDescription
ChunkIdstring 字符串The identifier of the current snapshot chunk.
ChunkFromstring 字符串The lower bound of the primary key set defining the current chunk. 定义当前块的主键集的下限。
ChunkTostring 字符串The upper bound of the primary key set defining the current chunk. 定义当前块的主键集的上限。
TableFrom 表格格式string 字符串The lower bound of the primary key set of the currently snapshotted table. 当前快照表的主键集的下限。
TableTo 表到string 字符串The upper bound of the primary key set of the currently snapshotted table. 当前快照表的主键集的上限。

The Debezium MySQL connector also provides the HoldingGlobalLock custom snapshot metric. This metric is set to a Boolean value that indicates whether the connector currently holds a global or table write lock.

Streaming metrics

Transaction-related attributes are available only if binlog event buffering is enabled. See binlog.buffer.size in the advanced connector configuration properties for more details.
仅当启用binlog事件缓冲时,与事务相关的属性才可用。有关更多详细信息,请参见高级连接器配置属性中的binlog.buffer.size

The MBean is debezium.mysql:type=connector-metrics,context=streaming,server=*<mysql.server.name>*.
MBeandebezium.mysql:type=connector-metrics,context=streaming,server=*<mysql.server.name>*

The following table lists the streaming metrics that are available.
下表列出了可用的流式传输指标。

Attributes 属性Type 类型Description 描述
LastEvent 最后活动string 字符串The last streaming event that the connector has read. 连接器读取的最后一个流事件。
MilliSecondsSinceLastEvent 上次事件后的毫秒long The number of milliseconds since the connector has read and processed the most recent event. 自连接器读取并处理最近事件以来的毫秒数。
TotalNumberOfEventsSeen 事件总数long The total number of events that this connector has seen since the last start or metrics reset. 自上次启动或度量重置以来此连接器看到的事件总数。
TotalNumberOfCreateEventsSeen 查看的事件总数long The total number of create events that this connector has seen since the last start or metrics reset. 自上次启动或度量重置以来,此连接器看到的创建事件总数。
TotalNumberOfUpdateEventsSeen 已查看的更新事件总数long The total number of update events that this connector has seen since the last start or metrics reset. 自上次启动或度量重置以来,此连接器看到的更新事件总数。
TotalNumberOfDeleteEventsSeen 已删除事件总数long The total number of delete events that this connector has seen since the last start or metrics reset. 自上次启动或度量重置以来,此连接器看到的删除事件总数。
NumberOfEventsFiltered 已过滤事件数long The number of events that have been filtered by include/exclude list filtering rules configured on the connector. 已由连接器上配置的包含/排除列表筛选规则筛选的事件数。
CapturedTablesstring[] 字符串[]The list of tables that are captured by the connector. 连接器捕获的表的列表。
QueueTotalCapacity 总容量intThe length the queue used to pass events between the streamer and the main Kafka Connect loop. 用于在流处理器和主Kafka Connect循环之间传递事件的队列长度。
QueueRemainingCapacity 剩余容量intThe free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. 用于在流处理器和主Kafka Connect循环之间传递事件的队列的可用容量。
Connected 连接boolean 布尔Flag that denotes whether the connector is currently connected to the database server. 指示连接器当前是否连接到数据库服务器的标志。
MilliSecondsBehindSourcelong The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. 上次更改事件的时间戳与连接器处理该事件之间的毫秒数。这些值将考虑数据库服务器和连接器运行所在计算机上的时钟之间的任何差异。
NumberOfCommittedTransactions 提交事务数long The number of processed transactions that were committed. 已提交的已处理事务数。
SourceEventPositionMap<String, String> 映射<字符串,字符串>The coordinates of the last received event. 上次接收事件的坐标。
LastTransactionIdstring 字符串Transaction identifier of the last processed transaction. 上次处理的事务的事务标识符。
MaxQueueSizeInBytes 最大尺寸long The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. 队列的最大缓冲区(以字节为单位)。如果max.queue.size.in.bytes设置为正的long值,则此度量可用。
CurrentQueueSizeInByteslong The current volume, in bytes, of records in the queue. 队列中记录的当前大小(以字节为单位)。

The Debezium MySQL connector also provides the following additional streaming metrics:
Debezium MySQL连接器还提供以下额外的流媒体指标:

Attribute 属性Type 类型Description 描述
BinlogFilenamestring 字符串The name of the binlog file that the connector has most recently read. 连接器最近读取的binlog文件的名称。
BinlogPosition Binlog位置long The most recent position (in bytes) within the binlog that the connector has read. 连接器在binlog中最近读取的位置(以字节为单位)。
IsGtidModeEnabled IsGtid模式已启用boolean 布尔Flag that denotes whether the connector is currently tracking GTIDs from MySQL server. 表示连接器当前是否正在跟踪来自MySQL服务器的GTID的标志。
GtidSetstring 字符串The string representation of the most recent GTID set processed by the connector when reading the binlog. 连接器在阅读binlog时处理的最新GTID集的字符串表示形式。
NumberOfSkippedEvents 跳过事件数long The number of events that have been skipped by the MySQL connector. Typically events are skipped due to a malformed or unparseable event from MySQL’s binlog. MySQL连接器跳过的事件数。通常情况下,由于MySQL的binlog中存在格式错误或无法解析的事件,事件会被跳过。
NumberOfDisconnects 断开次数long The number of disconnects by the MySQL connector. MySQL连接器断开连接的次数。
NumberOfRolledBackTransactionslong The number of processed transactions that were rolled back and not streamed. 已回滚但未流式处理的已处理事务数。
NumberOfNotWellFormedTransactionslong The number of transactions that have not conformed to the expected protocol of BEGIN + COMMIT/ROLLBACK. This value should be 0 under normal conditions. 不符合预期的BEGIN + COMMIT/ROLLBACK协议的事务数。正常情况下,该值应为0。
NumberOfLargeTransactions 大事务数long The number of transactions that have not fit into the look-ahead buffer. For optimal performance, this value should be significantly smaller than NumberOfCommittedTransactions and NumberOfRolledBackTransactions. 未放入前瞻缓冲区的事务数。为了获得最佳性能,该值应该明显小于NumberOfCommittedTransactionsNumberOfRolledBackTransactions

Schema history metrics 架构历史度量

The MBean is debezium.mysql:type=connector-metrics,context=schema-history,server=*<mysql.server.name>*.
MBeandebezium.mysql:type=connector-metrics,context=schema-history,server=*<mysql.server.name>*

The following table lists the schema history metrics that are available.
下表列出了可用的架构历史记录度量。

Attributes 属性Type 类型Description 描述
Status 状态string 字符串One of STOPPED, RECOVERING (recovering history from the storage), or RUNNING, describing the state of the database history. STOPPED、RECOVERING(从存储中恢复历史记录)、RUNNING之一,描述数据库历史记录的状态。
RecoveryStartTime 恢复开始时间long The time in epoch seconds at which recovery started. 开始恢复的时间(以纪元秒为单位)。
ChangesRecovered 已恢复的更改long The number of changes that were read during the recovery phase. 在恢复阶段读取的更改数。
ChangesApplied 应用的变更long The total number of schema changes applied during recovery and runtime. 恢复和运行时期间应用的架构更改总数。
MilliSecondsSinceLastRecoveredChangelong The number of milliseconds that elapsed since the last change was recovered from the history store. 从历史记录存储中恢复上次更改后经过的毫秒数。
MilliSecondsSinceLastAppliedChangelong The number of milliseconds that elapsed since the last change was applied. 自应用上次更改以来经过的毫秒数。
LastRecoveredChange 最后恢复的更改string 字符串The string representation of the last change recovered from the history store. 从历史记录存储恢复的上次更改的字符串表示形式。
LastAppliedChange 最后更改string 字符串The string representation of the last applied change. 上次应用的更改的字符串表示形式。

Behavior when things go wrong 当事情出错时的行为

Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly-once delivery of every change event record.
Debezium是一个分布式系统,它捕获多个上游数据库中的所有更改;它永远不会错过或丢失任何事件。当系统正常运行或得到妥善管理时,Debezium对每个更改事件记录提供恰好一次(exactly-once)的交付。

If a fault does happen, the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at-least-once delivery of change events.
如果确实发生故障,系统不会丢失任何事件。但是,当它从故障中恢复时,可能会重复某些更改事件。在这些异常情况下,Debezium与Kafka一样,提供至少一次(at-least-once)的变更事件交付。

The rest of this section describes how Debezium handles various kinds of faults and problems.
本节的其余部分描述Debezium如何处理各种故障和问题。

Configuration and startup errors 配置和启动错误

In the following situations, the connector fails when trying to start, reports an error or exception in the log, and stops running:
在以下情况下,连接器在尝试启动时失败,在日志中报告错误或异常,并停止运行:

  • The connector’s configuration is invalid.
    连接器的配置无效。
  • The connector cannot successfully connect to the MySQL server by using the specified connection parameters.
    连接器无法使用指定的连接参数成功连接到MySQL服务器。
  • The connector is attempting to restart at a position in the binlog for which MySQL no longer has the history available.
    连接器正在尝试在binlog中MySQL不再具有可用历史记录的位置重新启动。

In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the MySQL problem, restart the connector.
在这些情况下,错误消息包含有关问题的详细信息,可能还有建议的解决方法。更正配置或解决MySQL问题后,重新启动连接器。

MySQL becomes unavailable MySQL变得不可用

If your MySQL server becomes unavailable, the Debezium MySQL connector fails with an error and the connector stops. When the server is available again, restart the connector.
如果MySQL服务器不可用,Debezium MySQL连接器将失败并显示错误,连接器将停止。当服务器再次可用时,重新启动连接器。

However, if GTIDs are enabled for a highly available MySQL cluster, you can restart the connector immediately. It will connect to a different MySQL server in the cluster, find the location in the server’s binlog that represents the last transaction, and begin reading the new server’s binlog from that specific location.
但是,如果为高可用性MySQL集群启用了GTID,则可以立即重新启动连接器。它将连接到集群中的另一个MySQL服务器,在该服务器的binlog中找到代表最后一个事务的位置,并从该特定位置开始读取新服务器的binlog。

If GTIDs are not enabled, the connector records the binlog position of only the MySQL server to which it was connected. To restart from the correct binlog position, you must reconnect to that specific server.
如果没有启用GTID,连接器只记录它所连接的MySQL服务器的binlog位置。要从正确的binlog位置重新启动,您必须重新连接到该特定服务器。

Kafka Connect stops gracefully Kafka Connect优雅地停止

When Kafka Connect stops gracefully, there is a short delay while the Debezium MySQL connector tasks are stopped and restarted on new Kafka Connect processes.

Kafka Connect process crashes

If Kafka Connect crashes, the process stops and any Debezium MySQL connector tasks terminate without their most recently-processed offsets being recorded. In distributed mode, Kafka Connect restarts the connector tasks on other processes. However, the MySQL connector resumes from the last offset recorded by the earlier processes. This means that the replacement tasks might generate some of the same events processed prior to the crash, creating duplicate events.

Each change event message includes source-specific information that you can use to identify duplicate events, for example:

  • Event origin
  • MySQL server’s event time
  • The binlog file name and position
  • GTIDs (if used)

Kafka becomes unavailable

The Kafka Connect framework records Debezium change events in Kafka by using the Kafka producer API. If the Kafka brokers become unavailable, the Debezium MySQL connector pauses until the connection is reestablished and the connector resumes where it left off.
Kafka Connect框架通过使用Kafka生产者API在Kafka中记录Debezium更改事件。如果Kafka代理不可用,Debezium MySQL连接器将暂停,直到重新建立连接,连接器将从中断的位置继续。

MySQL purges binlog files MySQL清除binlog文件

If the Debezium MySQL connector stops for too long, the MySQL server purges older binlog files and the connector’s last position may be lost. When the connector is restarted, the MySQL server no longer has the starting point and the connector performs another initial snapshot. If the snapshot is disabled, the connector fails with an error.
如果Debezium MySQL连接器停止太久,MySQL服务器会清除旧的binlog文件,连接器的最后一个位置可能会丢失。当连接器重新启动时,MySQL服务器不再具有起始点,连接器将执行另一个初始快照。如果禁用快照,则连接器将失败并显示错误。

See snapshots for details about how MySQL connectors perform initial snapshots.
有关MySQL连接器如何执行初始快照的详细信息,请参阅快照

