Whenever I restart the Debezium Kafka Connect container, or deploy another instance, I get the following error:
io.debezium.jdbc.JdbcConnectionException: ERROR: replication slot "debezium" already exists
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:136)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:79)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:38)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$ReplicationConnectionBuilder.build(PostgresReplicationConnection.java:349)
at io.debezium.connector.postgresql.PostgresTaskContext.createReplicationConnection(PostgresTaskContext.java:80)
at io.debezium.connector.postgresql.RecordsStreamProducer.<init>(RecordsStreamProducer.java:75)
at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:112)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:157)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: replication slot "debezium" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2412)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2125)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:297)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at org.postgresql.replication.fluent.logical.LogicalCreateSlotBuilder.make(LogicalCreateSlotBuilder.java:48)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:102)
... 14 more
I'm using this image: https://github.com/debezium/docker-images/tree/master/connect/0.8
My connector config looks like this:
{
  "name": "record-loader-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.dbname": "record_loader?ssl",
    "database.user": "postgres",
    "database.hostname": "redacted",
    "database.history.kafka.bootstrap.servers": "redacted",
    "database.history.kafka.topic": "dbhistory.recordloader",
    "database.password": "redacted",
    "name": "record-loader-connector",
    "database.server.name": "recordLoaderDb",
    "database.port": "20023",
    "table.whitelist": ".*sync"
  },
  "tasks": [
    {
      "connector": "record-loader-connector",
      "task": 0
    }
  ],
  "type": "source"
}
I've noticed two config options, slot.name and slot.drop_on_stop, but it's not clear to me whether or how I should change them:
http://debezium.io/docs/connectors/postgresql/#connector-properties
If you deploy multiple instances of the Debezium Postgres connector, you must make sure to use distinct replication slot names. You can specify a name when setting up the connector:
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "slot.name": "my_slot_name"
  }
}
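Applied to the config from the question, a sketch might look like the following. The slot name "record_loader_slot" is just a placeholder (slot names may only contain lower-case letters, digits and underscores), and slot.drop_on_stop is shown at its default of false, which keeps the slot and its position across connector restarts:

{
  "name": "record-loader-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.dbname": "record_loader?ssl",
    "database.user": "postgres",
    "database.hostname": "redacted",
    "database.history.kafka.bootstrap.servers": "redacted",
    "database.history.kafka.topic": "dbhistory.recordloader",
    "database.password": "redacted",
    "database.server.name": "recordLoaderDb",
    "database.port": "20023",
    "table.whitelist": ".*sync",
    "slot.name": "record_loader_slot",
    "slot.drop_on_stop": "false"
  }
}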
I can't reproduce the issue you describe when restarting a given connector instance: the connector should detect that the slot already exists and re-use it. One possible cause may be that you also changed the logical decoding plug-in ("decoderbufs" vs. "wal2json"). If you have a reproducer for this, could you please open an entry in our bug tracker?
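One way to check what the existing slot was created with (standard PostgreSQL; the plugin column shows the decoding plug-in, and active/active_pid, available since PostgreSQL 9.5, show whether a connector is still attached):

-- inspect the slot Debezium is complaining about
select slot_name, plugin, database, active, active_pid
from pg_replication_slots
where slot_name = 'debezium';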
To proceed, you can manually delete the slot in Postgres:
select pg_drop_replication_slot('debezium');
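Note that the drop will fail with an error saying the slot is active as long as a connector is still attached to it, so stop the connector first. Alternatively (PostgreSQL 9.5+, using the active_pid column) you can terminate the attached walsender and then drop the slot:

-- terminate whatever is currently streaming from the slot ...
select pg_terminate_backend(active_pid)
from pg_replication_slots
where slot_name = 'debezium' and active;

-- ... then drop the slot
select pg_drop_replication_slot('debezium');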