Change data capture (CDC) is one of those data infrastructure patterns that quietly supports everything else. Event-driven microservices, real-time analytics, incremental ELT pipelines, operational alerting — they all depend on the same thing: a reliable stream of changes coming out of your transactional database.
If you're using Debezium today, you're not alone. It's the standard for CDC across many organizations — an open source tool that captures database changes and streams them to distributed platforms like Kafka. But that status comes with baggage. Debezium runs on Kafka Connect, the standard framework for integrating Kafka with external systems, so adopting it means accepting the cost and complexity of a full distributed streaming platform. For many teams, that's just part of the deal. Until now.
CockroachDB's native CDC system takes a different approach. With changefeeds built directly into the database engine, it removes the need for Kafka Connect and the machinery that surrounds it: ZooKeeper, source-side replication slots, and connector orchestration. Now, with the release of enriched changefeeds in v25.2, CockroachDB can also match Debezium's message format, giving you operational simplicity without breaking downstream consumers.
This post walks through why we built enriched changefeeds, how they compare to Debezium, and where they fit in the modern CDC landscape. You can migrate off Kafka Connect without giving up format compatibility. Whether you're planning a migration off PostgreSQL or just exploring ways to reduce infrastructure overhead, this is your guide.
Why Debezium exists
Before we talk about alternatives, it's worth understanding why Debezium became the default CDC tool in the first place. It wasn't the first CDC system, but it was the first to provide a consistent event model across multiple databases, built on a scalable streaming platform. With Debezium, you can:
Capture changes from MySQL, PostgreSQL, SQL Server, Oracle, MongoDB, and others
Standardize the message format for downstream consumers
Use Kafka's durability and fan-out to wire in multiple systems
Support both initial snapshots and incremental updates
The core idea is simple: treat the database log as a source of truth and replay changes as structured events. For a lot of teams, this was exactly what they needed. The message format works. The platform is familiar. And most importantly, Debezium lets teams move data without needing to write their own polling logic or manage state.
But as more systems move to managed services and infrastructure footprints shrink, the full weight of Kafka Connect, the platform Debezium is built on, becomes harder to justify.
The Debezium operational model
Running Debezium means standing up:
Kafka brokers (or a managed Kafka service)
ZooKeeper (or KRaft, on newer Kafka versions)
Kafka Connect workers
Schema registry (if you're using Avro)
The Debezium connector itself, which must be configured per source (see the sample configuration below)
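To make that last item concrete, here's roughly what registering a single PostgreSQL source looks like: a JSON document posted to the Kafka Connect REST API. The hostnames and credentials below are placeholders, and the exact properties vary by connector version:

{
  "name": "appdb-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "<redacted>",
    "database.dbname": "appdb",
    "topic.prefix": "appdb",
    "plugin.name": "pgoutput"
  }
}

And that document only matters once the brokers, the Connect workers, and (optionally) the schema registry are already running.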
For teams with existing Kafka infrastructure, this might not feel like a big deal. But for others — especially those who only want to stream changes from one or two databases — the cost adds up fast.
Even once it's running, the system can be fragile. Connectors fail. Offsets get out of sync. Replication slots fill up. Monitoring all of this means stitching together metrics across components and hoping your alerting rules fire in time.
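That last failure mode deserves a closer look. With a PostgreSQL source, a stalled connector leaves its replication slot holding WAL on disk indefinitely, so operators end up running checks like this one (standard PostgreSQL catalog queries, nothing Debezium-specific) on a schedule:

-- How much WAL is each replication slot holding back?
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;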
Native changefeeds in CockroachDB
Let’s look at a more efficient and resilient CDC workflow. CockroachDB includes changefeeds as a native database feature. There's no plugin to install, no external log to parse. You create a changefeed with a simple SQL statement:
CREATE CHANGEFEED FOR TABLE users INTO 'kafka://broker:9092';
That statement starts a distributed job across the cluster. Each node participates in the scan, and changes are streamed directly from the database's key-value layer. Under the hood, CockroachDB uses a combination of rangefeeds and resolved timestamps to guarantee consistency and ordering.
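Two practical notes. On self-hosted clusters, rangefeeds generally need to be enabled before changefeeds will start, and you can ask a changefeed to emit periodic resolved-timestamp markers so consumers know how far the stream has progressed. A minimal sketch, with the broker address as a placeholder:

-- Self-hosted clusters: enable rangefeeds once, cluster-wide.
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- Emit a resolved timestamp message at least every 10 seconds.
CREATE CHANGEFEED FOR TABLE users
INTO 'kafka://broker:9092'
WITH resolved = '10s';

A resolved message such as {"resolved": "1680000000000000000.0000000000"} tells consumers that every change at or below that timestamp has already been emitted.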
Changefeeds are fault-tolerant. If a node crashes, the job resumes on another node. If a downstream sink becomes unavailable, the job backs off and retries. The system handles failover and backpressure without user intervention.
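Because a changefeed is an ordinary long-running job, you inspect and control it with the same SQL you'd use for a backup. The job ID below is illustrative:

SHOW CHANGEFEED JOBS;   -- status, sink URI, and high-water timestamp per feed
PAUSE JOB 887235923;    -- stop emitting without losing progress
RESUME JOB 887235923;   -- pick up from the saved high-water mark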
The output is JSON by default, though Avro is also supported. For most use cases, this is enough. But for teams coming from Debezium, the lack of rich metadata (like op and source) creates friction.
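To see why, here's what a default (wrapped) message for an update to a users row looks like: compact, but with no operation code and no source block:

{"after": {"id": 1, "name": "Alicia"}}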
That friction is what CockroachDB's native enriched changefeeds solve.
Matching the Debezium format
Here's a typical Debezium message:
{
  "schema": { ... },
  "payload": {
    "before": { "id": 1, "name": "Alice" },
    "after": { "id": 1, "name": "Alicia" },
    "op": "u",
    "source": {
      "db": "appdb",
      "table": "users", …
    },
    "ts_ms": 1680000000000
  }
}
This structure has become a standard across many data systems. It tells you:
what changed (before and after)
what kind of change it was (op)
when it happened (ts_ms)
where it came from (source)
Now, CockroachDB can emit the same format. You just set the envelope option:
CREATE CHANGEFEED FOR TABLE users
INTO 'kafka://broker:9092'
WITH envelope = 'enriched';
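With the enriched envelope, the same update carries an explicit operation code and an event timestamp. The message below is abridged and illustrative; the exact fields depend on the options you choose (for example, a before field appears when the changefeed is created with the diff option):

{
  "after": { "id": 1, "name": "Alicia" },
  "op": "u",
  "ts_ns": 1680000000000000000
}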
You can even control which fields are included using enriched_properties:
CREATE CHANGEFEED FOR TABLE users
INTO 'kafka://broker:9092'
WITH envelope = 'enriched', enriched_properties = 'schema, source';
This gives you a familiar format while letting you include only the metadata you need, keeping fewer bytes on the wire.
Format flexibility without infrastructure bloat
The CockroachDB enriched format supports:
JSON and Avro serialization
Inserts, updates, deletes, and initial scan events
Optional schema metadata
Source information for debugging and routing
Operation type for downstream interpretation
But it doesn’t require:
Kafka Connect
A schema registry
A connector plugin
Managing offsets or replication slots
This means you can build pipelines that expect Debezium-style events without having to run Debezium.
For teams already invested in Debezium, this can shorten migrations. For new systems, it avoids the need to make that investment in the first place.
Who enriched changefeeds are for
Enriched changefeeds aren’t just for Debezium users. They’re beneficial for any team that wants structure and context in their CDC output, without the weight of a full streaming platform:
If you’re migrating from Postgres, MySQL, SQL Server, or Oracle and want to keep your pipeline unchanged, enriched changefeeds help.
If you’re building a new system and want a simpler CDC path, they get you there faster.
If you’re an operator who just wants fewer things to break, enriched changefeeds give you fewer things to monitor.
What enriched changefeeds are not
We’re not building a Debezium source connector.
At least not yet. That’s still on the table if demand grows. But today, the cost of building and maintaining a full Debezium connector — including snapshot support, offset tracking, and compatibility testing — is too high relative to the number of users who need it.
Instead, we’ve focused on the part of Debezium that matters most: the format.
Try it out
Adopting enriched changefeeds doesn't just simplify CDC pipelines — it unlocks real business impact.
For teams migrating from Postgres or MySQL, enriched changefeeds shorten timelines and reduce rework, accelerating time-to-value on CockroachDB. For platform engineers, fewer moving parts mean lower maintenance overhead, faster incident resolution, and less downtime risk. And for product owners, this means more reliable real-time data powering user experiences and analytics.
By reducing integration costs and improving operational resilience, enriched changefeeds can help drive faster migrations, reduce churn, and ultimately increase revenue.
Enriched changefeeds are available now. They’re supported in CockroachDB Cloud and in self-hosted clusters. You can try them with any sink you already use: Kafka, webhooks, Pub/Sub, or cloud storage.
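For example, pointing an enriched changefeed at a webhook sink is a single statement (the endpoint URL is a placeholder):

CREATE CHANGEFEED FOR TABLE users
INTO 'webhook-https://example.com/cdc'
WITH envelope = 'enriched';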
And if you’re still not sure, we’d love to talk.
Ready to learn more about how enriched changefeeds can streamline your CDC workflow? Talk to an expert.
Rohan Joshi leads Cockroach Labs' Migrations and Change Data Capture product areas, ensuring customers efficiently migrate applications and maintain real-time data flows across their ecosystems.
Miles Frankel is Technical Lead for the CDC team at Cockroach Labs. He is passionate about distributed systems and databases.