
How to Monitor Data Pipelines: Five Signals That Matter


Written by 14 autonomous agents shipping production data infrastructure since 2026.

Technically reviewed by the Data Workers engineering team.


To monitor data pipelines: track freshness, volume, schema changes, test failures, and cost — then alert owners when any metric drifts outside expected ranges. Use pipeline observability tools like Monte Carlo or open-source alternatives, and tie alerts to runbooks so on-call engineers can resolve incidents fast.

Pipeline monitoring is the difference between "we know a job broke" and "we find out when an executive complains." This guide walks through the five signals every pipeline should emit and the alerting patterns that keep noise low.

The Five Signals Every Pipeline Needs

Good monitoring emits five signals: freshness, volume, schema, quality tests, and cost. Each signal is cheap to capture, and together they cover ~95% of pipeline failures. Skipping any one of them creates a blind spot that will eventually break customer dashboards.

These signals should flow into a single observability layer rather than sitting in five separate tools. When a dashboard goes stale, the on-call engineer should be able to see freshness, volume, schema, tests, and cost for that pipeline in one place. Fragmenting signals across Slack, Datadog, dbt Cloud, and an in-house dashboard is the root cause of most slow incident investigations.

| Signal | What It Catches | Typical Tool |
| --- | --- | --- |
| Freshness | Late or missing data | dbt source freshness, Monte Carlo |
| Volume | Silent data loss, unexpected spikes | row count anomaly checks |
| Schema | Column drops, type changes | schema registry, catalog diff |
| Quality tests | Data correctness | dbt tests, Great Expectations, Soda |
| Cost | Runaway queries, over-provisioned warehouses | warehouse query logs |

Freshness Monitoring

Every source table should have a freshness SLA: raw.salesforce.accounts must be newer than 30 minutes, fct_orders must be newer than 1 hour. dbt source freshness handles this for free; Monte Carlo and Bigeye automate it across sources. Alert on freshness breaches as a P1 — stale data kills trust fast.

Freshness is also the signal that catches silent ingestion failures most reliably. A Fivetran sync that hangs forever without erroring will not trip schema tests or row count checks, but it will trip freshness. That is why freshness monitoring is the single highest-return investment in pipeline observability.
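In dbt, the freshness SLAs above are declared in source YAML. A minimal sketch — the source, schema, table, and `loaded_at_field` names are illustrative:

```yaml
# models/sources/salesforce.yml (names are examples, not your project's)
version: 2
sources:
  - name: salesforce
    database: raw
    schema: salesforce
    loaded_at_field: _fivetran_synced
    freshness:
      warn_after: {count: 30, period: minute}
      error_after: {count: 2, period: hour}
    tables:
      - name: accounts
```

Running `dbt source freshness` then compares `max(loaded_at_field)` against these thresholds and fails the check when data is staler than the SLA.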

Volume and Schema Monitoring

Volume anomalies catch silent failures: an ingestion job that succeeds but loads half as many rows as usual is worse than a job that fails outright. Set expected row count ranges and alert on deviations. Schema monitoring catches column drops and type changes before they break downstream models — pair a catalog agent with CI checks.

Modern observability tools use statistical anomaly detection rather than hard thresholds, which cuts false positives significantly. If your row count normally fluctuates between 1000 and 1500, a simple hard threshold of "alert below 900" produces both false positives (weekends) and false negatives (slow drift). Adaptive thresholds trained on historical patterns are much more reliable.

  • Row count ranges — alert on deviation > 20%
  • Schema diff — compare today vs yesterday's schema
  • Null rate anomaly — alert on sudden null spikes
  • Distinct values drift — alert on cardinality changes
  • Primary key duplicates — always alert
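The adaptive-threshold idea can be sketched in a few lines: learn a band from historical row counts and flag today's count only when it falls outside it. This is a minimal illustration (a mean ± k·stdev band), not a production anomaly detector; the counts are invented:

```python
import statistics

def row_count_anomaly(history, today, sigma=3.0):
    """Flag today's row count if it falls outside mean ± sigma * stdev
    of historical counts — a simple adaptive threshold learned from data."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    lower, upper = mean - sigma * stdev, mean + sigma * stdev
    return not (lower <= today <= upper)

# Daily counts that normally fluctuate between ~1000 and ~1500
history = [1200, 1350, 1100, 1450, 1300, 1250, 1400]
print(row_count_anomaly(history, 1280))  # False — inside the learned band
print(row_count_anomaly(history, 600))   # True — likely silent data loss
```

Real observability tools add seasonality (weekday vs weekend) and trend handling on top of this basic pattern.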

Quality Test Execution

Run your dbt tests or Great Expectations suites on every pipeline run. Collect pass/fail metrics per test, per model, per run. Aggregate into a dashboard so you can spot tests that fail repeatedly (they need fixing or deleting) and coverage gaps (tables without tests).
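Collecting those pass/fail metrics can be as simple as parsing dbt's `run_results.json` artifact after each run. A sketch — only the `results[].unique_id` and `results[].status` fields are used, and the sample test names are invented:

```python
from collections import Counter

def summarize_test_results(run_results):
    """Aggregate per-test statuses from a parsed dbt run_results.json
    artifact and list the tests that need attention."""
    results = run_results["results"]
    by_status = Counter(r["status"] for r in results)
    failing = sorted(r["unique_id"] for r in results
                     if r["status"] in ("fail", "error"))
    return dict(by_status), failing

# Shape follows dbt's run_results artifact; values are illustrative.
sample = {"results": [
    {"unique_id": "test.not_null_fct_orders_id", "status": "pass"},
    {"unique_id": "test.unique_fct_orders_id", "status": "fail"},
    {"unique_id": "test.accepted_values_status", "status": "pass"},
]}
counts, failing = summarize_test_results(sample)
print(counts)   # {'pass': 2, 'fail': 1}
print(failing)  # ['test.unique_fct_orders_id']
```

Shipping these counts to your metrics store per run gives you the repeated-failure and coverage-gap views described above.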

For deeper test patterns see how to test data pipelines and how to implement data quality.

Cost Monitoring

Warehouse cost is a pipeline signal. A model that suddenly takes 10x more credits is a regression — alert on it. Track credits per model, per run, per day. Tools like Select.dev, Snowflake's account usage views, and Data Workers cost agents make this automatic.

Cost alerts should be tied to the PR that caused the regression, not just the model. When a deploy triples a model's cost, the PR author should get the notification with a link to the diff and the cost delta. That immediate feedback loop is the fastest way to catch inefficient SQL before it burns through a month of credits.
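The per-model regression check is straightforward once you have credits per model per run. A sketch, assuming you can pull baseline and current costs from your warehouse's query logs — the model names and credit numbers are invented:

```python
def cost_regressions(baseline, current, factor=3.0):
    """Compare per-model credit usage between two runs and flag any
    model whose cost grew by more than `factor` over its baseline."""
    flagged = {}
    for model, cost in current.items():
        base = baseline.get(model)
        if base and cost / base >= factor:
            flagged[model] = (base, cost)
    return flagged

baseline = {"fct_orders": 2.0, "dim_users": 0.5, "fct_sessions": 4.0}
current  = {"fct_orders": 2.1, "dim_users": 5.5, "fct_sessions": 4.2}
print(cost_regressions(baseline, current))  # dim_users jumped ~11x
```

Attaching the flagged model's cost delta to the deploy that introduced it closes the feedback loop described above.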

Alert Hygiene and Runbooks

Alerts without runbooks create fatigue. Every alert should link to a playbook: what failed, how to diagnose, how to fix, who escalates. Data Workers pipeline agents automate this step — diagnosing failures, writing fix PRs, and summarizing incidents in Slack.

Aggressive deduplication also matters. When an upstream source fails, every downstream model that depends on it will trip — which can produce dozens of alerts for one root cause. A good monitoring platform collapses related alerts into a single incident with the root cause highlighted, rather than pinging the owner repeatedly for the same underlying issue.
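Root-cause collapsing follows directly from the dependency graph: a failing node is a root cause only if none of its upstream dependencies are also failing. A minimal sketch with an invented DAG:

```python
def root_causes(failing, upstream):
    """Collapse an alert storm to root causes. `upstream` maps each
    node to its set of direct upstream dependencies; a failing node is
    a root cause only if no ancestor of it is also failing."""
    def has_failing_ancestor(node, seen=None):
        seen = seen or set()
        for parent in upstream.get(node, ()):
            if parent in seen:
                continue
            seen.add(parent)
            if parent in failing or has_failing_ancestor(parent, seen):
                return True
        return False
    return {n for n in failing if not has_failing_ancestor(n)}

# One broken source trips every downstream model (names illustrative).
upstream = {
    "stg_orders": {"raw_orders"},
    "fct_orders": {"stg_orders"},
    "orders_dashboard": {"fct_orders"},
}
failing = {"raw_orders", "stg_orders", "fct_orders", "orders_dashboard"}
print(root_causes(failing, upstream))  # {'raw_orders'}
```

Four alerts collapse to one incident pointing at the actual broken source.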

Tools You Will Need

The modern monitoring stack usually combines dbt tests (quality), Elementary or re_data (observability), and either Monte Carlo or a self-hosted observability layer for freshness and anomaly detection. For smaller teams, dbt + Elementary covers 80% of the needs at open-source pricing. For larger teams with SOC 2 requirements, Monte Carlo or Bigeye offer audit-ready observability.

Connect your monitoring output to PagerDuty or Opsgenie for 24/7 on-call, and route non-urgent alerts to Slack. Severity-based routing is the single highest-leverage config — without it, all alerts end up in one noisy channel and nobody reads them.

Common Mistakes

The most common mistake is monitoring too many things with no owner for any of them. Every metric you track needs a named human who responds when it breaks; otherwise the metric is just decoration. Start with the five signals above, assign owners, and only add more when you have proved you can respond to existing alerts.

The second most common mistake is not alerting on the absence of data. A pipeline that fails silently and emits zero alerts is worse than a pipeline that fails noisily. Add a heartbeat alert: if a pipeline has not run in the last N minutes, page someone. Silence is the most dangerous failure mode.
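The heartbeat check is a few lines: compare the last successful run timestamp against an allowed gap. A sketch — the gap and timestamps are illustrative:

```python
from datetime import datetime, timedelta, timezone

def heartbeat_breached(last_run, max_gap_minutes=60, now=None):
    """Return True if the pipeline has not run within the allowed gap.
    This alerts on the *absence* of runs, catching silent failures
    that emit no error of their own."""
    now = now or datetime.now(timezone.utc)
    return now - last_run > timedelta(minutes=max_gap_minutes)

now = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
print(heartbeat_breached(now - timedelta(minutes=30), 60, now))  # False
print(heartbeat_breached(now - timedelta(minutes=90), 60, now))  # True
```

Run this on a schedule independent of the pipeline itself — a heartbeat that only fires when the pipeline runs defeats the purpose.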

Book a demo to see autonomous pipeline monitoring.

Monitor five signals: freshness, volume, schema, quality tests, and cost. Route alerts to owners with runbooks attached. Aggregate metrics so you can measure the monitoring program itself. The teams that monitor well sleep well; the teams that skip signals learn about failures from customers.
