Data good enough for operations is not necessarily analytics-ready.

Exactly one month ago, I published the first episode of my Industrial Data Quality Podcast. It tackles a topic of critical importance that is, in my opinion, all too often overlooked, especially amid all the buzz around AI.

In the most recent episode, I had the pleasure of welcoming guest speaker Thomas Dhollander, co-founder of Timeseer.AI. Together we explored critical challenges in industrial time-series data reliability and observability.

Key takeaways:

  • Data quality is not just a technical issue: it's a people and process problem, deeply tied to governance and ownership.
  • Data management at many companies is still reactive: issues are fixed only after models break or KPIs look suspicious. As companies scale their data-driven operations, they need to shift to proactive data management to avoid ending up in firefighting mode.
  • Data maturity varies by company and by industry: utilities and pharma often lead, while other industries may still treat data as a byproduct.
  • Data should be treated like a product, with quality checks, documentation, and accountability, especially as you scale analytics. This applies to OT data as well.
  • AI needs data quality: ML and AI depend on quality inputs, and sensor drift or misconfigured tags can quietly corrupt your entire model output. Interestingly, AI is also a key enabler for scaling data quality.
  • Moving data to the cloud introduces new risks: missing context, inconsistent pipelines, and ownership confusion.
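To make "proactive data management" concrete, here is a minimal Python sketch of rule-based quality checks on a single sensor tag. The readings, limits, and flatline rule are illustrative assumptions for this post, not Timeseer.AI's actual implementation:

```python
# Minimal sketch of proactive quality checks on one sensor time series.
# The thresholds and the flatline rule below are illustrative assumptions.
from typing import Dict, List


def quality_flags(values: List[float], lo: float, hi: float,
                  flatline_window: int = 5) -> List[Dict[str, bool]]:
    """Flag out-of-range samples and flatlined (stuck) sensor stretches."""
    flags = []
    for i, v in enumerate(values):
        # Range check: a physically implausible value suggests a bad tag.
        out_of_range = not (lo <= v <= hi)
        # Flatline check: a sensor repeating the exact same value for
        # `flatline_window` consecutive samples is often stuck or offline.
        window = values[max(0, i - flatline_window + 1): i + 1]
        flatlined = len(window) == flatline_window and len(set(window)) == 1
        flags.append({"out_of_range": out_of_range, "flatlined": flatlined})
    return flags


# Example: a temperature tag with one spike and one stuck stretch.
readings = [21.0, 21.3, 95.0, 22.0, 22.0, 22.0, 22.0, 22.0, 21.8]
for i, f in enumerate(quality_flags(readings, lo=0.0, hi=60.0)):
    if f["out_of_range"] or f["flatlined"]:
        print(i, f)
```

The point is that checks like these run continuously on incoming data, so problems surface before a model or KPI downstream goes wrong, rather than after.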