
Fig 1 - Datactics FlowDesigner
Datactics uses rules to drive your data quality processes. These rules are created in FlowDesigner, using a visual, drag-and-drop framework, which can be seen in Figure 1. They can be used for the purposes you would expect: matching records, constructing datasets, identifying low-confidence results, and so on.
These rules are underpinned by the Datactics AI Engine. For Datactics, the emphasis is on “explainable AI”, meaning that the models used by the AI engine are designed to be transparent, reproducible, and ultimately trustworthy and accountable. These models cater to a variety of use cases and are not restricted to data quality (graph linkage prediction and automated dataset labelling, for example, are both supported, and have significant applications beyond quality). That said, for the purposes of data quality, the most notable applications are AI-augmented data matching and deduplication, and error detection.
The former, in particular, focuses on automatically pruning low-confidence match results that would otherwise need to be reviewed manually (and often arrive in large numbers), saving your data quality analysts significant time and effort.
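To make this triage concrete, here is a minimal sketch of threshold-based pruning. The class, thresholds, and data are invented for illustration; Datactics’ actual engine will of course be considerably more sophisticated than a pair of cut-offs.

```python
from dataclasses import dataclass

@dataclass
class MatchCandidate:
    record_a: str
    record_b: str
    confidence: float  # model-assigned match probability, 0.0 to 1.0

def triage(candidates, accept_at=0.95, reject_at=0.20):
    """Split match candidates into auto-accepted, auto-rejected,
    and the (ideally small) remainder needing human review."""
    accepted, rejected, review = [], [], []
    for c in candidates:
        if c.confidence >= accept_at:
            accepted.append(c)
        elif c.confidence <= reject_at:
            rejected.append(c)
        else:
            review.append(c)
    return accepted, rejected, review

candidates = [
    MatchCandidate("ACME Corp", "ACME Corporation", 0.98),
    MatchCandidate("ACME Corp", "Acme Ltd", 0.55),
    MatchCandidate("ACME Corp", "Apex Holdings", 0.05),
]
accepted, rejected, review = triage(candidates)
print(len(accepted), len(rejected), len(review))  # 1 1 1
```

Only the middle band lands on an analyst’s desk; the AI engine’s job, in effect, is to make that band as narrow as possible.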
In addition to providing this level of automation, these AI-enabled data quality processes will generally be more comprehensive, more systematic, and ultimately more accurate than their manual equivalents. Moreover, Datactics’ AI models benefit from unsupervised learning, allowing them to grow more accurate over time without manual intervention.

Fig 2 - Data quality overview in Datactics
Data quality and data matching rules and checks can be executed in DQM, and the results are displayed in a variety of views and dashboards, one of which is shown in Figure 2. Notably, these dashboards can include prediction reasoning, and the data matching results view in particular features a prominent ‘explain decision’ button. The latter does what it says, using visualisations to show you which data features led to a particular decision being made. Both of these elements are good, practical examples of Datactics’ “explainable AI” philosophy in action.
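As a rough illustration of the kind of reasoning an ‘explain decision’ view might surface (this is a generic, hypothetical scoring model, not Datactics’ own method), consider a linear match scorer whose per-feature contributions can simply be ranked:

```python
# A toy linear match scorer: each feature similarity contributes
# weight * value to the overall match score. Feature names and
# weights are invented for the example.
weights = {
    "name_similarity": 2.4,
    "address_similarity": 1.1,
    "date_of_birth_match": 3.0,
}

def explain_decision(features):
    """Rank features by their contribution to the match score,
    the kind of breakdown an 'explain decision' view might chart."""
    contributions = {f: weights[f] * v for f, v in features.items()}
    for feature, contrib in sorted(contributions.items(),
                                   key=lambda kv: -abs(kv[1])):
        print(f"{feature:>22}: {contrib:+.2f}")

explain_decision({
    "name_similarity": 0.92,
    "address_similarity": 0.40,
    "date_of_birth_match": 1.0,
})
#    date_of_birth_match: +3.00
#        name_similarity: +2.21
#     address_similarity: +0.44
```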
The last element we should talk about is ML Monitor, the platform’s model monitoring dashboard. This allows you to track data drift, model performance, and so on. It also uses MLflow for model management (for example, tracking model versions).
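Since MLflow is a standard open-source tool, its tracking and registry APIs give a feel for what tracking model versions involves in practice. The sketch below uses real MLflow calls, but the experiment name, metrics, and placeholder model are invented for illustration; it is not how Datactics wires things up internally.

```python
import mlflow
import mlflow.pyfunc
from mlflow.tracking import MlflowClient

# A stand-in model, just so the run has an artifact to register.
class EchoModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input):
        return model_input

mlflow.set_experiment("entity-matching")
with mlflow.start_run() as run:
    # Record the settings and quality metrics for this training run...
    mlflow.log_param("match_threshold", 0.95)
    mlflow.log_metric("precision", 0.97)
    mlflow.log_metric("recall", 0.93)
    # ...log the model artifact itself...
    mlflow.pyfunc.log_model("model", python_model=EchoModel())
    # ...and register it, creating a new version under this name.
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "matcher")

# Enumerate the registered versions of the model.
client = MlflowClient()
for v in client.search_model_versions("name='matcher'"):
    print(v.name, v.version, v.creation_timestamp)
```

Each run adds a new registered version, which is what lets a monitoring layer like ML Monitor compare performance and drift across versions over time.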