Tools like dbt make setting up SQL data pipelines simple and systematic. However, even with the added structure and clearly defined data models, pipelines can still become complex, which makes debugging issues and validating changes to data models difficult.
The increasing complexity of data transformation logic gives rise to the following issues:
- Traditional code review processes only look at code changes and exclude the data impact of those changes.
- Data impact resulting from code changes is hard to trace. In sprawling DAGs with nested dependencies, discovering how and where data impact occurs is extremely time-consuming, or near impossible.
GitLab's dbt DAG (shown in the featured image above) is the perfect example of a data project that's already a house of cards. Imagine trying to follow a simple SQL logic change to a column through this entire lineage DAG. Reviewing a data model update would be a daunting task.
How would you approach this kind of review?
What is data validation?
Data validation refers to the process of determining that the data is correct in terms of real-world requirements. This means ensuring that the SQL logic in a data model behaves as intended by verifying that the data it produces is correct. Validation is usually performed after modifying a data model, such as when accommodating new requirements, or as part of a refactor.
A unique review challenge
Data has state and is directly affected by the transformations used to generate it. This is why reviewing data model changes is a unique challenge: both the code and the data need to be reviewed.
As a result, data model updates should be reviewed not only for completeness, but also for context. In other words, you must confirm that the data is correct and that existing data and metrics weren't unintentionally altered.
Two extremes of data validation
In most data teams, the person making the change relies on institutional knowledge, intuition, or past experience to assess the impact and validate the change.
"I've made a change to X, and I think I know what the impact should be. I'll check it by running Y."
The validation strategy usually falls into one of two extremes, neither of which is ideal:
- Spot-checking with queries and a few high-level checks like row count and schema. It's fast but risks missing actual impact. Critical and silent errors can go unnoticed.
- Exhaustively checking every single downstream model. It's slow and resource-intensive, and becomes costly as the pipeline grows.
This results in a data review process that's unstructured, hard to repeat, and often lets silent errors through. A new strategy is needed, one that helps the engineer perform precise and targeted data validation.
A better approach: understanding data model dependencies
To validate a change to a data project, it's essential to understand the relationships between models and how data flows through the project. These dependencies between models tell us how data is passed and transformed from one model to another.
Analyze the relationships between models
As we've seen, data project DAGs can be huge, but a data model change only impacts a subset of models. By isolating this subset and then analyzing the relationships between the models, you can peel back the layers of complexity and focus only on the models that actually need validating, given a particular SQL logic change.
The types of dependencies in a data project are:
Model-to-model
A structural dependency in which columns are selected from an upstream model.
-- downstream_model
select
a,
b
from {{ ref("upstream_model") }}
Column-to-column
A projection dependency that selects, renames, or transforms an upstream column.
-- downstream_model
select
a,
b as b2
from {{ ref("upstream_model") }}
Model-to-column
A filter dependency in which a downstream model uses an upstream model in a where, join, or other conditional clause.
-- downstream_model
select
a
from {{ ref("upstream_model") }}
where b > 0
Understanding the dependencies between models helps us to define the impact radius of a data model logic change.
Identify the impact radius
When making changes to a data model's SQL, it's essential to know which other models might be affected (the models you must check). At a high level, this is determined by model-to-model relationships. This subset of DAG nodes is known as the impact radius.
In the DAG below, the impact radius consists of nodes B (the modified model) and D (the downstream model). In dbt, these models can be identified using the modified+ selector.

Identifying modified nodes and their downstream models is a great start, and by isolating changes like this you'll reduce the potential data validation area. However, this could still result in a large number of downstream models.
Classifying the types of SQL changes can help you further prioritize which models actually require validation by understanding the severity of the change, eliminating branches whose changes are known to be safe.
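At the model level, the impact radius is just the set of modified models plus everything reachable downstream of them. Here is a minimal Python sketch of that traversal; the graph and model names are hypothetical illustrations, not part of any dbt API:

```python
from collections import deque

# Hypothetical adjacency list: model -> models that select from it.
# Mirrors the diagram above, where B is modified and D sits downstream.
dag = {
    "A": ["B"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def impact_radius(dag, modified):
    """Return the modified models plus every model downstream of them."""
    radius = set(modified)
    queue = deque(modified)
    while queue:
        node = queue.popleft()
        for child in dag.get(node, []):
            if child not in radius:
                radius.add(child)
                queue.append(child)
    return radius

print(sorted(impact_radius(dag, {"B"})))  # ['B', 'D']
```

Note that C also feeds D, but because C itself wasn't modified it stays outside the radius; only B and its downstream D need attention.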
Classify the SQL change
Not all SQL changes carry the same level of risk to downstream data, so they should be categorized accordingly. Classifying SQL changes this way adds a systematic approach to your data review process.
A SQL change to a data model can be classified as one of the following:
Non-breaking change
Changes that don't impact the data in downstream models, such as adding new columns, adjusting SQL formatting, or adding comments.
-- Non-breaking change: new column added
select
id,
category,
created_at,
-- new column
now() as ingestion_time
from {{ ref('a') }}
Partial-breaking change
Changes that only impact downstream models that reference certain columns, such as removing or renaming a column, or modifying a column definition.
-- Partial-breaking change: `category` column renamed
select
id,
created_at,
category as event_category
from {{ ref('a') }}
Breaking change
Changes that impact all downstream models, such as filtering, sorting, or otherwise altering the structure or meaning of the transformed data.
-- Breaking change: filtered to exclude data
select
id,
category,
created_at
from {{ ref('a') }}
where category != 'internal'
Apply classification to reduce scope
After applying these classifications, the impact radius, and with it the number of models that need to be validated, can be significantly reduced.

In the DAG above, nodes B, C, and F have been modified, resulting in potentially 7 nodes that need to be validated (C to E). However, not every branch contains SQL changes that actually require validation. Let's take a look at each branch:
Node C: Non-breaking change
C is classified as a non-breaking change, so neither C nor H needs to be checked; they can be eliminated.
Node B: Partial-breaking change
B is classified as a partial-breaking change due to the change to column B.c1. Therefore, D and E need to be checked only if they reference column B.c1.
Node F: Breaking change
The modification to model F is classified as a breaking change, so all downstream nodes (G and E) need to be checked for impact. For instance, model G might aggregate data from the modified upstream column.
The initial 7 nodes have already been reduced to 5 that need to be checked for data impact (B, D, E, F, G). Now, by inspecting the SQL changes at the column level, we can reduce that number even further.
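The branch-by-branch pruning above can be sketched in a few lines of Python. The DAG below is a hypothetical reconstruction of the diagram (C feeds H; B feeds D, which feeds E; F feeds G, which feeds E), and the classification labels are supplied by hand:

```python
from collections import deque

# Hypothetical reconstruction of the example DAG.
dag = {
    "B": ["D"], "C": ["H"], "F": ["G"],
    "D": ["E"], "G": ["E"], "H": [], "E": [],
}
# Classification of each modified model's SQL change.
changes = {"C": "non-breaking", "B": "partial-breaking", "F": "breaking"}

def downstream(dag, node):
    """All models reachable downstream of `node` (excluding the node itself)."""
    seen, queue = set(), deque([node])
    while queue:
        for child in dag.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def models_to_validate(dag, changes):
    to_check = set()
    for node, kind in changes.items():
        if kind == "non-breaking":
            continue  # safe change: eliminate the whole branch
        # Breaking and partial-breaking changes need checking; a
        # partial-breaking branch is later narrowed further to models
        # that actually reference the affected column.
        to_check.add(node)
        to_check |= downstream(dag, node)
    return to_check

print(sorted(models_to_validate(dag, changes)))  # ['B', 'D', 'E', 'F', 'G']
```

As in the walkthrough, the non-breaking change at C eliminates C and H, leaving 5 of the original 7 nodes to check.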
Narrowing the scope further with column-level lineage
Breaking and non-breaking changes are straightforward to classify but, when it comes to inspecting partial-breaking changes, the models need to be analyzed at the column level.
Let's take a closer look at the partial-breaking change in model B, in which the logic of column c1 has been changed. This change could potentially impact 4 downstream nodes: D, E, K, and J. After tracking column usage downstream, this subset can be reduced further.

Following column B.c1 downstream, we can see that:
- B.c1 → D.c1 is a column-to-column (projection) dependency.
- D.c1 → E is a model-to-column dependency.
- D → K is a model-to-model dependency. However, as D.c1 is not used in K, this model can be eliminated.
Therefore, the models that need to be validated in this branch are B, D, and E. Together with the breaking change to F and its downstream G, the total set of models to validate in this diagram is F, G, B, D, and E: just 5 out of 9 potentially impacted models.
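The column-level pruning step can be sketched as a simple intersection test: a downstream model stays in scope only if it references an affected column. The column-usage map below is a hypothetical hand-built reconstruction of the example; in practice this information would come from a column-level lineage tool:

```python
# Hypothetical map: model -> upstream columns it actually references.
column_uses = {
    "D": {"B.c1"},   # projection: D.c1 is built from B.c1
    "E": {"D.c1"},   # E uses D.c1 in a conditional clause
    "K": {"D.c2"},   # K reads from D, but not the affected column
    "J": {"K.c2"},   # J sits below K, which is already unaffected
}

# The changed column plus its downstream projection/rename.
affected = {"B.c1", "D.c1"}

def needs_validation(model):
    """A model needs checking only if it references an affected column."""
    return bool(column_uses.get(model, set()) & affected)

impacted = [m for m in ["D", "E", "K", "J"] if needs_validation(m)]
print(impacted)  # ['D', 'E']
```

Adding the modified model B itself gives the branch's final validation set of B, D, and E, matching the walkthrough: K and J drop out because D.c1 never reaches them.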
Conclusion
Data validation after a model change is hard, especially in large and complex DAGs. It's easy to miss silent errors, and performing validation becomes a daunting task, with data models often feeling like black boxes when it comes to downstream impact.
A structured and repeatable process
By using this change-aware data validation technique, you can bring structure and precision to the review process, making it systematic and repeatable. It reduces the number of models that need to be checked, simplifies the review process, and lowers costs by validating only the models that actually require it.
Before you go…
Dave is a senior technical advocate at Recce, where we're building a toolkit to enable advanced data validation workflows. He's always happy to chat about SQL, data engineering, or helping teams navigate their data validation challenges. Connect with Dave on LinkedIn.
Research for this article was made possible by my colleague Chen En Lu (Popcorny).