Authors: Augusto Cerqua, Marco Letta, Gabriele Pinto
Machine learning (ML) has gained a central position in economics, the social sciences, and business decision-making. In the public sector, ML is increasingly used for so-called prediction policy problems: settings where policymakers aim to identify the units most at risk of a negative outcome and intervene proactively, for example by targeting public subsidies, predicting local recessions, or anticipating migration patterns. In the private sector, similar predictive tasks arise when firms seek to forecast customer churn or optimize credit risk assessment. In both domains, better predictions translate into a more efficient allocation of resources and more effective interventions.
To achieve these goals, ML algorithms are increasingly applied to panel data, characterized by repeated observations of the same units over multiple time periods. However, ML models were not originally designed for use with panel data, which feature distinct cross-sectional and longitudinal dimensions. When ML is applied to panel data, there is a high risk of a subtle but serious problem: data leakage. This occurs when information unavailable at prediction time accidentally enters the model training process, inflating predictive performance. In our paper "On the (Mis)Use of Machine Learning With Panel Data" (Cerqua, Letta, and Pinto, 2025), recently published in the Oxford Bulletin of Economics and Statistics, we provide the first systematic assessment of data leakage in ML with panel data, propose clear guidelines for practitioners, and illustrate the consequences through an empirical application with publicly available U.S. county data.
The Leakage Problem
Panel data combine two structures: a temporal dimension (units observed over time) and a cross-sectional dimension (multiple units, such as regions or firms). Standard ML practice, splitting the sample randomly into training and testing sets, implicitly assumes independent and identically distributed (i.i.d.) data. This assumption is violated when default ML procedures (such as a random split) are applied to panel data, creating two main types of leakage:
- Temporal leakage: future information leaks into the model during the training phase, making forecasts look unrealistically accurate. Moreover, past information can end up in the testing set, making "forecasts" retrospective.
- Cross-sectional leakage: the same or very similar units appear in both the training and testing sets, meaning the model has already "seen" most of the cross-sectional dimension of the data.
Figure 1 shows how different splitting strategies affect the risk of leakage. A random split at the unit-time level (Panel A) is the most problematic, as it introduces both temporal and cross-sectional leakage. Alternatives such as splitting by units (Panel B), by groups (Panel C), or by time (Panel D) mitigate one type of leakage but not the other. As a result, no strategy completely eliminates the problem: the appropriate choice depends on the task at hand (see below), since in some cases one form of leakage may not be a real concern.
Figure 1 | Training and testing sets under different splitting rules
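As a concrete illustration of these splitting rules, the sketch below builds a small synthetic panel (hypothetical data and variable names, not the data used in the paper) and constructs the random unit-time split of Panel A, the unit-based split of Panel B, and the time-based split of Panel D.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy panel: 100 units observed over the years 2010-2019 (purely synthetic).
panel = pd.DataFrame(
    [(u, t) for u in range(100) for t in range(2010, 2020)],
    columns=["unit", "year"],
)
panel["y"] = rng.normal(size=len(panel))

# Panel A: random split at the unit-time level -> both temporal and cross-sectional leakage.
test_a = panel.sample(frac=0.2, random_state=0)
train_a = panel.drop(test_a.index)

# Panel B: split by units -> no cross-sectional leakage (suited to cross-sectional prediction).
test_units = rng.choice(panel["unit"].unique(), size=20, replace=False)
train_b = panel[~panel["unit"].isin(test_units)]
test_b = panel[panel["unit"].isin(test_units)]

# Panel D: split by time -> no temporal leakage (suited to sequential forecasting).
train_d = panel[panel["year"] <= 2017]
test_d = panel[panel["year"] >= 2018]
```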
Two Types of Prediction Policy Problems
A key insight of the study is that researchers must clearly define their prediction goal ex ante. We distinguish two broad classes of prediction policy problems:
1. Cross-sectional prediction: The task is to map outcomes across units in the same period, for example imputing missing data on GDP per capita across regions when only some regions have reliable measurements. The best split here is at the unit level: different units are assigned to the training and testing sets, while all time periods are kept. This eliminates cross-sectional leakage, although temporal leakage remains. But since forecasting is not the goal, this is not a real issue.
2. Sequential forecasting: The goal is to predict future outcomes based on historical data, for example predicting county-level income declines one year ahead to trigger early interventions. Here, the correct split is by time: earlier periods for training, later periods for testing. This avoids temporal leakage but not cross-sectional leakage, which is not a real concern since the same units are being forecasted across time.
The wrong approach in both cases is the random split by unit-time (Panel A of Figure 1), which contaminates results with both types of leakage and produces misleadingly high performance metrics.
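A simple way to guard against the wrong approach is to audit any candidate split before training. The helper below is a hypothetical utility (not part of the paper) that flags temporal leakage when test periods do not strictly follow the training periods, and cross-sectional leakage when units overlap between the two sets; for forecasting, only the first flag matters.

```python
import pandas as pd

def audit_split(train, test, unit_col="unit", time_col="year"):
    """Flag potential temporal and cross-sectional leakage in a train/test split."""
    temporal = bool(test[time_col].min() <= train[time_col].max())
    cross_sectional = bool(set(train[unit_col]) & set(test[unit_col]))
    return {"temporal_leakage": temporal, "cross_sectional_leakage": cross_sectional}

# Tiny example: two units observed in 2015-2016, split by time.
df = pd.DataFrame({"unit": [1, 1, 2, 2], "year": [2015, 2016, 2015, 2016]})
train, test = df[df["year"] == 2015], df[df["year"] == 2016]
print(audit_split(train, test))
# -> {'temporal_leakage': False, 'cross_sectional_leakage': True}
# The cross-sectional flag is expected (and harmless) in a forecasting setting.
```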
Practical Guidelines
To help practitioners, we summarize a set of do's and don'ts for applying ML to panel data:
- Choose the sample split based on the research question: unit-based for cross-sectional problems, time-based for forecasting.
- Temporal leakage can occur not only through observations, but also through predictors. For forecasting, use only lagged or time-invariant predictors. Using contemporaneous variables (e.g., using unemployment in 2014 to predict income in 2014) is conceptually wrong and creates temporal data leakage (see the sketch after this list).
- Adapt cross-validation to panel data. The random k-fold CV found in most ready-to-use software packages is inappropriate, as it mixes future and past information. Instead, use rolling or expanding windows for forecasting, or CV stratified by units/groups for cross-sectional prediction (see the sketch after this list).
- Ensure that out-of-sample performance is tested on truly unseen data, not on data already encountered during training.
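The points about predictors and cross-validation can be sketched in a few lines. The example below is a minimal illustration under assumed variable names (unit, year, income, unemp) and a one-year lag, not the paper's exact specification: it builds lagged predictors within each unit and then forms expanding-window validation folds over years instead of random k-fold CV.

```python
import pandas as pd

# Hypothetical panel with an outcome (income) and a predictor (unemp).
df = pd.DataFrame({
    "unit":   [1, 1, 1, 2, 2, 2],
    "year":   [2013, 2014, 2015, 2013, 2014, 2015],
    "income": [50.0, 52.0, 51.0, 40.0, 41.0, 43.0],
    "unemp":  [5.0, 5.5, 5.2, 7.0, 6.8, 6.5],
})

# Only lagged predictors: shift within each unit so that a forecast for year t
# relies on information from t-1, never on contemporaneous values from year t.
df = df.sort_values(["unit", "year"])
df["unemp_lag1"] = df.groupby("unit")["unemp"].shift(1)
df["income_lag1"] = df.groupby("unit")["income"].shift(1)

# Drop the first year of each unit, which has no lagged information.
df = df.dropna()

# Expanding-window cross-validation: train on all years up to t, validate on year t+1.
years = sorted(df["year"].unique())
for train_end, val_year in zip(years[:-1], years[1:]):
    train = df[df["year"] <= train_end]
    val = df[df["year"] == val_year]
    # ...fit the model on train[["unemp_lag1", "income_lag1"]] and score it on val
```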
Empirical Application
To illustrate these issues, we analyze a balanced panel of 3,058 U.S. counties from 2000 to 2019, focusing exclusively on sequential forecasting. We consider two tasks: a regression problem (forecasting per capita income) and a classification problem (forecasting whether income will decline in the following year).
We run hundreds of models, varying the split strategy, the use of contemporaneous predictors, the inclusion of lagged outcomes, and the algorithm (Random Forest, XGBoost, Logit, and OLS). This comprehensive design allows us to quantify how much leakage inflates performance. Figure 2 below reports our main findings.
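The flavor of that exercise can be reproduced on synthetic data. The sketch below (purely illustrative, unrelated to the county results reported in the paper) evaluates the same Random Forest under a leaky random unit-time split and under a correct time-based split; on persistent panel data, the random split will typically report a lower, but misleading, test error.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Synthetic panel: 200 units, 2005-2019, with persistent unit effects and a common trend.
units, years = np.arange(200), np.arange(2005, 2020)
df = pd.DataFrame([(u, t) for u in units for t in years], columns=["unit", "year"])
unit_effect = rng.normal(scale=2.0, size=len(units))
df["y"] = unit_effect[df["unit"]] + 0.3 * (df["year"] - 2005) + rng.normal(size=len(df))
df["y_lag1"] = df.groupby("unit")["y"].shift(1)
df = df.dropna()
X_cols = ["y_lag1", "year"]

def test_rmse(train, test):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[X_cols], train["y"])
    return mean_squared_error(test["y"], model.predict(test[X_cols])) ** 0.5

# Leaky random unit-time split vs forecast-consistent time-based split.
test_random = df.sample(frac=0.2, random_state=0)
rmse_random = test_rmse(df.drop(test_random.index), test_random)
rmse_time = test_rmse(df[df["year"] <= 2016], df[df["year"] >= 2017])
print(f"random split RMSE: {rmse_random:.2f} | time-based split RMSE: {rmse_time:.2f}")
```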
Panel A of Figure 2 shows forecasting performance for the classification task. Random splits yield very high accuracy, but this is illusory: the model has already seen similar data during training.
Panel B shows forecasting performance for the regression task. Once again, random splits make models look far better than they really are, whereas correct time-based splits show much lower, yet realistic, accuracy.
Figure 2 | Temporal leakage in the forecasting problem
Panel A – Classification task
Panel B – Regression task
In the paper, we also show that the overestimation of model accuracy becomes considerably more pronounced in years marked by distribution shifts and structural breaks, such as the Great Recession, making the results particularly misleading for policy purposes.
Why It Matters
Data leakage is more than a technical pitfall; it has real-world consequences. In policy applications, a model that looks highly accurate during validation may collapse once deployed, leading to misallocated resources, missed crises, or misguided targeting. In business settings, the same issue can translate into poor investment decisions, inefficient customer targeting, or false confidence in risk assessments. The danger is especially acute when machine learning models are meant to serve as early-warning systems, where misplaced trust in inflated performance can result in costly failures.
By contrast, properly designed models, even if less accurate on paper, provide honest and reliable predictions that can meaningfully inform decision-making.
Takeaway
ML has the potential to transform decision-making in both policy and business, but only if applied correctly. Panel data offer rich opportunities, yet they are especially vulnerable to data leakage. To generate reliable insights, practitioners should align their ML workflow with the prediction objective, account for both the temporal and cross-sectional structures, and use validation strategies that prevent overoptimistic assessments and an illusion of high accuracy. When these principles are followed, models avoid the trap of inflated performance and instead provide guidance that genuinely helps policymakers allocate resources and firms make sound strategic choices. Given the rapid adoption of ML with panel data in both public and private domains, addressing these pitfalls is now a pressing priority for applied research.
References
A. Cerqua, M. Letta, and G. Pinto, "On the (Mis)Use of Machine Learning With Panel Data", Oxford Bulletin of Economics and Statistics (2025): 1–13, https://doi.org/10.1111/obes.70019.