TY - CONF
T1 - Evaluating the Impact of Data Augmentation on Predictive Model Performance
AU - Svabensky, Valdemar
AU - Borchers, Conrad
AU - Cloude, Elizabeth B.
AU - Shimada, Atsushi
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/3/3
Y1 - 2025/3/3
N2 - In supervised machine learning (SML) research, large training datasets are essential for valid results. However, obtaining primary data in learning analytics (LA) is challenging. Data augmentation can address this by expanding and diversifying data, though its use in LA remains underexplored. This paper systematically compares data augmentation techniques and their impact on prediction performance in a typical LA task: prediction of academic outcomes. Augmentation is demonstrated on four SML models, which we successfully replicated from a previous LAK study, as verified by AUC values. Among 21 augmentation techniques, SMOTE-ENN sampling performed the best, improving the average AUC by 0.01 and approximately halving the training time compared to the baseline models. In addition, we compared 99 chained combinations of the 21 techniques and found minor, although statistically significant, improvements across models when adding noise to SMOTE-ENN (+0.014). Notably, some augmentation techniques significantly lowered predictive performance or increased performance fluctuation attributable to random chance. This paper's contribution is twofold. First, our empirical findings show that sampling techniques provide the most statistically reliable performance improvements for LA applications of SML and are computationally more efficient than deep generation methods with complex hyperparameter settings. Second, the LA community may benefit from the validation of a recent study through independent replication.
AB - In supervised machine learning (SML) research, large training datasets are essential for valid results. However, obtaining primary data in learning analytics (LA) is challenging. Data augmentation can address this by expanding and diversifying data, though its use in LA remains underexplored. This paper systematically compares data augmentation techniques and their impact on prediction performance in a typical LA task: prediction of academic outcomes. Augmentation is demonstrated on four SML models, which we successfully replicated from a previous LAK study, as verified by AUC values. Among 21 augmentation techniques, SMOTE-ENN sampling performed the best, improving the average AUC by 0.01 and approximately halving the training time compared to the baseline models. In addition, we compared 99 chained combinations of the 21 techniques and found minor, although statistically significant, improvements across models when adding noise to SMOTE-ENN (+0.014). Notably, some augmentation techniques significantly lowered predictive performance or increased performance fluctuation attributable to random chance. This paper's contribution is twofold. First, our empirical findings show that sampling techniques provide the most statistically reliable performance improvements for LA applications of SML and are computationally more efficient than deep generation methods with complex hyperparameter settings. Second, the LA community may benefit from the validation of a recent study through independent replication.
KW - data generation
KW - learning analytics
KW - prediction
KW - replication
KW - supervised learning
KW - synthetic data
UR - http://www.scopus.com/inward/record.url?scp=105000256723&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105000256723&partnerID=8YFLogxK
U2 - 10.1145/3706468.3706485
DO - 10.1145/3706468.3706485
M3 - Conference contribution
AN - SCOPUS:105000256723
T3 - 15th International Conference on Learning Analytics and Knowledge, LAK 2025
SP - 126
EP - 136
BT - 15th International Conference on Learning Analytics and Knowledge, LAK 2025
PB - Association for Computing Machinery, Inc
T2 - 15th International Conference on Learning Analytics and Knowledge, LAK 2025
Y2 - 3 March 2025 through 7 March 2025
ER -