A computationally efficient artificial intelligence (AI) model, the Extreme Learning Machine (ELM), is adopted to analyze patterns embedded in continuous assessment and to model the weighted score (WS) and the examination (EX) score in engineering mathematics courses at an Australian regional university. Student performance data collected over a six-year period across multiple courses, ranging from the mid- to the advanced level and spanning diverse offering modes (i.e., on-campus, ONC, and online, ONL), are modelled by ELM and benchmarked against two competing models: random forest (RF) and the Volterra model. With assessment and examination marks as key predictors of WS (which determines the grade in the mid-level course), ELM outperformed both counterpart models for the ONC and the ONL offers. For the ONC offer, ELM achieved a relative prediction error in the testing phase of only 0.74%, compared with about 3.12% (RF) and 1.06% (Volterra); for the ONL offer, its error was only 0.51%, compared with about 3.05% and 0.70%. In modelling student performance in the advanced engineering mathematics course, ELM registered slightly larger errors: 0.77% (vs. 22.23% and 1.87%) for the ONC offer and 0.54% (vs. 4.08% and 1.31%) for the ONL offer. This study advocates a pioneering implementation of a robust AI methodology to uncover relationships among student learning variables, to support the development of teaching and learning interventions and course health checks, and to address issues related to graduate outcomes and student learning attributes in the higher education sector.
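To make the modelling approach concrete, the following is a minimal sketch of an ELM regressor of the kind benchmarked above: random, fixed hidden-layer weights with output weights solved in closed form by least squares, evaluated with a relative prediction error. The hidden-layer size, activation function, feature standardisation, and the synthetic assessment data are illustrative assumptions, not the paper's actual configuration or dataset.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Fit a single-hidden-layer ELM: random fixed input weights,
    output weights obtained by least squares (pseudoinverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def relative_error(y_true, y_pred):
    """Mean absolute relative prediction error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Synthetic illustration (hypothetical data): WS as a noisy weighted
# mix of three continuous-assessment marks on a 40-100 scale.
rng = np.random.default_rng(1)
X = rng.uniform(40, 100, size=(200, 3))
y = X @ np.array([0.3, 0.3, 0.4]) + rng.normal(0, 1, 200)

# Standardise features so the tanh hidden layer does not saturate.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Train on the first 150 students, test on the remaining 50.
W, b, beta = elm_fit(Xs[:150], y[:150])
err = relative_error(y[150:], elm_predict(Xs[150:], W, b, beta))
print(f"relative prediction error (testing phase): {err:.2f}%")
```

Because only the output weights are learned, and in a single linear solve, training an ELM avoids iterative backpropagation entirely, which is the source of the computational efficiency claimed for the method.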
Published in: IEEE Access