The aim of diagnostic assessments in higher education goes beyond ranking students or verifying secondary school results. With the massification of higher education, the heterogeneity of students’ knowledge has increased significantly. This is especially pronounced in STEM fields, where more than a third of students drop out in the early stages of their studies, largely because they lack basic mathematical and scientific skills. Since technical knowledge is strictly hierarchical, deficiencies at entry cannot be compensated for later: the lack of a stable foundation necessarily entails failure in higher education.
The need for diagnostic measurements in Hungary is intensified by changes in the public education regulatory system, especially the introduction of the 2020 National Core Curriculum (NAT). During the transitional periods of curricular reforms, differences between the knowledge levels of cohorts graduating under different curricula led to heterogeneous first-year populations. The situation is further complicated by the performance assessment paradox, i.e., the deteriorating predictive power of admission scores. Experience shows that a high admission score is less and less a guarantee of stable basic knowledge: due to grade inflation, admission results have become insufficient predictors of students’ achievement at university.
Level assessments not only measure lexical knowledge but also enable the early identification of at-risk students, allowing targeted intervention before failure occurs. An often overlooked but strategically critical function of diagnostic assessments is the identification of students in the top 5–10%. In mass education, talent can remain latent if the pace of instruction is adjusted to the average or to the weaker students.
BME has long faced the challenge that first-year students’ mathematical knowledge is extremely heterogeneous. This stems partly from the specificities of the Hungarian admissions system, which recognizes a variety of prior qualifications: higher-level mathematics classes in high school are optional (facultative). In addition, the lingering negative effects of the COVID-19 pandemic and online learning were still felt in the performance of the class admitted in 2024.
In response, the BME Mathematics Institute decided to teach Calculus at two levels. Level A is recommended for students with a solid foundation, while level B is intended for students with significant deficiencies in the material. The goal of the level B course is not segregation but reducing dropout: alongside the regular curriculum, it also aims to remedy secondary school deficiencies. However, objectively and efficiently classifying the more than 2000 incoming students required a reliable measurement tool, so we developed a three-stage, adaptive online placement test.
The first part of the test assessed basic procedural knowledge from grades 9–10. As the test progressed, students’ performance deteriorated and the number of unanswered questions increased. For the second part, the test divided students into two groups according to whether they had attended facultative higher-level mathematics classes in school. This second block covered the curriculum of grades 11–12, and results differed the most here: almost three-quarters of the students without a facultative background performed below 50%, while students who had attended such classes achieved a much higher average. In the third, final stage, the test further divided students into four subgroups. The results of the lowest-performing group were worrying, which justified the introduction of differentiated instruction.
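The staged routing described above can be sketched in code. The following is a minimal illustration only: the thresholds, score scales, field names, and the four subgroup labels are all hypothetical assumptions for exposition, not the actual parameters or rules of the BME placement test.

```python
# Illustrative sketch of a three-stage adaptive placement routing.
# All thresholds and labels are assumptions, not the real test's rules.
from dataclasses import dataclass


@dataclass
class StudentResult:
    basics_score: float     # stage 1: grades 9-10 procedural knowledge (0-100)
    took_facultative: bool  # attended facultative higher-level maths classes
    advanced_score: float   # stage 2: grades 11-12 curriculum (0-100)
    final_score: float      # stage 3: finer-grained assessment (0-100)


def place(student: StudentResult) -> str:
    """Route a student to one of four hypothetical subgroups."""
    # Stage 2 branches on background: the same raw score is read differently
    # with and without facultative preparation (illustrative rule).
    cutoff = 50.0 if student.took_facultative else 40.0
    if student.advanced_score < cutoff:
        # Stage 3 separates the weakest subgroup needing the most support.
        return "B-remedial" if student.final_score < 25.0 else "B"
    return "A+" if student.final_score >= 75.0 else "A"
```

For example, under these made-up rules a student without facultative background scoring 30 in the second block and 20 in the third would be routed to the "B-remedial" subgroup, while a well-prepared student scoring high throughout would land in "A+".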
Based on the results of past semesters, the adaptive placement test has proven a successful tool for mapping and grouping students. The introduction of differentiated instruction was essential to ensure that students with limited prior knowledge had a chance to catch up, while those already comfortable with the required material could progress at their own pace. In conclusion, our research shows that input measurements are essential for effective engineering education, especially in a transitional, post-pandemic period.