WPI – Computer Science Department, PhD Defense: Aaron Haim, "On the Effects of On-Demand Assistance in Online Learning Platforms"

Tuesday, December 10, 2024
9:00 am to 10:00 am

Aaron Haim 

PhD Candidate

WPI – Computer Science Department 


Zoom: https://wpi.zoom.us/j/5088315569

Committee Members:

Advisor: Prof. Neil Heffernan, WPI – Computer Science Department

Prof. Jacob Whitehill, WPI – Computer Science Department

Prof. Lane Harrison, WPI – Computer Science Department

Prof. Stacy Shaw, WPI – Social Science & Policy Studies

 

Abstract:

Obtaining on-demand assistance on a given problem has been shown to improve student performance on either the subsequent problem or a set of problems making up the post-test. This effect has been replicated at scale within online learning platforms, even when the assistance was crowdsourced from experts. However, while on-demand assistance was helpful in general, many unknowns remained as to which features of assistance contributed to the benefit for students. Additionally, existing research was sometimes non-reproducible, making it difficult to determine whether the underlying results were reliable.

The defended work explored different ways of improving student performance with on-demand assistance using an at-scale experiment manager. First, we observed whether students avoided certain types of assistance. Additionally, we determined whether students exhibiting help-seeking behavior would improve their overall performance on the next problem.

We found that students were less likely to view an explanation than a hint set when one was provided. There was also no significant difference in performance when students exhibited help-seeking behavior, regardless of whether they viewed the assistance provided. The next study used large language models (LLMs) to generate explanations, determining whether students benefit from receiving them compared to no assistance or expert-generated assistance. We found that students who received an LLM-generated explanation performed better than those who received nothing, and approximately the same as those who received expert-generated explanations.

Department: Computer Science