SabaiTA Student Perception and Learning Impact
Across the board, students rated SabaiTA highly
All eleven quantitative dimensions returned mean scores above 4.2 out of 5, with the majority of medians at a perfect 5.0, indicating strong and consistent satisfaction across usability, AI features, and learning impact.
Students felt more confident - not just more capable
A small share of students (3 of 50) reported moments of confusion or feeling overwhelmed, while the overall emotional profile remains strongly positive. This aligns with research on affective computing and its role in sustaining learner engagement.
Positive feelings reported (Q3)
- More confident: 19
- More supported: 14
- More comfortable: 12
- Less stressed: 9
- More engaged: 4
- More motivated: 3
"Confidence and reduced stress are not soft byproducts - they are predictors of learning retention and persistence in STEM fields."
Pekrun et al. (2011). Measuring emotions in students' learning. Contemporary Educational Psychology, 36(1), 36-48.
The low incidence of negative reports suggests interface clarity is strong overall, with a small opportunity to smooth edge cases.
Explanatory feedback drives the most value
Students primarily value feedback that explains mistakes rather than just marking answers wrong. This mirrors findings in formative assessment literature where elaborative feedback outperforms corrective feedback alone.
Why AI feedback is helpful (Q7, n=45)
- Explains my mistakes: 23
- Helps find mistakes faster: 16
- Explains why answer is correct: 13
- Teaches correct methods: 10
- Easy to understand: 10
"Elaborated feedback that explains why an answer is wrong - and how to improve - produces significantly larger learning gains than simple right/wrong feedback."
Hattie & Timperley (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
This dimension received the 2nd highest individual score across all survey dimensions.
The chat function serves as a cognitive scaffold
Among the 44 students who rated the chat feature, the most selected benefit was "helps me understand the question" (18 of 44) - a pattern consistent with Vygotsky's zone of proximal development, where targeted prompts bridge the gap between what learners can do alone and what they can achieve with guidance.
Why AI chat is helpful (Q11, n=44)
- Helps me understand the question: 18
- Helps me when I get stuck: 13
- Helps me after a wrong answer: 11
- Helps me check thinking before submit: 10
- Helps me review after finishing: 5
Alternate support preferences (Q12)
- Preferred feedback only: 2
- Preferred friends/instructors: 1
- Did not notice it: 1
- Did not need it: 1
- Too slow: 1
Progress reports support self-regulated learning
The learning analytics dashboard received a mean score of 4.34/5. The most cited benefit - "shows my weak topics" - reflects the principle of metacognitive awareness, a well-established predictor of academic achievement.
Why progress report is helpful (Q16, n=44)
- Shows my weak topics: 17
- Shows my strengths: 12
- Shows what to review next: 12
- Motivates me to continue: 8
- Shows improvement over time: 6
"Learning dashboards that surface weakness-specific data increase metacognitive accuracy and study time allocation - two of the strongest modifiable predictors of course performance."
Zimmerman & Schunk (2011). Handbook of Self-Regulation of Learning and Performance. Routledge.
Visibility opportunities (Q17)
A gentle nudge or onboarding reminder could increase awareness without changing the feature set.
Structured practice settings are generally accepted
Both features received mean scores above 4.2, indicating net acceptance. Open-text responses suggest a desire for selective flexibility, particularly in syntax-heavy exercises where retyping code is burdensome.
Copy-Paste Restriction (Q18)
Desirable difficulties - strategies that impose cognitive effort during practice - have been shown to increase long-term retention even when students find them challenging in the moment.
Attempt Limit (Q19)
Attempt limits encourage deliberate retrieval practice - a mechanism empirically linked to durable memory traces. Students who understood this rationale rated the feature more positively.

