Comparing the Factors That Predict Completion and Grades Among For-Credit and Open/MOOC Students in Online Learning

Authors

  • Ma. Victoria Almeda, Teachers College, Columbia University
  • Joshua Zuech, NextThought
  • Ryan S. Baker, University of Pennsylvania
  • Chris Utz, NextThought
  • Greg Higgins, NextThought
  • Rob Reynolds, NextThought

DOI:

https://doi.org/10.24059/olj.v22i1.1060

Keywords:

MOOCs, Prediction Modeling, Predictive Analytics, Online Learning, Student Achievement, Learning Analytics, Higher Education, Distance Education

Abstract

Online education continues to become an increasingly prominent part of higher education, but many students struggle in distance courses. For this reason, there has been considerable interest in predicting which students will succeed in online courses and which will struggle (achieving poor grades or dropping out prior to course completion). Effective intervention depends on understanding which students are at-risk in terms of actionable factors, and behavior within an online course is one key potential factor for intervention. In recent years, many have suggested that Massive Open Online Courses (MOOCs) are a particularly useful place to conduct research into behavior and interventions, given both their size and the relatively low consequences and costs of experimentation. However, it is not yet clear whether the same factors are associated with student success in open courses such as MOOCs as in for-credit courses -- an important consideration before transferring research results between these two contexts. While there has been considerable research in each context, differences in course design and population limit our ability to know how broadly findings generalize; differences between studies may have nothing to do with whether students are taking a course for credit or as a MOOC. To date, this body of literature has been split into two subcategories: research on success in MOOCs and research on success in for-credit courses. Few studies have attempted to understand how students and their learning experiences differ between these contexts, bypassing an opportunity to synthesize findings across the different student populations who engage in online education. Do learners behave the same way in MOOCs and in for-credit courses? Are the implications for learning different, even for the exact same behaviors?
In this paper, we study these issues by developing models that predict student course success from online interactions, within an online learning platform that serves both student groups (i.e., students who enroll on a for-credit or a non-credit basis). Our findings indicate that our models perform well enough to predict course grades for new students in both populations. Furthermore, models trained on one of the two populations generalized to new students in the other population. We find that features related to comments were good predictors of student grade for both groups. Models generated from this research can now be used by instructors and course designers to identify at-risk students among both for-credit and MOOC learners, towards providing both groups with better support.
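The cross-population evaluation the abstract describes (train a grade predictor on one student group, then test it on unseen students from the other group) can be sketched as follows. This is a hypothetical illustration on synthetic data, not the paper's actual features, platform data, or pipeline: the feature set, coefficients, and the use of scikit-learn linear regression are all assumptions made for the sketch.

```python
# Hypothetical sketch of cross-population model evaluation.
# NOT the paper's pipeline: features and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def make_population(n, noise):
    # Synthetic interaction features, e.g. comment count, logins, time on task.
    X = rng.uniform(0, 10, size=(n, 3))
    # Assume (for illustration) grade correlates with commenting behavior.
    y = 50 + 4.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, noise, n)
    return X, np.clip(y, 0, 100)

X_credit, y_credit = make_population(200, noise=5)  # "for-credit" students
X_mooc, y_mooc = make_population(200, noise=8)      # "open/MOOC" students

# Train on one population, evaluate on new students from the other.
model = LinearRegression().fit(X_credit, y_credit)
cross_pop_r2 = r2_score(y_mooc, model.predict(X_mooc))
print(f"Cross-population R^2: {cross_pop_r2:.2f}")
```

If the model captures behavior-grade relationships that hold across both groups, the cross-population R^2 stays close to the within-population R^2; a large drop would indicate the populations differ in how the same behaviors relate to grades.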

References

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov & F. Csáki (Eds.), Second International Symposium on Information Theory (pp. 267-281). Akadémiai Kiadó, Budapest, Hungary.

Andergassen, M., Guerra, V., Ledermüller, K., & Neumann, G. (2013). Development of a browser-based mobile audience response system for large classrooms. International Journal of Mobile and Blended Learning (IJMBL), 5(1), 58-76.

Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: using learning analytics to increase student success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, 267-270.

Baker, R., Lindrum, D., Lindrum, M.J., Perkowski, D. (2015) Analyzing Early At-Risk Factors in Higher Education e-Learning Courses. In Proceedings of the 8th International Conference on Educational Data Mining, 150-155.

Beard, L.A., Harper, C. (2002) Student Perceptions of Online versus on Campus Instruction. Education, 122(4).

Clow, D. (2013). MOOCs and the funnel of participation. In Proceedings of the Third International Conference on Learning Analytics and Knowledge, 185-189.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.

Das, A., & Kempe, D. (2008, May). Algorithms for subset selection in linear regression. In Proceedings of the fortieth annual ACM symposium on Theory of computing (pp. 45-54). ACM.

Diaz, D. P. (2002). Online drop rates revisited. The Technology Source, May/June. Retrieved on October 16, 2015, from http://technologysource.org/article/online_drop_rate_revisited/

Engle, D., Mankoff, C., & Carbrey, J. (2015). Coursera’s introductory human physiology course: Factors that characterize successful completion of a MOOC. The International Review of Research in Open and Distributed Learning, 16(2).

Hanley, J., & McNeil, B. (1982). The meaning and use of the area under a receiver operating characteristic (ROC) Curve. Radiology, 143, 29-36.

Jayaprakash, S. M., Moody, E. W., Lauría, E. J., Regan, J. R., & Baron, J. D. (2014). Early alert of academically at-risk students: An open source analytics initiative. Journal of Learning Analytics, 1(1), 6-47.

Jiang, S., Warschauer, M., Williams, A. E., O’Dowd, D., & Schenke, K. (2014, July). Predicting MOOC performance with week 1 behavior. In Proceedings of the 7th International Conference on Educational Data Mining (pp. 273-275).

Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and intention in massive open online courses: In depth. Educause Review, 48(3), 62-63.

Mazzolini, M., & Maddison, S. (2007). When to jump in: The role of the instructor in online discussion forums. Computers & Education, 49(2), 193-213.

Mierswa, I., Wurst, M., Klinkenberg, R., Scholz, M., & Euler, T. (2006). Yale: Rapid prototyping for complex data mining tasks. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, 935-940.

Nistor, N., & Neubauer, K. (2010). From participation to dropout: Quantitative participation patterns in online university courses. Computers & Education, 55(2), 663-672.

Pistilli, M. D., & Arnold, K. E. (2010). In practice: Purdue Signals: Mining real-time academic data to enhance student success. About Campus, 15(3), 22-24.

Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann Publishers.

Romero, C., López, M. I., Luna, J. M., & Ventura, S. (2013). Predicting students' final performance from participation in on-line discussion forums. Computers & Education, 68, 458-472.

Romero, C., Ventura, S., Pechenizkiy, M., & Baker, R. S. (Eds.). (2010). Handbook of educational data mining. CRC Press.

Tyler-Smith, K. (2006). Early attrition among first time eLearners: A review of factors that contribute to drop-out, withdrawal and non-completion rates of adult learners undertaking eLearning programmes. Journal of Online Learning and Teaching, 2(2), 73-85.

Wang, Y., & Baker, R. (2015). Content or platform: Why do students complete MOOCs?. Journal of Online Learning and Teaching, 11(1), 17.

Wang, A. Y., & Newlin, M. H. (2000). Characteristics of students who enroll and succeed in psychology Web-based classes. Journal of educational psychology, 92(1), 137.

Wang, A. Y., Newlin, M. H., & Tucker, T. L. (2001). A discourse analysis of online classroom chats: Predictors of cyber-student performance. Teaching of Psychology, 28(3), 222-226.

Whitmer, J. (2012) Logging on to improve achievement: Evaluating the relationship between use of the learning management system, student characteristics, and academic achievement in a hybrid large enrollment undergraduate course. Unpublished Doctoral Dissertation, UC Davis.

Yang, D., Sinha, T., Adamson, D., & Rose, C. P. (2013). Turn on, tune in, drop out: Anticipating student dropouts in massive open online courses. In Proceedings of the 2013 NIPS Data-Driven Education Workshop, 11,14.

Zafra, A., & Ventura, S. (2009). Predicting Student Grades in Learning Management Systems with Multiple Instance Genetic Programming. Paper presented at the Proceedings of the 2nd International Conference on Educational Data Mining, Cordoba, Spain.

Zhang, J., & Walls, R. (2006). Instructors’ self-perceived pedagogical principle implementation in the online environment. The Quarterly Review of Distance Education, 7(4), 413–426.

Published

2018-03-01

Section

Massive Open Online Course (MOOC) Research