An Evaluation of Critical Thinking in Competency-Based and Traditional Online Learning Environments

Matthew Mayeshiba, Kay Jansen, Lisa Mihlbauer


Non-term, direct assessment competency-based education (CBE) represents a significant re-imagining of the structure of higher education. By regulating students' progress through a program according to their mastery of tightly defined competencies rather than the time spent learning them, this learning environment affords students far greater flexibility than traditional programs do. This focus on defined competencies has raised concerns that students in such programs may not demonstrate higher-level skills, such as critical thinking, at levels comparable to those of students enrolled in more traditional programs. This study evaluated 39 students' demonstrations of critical thinking on two assessments administered in parallel versions of one course: one offered through the non-term, direct assessment CBE University of Wisconsin Flexible Option, and the other through a traditional online program. Each of the 78 assessments was scored using the critical thinking rubric from the Valid Assessment of Learning in Undergraduate Education (VALUE) project. We found that students in the CBE version of the course received significantly higher (p = .0013) overall scores than students in the traditional online version. While further research is required to refine these methods and establish the generalizability of these results, they do not support concerns about students' abilities in this learning environment.
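When multiple raters score artifacts against an ordinal rubric such as the VALUE critical thinking rubric, inter-rater agreement is commonly checked with Cohen's weighted kappa (Cohen, 1968), which the reference list here cites. The abstract does not specify the study's exact reliability procedure, so the following is only an illustrative sketch: a self-contained quadratic-weighted kappa for two hypothetical raters assigning scores on a 0–4 rubric scale; the rater data are invented for the example.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_levels):
    """Cohen's weighted kappa with quadratic weights for two raters
    assigning ordinal scores in {0, ..., n_levels - 1}."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    # Observed joint distribution of score pairs.
    observed = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected joint distribution under chance, from the marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal,
    # growing with the squared distance between scores.
    idx = np.arange(n_levels)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical scores from two raters on ten artifacts (0-4 scale).
rater_a = [4, 3, 3, 2, 4, 1, 0, 2, 3, 4]
rater_b = [4, 3, 2, 2, 4, 1, 1, 2, 3, 3]
print(round(quadratic_weighted_kappa(rater_a, rater_b, 5), 3))  # → 0.889
```

By the conventions discussed in Viera and Garrett (2005), a kappa near 0.89 would indicate almost perfect agreement; values are interpreted relative to chance agreement implied by each rater's marginal score distribution.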


Keywords: AAC&U; competency-based education; direct assessment; disruptive innovation; critical thinking; learning outcomes; non-term; UW Flexible Option; VALUE rubric



References

AAC&U. (2015). Falling Short? College Learning and Career Success. Washington, D.C.: Hart Research Associates. Retrieved from

AAC&U. (2016). Valid Assessment of Learning in Undergraduate Education Project. Retrieved December 21, 2016, from Association of American Colleges and Universities:

Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20, 37-46.

Cohen, J. (1968). Weighted Kappa: Nominal Scale Agreement with Provision for Scaled Disagreement or Partial Credit. Psychological Bulletin, 70, 213-220.

Eduventures. (2014). Closing the Degree Completion Gap: Challenges and Opportunities. Boston, MA: Eduventures. Retrieved from

Fain, P. (2015). Measuring Competency. Retrieved November 14, 2016, from Inside Higher Ed:

LeBlanc, P., & Selbe, J. (2016). Another Take on Competency. Retrieved January 12, 2017, from Inside Higher Ed:

Lowry, R. (2016). Kappa as a Measure of Concordance in Categorical Sorting. Retrieved December 15, 2016, from VassarStats:

Nodine, T. (2016). How did we get here? A brief history of competency-based higher education in the United States. The Journal of Competency-Based Education, 1(1), 5-11. doi:10.1002/cbe2.1004

SHEEO. (2016). MSC Demonstration Year: September 2015 through August 2016. Retrieved from State Higher Education Executive Officers Association:

Viera, A. J., & Garrett, J. M. (2005). Understanding Interobserver Agreement: The Kappa Statistic. Family Medicine, 37(5), 360-363.

Ward, S. C. (2016). Let Them Eat Cake (Competently). Retrieved November 14, 2016, from Inside Higher Ed: