Development and Validation of the Purdue Global Online Teaching Effectiveness Scale

Authors

  • Elizabeth Reyes-Fournier, Keiser University Online Division
  • Edward J. Cumella, Purdue University Global
  • Michelle March, College of Lake County
  • Jennifer Pedersen, University of Alaska Anchorage/Kenai Peninsula College
  • Gabrielle Blackman, Purdue University Global

DOI:

https://doi.org/10.24059/olj.v24i2.2071

Keywords:

online teaching effectiveness, instructor effectiveness, distance learning, student evaluations, asynchronous learning.

Abstract

The currently available measures of online teaching effectiveness (OTE) have several flaws, including a lack of psychometric rigor, high costs, and reliance on the construct of traditional on-the-ground teaching effectiveness rather than the unique features of OTE (Blackman, Pedersen, March, Reyes-Fournier, & Cumella, 2019). The present research therefore sought to establish a psychometrically sound framework for OTE and to develop and validate a measure based on this clearly defined construct. The authors developed pilot questions for the new measure based on a comprehensive review of the OTE literature and their many years of experience as online instructors. Students enrolled in exclusively online coursework and programs at Purdue University Global (N = 213) completed the survey, rating the effectiveness of their instructors. Exploratory Factor Analysis produced four clear OTE factors: Presence, Expertise, Engagement, and Facilitation. The resulting measure demonstrated good internal consistency, high correlations with an established OTE measure, good test-retest reliability, and predictive validity in relation to student achievement. Confirmatory Factor Analysis showed a good fit to the data and yielded a final 12-item OTE measure. Further refinement and validation of the measure are recommended, particularly with students at other universities, and future research directions are discussed.
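The abstract summarizes the statistical workflow: exploratory factor analysis to identify the four factors, followed by internal-consistency and confirmatory checks. For readers who want to run a comparable analysis on their own survey data, below is a minimal sketch in Python. The published analyses were conducted in SPSS and Amos, so this is not the authors' code; the file name, item column names, and the promax rotation are assumptions for illustration only.

```python
# Illustrative sketch only; not the authors' analysis code.
# The CSV file and item_* column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

ratings = pd.read_csv("ote_survey_responses.csv")   # hypothetical respondents-by-items matrix
items = ratings.filter(like="item_")

# Exploratory factor analysis retaining four factors (Presence, Expertise,
# Engagement, Facilitation, as reported in the abstract); rotation is assumed.
efa = FactorAnalyzer(n_factors=4, rotation="promax")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns))

def cronbach_alpha(df):
    """Internal-consistency estimate for one set of items (rows = respondents)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(cronbach_alpha(items))
```

In a scale-development study like this one, alpha would typically be computed per subscale once items are assigned to factors, and the confirmatory factor analysis would then be fit in a dedicated SEM tool such as Amos.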

References

Arbuckle, J. L. (2014). Amos (Version 23.0) [Computer Program]. Chicago: IBM SPSS.

Bangert, A. W. (2006). The development of an instrument for assessing online teaching effectiveness. Journal of Educational Computing Research, 35(3), 227-244.

Bangert, A. W. (2008). The development and validation of the Student Evaluation of Online Teaching Effectiveness. Computers in the Schools, 25(1-2), 25-47. doi:10.1080/07380560802157717

Berk, R. A. (2013). Face-to-face versus online course evaluations: A “consumer’s guide” to seven strategies. Journal of Online Learning and Teaching, 9(1), 140-148.

Bettinger, E., Fox, L., Loeb, S., & Taylor, E. (2015). Changing distributions: How online college classes alter student and professor performance (CEPA Working Paper No. 15-10). Retrieved from http://cepa.stanford.edu/wp15-10

Bettinger, E., & Loeb, S. (2017). Promises and pitfalls of online education. Economic Studies at Brookings: Evidence Speaks Reports, 2(15). Retrieved from https://www.brookings.edu/wp-content/uploads/2017/06/ccf_20170609_loeb_evidence_speaks1.pdf

Blackman, G., Pedersen, J., March, M., Reyes-Fournier, E., & Cumella, E. J. (2019). A comprehensive literature review of online teaching effectiveness: Reconstructing the conceptual framework. Manuscript submitted for publication.

Cabrera-Nguyen, E. (2010). Author guidelines for reporting scale development and validation results in the Journal of the Society for Social Work and Research. Journal of the Society for Social Work and Research, 1, 99-103. doi:10.5243/jsswr.2010.8

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validity by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105.

Cattell, R. B. (1979). The scientific use of factor analysis. New York, NY: Plenum.

Centra, J. A. (2005). The development of the Student Instructional Report II. Princeton, NJ: Educational Testing Service. Retrieved from https://www.ets.org/Media/Products/283840.pdf

Chickering, A. W., & Gamson, Z. F. (1989). Seven principles for good practice in undergraduate education. Biochemical Education, 17(3), 140-141. doi:10.1016/0307-4412(89)90094-0

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2013). G*Power Version 3.1.7 [Computer software]. Universität Kiel, Germany. Retrieved from http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/download-and-register

Klieger, D., Centra, J., Young, J., Holtzman, S., & Kotloff, L. J. (2014). Testing the invariance of interrater reliability between paper-based and online modalities of the SIR II™ Student Instructional Report. Princeton, NJ: Educational Testing Service. Retrieved from https://www.ets.org/Media/Research/pdf/SIRII-Report-Klieger-Centra-2014.pdf

Lewis, M. (2016). Demographics of online students. In S. Danver (Ed.), The SAGE encyclopedia of online education (pp. 311-313). Thousand Oaks, CA: SAGE. doi:10.4135/9781483318332.n103

Liu, O. L. (2011). Student evaluation of instruction: In the new paradigm of distance education. Research in Higher Education, 53(4), 471-486. doi:10.1007/s11162-011-9236-1

Lokken, F. (2016). ITC Annual National eLearning Report 2016 survey results. Retrieved from https://associationdatabase.com/aws/ITCN/asset_manager/get_file/154447?ver=297

MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4, 84-99.

Myers, N., Ahn, S., & Jin, Y. (2011). Sample size and power estimates for a confirmatory factor analytic model in exercise and sport: A Monte Carlo approach. Research Quarterly for Exercise and Sport, 82(3), 412-423. doi:10.5641/027013611x13275191443621

National Center for Education Statistics. (2018). Digest of Education Statistics, 2016 (NCES 2017-094), Table 311.15.

Nilson, L. B., & Goodson, L. A. (2017). Online teaching at its best: Merging instructional design with teaching and learning research (1st ed.). San Francisco, CA: Jossey-Bass.

Osborne, J. W., & Costello, A. B. (2004). Sample size and subject to item ratio in principal components analysis. Practical Assessment, Research & Evaluation, 9(11). Retrieved from http://PAREonline.net/getvn.asp?v=9&n=11

Pike, G. R. (2004). The Student Instructional Report for Distance Education: e-SIR II. Assessment Update, 16(4), 11-12.

Purdue Global Office of Reporting and Analysis. (2018). Purdue Global facts: World-class education online. Retrieved from https://www.purdueglobal.edu/about/facts-processes/

Saleh, A., & Bista, K. (2017). Examining factors impacting online survey response rates in educational research: Perceptions of graduate students. Journal of Multidisciplinary Evaluation, 13(29), 63-74.

Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Babson Park, MA: Babson Survey Research Group. Retrieved from http://www.onlinelearningsurvey.com/highered.html

Serdyukov, P. (2015). Does online education need a special pedagogy? Journal of Computing & Information Technology, 23(1), 61–74. https://doi.org/10.2498/cit.1002511

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics. Boston, MA: Pearson.

Thomas, J. E., & Graham, C. R. (2017). Common practices for evaluating post-secondary online instructors. Online Journal of Distance Learning Administration, 20(4). Retrieved from https://www.westga.edu/~distance/ojdla/winter204/thomas_graham204.html

Wieland, A., Durach, C. F., Kembro, J., & Treiblmaier, H. (2017). Statistical and judgmental criteria for scale purification. Supply Chain Management: An International Journal, 22(4), 321-328. https://doi.org/10.1108/SCM-07-2016-0230

Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education, 20(2), 65-77.

Published

2020-06-01

Issue

Section

Faculty, Professional Development, and Online Teaching