Formal course design and the student learning experience

What impact does collaboration between faculty and professional course designers have on the student learning experience? As the use of educational technologies increases, institutions must find ways to identify and address expectations about how technologies can best be incorporated into teaching and learning. This paper reports on efforts at Washington State University to develop and assess a course design and faculty development process and to gauge its impact on student learning experiences. Results from a comprehensive set of faculty and student surveys across five groups suggest that the systematic course design process improves students' opportunities for faculty-student interaction, student-student interaction, and other elements associated with best practice. The implications of this study for faculty development and policy implementation are discussed.


I. INTRODUCTION
At this writing, more than 1,100 US colleges and universities offer courses over the Internet [1], global competitors are increasingly visible in the US educational market, the number of college courses using educational technologies continues to rise [2], and students themselves increasingly expect technology-rich learning opportunities and experiences. As education migrates online, and as even face-to-face classrooms integrate Internet-based content and discussion tools, a number of issues have emerged, including, in particular, expectations for cutting costs and expanding access while simultaneously improving learning outcomes. Central to these and other pressing issues exacerbated by the technology explosion is the changing role of faculty [3]. What kinds of institutional support are necessary to help faculty create high-quality, effective learning experiences for students in technology-mediated programs?
To address these and other questions, Washington State University has established a multi-unit partnership among the Center for Teaching, Learning, and Technology (CTLT), the Distance Degree Programs (DDP), the Writing Program, and Educational Telecommunications and Technologies (ETT) to devise and implement an ongoing process for assessing issues related to the effective use of technology to enhance teaching and learning. Ongoing assessment provides data for the design/development process itself. The study reported here reflects one aspect of that systematic assessment and focuses on surveys of faculty goals, values, and instructional practice as they relate to student goals, values, and learning experiences and to established principles of good practice [4,5]. Central to the Goals, Activities, and Practices (GAPs) surveys devised and administered jointly by the WSU alliance is the assumption that course design and implementation that adheres as closely as possible to these principles of good practice will reliably yield better learning outcomes than programs and courses that do not reflect them [6].
There are several attributes of good practice, but perhaps none is more prominent and important than interaction: students with students, students with faculty, and students with content. In particular, Taylor and White [13] found that faculty value interaction with their students, perhaps the most important principle of good practice according to an extensive body of research reviewed most prominently by Chickering and Gamson [4] and Chickering and Ehrmann [5]. Faculty-student interaction is a primary attribute of good teaching practice and is instrumental in enabling the other principles. It allows faculty to provide rich and rapid feedback to students. It is essential if instructors are to facilitate student-to-student interaction and collaboration, and thereby help students experience diverse points of view and develop and share a commitment to high expectations. Finally, quality faculty-student interaction increases students' time on challenging tasks. Together, these attributes of good instructional practice generally predict a learning experience that yields improved learning outcomes.
However, though Taylor's and White's findings reflect a general faculty perception that quality interaction is key to the satisfaction of working with motivated students and to subsequent improvements in student achievement, faculty also tend to regard quality interaction as inherent to face-to-face instruction, not as an attribute of distributed or distance learning programs. In fact, Taylor's and White's work underscores the general perception of distance learning as an educational strategy characterized by students working in isolation at their computers, and therefore as inferior. This vision of the inferiority of online learning was perhaps nowhere more visible than in the 1999 Phipps and Merisotis report from the Institute for Higher Education Policy [7]. The report, commissioned by the American Federation of Teachers and the National Education Association, analyzed "the most important and salient" (p. 11) works of original research on distance learning. It challenged the current research in distance education and lamented what the authors identified as a serious lack of progress by distance education researchers. Phipps and Merisotis cited 40 "original studies" (p. 11) on distance education and concluded that "an entire body of research needs to be developed to determine if students participating in distance learning for their whole program compare favorably with students taught in the conventional classroom" (p. 24). More than the problematic critique itself, the press coverage the report elicited confirmed the dubious perception of the state of distance education just a few years ago. One explanation for the continued concern was voiced by Wolcott [8], who suggested that the perception persists because distance education is rarely valued or rewarded as a scholarly pursuit at most universities.
On the other hand, the oft-cited reward structure may run counter to evidence that faculty are primarily motivated by intrinsic rewards associated with the act of teaching rather than by extrinsic or monetary rewards [9,10,11]. As Peirpoint and Harnett [9] note, intrinsic rewards are generally shaped by faculty's opportunities to interact with motivated students.
Given that faculty so value interpersonal interaction with students, it is notable that the Internet has proven, to the surprise of many, to be a technology that supports interactive communication. As one senior analyst at Apple noted: "[P]eople are most definitely not doing the things which the Internet was originally designed to do, moving large volumes of data around, getting remote access to supercomputer facilities, or whatever . . . . They're not connecting to other computers, but to other people" [12].
The implications for course design, therefore, emerge from the discrepancy between the ways technologies are actually being used and the general faculty perception of their application to educational settings. If the technologies themselves support interactive communication, it seems reasonable to expect that this kind of interaction can be infused into learning opportunities in ways that support the principles of good practice. The gap between faculty's perceptions of technology and its potential in educational practice, in other words, might be addressed through a systematic design approach that helps faculty apply technology to enhance faculty-student interaction and, indirectly, to improve student learning outcomes in technology-enhanced settings.
The challenge prompted a campus-wide collaboration among three units at Washington State University: the Distance Degree Programs (DDP), the Center for Teaching, Learning, and Technology (CTLT), and Educational Telecommunications and Technologies (ETT). The goal has been to establish a course design/faculty development process through which faculty partner with course design professionals to design, develop, deliver, and assess the effectiveness of technology-enhanced learning.
The WSU design process naturally varies according to the schedules and proclivities of individual faculty and designers, but the general approach involves intensive work over a period of eight to twenty weeks. (Not counted in this time frame, though a key guideline of the WSU process, is the rule that a course is not complete until it has been offered once, assessed, and subsequently revised.) In addition to meeting with the lead designer, faculty meet and work with media and assessment specialists, student advising specialists, and undergraduate technology tutors known as "hypernauts." Working with a team is a significant aspect of the faculty development process. The design model itself focuses on aligning faculty teaching goals and evaluation criteria with activities that foster students' interaction with each other, with the faculty member, and with course content. Because assessment of the effort's effectiveness is emphasized, the process usually begins by carefully articulating evaluation and student performance criteria. An additional component is rubric-guided discussion of, and practice with, principles and strategies that help faculty facilitate interaction. For instance, faculty are encouraged to design activities that introduce students to the authentic questions or problems every discipline confronts, so that discussion of those questions and problems is less likely to be capped by right and wrong answers or simplistic solutions. Course content, in this model, is a resource, a means for questioning and thinking like professionals (albeit as novices) in the disciplines. This report examines the effectiveness of the process.

II. METHODS
The purpose of this study is to examine whether students in technology-rich learning environments whose instructors participated in the development process were more likely to experience the principles of good practice than students in technology-rich learning environments whose instructors did not.
Data for this analysis come from the three-unit collaborative at WSU and reflect an ongoing assessment process developed to systematically evaluate the use and impact of innovative teaching practices. As part of the process, the collaborative has developed a series of surveys that focus on faculty and student teaching and learning goals, activities, and practices (GAPs). The GAPs survey process involves three surveys: one for faculty and two for students. The surveys reported in this study were distributed online via a survey generator (CTLSilhouette) developed at WSU by the Center for Teaching, Learning, and Technology. Constrained by the parameters of the field, the survey was distributed to five naturally occurring groups: Washington State distance students in classes in which faculty participated in the development process (WSUDDP w/CD); distance students in classes in which faculty did not participate in the development process (WSUDDP); residential students at Washington State University in classes in which faculty did not participate in the design process (WSU); residential freshman students at Washington State University in an innovative technology-rich program facilitated by undergraduate peers (WSUFS); and a sample drawn from participating institutions outside WSU, taught by early-adopter faculty who did not participate in the WSU development process (NONWSU). Only one of the five groups represents the development treatment.
Since the data, consistent with field constraints, reflect a convenience sample and variable response rates across participants, generalizing to the larger population is speculative at best. Nonetheless, the distributions of the independent and dependent variables were carefully examined, and no deviations from normality or clustering were identified. Faculty and students who responded did not come from a select group of disciplines.
Ordinary least squares regression was used to estimate group differences in students' reported experience of the principles of good practice. Dummy-coded variables were created to estimate whether reports on the principles of good practice differed significantly between students in WSUDDP w/CD and students in the other four educational settings (WSUDDP, WSU, WSUFS, NONWSU) that did not include a course development process. The estimated regression equation omits the WSUDDP w/CD category, making it the reference group. Because five regression analyses were conducted (one for each dependent variable), we adopted an experiment-wise alpha of .01 to reduce the probability of a Type I error. The procedure assumes the constructs of best practice are not related, and a correlation analysis among the dependent variables was conducted to confirm that assumption. By treating the dependent variables as unrelated, some statistical power is forfeited in favor of greater confidence in the significance of the results.
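To make the dummy-coding setup concrete, the following minimal sketch fits an OLS model that omits the treatment group as the reference category, so each coefficient estimates the gap between a comparison group and WSUDDP w/CD. The group means, sample size, and use of the statsmodels library are illustrative assumptions; the response values are simulated, not study data.

```python
# Hypothetical sketch of the dummy-coded OLS design described above.
# All numbers are simulated for illustration; none come from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = ["WSUDDP_CD", "WSUDDP", "WSU", "WSUFS", "NONWSU"]
n = 200
df = pd.DataFrame({"group": rng.choice(groups, size=n)})

# Simulated responses on a 4-point scale (1 = Never ... 4 = Very often);
# the treated group is given a higher mean to mimic the reported pattern.
base = df["group"].map({"WSUDDP_CD": 3.0, "WSUDDP": 2.3,
                        "WSU": 2.4, "WSUFS": 2.7, "NONWSU": 2.7})
df["feedback"] = (base + rng.normal(0, 0.5, n)).clip(1, 4)

# Treatment(reference=...) omits WSUDDP_CD, so each remaining group's
# coefficient is its mean difference from the treated reference group.
model = smf.ols(
    "feedback ~ C(group, Treatment(reference='WSUDDP_CD'))", data=df
).fit()
print(model.params)
```

With this coding, the intercept estimates the treated group's mean, and a coefficient near -.70 for a comparison group would correspond to the roughly one-unit gap discussed in the Results.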

III. RESULTS
Descriptive data from Table 1 indicate that 941 students responded to the survey. Of these, 23% were from WSUDDP w/CD, 3% from WSUDDP, 24% from WSU, 38% from WSUFS, and 12% from NONWSU. Overall, 62% of the sample were female, and the average student age was between 21 and 23 years. The results of the regression analysis appear in Table 2. Each coefficient for the four categories without course development represents the effect of being in that educational setting rather than in WSUDDP w/CD on a given principle of good practice. Overall, the results show that students in the three WSU educational settings without the development treatment report significantly lower on all measures of good practice than students whose instructors participated in the development process.

Several questions were asked to determine the perceived extent of faculty-student interaction. On the question asking students whether they "Received prompt feedback from instructor or peers on course activities," students in both WSUDDP and WSU with no development report significantly less timely feedback than students in courses whose instructors had participated in the development process. The size of the coefficients is substantial. A coefficient of -.697 for WSUDDP with no course development indicates that the mean student response is about .70 lower than the mean for students in WSUDDP with course development, a difference of nearly one unit on the restricted four-point response scale (e.g., 3 = Often vs. 2 = Sometimes). Further, tests for differences in the coefficients among the four educational settings with no course development found that students in WSUDDP reported significantly less timely feedback than students in WSUFS (t = -3.77, p < .001) and NONWSU (t = -3.50, p < .001). The significant drop for distance courses taught by instructors who did not participate in the development process supports the contention that, without mediation, faculty bring to distance courses a set of assumptions or a limited skill set that in practice limits interaction.
The occurrence of student-student interaction, indicated by the question asking students whether they "Discussed course topics with others outside of class," elicited similar results. Students in all four educational settings with no development process report significantly less experience with this principle than students in WSUDDP classes with development. The difference in means between students in WSU with no development and students in WSUDDP with development is -.825, again nearly a one-unit difference in the learning experience. The other coefficients are smaller but consistently indicate that the development process significantly correlates with learning experiences that evidence principles of good practice. Again, among the groups with no course development, students in WSU reported significantly less experience with discussion than students in NONWSU (t = -4.15, p < .001) and in WSUDDP (t = -2.20, p < .05), suggesting that assumptions about interaction in conventional classes might be dubious.
An additional examination of interaction was articulated in the question asking students the extent to which they "Shared my ideas and responded to the ideas of others." The results suggest that students in three settings with no development (WSUDDP, WSU, and WSUFS) report significantly fewer occurrences of sharing ideas than students in the WSUDDP setting with the development process. The coefficients range from a high of -.970 for WSUDDP to a low of -.308 for WSUFS and are significant at p < .001. The largest difference in means thus again occurs between students in the WSUDDP setting with no development and students in the WSUDDP setting with development. Other significant differences emerged among the four settings with no course development: students in both WSUDDP and WSU report significantly less experience with shared ideas than students in both WSUFS and NONWSU.
The question designed to examine shared high expectations asked learners to report the extent to which they "Learned in new ways that do not come easily to me." Only students in WSU with no course development report significantly less occurrence than students in WSUDDP with course development. However, we also tested whether the coefficient for WSU with no course development (-.296) differs significantly from the coefficient for NONWSU with no course development (.122). We find a significant difference (t = -3.27, p < .001), suggesting that students at WSU report significantly fewer opportunities to learn in new ways than students not at WSU, which again points to issues of sample and, perhaps more interestingly, to assumptions about the value of interaction in conventional face-to-face courses. There is less evidence of this isolated aspect of high expectations in conventional courses than in distance courses whose instructors participated in the development process.
Finally, the principle of time on task was explored by asking students whether they "Spent more time than expected on task." Again, students in all four educational settings with no course development process report significantly less time on task than students in the WSUDDP setting with development. In addition, a consistent, mild correlation between this measure and both faculty-student and student-student interaction verifies that the nature of the "task" was academic and not incidental (such as learning how to work the technology). The difference in means between students in WSU with no course development and students in WSUDDP with course development is -.406, nearly a half-unit difference in their experience of this principle. The other coefficients are smaller but still consistently indicate that course development processes are associated with the high expectations of a challenging task.

IV. DISCUSSION
Consistent findings that participation in the development process increases the evidence of the principles of good practice have a number of important implications. First, it is useful to acknowledge some caveats related to this study. Experience of the principles of good practice has been associated with improved learning outcomes [6,14], but the self-report measures used in this study are not themselves direct indicators of improved learning outcomes. Further, though we have continued to validate the instrument, it cannot be assumed that the questions adequately addressed the constructs for which they were designed. Second, the convenience sample of learners, though addressed by statistical procedures, is problematic on many levels, including bias, size, and demographic controls. Third, the sample of instructors is also problematic: participation in the development process is largely voluntary, so it may not be surprising that faculty who are open to working with development professionals are also likely to support efforts to provide students with the learning opportunities identified by the community of professionals committed to a scholarly approach to teaching and learning. Those who resist working with professional designers, it follows, are more likely to be less inclined, for whatever reason, to stay current with the scholarship of teaching and learning.
The implications of this study merit examination for all classroom experiences. First, the findings suggest that faculty development integrating pedagogy with technology training improves interaction in ways that correspond with improved student learning outcomes. There is no reason to suspect this finding would not hold in conventional classroom settings as well as in online learning experiences. In an increasingly competitive profession, and at a time when quality learning is essential, it is clear from this study and elsewhere [15] that strategies for improving learning environments can be learned. It benefits both students and faculty to incorporate the principles into classrooms throughout institutions of higher learning. Finally, it is important to recognize that incorporating good practices into teaching requires resources for strategic instructional partnerships and an institutional commitment that promotes and rewards excellence in teaching [8,16].

V. ABOUT THE AUTHORS
Gary Brown directs the Center for Teaching, Learning, and Technology at Washington State University. Gary has designed, developed, and assessed innovative projects across the curriculum. He has written extensively and presented nationally and internationally. He is the lead developer of the critical thinking project at WSU, sponsored by the state's higher education coordinating board and FIPSE. He has conducted assessment in the costs of educational technologies and recently received, with his CTLT colleagues, the NUTN award for best research paper on faculty motivation and perceptions of the efficacy of online learning. He is a National Learning Communities fellow and an advisory board member for the Technology Source and for the Higher Education Knowledge & Technology Exchange (HEKATE). Gary also leads the CTLT Silhouette Project, which serves Flashlight Online for the Teaching, Learning, and Technology Group.

Carrie B. Myers is a doctoral candidate in the Department of Educational Leadership & Counseling Psychology at Washington State University. She is also a research assistant in the Division of Assessment at the Center for Teaching, Learning, and Technology. Her research interests are curriculum and instruction and faculty development in higher education. Specifically, her prior research examined the uses and roles of educational technologies in the college classroom, and the similarities and differences in learning goals between faculty and students. Carrie is currently conducting research using data from the National Study of Postsecondary Faculty to understand how the context of higher education affects how faculty make their teaching decisions and direct their teaching efforts.
Sharon Roy is the course design and development coordinator for Distance Degree Programs (DDP) at Washington State University. She leads the design and development of programs offered to learners at a distance, collaborates with faculty and staff to identify program needs and design specific courses, coordinates the efforts of the development teams, and participates in the university's larger efforts to continually assess and revise the development process to meet institutional, program, and learner goals. She earned her M.A. from the University of Toronto and worked as the course designer/project coordinator for internal and international development projects at Laurentian University.