An Analysis of Faculty Promotion of Critical Thinking and Peer Interaction within Threaded Discussions

The purposes of the research were to (1) examine the relationships between faculty behaviors that promote critical thinking and the resulting critical thinking within peer interaction and (2) identify specific faculty behaviors that result in the highest levels of critical thinking within peer interactions. Using a concurrent embedded mixed methods approach, 19,595 peer-to-peer responses were coded along the 5-point scale of the Interaction Analysis Model (IAM), and faculty behaviors within 91 courses were reviewed for 19 different behaviors. Comparing each individual faculty behavior to the IAM scores revealed six statistically significant, though weak, correlations. Two of the correlations involved “negative” faculty behaviors, perhaps suggesting that peers make up for the lack of instructor presence within discussions. Multiple considerations for discussion design and facilitation are suggested, along with recommendations for future research.


Introduction
The communication that occurs in any learning environment is the most important aspect of the educational process that happens in that environment (Lim, 2004), and student-to-student interactions are significant contributors to the level of student learning (Sher, 2009). Critical thinking has long been a matter of inquiry in higher education (Halpern, 2003), just as answering the question of how to increase and enhance the quality of interaction has been an important research goal for most of the last century (Berge & Mrozowski, 2001; Hannafin, Land, & Oliver, 1999). Working within Paul and Elder's (2010) framework of critical thinking, this research project attempted to identify faculty strategies that promote critical thinking in discussion forums and to analyze the level of peer-to-peer interaction within those discussions. Fostering critical thinking happens within the discussion forums as well as in other areas of a course. It is therefore important to analyze the interactions in the discussion forums to determine the extent to which students appear to be thinking critically and to identify the faculty behaviors that best affect students' thinking. Facilitating peer-to-peer interaction has been described as "the most important strategy that online teachers need to employ" (Kearsley & Blomeyer, 2004, p. 49). Nonetheless, researchers, instructors, and students themselves have lamented an overall lack of quality in peer-to-peer interaction (Hall, 2010, 2011). Knowing that critical thinking scores and critical thinking in general are not at desired levels in online learning, it is appropriate to consider ways that instructors and students can help each other through threaded discussions to think more deeply and to engage more fully in the conversations. The remainder of this manuscript includes a review of the literature; data collection, analysis, and results; and a discussion of the implications of the results for application and future research. The literature review synthesizes the concepts of critical thinking, online learning, peer interaction, and faculty behaviors into a single, coherent review. The section on research methods explains the study's concurrent embedded mixed methods design, including the proportional stratified random sampling technique and the approach to collecting and analyzing data in the 91 courses, which resulted in more than 350,000 data points. The Results section details the six significant correlations between faculty behaviors and critical thinking within peer interactions. Finally, multiple considerations for discussion design and facilitation are suggested, along with ideas for faculty development and recommendations for future research.

Literature Review
The purposes of the research were (1) to examine the relationships between faculty behaviors that promote critical thinking and the resulting critical thinking within peer interaction and (2) to identify specific faculty behaviors that result in the highest levels of critical thinking within peer interactions. As there are multiple points under consideration in this research, it is necessary to analyze literature that combines the concepts of critical thinking, online learning, peer interaction, and faculty behaviors.
Since the communication that occurs in any learning environment is the most important aspect of the educational process that happens in that environment (Lim, 2004), and since the majority of the dialogue in the online learning environment occurs through the discussion boards (Jeong, 2003; Schwartzman, 2006), learners who engage in threaded discussions should be able to achieve a high level of cognitive processing (Thomas, 2002). Wickersham and Dooley (2006) suggest that strategies should be incorporated to provide all students with the ability to interact and participate in the discussion, to learn at their own pace, and to have an expanded time frame within which to reflect and respond. These authors support the use of collaborative learning and deep understanding of course material based on their content analysis of the written narrative of online discussions as a means to determine the quality and depth of critical thinking. Osborne, Kriese, Tobey, and Johnson (2009) strongly suggest that critical thinking can successfully be taught in online courses and that interpersonal skills are an important component of critical thinking.
Student-to-student interactions are significant contributors to the level of student learning (Sher, 2009). Indeed, Tu and Corry (2003) concluded that 80-85% of learning is retained when acquired through higher level discussion prompts. As Rosenshine and Meister (1992) demonstrated nearly two decades ago, different types of prompts elicit different responses and may lead to different learning effects. Chin (2004) and Wang (2005) agreed that carefully designed questions are required tools for engaging students cognitively. In fact, Wang asserted that the "level of student thinking is directly proportional to the level of questions asked" (p. 310) and that knowledge construction occurs through responding to high-level questions. While Elder and Paul (1998) noted that thinking is driven by those questions rather than their answers, Oliver (2008) pointed out that "the success of the activity can then be judged according to responses in the form of answers" (p. 2). Data from Niemczyk and Savenye (2010) suggested that peer interaction from answering provided questions and creating self-questions results in enhanced learning. The present research could add to what is already known about the types of questioning, as a faculty behavior, by associating different questioning techniques with critical thinking among peers. Hou, Chang, and Sung (2007) analyzed the content and process of discussion activities based on peer assessment without teacher intervention and found that adding a peer assessment mechanism did not increase the students' discussion of the topics. This research reinforces the value of faculty behaviors and supports additional inquiry into the association between faculty behaviors and critical thinking within discussions. Indeed, it is online instructors' ability to facilitate peer-to-peer interaction that has been described as "the most important strategy that online teachers need to employ" (Kearsley & Blomeyer, 2004, p. 49). Nonetheless, researchers, instructors, and students themselves have lamented an overall lack of quality in peer-to-peer interaction (see Hall, 2010, 2011). Shim and Walczak (2012) investigated the effects of various instructor-driven teaching practices on the development of students' ability to think critically. Of instructor-initiated teaching practices, the frequency of asking challenging questions in class had a significant and positive impact on students' self-reported gains in critical thinking. Shim and Walczak also reported that faculty who frequently interpret abstract concepts for students and give well-organized presentations may positively affect critical thinking, and they recommended challenging students to view issues from different perspectives and then providing explanations to help them understand abstract concepts. The findings of Shim and Walczak were corroborated by Caram and Davis (2005), who found that teachers who ask the right questions often spark critical thinking, which leads to the creation of problem solvers. Caram and Davis went on to state that by progressing from simple questions to more difficult ones that require reasoning, students develop cognitive abilities and critical thinking skills.
The study explored relationships between specific faculty behaviors and the resulting presence of critical thinking within peer interaction. The specific research questions were the following:
1. What is the correlation between faculty promotion of critical thinking and the presence of critical thinking in peer interaction within threaded discussions?
2. Which faculty behaviors toward the promotion of critical thinking are correlated with critical thinking within peer interactions?

Method
The research used a concurrent embedded mixed methods design as explained by Creswell (2008). Discussion transcripts were coded and statistically correlated with a qualitative review of faculty behaviors. The quantitative and qualitative data were collected simultaneously, although each was used to address a different research question. Quantitative data were used to respond to the first research question, while qualitative data were used to respond to the second. The data were then merged for interpretation.

Sample
A proportional stratified random sample was taken from 401 undergraduate and graduate courses taught exclusively online and peer reviewed in 2013 within the College of Education (CoE) at a regionally accredited institution. Of the 96 courses initially identified in the random sample, five were eliminated: Four courses had only one student enrolled and, therefore, had no peer-to-peer interaction; the fifth course could not be reviewed because of technical errors in retrieving the discussions from the archives. The sample (n = 91) represented about one fourth of the courses taught by instructors whose peer review scores on "fostering critical thinking" spanned the range of 0 (lowest) to 4 (highest).
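The paper does not show how the proportional allocation was computed, but the logic of a proportional stratified random sample can be sketched as follows. The stratum labels, course IDs, and sizes below are hypothetical illustrations, not the study's data:

```python
import random

def proportional_stratified_sample(strata, total_n, seed=42):
    """Draw a proportional stratified random sample: each stratum
    contributes courses in proportion to its share of the population."""
    rng = random.Random(seed)
    population = sum(len(courses) for courses in strata.values())
    sample = []
    for label, courses in strata.items():
        # Proportional allocation; rounding can make the total drift
        # slightly from total_n when strata sizes are uneven.
        k = round(total_n * len(courses) / population)
        sample.extend(rng.sample(courses, k))
    return sample

# Hypothetical population: 500 course IDs split evenly across the five
# "fostering critical thinking" peer-review score strata (0-4).
strata = {score: [f"course_{score}_{i}" for i in range(100)] for score in range(5)}
sample = proportional_stratified_sample(strata, total_n=100)
```

With equal strata, each of the five score levels contributes exactly one fifth of the sample, mirroring the study's goal of covering the full 0-4 range of peer-review scores.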
The first peer reviews were conducted in 2011 as a pilot study in order to satisfy the Western Association of Schools and Colleges, now WSCUC (WASC Senior College and University Commission), Standard 3.1.1: "Evaluation processes are systematic, include appropriate peer review, and for instructional faculty and other teaching staff, involve consideration of evidence of teaching effectiveness, including student evaluations of instruction." The pilot study validated the criterion framework now used to conduct peer reviews of all faculty members. This framework consists of five measurable attributes: Fostering Critical Thinking, Instructive Feedback to Students, High Expectations, Establishing Relationships, and Instructor Expertise.
The second pilot led to a 4-point Likert scale that corresponds to the following performance levels: 4 (distinguished), 3 (proficient), 2 (developing), 1 (beginning), and 0 (not observed).See Figure 1 for a description of each performance level for critical thinking.

Data Collection
This study used a concurrent embedded mixed methods approach. Two variables were measured: (1) faculty behaviors related to the promotion of critical thinking within course discussions and (2) the level of peer interaction related to critical thinking within those discussions.
Faculty behaviors related to critical thinking were collected using the constant comparison procedures of the grounded theory approach.Using this inductive procedure, the investigators began by exploring all the faculty behaviors demonstrated in the discussion forums within a subset of courses.The investigators then conferred to examine their individually developed lists of behaviors and define the final list of collective behaviors to be applied to the sample of 91 courses.The final list included 12 positive faculty behaviors and seven negative behaviors.Investigators calibrated their application of the list of 19 behaviors to ensure consistency in coding by the four investigators.
Peer interaction was measured through content analysis using the Interaction Analysis Model (IAM) developed by Gunawardena, Anderson, and Lowe (1997). The IAM includes five phases that indicate the level of knowledge construction demonstrated within a peer-to-peer interaction. The five phases are (1) sharing and comparing, (2) dissonance, (3) negotiation and co-construction, (4) testing tentative constructions, and (5) statement and application of newly constructed knowledge. Each phase has specific indicators to guide coding. For example, an indicator for Phase 3, negotiation and co-construction, is "identification of areas of agreement or overlap among conflicting concepts" (p. 414). The IAM is one of the most widely used tools for measuring co-construction of knowledge among peers in asynchronous discussions.
The investigators chose to examine within each course the discussions in Weeks 1, 3, and 5 in order to accommodate the difference in course lengths between undergraduate and graduate courses.With approval of the Institutional Review Board and Office of Research and Creative Scholarship, the investigators accessed each course and downloaded transcripts of the discussion forums.Although some courses had only one discussion each week, most of the courses had two discussions each week.Student participation in these discussions was required and graded.With up to six discussions per course and a varied number of peer-to-peer interactions, a total of 19,595 postings were coded with the IAM.
Given the labor-intensive nature of content analysis and the large sample, a research assistant was hired and trained to code each peer-to-peer response according to the IAM.Prior to the content analysis work, each investigator and the research assistant reviewed a presentation overview of the IAM (Hall, 2013).This overview discussed the stages of the IAM and gave examples of specific language that might appear in student posts.The researchers then reviewed a subset of interactions and held a series of calibrations among themselves and with the research assistant to ensure consistent application of the IAM phases.The research assistant was able to complete approximately two thirds of the courses, and the investigators completed the remainder of the coding.

Data Analysis
From this dataset of IAM scores and the list of faculty behaviors present in the 91 courses, a total of 352,710 data points were entered into a spreadsheet for analysis using SPSS 22. Included in the analysis were the individual posts' IAM scores and the list of behaviors noted on the part of the faculty for the discussion thread. First, a visual inspection of the data was performed to ensure there were no scores outside of expected boundaries. For instance, the IAM scores should range from 1 to 5; the single entry found outside that range was deleted, leaving 19,594 clean cases for analysis. The IAM data were coded on the ordinal scale of 1 through 5. The data for the 19 possible faculty behaviors were coded as dichotomous data, with a "1" representing that a particular behavior was noted and a "0" showing the absence of that behavior.
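The paper does not name the correlation statistic, but a Pearson correlation computed between a 0/1 behavior indicator and the 1-5 IAM score is numerically identical to the point-biserial correlation, a standard choice for dichotomous-versus-ordinal data. A minimal pure-Python sketch, using made-up toy data rather than the study's dataset:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient. Applied to a 0/1 indicator and
    an ordinal score, this equals the point-biserial correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: whether a faculty behavior was present in the thread (0/1)
# paired with the IAM phase (1-5) coded for a peer reply.
behavior = [0, 0, 0, 1, 1, 1]
iam_score = [1, 2, 2, 3, 4, 5]
r = pearson_r(behavior, iam_score)
```

In the study's design each of the 19 behavior indicators would be correlated against the IAM scores in this fashion, one behavior at a time.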
The confidentiality of subjects was maintained by keeping files password protected on computers that also required password access.Neither instructor names nor student names were recorded outside of the archived transcripts obtained from the courses.

Results
The faculty behaviors observed in the threaded discussions were classified as either positive or negative. The 12 positive behaviors were the following: challenges the student to think; communicates directly to the student's subject; provides direction to additional resources; genuinely compliments the student's post; follows up with the student; summarizes the student's comment; directs the student to another post; addresses more than an individual student's comment; shares personal/professional experiences; responds more than once per week; uses two or more strategies; and cites material other than course material. The seven negative behaviors were the following: does not respond to all students; responses are basically the same; asks closed-ended questions; responses are very limited; lacks follow-up to the second level; response not related to the post; and uses one or zero strategies from the list. The incidence of these behaviors was correlated with the scores on the IAM.
The first statistical test used was a correlation comparing each individual faculty behavior to the IAM scores. There were four positive faculty behaviors found to have a statistically significant, though weak, correlation with the scores on the IAM: communicates directly to the student's subject (r = 0.035, p < 0.01); genuinely compliments the student's post (r = 0.018, p < 0.05); summarizes the student's post (r = 0.028, p < 0.01); and responds more than once per week to the student (r = 0.02, p < 0.01). Additionally, there were two negative faculty behaviors found to have a statistically significant, though weak, correlation with the scores on the IAM: responses were very limited (r = 0.019, p < 0.01); and lack of follow-up to the second level (r = 0.029, p < 0.01).
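Correlations this weak can still clear conventional significance thresholds because the sample of coded posts is so large (n = 19,594). A sketch of why, using the standard t test for a Pearson correlation with a normal-tail approximation for the p-value (adequate at this sample size); this illustrates the arithmetic, not the authors' actual SPSS computation:

```python
import math

def pearson_p_two_tailed(r, n):
    """Approximate two-tailed p-value for a Pearson correlation r from
    n pairs, via t = r*sqrt(n-2)/sqrt(1-r^2). With n in the tens of
    thousands the t distribution is effectively normal, so the normal
    tail probability erfc(|t|/sqrt(2)) is a good approximation."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return math.erfc(abs(t) / math.sqrt(2))

n = 19594
for r in (0.018, 0.035):
    print(r, round(pearson_p_two_tailed(r, n), 4))
```

The same r = 0.018 with only 100 pairs would be nowhere near significant, underscoring that the study's significant correlations reflect the large n rather than strong effects.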

Discussion/Conclusions
The results support the assertion that some faculty behaviors may have a mild impact on students' critical thinking skills within threaded discussions. This may seem like an intuitive assumption; however, none of the faculty behaviors identified in this study had more than a small correlation with students' scores on the IAM. One explanation for this small effect could be the nature of the tool used to assess peer interaction: the IAM was developed to assess knowledge construction, and while critical thinking is certainly a component of knowledge construction, the two are not identical, which may have attenuated the correlation between faculty behaviors and peer interaction scores. The fact that two negative faculty behaviors correlated with higher levels of critical thinking within peer interactions suggests that students may, consciously or unconsciously, increase their cognitive engagement with peers when they recognize that the instructor is less engaged.

Figure 1. Performance levels for the critical thinking dimension of the peer review rubric.

Faculty are selected for peer review after teaching their fifth course and on a 12-month cycle thereafter. The outcomes of the peer review are to highlight and recognize excellent faculty members and teaching practices, promote self-reflection and continuous improvement of faculty, provide targeted professional development based on classroom practice, and coach underperforming faculty members.