Student Success Factors in Graduate Psychology Professional Programs

Research examining factors contributing to online students’ success typically focuses on a single point in time or completion of a single course, as well as on individual difference variables, such as learning style or motivation, that may predispose a student to succeed. However, research concerning longer-term online student outcomes, such as graduation rates or how events occurring during matriculation may impact students’ progress, is lacking. Moreover, little is known about the factors that contribute to graduate psychology students’ successful completion of their online degree programs. This exploratory archival study uniquely focuses on data gathered from admissions files and the student information system to identify and examine possible factors contributing to students’ final program grade point average (GPA) and graduation rates in two distinct, fully online master’s programs in psychology. Findings derived from chi-square tests and from regression and logistic regression analyses indicate that placement on academic probation at any time during enrollment is associated with both lower final program GPA and a decreased probability of graduation. Prior graduate school experience and taking a leave of absence (LOA) were also associated with a lower probability of graduation, whereas failing any course during matriculation was associated with a lower final program GPA. Implications include identifying students who fail any course, take an LOA, and/or are placed on academic probation as “at risk,” and proactively connecting with them to provide tailored advisement and resources to support their continued matriculation.


Introduction
Past estimates indicated explosive growth in online higher education in the United States, with enrollment rates doubling between 2002 and 2007 (Allen & Seaman, 2008). More recent estimates suggest online enrollment ranges from 5.5 million (Straumsheim, 2014) to 7.1 million students, with approximately 33% of higher education students enrolling in at least one online course (Allen & Seaman, 2014). With the advent of online learning at the postsecondary level, many have been interested in how distance education (DE) courses compare with traditional, face-to-face (FTF) courses in terms of student preparation, successful completion of courses, graduation rates, and postgraduation outcomes. Thus, research has centered on ensuring the online learning experience is comparable to FTF courses (Cameron, 2013), and much of it compares students' performance in and satisfaction with an online course to that of the same content delivered in a blended (Bollinger & Erichsen, 2013) or FTF context (Aragon, Johnson, & Shaik, 2002; Ashby, Sadera, & McNary, 2011; Butler & Pinto-Zipp, 2006; Harnish & Robert Bridges, 2015; Horspool & Lange, 2012; Karatas & Simsek, 2009; Lyke & Frank, 2012; Reuter, 2009; Wang & Newlin, 2000). More specifically, online course effectiveness research tends to center on three primary themes: student outcomes as measured by test scores and final grades earned in classes, students' attitudes about learning and/or learning online, and students' satisfaction with their online learning experience (Robinson & Hullinger, 2008).

Demographic Variables
As with most research paradigms, the literature on predictors of online student success has focused on various demographic variables. The results have been mixed regarding whether demographic variables, such as gender, ethnicity, and age, affect online student performance. A few studies found no differences for variables such as age or gender (Boston et al., 2011; Cochran et al., 2014; Harrell & Bower, 2011; Ilgan, 2013; Layne et al., 2013; Urtel, 2008; Wang, Shannon, & Ross, 2013; Yukselturk & Top, 2013), while others did find differences (Perez Cereijo, 2006; Simpson, 2006; Xu & Jaggars, 2014). It is important to point out that the majority of research examining demographic variables has used undergraduate populations. For example, Perez Cereijo (2006) surveyed three groups of students over three semesters who took the same course with the same instructor about their preferences for course modality: FTF versus asynchronous online. The results indicated that employment type (part-time or full-time) and distance from school were associated with differences in student preferences. Full-time students and students who lived farther from campus preferred the asynchronous online format to FTF classes. For all other demographic variables, including gender, age, computer experience, and dependents at home, no significant differences were found (Perez Cereijo, 2006). Harrell and Bower (2011) examined several demographic variables, including gender, age, race, employment status, marital status, number of children, enrollment status, GPA, and financial aid status. Of these nine predictor variables, only one, GPA, was a significant predictor of online student persistence in an online course at a community college (Harrell & Bower, 2011). Higher GPA was found to reduce the odds of course withdrawal by a factor of 0.95 (Harrell & Bower, 2011).
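To illustrate how an odds ratio of this kind is typically read, the sketch below assumes the 0.95 factor applies per one-point increase in GPA (a standard logistic regression interpretation that the summary above does not spell out); the base odds value is invented for the example.

```python
# Hypothetical illustration of interpreting an odds ratio of 0.95:
# assuming the factor applies per one-point increase in GPA, each
# additional GPA point multiplies the odds of withdrawal by 0.95.

def withdrawal_odds(base_odds, gpa_points_above_reference, odds_ratio=0.95):
    """Odds of withdrawal for a student some GPA points above a reference."""
    return base_odds * odds_ratio ** gpa_points_above_reference

base = 1.0  # illustrative even odds at the reference GPA
print(withdrawal_odds(base, 1))  # 0.95
print(withdrawal_odds(base, 3))  # roughly 0.857
```

Because the effect is multiplicative, several GPA points compound: three points above the reference reduce the odds to about 86% of the baseline, not to 85% of it three times over in an additive sense.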
Another study examined age, ethnicity, and gender on course performance between FTF and DE courses and found no differences between modalities for age or gender (Urtel, 2008). The author did, however, find performance differences on several demographic variables across course modalities, including class level and within-ethnicity and within-gender differences. Freshmen underperformed when compared to all other class levels in both the FTF and DE courses (Urtel, 2008). Black students also underperformed in both FTF and DE courses, while White students performed better in the FTF course only (Urtel, 2008). Although there were no overall gender differences between modalities, women performed better in the FTF course than in the DE course (Urtel, 2008). Finally, while there was a significant difference in the mean ages between FTF and DE students, there was not a significant correlation between age and final grade (Urtel, 2008).
Research in the UK, using a population of "open entry educational system" institutions (meaning there were no qualifications for enrollment), used logistic regression to predict student success in online courses using demographic information, such as sex, age, education, and occupation, among others, as predictors (Simpson, 2006). The factors in the final prediction equation were as follows (in order):
1. Their chosen course level: Students entering on level 1 (first year of degree equivalent) tended to have a higher success than students entering on level 2 (second year of degree equivalent) courses.
2. The credit rating of a course: Students entering on 15-credit-point courses (equivalent to one-eighth full-time study) were more successful than students entering on 30-point or 60-point courses.
3. A student's previous education qualifications.
4. Their course choice (arts students were more likely to be successful than mathematics and science students, for example).
5. Their socio-economic status (the higher, the more successful).
6. Sex (women more successful than men).
7. Age (middle aged students more likely to be successful than younger or older students). (Simpson, 2006, p. 131)
The open entry institutions used a prediction equation based on these variables to determine if students were at risk for withdrawing from or failing a course. If students were found to be at risk based on their results, the institution proactively contacted those students (Simpson, 2006). This approach capitalizes on the data about incoming students to determine if extra support is needed. The question remains whether relying solely on this kind of demographic data is enough to predict student success and retention and whether these results generalize to a graduate student population.
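The proactive-contact approach Simpson describes can be sketched as follows. This is a minimal illustration only: the predictor names, coefficients, and threshold below are invented to mirror the direction of the reported factors, and are not the institution's actual equation.

```python
import math

# Invented coefficients for a logistic model of course success; these
# are NOT Simpson's (2006) values, only placeholders whose signs
# mirror the direction of the factors he reports.
COEFS = {
    "intercept": 0.2,
    "level2_course": -0.6,       # level 2 entry: lower success
    "high_credit_course": -0.4,  # 30- or 60-point course: lower success
    "prior_qualifications": 0.5,
    "arts_subject": 0.3,
    "higher_ses": 0.4,
    "female": 0.2,
    "middle_aged": 0.3,
}

def predicted_success(student):
    """Probability of success from a dict of binary indicators."""
    z = COEFS["intercept"] + sum(COEFS[k] for k, v in student.items() if v)
    return 1 / (1 + math.exp(-z))

def flag_at_risk(students, threshold=0.5):
    """Names of students whose predicted success falls below the
    threshold, i.e., those the institution would contact proactively."""
    return [name for name, s in students.items()
            if predicted_success(s) < threshold]

cohort = {
    "A": {"level2_course": True, "high_credit_course": True},
    "B": {"prior_qualifications": True, "female": True},
}
print(flag_at_risk(cohort))  # ['A']
```

In practice the coefficients would be estimated from historical enrollment records; the point of the sketch is the workflow, in which each incoming student is scored once and low-probability students are contacted before problems arise.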

Individual Difference Variables
Individual difference variables are numerous, and many, including learning styles, cognitive styles, motivational preferences (including self-efficacy), and e-learning readiness, have been examined in the literature as potential predictors of online student success. Many of these studies examined the effects of various combinations of these and other factors on student performance. In general, the results are mixed as well. This section reviews the results for this group of potential predictors.
Learning style. Learning style typically refers to the premise that students have different ways of and preferences for learning. The research on learning styles is based on the idea that students will experience greater satisfaction and a higher level of learning outcomes if teaching style is matched to a student's learning style (Curry, 1991; Gardner, 1983; Honey & Mumford, 2006; Kolb, 1984). Many models of learning styles have been proposed, including Kolb's (1984) learning preference model, Curry's (1991) Theoretical Model of Learning Style Components and Effects, Gardner's (1983) theory of multiple intelligences, and the Myers-Briggs Type Indicator (Myers & Briggs, 1995). Several studies have analyzed students' learning styles as a predictor of their success in online courses (Aragon et al., 2002; Eom et al., 2006; Harrell & Bower, 2011; Perez Cereijo, 2006; Wang & Newlin, 2000), with mixed results.
One such study used structural equation modeling to determine the effects of several individual difference variables on learning outcomes and student satisfaction using a sample of 397 students from a large midwestern university (Eom et al., 2006). Course structure, self-motivation, learning style, instructor feedback, instructor facilitation, interaction between the instructor and students, and interaction among students were included in the model. Learning outcomes were operationalized as survey questions about student perceptions of whether the quality of online learning was better than that of FTF courses and whether students learned more in one modality than the other (Eom et al., 2006). Of these independent variables, only instructor feedback and learning style were significant predictors of learning outcomes. More specifically, students with visual and read/write learning styles, as well as those who received high levels of instructor feedback, reported perceiving a higher quality of learning in online courses than in FTF courses (Eom et al., 2006). All of the independent variables were also found to be related to students' satisfaction (Eom et al., 2006). Conversely, Harrell and Bower (2011) found that higher scores on an auditory learning style measure were associated with increased odds of withdrawing from an online course at a community college. This makes sense, as auditory learners learn best by processing verbal rather than written information, and the courses in the study tended to rely on written course material (Harrell & Bower, 2011).
Another study examined Curry's (1991) model of learning styles, which included a combination of cognitive controls (information processing habits), maintenance of motivation, and task engagement (Aragon et al., 2002). The researchers were interested in examining the impact of this model on several course learning outcomes, including comfort level with course content, use of study aids, quality of course project, and final course grade, comparing students in a FTF course to students in an online course. Results indicated no difference in motivation maintenance or task engagement, with the exception of study aids, which FTF students used more than online students (Aragon et al., 2002). Significant differences were found for cognitive controls: Online students tended to be more reflective and had a stronger preference for abstract conceptualization (learning by thinking), while FTF students tended to prefer learning by doing (Aragon et al., 2002). None of the learning style constructs were found to be predictors of student success in the course project or final grade (Aragon et al., 2002). Additional studies examining learning styles also found no effect on student outcomes, though different measures of learning styles and student success were examined (Perez Cereijo, 2006; Wang & Newlin, 2000).

Motivation and cognitive styles.
A large number of studies have examined motivation and cognitive styles in some form as predictors of online student success (Artino & Stevens, 2009; Bates, 2006; Bernard et al., 2004; Eom et al., 2006; Gaythwaite, 2006; Harrell & Bower, 2011; Lee et al., 2013; Parker, 1994; Wang & Newlin, 2000; Wang et al., 2013; Waschull, 2005; Yukselturk & Top, 2013). In the literature, many studies investigated, as predictors of online student success, students' locus of control (internal or external); levels of self-efficacy, that is, one's belief in one's ability to achieve certain outcomes (Bandura, 1977); different cognitive styles, meaning the ways in which learners interact with, perceive, and respond to the learning situation (Keefe, as cited in DeTure, 2004); and their motivational beliefs related to learning. This section reviews selected research on locus of control, self-efficacy, cognitive styles, and motivational beliefs.
Research on locus of control has found mixed results. One study of community college students found a large significant correlation between locus of control and academic persistence for DE learners, such that students who had an internal locus of control tended to be more likely to finish an online course (Parker, 1994), but this was not the case for students in a FTF course. Two other studies did not find that locus of control was a significant predictor of course completion (Harrell & Bower, 2011; Muse, 2003). Wang and Newlin (2000) analyzed the cognitive-motivational characteristics of "cyberstudents" in three sections of an online statistics course. The authors conducted a stepwise regression, which revealed three predictors of final grades in the course: total homepage hits by Week 15, inquisitive cognitive style, and internal locus of control. The authors concluded that online "students who maintained a high level of online course activity, displayed a high degree of inquisitiveness, and had an internal locus of control tended to perform well in the virtual classroom" (p. 141), though they noted that online students tended to exhibit "a greater external locus of control than conventional students" (p. 137). A study of undergraduates in an open university in Korea did find that students with an internal academic locus of control were less likely to drop out of online courses (Lee et al., 2013). While the results are mixed, it appears that, in general, students with an internal locus of control tend to perform better and persist in online courses (Lee et al., 2013; Parker, 1994; Wang & Newlin, 2000). Studies examining the effects of self-efficacy are generally in agreement that this variable tends to predict online student success (Bates, 2006; Gaythwaite, 2006). One study of community college students enrolled in FTF and online speech classes found self-efficacy was a significant predictor of final course grade (Gaythwaite, 2006).
Another study of students enrolled in FTF and online introductory microcomputer applications courses also found that self-efficacy was positively correlated with performance scores (Bates, 2006). However, in the study of undergraduates in an open university in Korea, the researchers did not find that academic self-efficacy differentiated between students who persisted versus students who dropped out of online courses (Lee et al., 2013).
Other studies have examined cognitive styles and motivational beliefs. One such study examined the effects of students' cognitive styles and online technologies self-efficacy on their performance in six online courses at a community college (DeTure, 2004). Student success was operationally defined as performance on course assessments, including exams, research papers, and discussion participation. Results indicated that neither cognitive styles nor online technologies self-efficacy was a significant predictor of final grade (DeTure, 2004). The author noted that, in this case, cognitive styles and online technologies self-efficacy are poor determinants of success (DeTure, 2004). Artino and Stevens (2009) used extreme groups analysis to categorize groups according to scores on motivational beliefs and negative achievement emotions scales and found that "students with the adaptive motivation emotion profiles exhibited significantly higher mean scores on all five outcomes when compared to students with the less adaptive profile. The effect for the mean difference on course grade was moderate; all other effects were large" (Artino & Stevens, 2009, p. 585). Of the five outcomes examined in this study, course grade was the only traditional measure of student success used. The other outcomes included variables such as self-reported use of the metacognitive learning strategies of elaboration and metacognition, continuing motivation, and course satisfaction. This can be interpreted to mean the extreme profile analysis was less effective for traditional measures of student success.
Another study examined the effects of cumulative GPA, confidence in prerequisite skills, general beliefs about DE, self-direction and initiative, and desire for interaction on achievement performance, operationalized as cumulative course grade (Bernard et al., 2004). The results demonstrated that while self-direction and general beliefs about distance education predicted cumulative course grade, "the two predictors accounted for 8.0% of the variance in cumulative GPA" (Bernard et al., 2004, p. 37). Cumulative GPA was excluded from the regression analyses due to the strong correlation with cumulative course grade (r = .68). The authors concluded that although self-direction and beliefs about DE significantly predicted cumulative course grade, cumulative GPA was the "best 'predictor' of course grade" (Bernard et al., 2004, p. 40).
Finally, researchers used path analysis to examine the relationships among several demographic variables (gender, age, education level), previous online learning experience, self-regulated learning and motivational strategies, online technology self-efficacy, course grades, and course satisfaction (Wang et al., 2013). The final path model led the authors to conclude that "students with greater prior online course experience usually made more effective use of learning strategies in their online courses. With the use of more effective learning strategies, students had higher levels of motivation, which then led to higher levels of course satisfaction and higher levels of technology self-efficacy. Students with higher levels of course satisfaction and technology self-efficacy got better grades in online courses" (Wang et al., 2013, p. 315). In other words, the most successful students were likely more motivated to learn and used more effective learning strategies because of their previous experience in online courses, leading to satisfaction with their course experience and better grades in online courses (Wang et al., 2013).
Though the results are somewhat mixed and the studies examined different measures of motivation and cognitive styles, the conclusion can be drawn that these individual difference variables can affect student success in online courses. The difficulty lies in determining student motivation and cognitive styles prior to admission and enrollment, as well as during the matriculation process, as it is likely that student motivation, at least, fluctuates over the course of enrollment and may be impacted by situational factors (e.g., employment status, financial problems, family distress, etc.).

E-learning readiness. As long as there has been interest in, discussion of, and research about distance education, there have been concerns about and recommendations to assess students' readiness to learn online (Lorenzetti, 2005). Earlier researchers posed initial concerns about current and prospective students' possession of the skills and strategies needed to learn effectively at a distance (Lim, 2004). These included facility with basic computer software programs, Internet access and ease of use of the Internet, and knowledge about and ability to navigate schools' online resources, such as library research databases, among other factors.
Some also questioned whether students required a more extensive computer and web-based skill set prior to commencing online courses (Chen, Lambert, & Guidry, 2010;Welsh, 2007), but the results are mixed. In their review, Chen et al. (2010) suggest most online courses do not require extensive facility with many web-based applications and tools; students generally need to know how to draft a paper using word-processing software, use e-mail, and post to discussion forums. When courses do require group collaboration to develop a wiki or paper, for example, course content typically includes tutorials about how to use these tools, faculty provide guidance about this, or both occur. However, Welsh's (2007) research suggested a positive relationship between course completion and computer skills. If students do not have a well-developed computer skill set in advance of commencing online studies, their ability to learn and master course content may suffer as a result of trying to simultaneously develop needed computer skills to navigate that course content.
In addition to basic computer skills, computer confidence (or computer self-efficacy), which develops via successful and repeated completion of computer-related tasks, may be related to successful online outcomes. These results are mixed as well, with some research suggesting that basic computer skills, computer confidence, and self-efficacy are not significant predictors of successful completion of online courses (DeTure, 2004; Harrell & Bower, 2011; Muse, 2003), whereas other research suggested computer confidence was a strong predictor of course outcomes (Osborn, 2000; Wang et al., 2013; Yukselturk & Top, 2013).

Student engagement and course activity.
A review of the research focused on student engagement in online courses suggests important differences in the way engagement is defined, as well as different ways to operationalize engagement, such as in terms of student behaviors, faculty behaviors, course design, and connection to the greater learning community at the school. In terms of definitional differences, engagement may not be defined explicitly in a source (Griffin, 2014), or the subsequent discussion suggests a focus on observable markers of student participation in the course, such as number of page views or number of discussion post entries (Davies & Graff, 2005; Ramos & Yudko, 2008; Wang & Newlin, 2000; Woods McElroy & Lubich, 2013). A broader and deeper definition of engagement may be more helpful (Kuh, 2003). Defining engagement in terms of the total amount of time and energy a student allocates toward the study and analysis of course content, the completion of course assignments, and obtaining feedback from faculty (Kuh, 2003) may offer a more complete picture of student engagement, as it includes not only the observable footprints of a student within the course (e.g., page views, number of posts) but also that which occurs within the student, such as synthesis of course content, which may be less amenable to direct observation.
What faculty members do to promote students' engagement is also of interest. For example, Griffin (2014) suggests multiple strategies to promote in-course engagement, including creating assignments that promote students' learning and performance at all levels of Bloom's taxonomy, as well as front-loading a course with relevant content and revisiting that content throughout the course so as to build bridges between modules or lessons. By showing students the end goals of the course at the beginning, faculty may help students better orient their journey through and mastery of course content.
In terms of course design, Revere and Kovach (2011) make a distinction between traditional technologies and activities for engaged learning (e.g., discussion boards, chats) that are common to most online courses, and web-based applications (e.g., Google docs) that could be drawn upon to further develop and improve engagement between and among students, as well as between students and faculty. Moreover, Chen et al. (2010) suggest that schools and programs evaluate the extent to which online students receive and/or have access to the "fringe benefits" that FTF students have, such as opportunities for informal contact with faculty and peers, as well as more opportunities to receive personal contact and assistance from faculty. This may impact online students' perceived connection to their peer group, faculty, and program, which, in turn, may potentially and positively increase both students' engagement in courses and their help-seeking behavior.
However, does the amount and type of student engagement relate to successful online outcomes? While the research seems to yield consistent results, these should be interpreted and applied with caution. Davies and Graff (2005) found that students who failed a course tended to have a lower level of engagement in the course, defined as the number of "hits" or access attempts in different locations of the online course, whereas students who earned a passing grade demonstrated a higher level of engagement. It follows that the less a student is checking into a class, participating in the class, or both, the less likely the student is to pass the course. However, a high level of engagement did not differentiate between those who earned mid-passing versus high-passing grades (Davies & Graff, 2005). DeNeui and Dodge (2006) also found a small yet significant correlation between students' number of page hits and final course grade, yet the sample was drawn from a traditional FTF class; these students also had the option to access course documents and course-related information, such as class outlines or information about the instructor's office hours, from an optional, online, adjunctive course shell.
Engagement in class also depends on the student's schedule and availability to participate in and complete course activities and requirements. While a positive relationship between time devoted to courses, completion of assignments, and grades is not surprising, Romero and Barberá (2011) also found time flexibility, or the number of different times of day when a student could engage in online learning activities, positively correlated with successful completion of collaborative (e.g., group) course assignments and, to a lesser degree, individual assignments.
While it is clear that student engagement is an important focus in the distance education research, as well as a key component to student retention (Udermann, 2014), most studies focus exclusively on developing, maintaining, and/or increasing engagement within a given course versus for the duration of the degree program. Arguably, continued engagement from course-to-course may positively impact eventual degree completion; however, little to no research offers an analysis of student engagement over the course of completion of the degree at any level of education. Moreover, much more research concerning the relationships between successful outcomes and online students' level of engagement, pace of degree completion (e.g., part-time versus full-time), level of education (e.g., graduate training), and professional and personal demands that compete with availability to complete course requirements is needed.

Matriculation factors.
Few studies have examined common matriculation variables (e.g., withdrawals, admissions criteria, and enrollment data) as potential predictors of online student success. One study examined two groups of community college students in a business course: the total student population and successful students (those earning a C or higher; Wojciechowski & Bierlein Palmer, 2005). The student's current GPA was the strongest predictor of student success, while the number of previous course withdrawals in both FTF and online courses was the third-most-significant predictor. Students with more prior course withdrawals were less successful in the course (Wojciechowski & Bierlein Palmer, 2005). A separate study found that students with large course loads had lower online course completion rates (Moore et al., 2002).
One unpublished research study examined 26 predictor variables, including various individual difference variables, demographics, and matriculation factors, in a logistic regression analysis predicting online course completion at a community college (Welsh, 2007). Students in the study were enrolled in at least one online course during one semester. The results indicated seven of those variables were statistically significant: course load, prior learning experiences, financial stability, time management and study environment, self-efficacy, extrinsic motivation, and computer skills (Welsh, 2007). Twenty of the 26 variables were entered into a hierarchical logistic regression model, of which three were statistically significant in the final block: course load, financial stability, and self-efficacy. The author concluded that "despite having two or more at-risk factors, distance learners who had high levels of self-efficacy, good computer and time management skills, financial stability, a favorable study environment, were enrolled in more than one course, and believed their prior learning experiences helped prepared them for their course were more likely to be successful" (Welsh, 2007, p. 0). Another study investigated students' previous learning experiences (i.e., background preparation) and found that they were a statistically significant predictor of online course completion (Muse, 2003). A different study found previous learning experiences to be statistically significant in predicting completion of DE courses (Powell, Conway, & Ross, 1990).
A more recent study of undergraduates in online programs found that having no transfer credits, a low number of course registrations for the year, a last grade of F, and a last grade of W (withdrawal) were all significant predictors of disenrollment, meaning these factors predicted whether students would disenroll from a degree program (Boston et al., 2011), which contradicts the Moore et al. (2002) results. The authors concluded that students who had not taken courses elsewhere, did not enroll in many courses during the year, and either failed or withdrew from their last course were more likely to not be enrolled at the end of the year. This pointed to the potentially exploratory nature of students seeking online degrees versus a traditional degree from a brick-and-mortar institution (Boston et al., 2011). A subsequent study by the same research team found similar results for transfer credits and also found that the higher the ratio of credits earned to credits attempted, the higher the likelihood of reenrollment (Layne et al., 2013). A separate study of undergraduates enrolled in online courses during a single semester also found similar results for previous withdrawal from online courses, such that students who had previously withdrawn from an online course were more likely to do so again (Cochran et al., 2014). Overall, the research on these matriculation factors is disjointed, and most studies do not focus on graduate students, who often enter programs with previous education, work experience, and online/DE course experience.
The Council of Graduate Schools' (CGS) most recent report of the CGS/GRE Survey of Graduate Enrollment and Degrees found that first-time graduate enrollment and graduate school applications have been increasing over the last 10 years (Allum, 2014). The report presents 1-year trends based on data collected for 2012 and 2013, 5-year trends comparing data collected for 2008 and 2013, and 10-year trends based on data collected for 2003 and 2013. First-time graduate enrollment increased by 2.6% between 2012 and 2013, 2.0% between 2008 and 2013, and 3.3% between 2003 and 2013 (Allum, 2014). Applications to graduate school have shown similar increases: 1.0% between 2012 and 2013, 6.1% between 2008 and 2013, and 3.6% between 2003 and 2013 (Allum, 2014). While this shows a general increase, first-time enrollment and applications to social and behavioral sciences graduate programs, including psychology, have shown no growth in the same period: -0.3% between 2012 and 2013 and 0.6% between 2008 and 2013 (data were not available for these programs in 2003; Allum, 2014). In addition, research has shown that there are large performance gaps between students taking online courses and those taking FTF courses in the social sciences, even after removing the potential effects of student characteristics and peer effects (Xu & Jaggars, 2014). Coursework in psychology may be a unique subject area due to the need for hands-on demonstration and practice and high instructor-student and student-student interaction. These data indicate a need to conduct research on the factors that lead to student retention and degree completion and to identify those factors that may prevent students from graduating.
Furthermore, researchers have operationalized student outcomes in a variety of ways, limiting the generalizability of those results. Many studies also examine student performance in only one course or one cohort (i.e., at one point in time), limiting the conclusions that can be drawn about student outcomes such as graduation and final institutional GPA. What seems to be missing from the literature is an examination of student performance in a longitudinal manner (i.e., tracking students from start to finish in a given degree program). Also absent is literature that considers student performance in prior courses, as well as events affecting their enrollment status in a degree program. These factors may influence students' performance and whether they graduate, both of which are critical student outcomes.
This study builds upon the prior research and uniquely focuses on and compares psychology students from two distinct, fully online master's programs: the Master of Psychology program (MAP) and the Master of Industrial/Organizational Psychology program (MAIO). Instead of examining internal characteristics and traits, or perceptions of the online experience, among these learners, this study explored how external events and circumstances occurring during matriculation may affect online graduate psychology students' successful completion of their degree program. Specifically, two research questions were examined in this study: 1. What admissions and at-risk matriculation factors affect students' probability of graduation? 2. What admissions and at-risk matriculation factors affect students' final program GPA?

Method

Participants
The sample consisted of 171 graduate students in two online professional psychology programs. Thirty-six percent of participants were from the MAIO program, while 64% were from the MAP program. Eighty-one percent of the participants were female with an average age of 34.5 years (range = 25 to 61 years). The majority of students were White (43.3%), though 36.3% did not disclose their race. Of the 171 students, 61.4% graduated, 29.2% withdrew from the program, and 5.8% were dismissed.

Procedure and Data Analysis Strategy
Archival data was collected from admissions files and the institution's student information system for use in this research study. Participant data was tracked over a two-year period, as this is the average length of completion time for both graduate programs. Data was analyzed using IBM SPSS 20.0.0 and included between-groups analysis (chi-square test), regression analysis, and logistic regression analysis.

Instruments and Measures
Independent variables. Eight exploratory factors were examined as independent variables in this study: undergraduate GPA (the average of all undergraduate GPAs reported in the student information system, on a four-point scale), undergraduate major (psychology vs. nonpsychology), prior graduate school experience (did the student previously attend graduate school: Yes/No), fail first course (did the student fail the first course in the graduate program: Yes/No), fail any course (did the student fail one or more courses in the graduate program: Yes/No; note that a failing grade is a C or an F), leave of absence (did the student take a leave of absence [LOA] at any time: Yes/No), academic probation (was the student placed on academic probation at any time: Yes/No), and academic warning (was the student placed on academic warning at any time: Yes/No).

Dependent variables. Two dependent variables were examined in this study. The first was the participant's final program grade point average (GPA), on a four-point scale. The second was whether or not the student graduated from the program, coded "Yes" or "No."

Descriptive Statistics
Tables 1 and 2 present the descriptive statistics for the continuous and categorical variables in our study. Table 1 presents the means, standard deviations, and correlations for the two GPA variables, undergraduate and final program, for the combined sample and by program. Correlations between undergraduate and final program GPAs were low and nonsignificant for the combined sample and for the MAIO program, whereas the correlation was significant and positive for the MAP sample. The means were above a B letter grade average on a four-point scale, with slightly higher final program GPAs for all samples. Table 2 presents the frequency distributions for our categorical independent variables for the combined sample and by program. For the combined sample, the majority of students were not placed on academic probation (80.1%) or academic warning (90.1%), did not fail a course (67.8%), did not take a leave of absence (79.5%), did not attend graduate school prior to enrolling in the institution (83%), passed their first course in the program (88.9%), and were psychology majors at their undergraduate institutions (53.8%). The frequency distributions for the MAP program were similar to those for the combined sample, while the MAIO program showed higher percentages of students not placed on academic probation (90%) or academic warning (95%), not failing courses (81% did not fail any course and 87% did not fail the first course), and not taking a leave of absence (89%). The percentages for undergraduate major and prior graduate school experience were similar across programs.

Differences by Program
The first analysis examined whether differences in final program GPA and probability of graduation existed between the two programs. An ANOVA was conducted with final program GPA as the dependent variable. The results indicated no difference in final program GPA by program, F(1, 168) = .14, p > .05. All subsequent analyses for final program GPA were therefore conducted using the combined sample. A chi-square test was conducted to determine if there were differences in probability of graduation by program. A significant chi-square was found, indicating an association between program and graduation rate, χ²(1, N = 171) = 5.13, p = .024. All subsequent analyses for probability of graduation reported here were therefore conducted separately for each program.
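The chi-square statistic underlying this test of independence can be computed directly from the 2 × 2 program-by-graduation contingency table. The sketch below shows the calculation in pure Python; the cell counts in the usage line are hypothetical illustrations, as the paper reports only the resulting statistic, χ²(1, N = 171) = 5.13.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table of observed counts.

    table: [[a, b], [c, d]], e.g., rows = program (MAP/MAIO),
    columns = graduated (yes/no).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts for illustration only (not the study's data):
print(round(chi_square_2x2([[10, 20], [30, 40]]), 4))  # 0.7937
```

The statistic is then compared against the χ² distribution with 1 degree of freedom to obtain the p-value.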

Research Question #1: What Admissions and At-Risk Matriculation Factors Affect Students' Ability to Graduate?
MAP Program. This research question was analyzed using stepwise logistic regression with all eight independent variables entered as predictors. The results indicated significant effects for three predictor variables: prior graduate school experience, leave of absence, and academic probation. The coefficient on the prior graduate school experience variable has a Wald statistic equal to 5.98, which is significant, p < .05. The coefficient on the leave of absence variable has a Wald statistic equal to 10.7, which is significant, p < .01. The coefficient on the academic probation variable has a Wald statistic equal to 25.8, which is significant, p < .001. The overall model is also significant, according to the model chi-square statistic, χ²(1, N = 109) = 48.2, p < .001. The model correctly classified 93% of those who graduated and 61% of those who did not, for an overall success rate of 79% (see Table 3 for the classification table). Table 4 presents the standardized beta weights, standard errors of beta, Wald chi-square statistics, p-values, and odds ratios (e^β) for the three-predictor model. These results tell us that the odds of graduation are about 5.52 times higher for students who do not have prior graduate school experience than for students who do. For leave of absence, the odds of graduation are about 6.59 times higher for students who did not take a leave of absence than for those who did. Finally, for students in the MAP program, the odds of graduation are about 42.42 times higher for students who were never on academic probation during their matriculation than for those who were on academic probation one or more times.
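The odds ratios reported here are obtained by exponentiating the logistic regression coefficients (OR = e^β); the same quantity can also be computed directly from a 2 × 2 predictor-by-outcome table as a cross-product ratio. A minimal sketch, using hypothetical cell counts (the paper reports only the resulting ratios, not the underlying tables):

```python
import math

def odds_ratio_from_beta(beta):
    """Exponentiate a logit coefficient to obtain the odds ratio."""
    return math.exp(beta)

def odds_ratio_from_table(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table:
    rows = predictor (absent/present), columns = graduated (yes/no).
    OR = (a/b) / (c/d) = ad / bc.
    """
    return (a * d) / (b * c)

# Hypothetical counts for illustration only: 80 of 90 students never on
# probation graduated, versus 5 of 25 students who were on probation.
print(odds_ratio_from_table(80, 10, 5, 20))            # 32.0
print(round(odds_ratio_from_beta(math.log(32.0)), 1))  # 32.0
```

An odds ratio of 32 in this hypothetical table would mean the odds of graduating are 32 times higher for students never on probation, which is how the reported ratios of 5.52, 6.59, and 42.42 should be read.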

MAIO Program.
A logistic regression analysis was also conducted, with the results indicating a significant effect for only one predictor variable: academic probation. The coefficient on the academic probation variable has a Wald statistic equal to 9.81, which is significant, p < .01. The overall model is also significant, according to the model chi-square statistic, χ²(1, N = 62) = 14.07, p < .001. Table 5 presents the standardized beta weights, standard errors of beta, Wald chi-square statistics, p-values, and odds ratios (e^β) for this model. These results tell us that, for students in the MAIO program, the odds of graduation are about 42 times higher for students who were never on academic probation during their program than for those who were on academic probation one or more times. The model correctly classified 98% of those who graduated and 50% of those who did not, for an overall success rate of 89% (see Table 6 for the classification table).

Research Question #2: What Admissions and At-Risk Matriculation Factors Affect Students' Final Program GPA?
The samples were combined for the analysis of this research question. A stepwise regression analysis was conducted that included all eight predictor variables. The resulting three-predictor model accounted for 74% of the variance in final program GPA, F(3, 157) = 142.42, p < .001, R² = .74, adjusted R² = .73. The final model included three independent variables: whether the student passed their first course in the program (β = .51, p < .001), whether the student failed any courses in the program (β = .29, p < .001), and academic probation (β = .27, p < .001; see Table 7).
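The reported adjusted R² can be checked against the standard shrinkage formula, R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1). With R² = .74, p = 3 predictors, and 157 residual degrees of freedom (implying n = 161), the formula yields approximately .735, in line with the values reported above:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 given sample size n and number of predictors p."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# n = 161 follows from the residual degrees of freedom: n - p - 1 = 157.
print(round(adjusted_r2(0.74, 161, 3), 3))  # 0.735
```

The adjustment penalizes R² for the number of predictors, so a model that adds predictors without genuinely explaining more variance will show a shrinking adjusted R².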

Discussion
At this time, little is known about the factors that contribute to graduate psychology students' successful completion of their chosen online programs. This exploratory study focused on data gathered from admissions files and the student information system to identify and examine possible factors contributing to online graduate psychology students' final program GPA and probability of graduation over the course of two years. This section will review the one common factor across all the analyses (academic probation), the variables found to be significant predictors of final program GPA and probability of graduation, variables not found to be significant predictors, limitations, and implications and directions for future research in this area.

Academic Probation: A Significant Factor Common Across All Analyses
The one significant predictor across all three regression analyses was academic probation. Interestingly, the odds ratio was large and similar for both programs, indicating the strength of this variable as a predictor of the probability of graduation. In all cases, if a student was on academic probation at any time during enrollment in either program, the student was less likely to graduate and was likely to have a lower final program GPA than students who were never on academic probation. This is not a surprising result, as academic probation status means the student's program GPA fell below a 3.0 for two or more consecutive semesters (i.e., failing two courses). This is a critical consideration in graduate school because students fail a course with a letter grade of C or below, not F as in undergraduate programs. Thus, the standard for passing is higher at this graduate institution. This finding is linked to the other significant predictors, including failing any course or the first course in the program, which will be reviewed next.
The implication is that students who are placed on academic probation because of failing grades should be considered "at-risk," and additional resources or support should be made available to these students. Current practices at this institution are to place students who are classified as being on academic warning or probation on an academic development plan (ADP). The ADP is developmental in nature, such that students are provided additional support from student advisors and a faculty ADP manager. Perhaps students who are first placed on academic warning (the precursor to academic probation) should be the target of additional proactive outreach initiatives in an attempt to prevent those students from subsequently being placed on academic probation the following semester (Boston et al., 2011; Cochran et al., 2014; Simpson, 2006; Simpson, 2013). This is particularly critical if the student is in their first or second term in the program, where it is more difficult to be removed from academic warning or probation status (i.e., a superior grade is required to improve one's GPA to above 3.0).

Significant Factors: Failing Courses (Any Course or First Course), Academic Probation, Leave of Absence, and Prior Graduate School Experience
In general, the results indicated that if students failed a course, whether it was the first course in the curriculum or any other course in the program, they tended to have a lower probability of graduating. These results are consistent with previous research (Boston et al., 2011). While the results for probability of graduation differed by program, placement on academic probation at any time displayed an association with the dependent variable of interest for both programs. Four additional variables were significant predictors of the two dependent variables: first course passed and fail any course for final program GPA, and prior graduate school experience and LOA for probability of graduation. The first two variables are not surprising given the significance of academic probation status for predicting probability of graduation, as course failures are precursors to students being placed on academic probation. As previously mentioned, students who are placed on academic warning or probation are placed on ADPs. These outreach initiatives should be targeted at students who fail any course, as students with high program GPAs may not necessarily be placed on academic warning until their GPA falls below 3.0 (Cochran et al., 2014). If any student who fails a course is also automatically placed on an ADP, this may be helpful for these students. This is a critical consideration in graduate school because of the higher achievement standard for passing at this graduate institution.
It was expected that prior graduate school experience would relate positively to probability of graduation, as students previously exposed to the rigor and demands of graduate school could be better prepared for their next graduate program. However, the results indicated the opposite: students without prior graduate school experience were more likely to graduate than those with prior graduate school experience. It is likely that students who attended graduate school prior to attending this institution were not successful there for various reasons. Perhaps these students left prior graduate institutions because of poor performance, poor fit with the program or institution, or life stressors that affected their ability to focus on academics. A small proportion of these students listed more than one prior graduate school, which may indicate an inability to meet the demands of graduate school. The true reasons students were not successful at their prior graduate institutions are unclear, as access to admissions interview data for these students was not available. However, those reasons could be a red flag that should be carefully considered when admissions applications are reviewed.
Finally, also not a surprise, students who did not take a leave of absence from the program were more likely to graduate than students who did. Students enrolled in the online graduate programs of this institution tend to be older, employed full-time, have families, and likely have other non-school-related responsibilities. Often, students can become overwhelmed and have difficulty balancing work, school, and home responsibilities (Hart, 2012). As such, students may decide to take time off from school during particularly busy times in their lives. The danger in doing so is the decreased likelihood of returning, as this data demonstrates. Efforts should be focused on proactive outreach to students on a leave of absence, including monitoring when they will return, contacting them close to their anticipated return date, initiating paperwork, forwarding book lists, and engaging in discussions and advisement about how they can manage their time and responsibilities in a different, more effective manner upon return, among others. These outreach initiatives may increase the chances of students returning to the program, as prevention strategies would likely not be effective.

Nonsignificant Factors: Undergraduate GPA, Undergraduate Major, and Academic Warning
Admittance to these graduate programs is contingent, in part, on applicants' ability to satisfy admissions criteria, including a minimum required GPA. Thus, students are expected to enter the graduate programs with the requisite academic skills needed to succeed, which may account for the lack of relationships between undergraduate GPA and graduation rates or final program GPA. The majority of students in this sample reported an undergraduate major of psychology; most of the remaining students had majored in other social sciences during their undergraduate training. This led to unequal-sized groups that could not be compared; however, the results may suggest that an undergraduate focus on psychology or a related social science could provide sufficient preparation for successful completion of an online graduate psychology program.
Due to the significance of academic probation status, the fact that academic warning was not a significant predictor was surprising. Academic warning is the precursor to academic probation. However, a student who is placed on academic warning can easily be taken off warning if the student passes the course with a high grade the next term. Often, students placed on academic warning experience a onetime disruption during a semester, which adversely affects their grades (e.g., a sudden, temporary illness for oneself or a family member or a temporary, unexpected increase in work hours). Some students may be able to bounce back from these temporary setbacks and be removed from academic warning in the subsequent term. The issue seems to stem more from chronic setbacks and when students are placed on academic probation, per these results.

Limitations
There are several limitations to this study, including sample-size issues stemming from content differences between the two graduate programs, differences in prior graduate school experience, and students who withdrew early from the programs, as well as limitations stemming from the archival and longitudinal nature of this study. Issues related to unequal groups affected the dataset and subsequent analyses. On a yearly basis, more students enroll in the MAP program than in the MAIO program. The MAP program has a broader, more general focus on multiple domains within the psychology field. As a result, completion of the degree is applicable to a wider range of professional paths, which, in turn, may appeal to a greater number of students. The MAIO program's narrower focus, applicable to a more specific set of professions, may explain why its enrollment figures tend to be lower than those of the MAP program.
A minority of participants included in this study reported prior graduate school experience. As part of the application to the MAP and MAIO graduate programs, students are asked only to indicate if they have ever attended graduate school in the past. However, knowing whether students completed those prior programs, or the reasons why they left those institutions, may yield important information about how those prior experiences may influence students' learning in and completion of their next graduate program.
Approximately 29% of students originally included in this study ultimately withdrew from the institution. Unfortunately, the student information system and related record-keeping efforts did not indicate the reasons for such changes in enrollment. Having this information could provide meaningful insight into the differences between those who persist in their programs, those who voluntarily stop matriculating, and those who are academically or otherwise dismissed from the program.
For multiple reasons, the researchers elected to use an archival and quantitative approach to this study rather than actively collect data while students matriculated. Of course, an archival approach has limitations, including a lack of control about what data and what type of data is collected, how it is collected, whether any data is missing, and what, if anything, can be done to address these issues. A quantitative approach was chosen for ease of data analysis and interpretation. Working from a program evaluation perspective, however, the exploratory nature of this study fit well with an archival approach.
This allowed for an exploration of a general question concerning online graduate student success factors, permitted a longitudinal analysis of students' matriculation during the entire academic program (i.e., over the course of two years), and allowed the researchers to draw from data the institution had already collected. Finally, while the results from this study are compelling, this study analyzed factors associated only with student matriculation and enrollment. Additional factors, such as individual differences, teaching style, and reasons for withdrawal, should be addressed in future research using both quantitative and qualitative research methods.

Directions for Future Research
Existing research concerning any facet of distance education and learning tends not to involve graduate students, or graduate psychology students in particular. The subject matter of psychology courses tends to rely on hands-on demonstration and practice to teach certain counseling and therapeutic techniques, among other areas. Thus, much more research is needed to understand how this group learns, what facilitates that learning and subsequent successful degree completion, and how faculty and administrators can best support these students. Additional research into the reasons why students fail courses can also help to identify specific interventions institutions can utilize to assist students who are at risk.
As stated earlier, placement on academic probation significantly predicted a greater likelihood of not completing the program and a lower cumulative GPA in both online graduate programs. Interestingly, academic warning, the precursor to probation, was not a significant predictor. As discussed, for students who typically maintain a sufficient-to-high cumulative program GPA, failing a single course may not lower the GPA enough to cross the threshold into academic warning. However, important differences may exist between students who fail one class, are placed on academic warning, and quickly return to satisfactory academic progress and those who do not, as well as between students who are subsequently placed on academic probation and quickly recover and those who do not.
Additionally, whenever online graduate students fail a course, it may be helpful to proactively engage them in discussions about the circumstances that influenced their participation in the course. Given that many graduate distance learners are older and likely to have a more complex set of life responsibilities, such as work and raising a family, school may be the first responsibility to fall by the wayside. Proactively informing prospective online graduate students, in advance of starting their programs, about typical weekly time demands, the pace of matriculation, research and writing projects, scheduled breaks in the academic calendar, and so forth may better prepare students in general, as well as help them schedule their time in a way that allows them to anticipate, balance, and fulfill their life roles and responsibilities. In addition, ensuring that prospective students have the necessary computer skills to succeed in an online learning environment may also be an important consideration, as research suggests that if students must develop the computer skills needed to navigate course content, their ability to learn and master that content may suffer as a result (Welsh, 2007).
As part of general programmatic record keeping, it would be helpful if online graduate students were queried about their reasons for taking an LOA. Doing so could inform program administrators and support staff about the nature of the circumstances influencing a student's decision, whether those circumstances are anticipated to be time limited or longer term, whether the student merely needs a break from classes, or whether those circumstances may impact the learning process over time. Depending upon the unique context within which the student is trying to live and learn, this and an assessment of readiness to return to school could inform the specific academic support and advisement the student receives. This information also may yield important data about the types of stressors students tend to manage well enough while attending graduate school as compared to those that may signal the need to place graduate study on hold temporarily. Moreover, there may be significant differences between those students who take one LOA during their matriculation versus those who take more than one.
This study sought to explore potential predictors of student success in online graduate psychology programs. Archival data from admissions criteria, matriculation variables, and course performance variables were included as possible predictors of students' final program GPA and probability of graduation. In general, this research demonstrates a need to identify and intervene with at-risk students (e.g., students who are failing courses, students who are placed on academic probation, and students who take LOAs). Future research using both quantitative and qualitative research designs should examine the reasons students fail courses and are subsequently classified as at risk in order to design potential interventions to support these students.