A Pandemic of Busywork: Increased Online Coursework Following the Transition to Remote Instruction is Associated with Reduced Academic Achievement

Under normal circumstances, when students invest more effort in their schoolwork, they generally show evidence of improved academic achievement. But when universities abruptly transitioned to remote instruction in Spring 2020, instructors assigned rapidly-prepared online learning activities, disrupting the normal relationship between effort and outcomes. In this study, we examine this relationship using data observed from a large-scale survey of undergraduate students, from logs of student activity in the online learning management system, and from students’ estimated cumulative performance in their courses (n = 4,636). We find that there was a general increase in the number of assignments that students were expected to complete following the transition to remote instruction, and that students who spent more time and reported more effort carrying out this coursework generally had lower course performance and reported feeling less successful. We infer that instructors, under pressure to rapidly put their course materials online, modified their courses to include online busywork that did not constitute meaningful learning activities, which had a detrimental effect on student outcomes at scale. These findings are discussed in contrast with other situations when increased engagement does not necessarily lead to improved learning outcomes, and in comparison with the broader relationship between effort and academic achievement.


A Pandemic of Busywork: Increased Online Coursework Following the Transition to Remote Instruction is Associated with Reduced Academic Achievement
In March 2020, the COVID-19 pandemic forced colleges to suspend in-person instruction and rapidly transition to remote instruction (Mervosh & Swales, 2020). In doing so, instructors were asked to immediately replace their planned classroom activities with substitute activities that students could carry out remotely, while aiming to preserve the integrity of their courses and learning outcomes. Lessons that would have normally taken place in lectures, laboratories, discussions, or studios were necessarily replaced by online assignments, in many cases by instructors with scant experience teaching online (Carey, 2020). Students underwent a corresponding contortion, as classroom and other forms of in-person learning transitioned fully toward effort invested in online homework. In this study, we examine the effects of higher education's rapid and unprecedented shift to remote instruction on college-student effort and academic achievement.
Considering that remote instruction, by definition, involves learning activities performed outside the classroom, we see relevance to long-standing research on the efficacy of homework (Trautwein & Köller, 2003). Large-scale research studies have historically observed that the amount of time spent on schoolwork carried out at home is positively correlated with student achievement (Cooper, 1989; Keith, 1988), and that this correlation becomes stronger with higher academic levels. If this is the case, the collegiate pivot toward online homework during remote instruction might have the potential to effectively facilitate student progress and academic continuity when classrooms were closed. However, if there is a causal relationship between these variables, the direction of the causal influence is not so clear. High-achieving students may elect to spend more time on their homework, or may simply elect to take courses with workloads that demand more time. More recent research suggests that the introduction of homework can result in achievement gains, and completing this homework is beneficial for student learning, but when controlling for the amount of homework assigned, increasing the amount of time spent on this homework may be less consequential (Trautwein, 2007).
At the college level, largely enabled by clickstream data passively recorded in large-scale educational technology platforms, it is possible to take a more holistic view of student effort and behavior within online learning environments. Rather than narrowly measuring the self-reported time invested in specific tasks or assignments, these newer measures (often associated with the field of learning analytics; Lang et al., 2017) examine patterns of activity and cumulative time spent within an online learning management system (LMS). Such measures are commonly viewed through the broader framework of self-regulated learning, such that the quality and extent of activity can provide a composite measure of a student's motivation, engagement, time management, learning strategies, study behavior, and more (Roll & Winne, 2015; Winne, 2017). In general, studies find that these objective measures of a student's LMS activity are positively associated with engagement and achievement (Cerezo et al., 2016; Conijn et al., 2017; Joksimović et al., 2015; You, 2016; Yu & Jo, 2014), even when controlling for the amount of work assigned within a course (Motz et al., 2019).
Given these patterns, college instructors, facing the reality of transitioning all their course materials online in a short period of time, would have been well-justified in prioritizing the development of activities to help their students stay engaged during the period of remote instruction. Moreover, research on past disruptions to college teaching suggests that maintaining student engagement in their studies is a critical component of academic continuity (Day, 2015;SchWeber, 2013).
However, engagement for the sake of engagement is not a constructive aim. The benefit of student engagement for student achievement typically requires that the student's effort is invested in meaningful activities that are clearly connected with to-be-assessed learning outcomes. Learning activities are perceived as meaningful in relationship to specific learning goals, and if an activity is distal from a learning goal, the benefits of effort and engagement are muddied. For example, the benefit of a homework assignment depends on the quality of the assignment in relationship to its intended learning objectives (Dettmers et al., 2010;Rosário et al., 2018;Sun et al., 2019). If instructors prepared online learning activities in a hurried manner aimed at keeping students loosely engaged, but perhaps lacked the expertise in online course design to help students understand how effort on these activities would advance their learning, there is reason to believe that the conventional relationships between effort, engagement, and achievement would be disrupted.
The term "busywork" is used sparingly in the education research literature, but it loosely refers to activities that are perceived to occupy time but not promote learning (Malikow, 2007). Students might perceive an activity as busywork if a connection between effort invested in the activity and one's learning goals is unclear, even if the activity itself is intellectually stimulating (White, 2009). In this way, the term busywork should not be interpreted as a judgment of the actual or potential value of an activity, but rather as a label indicating students' perception of an activity's value for academic progress.
For the current study, we explore these relationships using data collected from a large-scale survey distributed to a population of college-level learners across nine college campuses. To corroborate students' self-reported effort and achievement, we join these survey responses to institutional records of student activity and performance within their LMS course sites. By analyzing these existing data, our goal is to better understand the association between student effort during the period of remote instruction and student achievement during that same time period.

Methods
The current study was exploratory, and data were not initially collected to confirm or test a specific hypothesis. The specific survey items and student behaviors under analysis, described below, were identified during preliminary review of broad trends in student responses to closed- and open-ended survey questions. De-identified raw data from this study, as well as analysis scripts for producing all summaries, graphics, and statistics presented in this manuscript, are publicly available at https://osf.io/qwsrk/.

Participants and Procedure
Shortly after the end of the Spring 2020 semester, we deployed an online survey (via Qualtrics; Provo, UT) to all students over 18 years of age who were enrolled in a credit-bearing undergraduate course at any Indiana University (IU) campus, who had not registered Family Educational Rights and Privacy Act (FERPA) restrictions on their student directory listing, and who were not dual credit students (i.e., high school students taking college coursework). Invitations were distributed to the full census of all eligible students via email to students' official university email addresses, and nonrespondents were sent up to four email reminders during a four-week deployment.
In total, 66,826 eligible students were invited to participate, of whom 6,156 provided partial or complete responses, for an overall response rate of 9.2%. Due to legal and compliance restrictions on research data collected overseas, students responding from nondomestic IP addresses were filtered before entering the survey, and respondents with domestic IP addresses who indicated they were not currently residing in the United States were excluded (n = 8). Additionally, respondents were invited to release their records from the learning management system (Canvas; Instructure, Salt Lake City, UT) for analysis in combination with their survey response; students who did not provide this release were also excluded from the current study (n = 1,465; 23.8%). Finally, students whose course enrollments had no associated Canvas sites were also excluded (n = 47; often internship, research, independent-study, or work-study enrollments), leaving a final sample of 4,636 unique students, with representation from all nine of Indiana University's campuses. These participants were 70.9% female, 27.5% male, and 1.6% other or prefer not to answer, and were 13.8% freshman, 22.8% sophomore, 23.6% junior, 37.2% senior, and 0.4% other or non-degree-seeking. In summary, the current sample is further along in their academic progress, more likely to be female, and less likely to be international than the general university population. The total time taken to respond to the full survey was an average of 12.7 minutes.

Survey Instrument
The full survey questionnaire was wide ranging, and included topics and themes beyond the scope of the current study. Survey topics were identified from scholarly literature on higher education and from consultation with diverse constituencies of faculty leadership, students, and instructional design experts. Survey items were written specifically with the goal of understanding how the transition to remote instruction due to COVID-19 affected college students along a variety of dimensions, and a full analysis of these results is beyond the scope of any single research article. Survey items and flow were developed in consultation with the Indiana University Center for Survey Research, and the questionnaire was field tested with 20 students prior to deployment. The full survey instrument is available at https://osf.io/n6gqm/ and de-identified responses to all closed-ended survey items (including those not included in the current study) are available at https://osf.io/zdxge/.
For the current study, we focus on three specific survey questions (with their associated question id numbers): It took more effort to complete my coursework (id 37); I felt I was successful as a college student (id 30); and I earned lower grades than I expected (id 36). Each of these questions was preceded by the heading, "After courses transitioned to remote instruction…" Responses were on a five-point scale of agreement (strongly disagree, disagree, neither agree nor disagree, agree, strongly agree). These appeared on questionnaire pages 3 and 4. Respondents were not required to answer survey items to proceed, and the actual number of responses to each question ranges from 4,323 to 4,371. Missing responses are ignored in subsequent analyses.

Learning Management System (LMS) Records
All respondents who are included in the current study provided a digital signature at the start of the survey, giving permission for members of the study team to access specific elements of their private student information contained in the Canvas learning management system (LMS) for the purposes of this study. This release included their enrollment status in Canvas course sites, the times when they accessed content in Canvas, and additional details of assignments, submissions, and grades within Canvas. We queried these students' Canvas records from the Spring 2020 semester using a data platform provided by the Unizin Consortium (Unizin Consortium, 2020). From these records, we computed four measures: navigation time, number of events, number of assignments, and estimated course score.
• Navigation time was computed as the cumulative time, in minutes, between web browser navigation events in the LMS, excluding any measured durations between navigation events greater than or equal to 25 minutes (which were likely to be periods of inactivity), and excluding durations between web browser sessions (Baker et al., 2020).
• Number of events was the cumulative number of actions performed within the LMS, including navigation events, submitting assignments, uploading files, etc.
• Number of assignments was the cumulative number of assignments (including assigned quizzes, discussions, etc.) within all of a student's enrolled Canvas sites that had a deadline during the Spring 2020 semester, that were graded, and that contributed non-zero points toward a cumulative course score.
• Estimated course score is the cumulative percent score a student has earned at the end of the course, estimated from all grades entered into the Canvas gradebook, combined according to the grading rules and weighting schemes implemented by the instructor. This is not the official grade recorded for a student enrollment, but it is highly predictive of official grades and has the advantage of being more granular (as a numeric percentage) than ordinal letter grades. Official grades also have the disadvantage of being confounded by 'Pass' grades made broadly available specifically during this semester.
The first three measures were calculated separately before and after IU's official announcement of the transition to remote instruction (March 10, 2020, which, coincidentally, was precisely the midpoint of the Spring 2020 academic term). These are summed across course enrollments to produce cumulative measures at the student level. The fourth measure, estimated course score, was extracted at the enrollment level and averaged across course enrollments to produce a student-level average performance score. The queries for producing these measures are publicly available (Motz & Quick, 2020). While these measures all have some skew, the results reported below are unchanged if the skew is eliminated, so we have chosen to conduct all analyses with these measures at their original scale for improved interpretability.
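To make the navigation-time measure concrete, the session-gap rule described above can be sketched in a few lines of code. The actual measures were produced by SQL queries against the Unizin data platform (Motz & Quick, 2020); the following Python function is only an illustrative sketch of the gap-exclusion heuristic, assuming a sorted list of event timestamps expressed in minutes.

```python
def navigation_time(event_times_min, gap_cutoff=25):
    """Sum the gaps between successive navigation events, dropping any
    gap of gap_cutoff minutes or more (treated as likely inactivity)."""
    total = 0.0
    for prev, curr in zip(event_times_min, event_times_min[1:]):
        gap = curr - prev
        if gap < gap_cutoff:
            total += gap
    return total

# Events at minutes 0, 5, 12, 60, and 63: the 48-minute gap is
# excluded as inactivity, leaving 5 + 7 + 3 = 15 minutes.
print(navigation_time([0, 5, 12, 60, 63]))  # 15.0
```

Note that, per the definition above, a gap of exactly 25 minutes is excluded, and single-event sessions contribute zero navigation time.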

Data Analysis
For the current study, we use Bayesian estimation analyses to examine the statistical credibility of effects in the observed data. There are many advantages to Bayesian estimation over traditional (frequentist) statistics (Kruschke & Liddell, 2018a, 2018b), but in particular, these methods provide an informative, easily interpretable estimate of the uncertainty in a given analytical parameter, which is referred to as the posterior distribution. With frequentist statistics, one would compute the relative improbability of measuring the observed data (or more extreme values) under a hypothetical null model. With Bayesian estimation, however, one can estimate a direct statistical description of patterns within the observed data. In practice, such estimates are typically quantified as the width of the posterior distribution, and this is often measured as the 95% highest density interval (HDI; analogous to but distinct from a frequentist confidence interval). For example, while frequentist statistics would determine the p-value (the false positive rate under the null model) associated with a t-statistic and degrees of freedom, calculated from a difference of 9.4 as t(8866) = 10.2, p < 0.01, with Bayesian estimation methods, one would instead measure the uncertainty of an estimated difference of 9.4 with a corresponding 95% HDI: 7.6 to 11.2. If the 95% HDI of a difference estimate excludes zero (and also excludes values that are close to zero), one can infer that the difference is credibly different from zero. Throughout the remainder of the article, all differences are reported with the lower and upper bounds of their corresponding 95% HDI estimates.
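As an illustration of the HDI concept, the interval can be found by scanning for the narrowest window that contains 95% of the posterior samples (which is what distinguishes it from a simple equal-tailed percentile interval). This is not the computation used in our analysis scripts, which rely on established R packages; the following Python sketch is only a minimal illustration, appropriate for unimodal posteriors.

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Highest density interval: the narrowest interval that contains
    `mass` proportion of the (unimodal) posterior samples."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(mass * n))           # samples in each candidate interval
    widths = x[k - 1:] - x[: n - k + 1]  # width of every candidate interval
    i = int(np.argmin(widths))           # index of the narrowest candidate
    return float(x[i]), float(x[i + k - 1])
```

For a skewed posterior, this interval shifts toward the mode, whereas an equal-tailed percentile interval would not.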
Most statistical analyses in this study are carried out under the framework of a Bayesian generalized linear model, using the brms package for R (Bürkner, 2017). As is the default in brms, model parameters were estimated using the No-U-Turn Sampler (NUTS; Hoffman & Gelman, 2014), using 4,000 steps across four chains, following 2,000 warm-up iterations in each chain. Model convergence was evaluated by visual inspection of the chains, and by the potential scale reduction factor, R̂ (Gelman & Rubin, 1992). R̂ was less than 1.01 for each model coefficient (values of 1.00 reflect ideal model convergence).
For some other analyses in this study, the outcome variables were responses to survey items, measured on a five-point Likert scale of agreement. Rather than treat ordered responses as metric values, we carried out these analyses using the ordered-probit model (Liddell & Kruschke, 2018). The posterior distribution was estimated using Markov chain Monte Carlo (MCMC), using Gibbs sampling in JAGS (Plummer, 2003), with the runjags package (Denwood, 2016) for R. The effective sample sizes for the parameters of interest were at least 50,000, well above the recommended 10,000 (Kruschke, 2014).
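The ordered-probit model treats each Likert response as arising from a latent normal variable partitioned by ordered cutpoints. The JAGS model specifications are in the publicly available analysis scripts; as an illustration of the likelihood only, the following Python sketch computes the response-category probabilities implied by a latent mean, standard deviation, and cutpoints (all numeric values in the usage example are hypothetical).

```python
from math import erf, sqrt, inf

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordered_probit_probs(mu, sigma, cutpoints):
    """P(response = k) for K ordered categories, given a latent normal
    with mean mu and sd sigma, partitioned by K-1 ordered cutpoints."""
    edges = [-inf] + list(cutpoints) + [inf]
    cdf = [0.0 if e == -inf else (1.0 if e == inf else norm_cdf((e - mu) / sigma))
           for e in edges]
    return [cdf[k + 1] - cdf[k] for k in range(len(edges) - 1)]

# Hypothetical 5-point agreement item: latent mean 0.4, sd 1.0,
# with four cutpoints separating the five response categories.
probs = ordered_probit_probs(0.4, 1.0, [-1.5, -0.5, 0.5, 1.5])
```

Shifting the latent mean upward shifts probability mass toward "strongly agree," which is how the model links group differences to ordinal response patterns without treating the responses as metric.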
An analysis script, written in R, that carries out all analyses on provided de-identified data is publicly available at https://osf.io/qwsrk/.

Results
As expected, when courses transitioned to remote instruction due to COVID-19, there was an increase in the cumulative number of assignments students were expected to complete in the learning management system. Prior to the transition, the average student had 58.6 graded assignments with due dates (SD = 40.2); after the transition, the average student had 68.0 such assignments (SD = 47.4) over equivalent durations of time (eight weeks in both; finals week is excluded). This increase in the number of assignments across the full sample (average 9.4 more assignments over the second half of the semester) is credibly non-zero (95% HDI: 7.6 to 11.2).
From students' perspectives, the large majority of respondents (72.5%) marked Agree or Strongly agree to the statement "It took more effort to complete my coursework" after courses transitioned to remote instruction. Compared with students who did not agree that their coursework took more effort, students who agreed had correspondingly larger increases in the number of assignments they were expected to complete (mean = 4.2 more assignments; 95% HDI: 2.1 to 6.3), providing evidence that their self-reported increase in effort was at least partially associated with the quantity of coursework.

Figure 1
Self-reported Effort and Objective Measures of Coursework and Student Activity.
Note. Error bars show +/- 1 standard error. Pre-transition is the 8-week period prior to the announcement of the institution's immediate transition to remote instruction. Post-transition is the 8-week period during remote instruction (which does not include finals week).
Moreover, students who agreed that it took more effort to complete their coursework following the transition to remote instruction also showed evidence of larger increases in their amount of activity in the LMS. While the full sample spent more time in Canvas during the second half of the semester (mean pre-transition = 1,035 minutes, post-transition = 1,159 minutes) and had more events in Canvas (mean pre-transition = 1,257 events, post-transition = 1,374), these average increases were credibly larger for students who agreed that it took more effort to complete their coursework: 88.7 additional minutes pre-to-post-transition (95% HDI: 51.4 to 126.0), and 66.6 additional events pre-to-post-transition (95% HDI: 27.8 to 105.7). These increases are shown in Figure 1.
Overall, students had more online assignments and invested more time on their online coursework after the transition to remote instruction. By dividing the amount of navigation time spent in the LMS by the number of assignments in the LMS, we can produce a normalized time-per-assignment measure. Averaging across all participants, the average time per assignment was 25.4 minutes pre-transition, and 22.6 minutes post-transition, a credible overall decrease (mean = -2.46; 95% HDI: -4.4 to -0.6). So even while students spent more time in the LMS after the transition to remote instruction, they generally spent less time on each assignment. However, this post-transition decrease in time-per-assignment was not reliably different for students who reported increased effort in their coursework (mean = -0.6; 95% HDI: -2.8 to 1.7). Students who reported increased effort post-transition thus invested roughly proportionate amounts of time on a per-assignment basis as their peers who did not report increased effort. It was not the case, then, that students who reported more effort had more demanding assignments after the transition to remote instruction; instead, it would appear that students who reported more effort simply had more work to complete than their peers.
Students' average estimated cumulative percent score (averaging across Canvas course sites) was 77.7% (SD = 17.1). When responding to the statement "I earned lower grades than I expected," 39.5% of students Agreed or Strongly agreed, and 44.6% Agreed or Strongly agreed with the statement "I felt I was successful as a college student." Students who self-reported that they invested more effort also reported lower levels of agreement that they were successful as a college student (mean difference in agreement = -0.93; 95% HDI: -1.0 to -0.8), and higher levels of agreement that they earned lower grades than they expected (mean difference in agreement = 1.28; 95% HDI: 1.2 to 1.4). See Figure 2.

Figure 2
Self-reported Effort and Self-reported Academic Achievement.
Note. SD = Strongly disagree, D = Disagree, N = Neither agree nor disagree, A = Agree, SA = Strongly agree. Numbers (and corresponding shading) are cell frequencies.
The self-reported negative correlation between effort and achievement is corroborated by estimated grades within Canvas. Students who agreed that they invested more effort earned lower scores (77.0%) than their peers who did not report more effort (80.3%), and this -3.3% difference is also credibly non-zero (95% HDI: -4.0 to -2.5).

Discussion
When college courses transitioned to remote instruction due to the COVID-19 pandemic, student workload within the online learning management system generally increased (16% increase in the number of assignments over an equivalent period of time). This increase is expected and unsurprising, considering that the LMS became the de facto medium for student learning and course administration when face-to-face instruction was suspended. For some students, this increased volume of online assignments may have corresponded to a net decrease in their amount of coursework, if, for example, these online assignments replaced a larger number or more difficult set of in-person learning activities and assessments.
But for the substantial majority of students (72%), the increased online workload corresponded to a net increase in the amount of coursework, as indicated by their self-reported increase in effort following the transition to remote instruction. These students' claims of increased effort were validated, in comparison with their peers who did not report increased effort, using objective measures of coursework and activity. We found that students who agreed that they invested more effort in their coursework had more assignments, spent more time, and performed more actions in the LMS than their peers.
Normally, one might expect that an increased number of learning activities and increased effort on these learning activities would correspond with improved academic outcomes, as is typically the case in learning analytics studies (Cerezo et al., 2016; Conijn et al., 2017; Joksimović et al., 2015; Motz et al., 2019; You, 2016; Yu & Jo, 2014). However, we observed precisely the opposite. Students who invested more effort in their coursework reported that they felt less successful and that they earned lower grades than expected. We corroborated these self-report measures with students' performance scores from the LMS, finding that students who invested more effort after the transition to remote instruction received measurably lower grades than their peers who did not report increased effort.
Why was added effort associated with lower outcomes? Students who reported more effort were not investing more effort on a per-assignment basis than their peers, which is suggestive that their assignments were not simply more difficult (which might have caused lower grades due to more challenging assessments). Rather, we infer that students who invested greater effort simply had a larger volume of comparable assignments to complete.
Increased effort can sometimes make students feel less successful, even when the task is beneficial for learning. Paradoxically, perceiving oneself as engaging in effort can make a person feel like they have lower competence (Nicholls, 1984; Tsay & Banaji, 2011) and can make a beneficial learning activity feel less efficacious (Kirk-Johnson et al., 2019), which might explain the negative correlation between self-reported effort and self-reported success following the transition to remote instruction. But if this were the whole story, we should have observed that students who were tasked to invest more effort would have learned more from the additional assignments, an effect generally referred to as "desirable difficulties" (Bjork, 1994; McDaniel & Butler, 2011), and would thus have been expected to receive higher performance scores. Instead, we observed lower grades among those students who reported increased effort.
The pattern of increased effort associated with worse outcomes is also partially consistent with "wheel-spinning" behavior sometimes observed in online tutoring systems (Beck & Gong, 2013; Fang et al., 2017). Wheel-spinning occurs when a student fails to achieve mastery of a learning goal despite investing increased effort and time in the learning activity, and oftentimes wheel-spinning ultimately leads to struggle and disengagement. But the current study found that students who were investing increased effort and time during the period of remote instruction were not perseverating on their assignments. Instead, they were investing comparable amounts of time on a per-assignment basis as their peers, and thus wheel-spinning provides an inadequate account for these patterns.
Ruling out these simple explanations brings us no closer to a specific, comprehensive account of the negative correlation between effort and achievement. Indeed, a single account is unlikely to exist. Considering the scope of disruption to education and daily life during the COVID-19 pandemic, we see no reason to presume a single distinct explanation for why added effort resulted in worse academic outcomes during the second half of the Spring 2020 semester. Instead, we advance three possible hypotheses, which are not mutually exclusive: (1) the online learning activities assigned during remote instruction were misaligned from the evaluative assessments that determined students' grades; (2) students were generally unprepared to manage an increased online workload within the circumstances of rapidly updated course requirements, which necessarily created more confusion and provided more opportunities to fall behind; and most obviously, (3) the difficulties of daily life during the pandemic were disproportionately burdensome for students who had larger workloads. These three explanations were repeatedly raised in informal conversations with students, and were also evident in informal analysis of responses to open-ended survey items.

Misalignment between learning activities and educational assessment.
When instructors transitioned their coursework online, they created learning activities rapidly, largely without expertise in the design of online courses (Carey, 2020), and thus without careful consideration of how these activities would help students become prepared for course assessments. The rapidity of these changes also likely led instructors to view online course tools in operational terms: the focus was on enabling students to access and complete materials, presumably in correspondence with their previous face-to-face designs. Insufficient consideration of how the technical and pedagogical aspects of their teaching intersected, before and after the transition, likely contributed to this misalignment. If students perceived that they were merely being kept busy while not advancing toward improved learning, their engagement in learning activities would have suffered, hence our use of the term "busywork" (Malikow, 2007; White, 2009). When students lack motivation to stay engaged in coursework, effort invested in learning activities can be negatively correlated with academic achievement (Tze et al., 2015). These issues present ongoing challenges and opportunities in the development of technology to support teaching and learning (Selwyn, 2020). It is therefore not surprising that a quick solution to the unprecedented transition to remote instruction was to implement more online assignments, and that increased engagement in these quick solutions had negative consequences.

More opportunities to fall behind.
A class assignment, particularly when it is graded with a due date, is not only a learning activity; it is also an evaluation. Adding more assignments like this not only provides incrementally more opportunities for engagement, but it also institutes more deadlines to remember, submissions to manage, feedback to interpret, and so on. In this study, the average student had 68 assignments to complete during the last 8 weeks of the spring semester, more than one graded online deadline every day; those who reported increased effort during remote instruction had even more. There is a practical limit to the amount of coursework that students can constructively manage. Whatever this limit may be, we might expect that when students approach it, they will be investing more effort but will be experiencing difficulty with time management, will miss deadlines, and will not be reflecting on feedback, each of which will clearly lower performance. Under better circumstances (before the pandemic), college students would have had more flexibility to manage their coursework, as university communities provided housing, food, computer workstations, high-speed internet, and study spaces. Without these amenities, and facing an increasing volume of assignments, college students would reasonably be expected to struggle.

Life getting in the way of increased coursework.
The proverbial elephant-in-the-room is the COVID-19 pandemic, which exerted oppressive effects on daily life far beyond the instructional format of college courses. Anecdotes in open-ended survey responses describe college students abruptly needing to provide full-time care for younger and/or older relatives, struggling to find work to supplement family income following loss of employment, being unable to access the Internet because utility companies stopped making in-home visits, and heart-wrenching concerns related to personal safety, physical and mental health, and housing and food security. Balancing challenges such as these against an increasingly demanding workload would have been untenable. While some students mustered additional effort during the period of remote instruction, this additional effort may have created additional challenges in daily life that interfered with performance on learning assessments.
Importantly, these findings should not be interpreted as an indictment against college instructors. On the contrary, in the face of the COVID-19 pandemic, teachers were faced with an absurd and unprecedented challenge: to immediately adapt all of their course materials to a new instructional medium. The quality of this instruction is not under evaluation, as teachers were not given the time or training that would normally be necessary for a major revision of course design into an online format (Stewart et al., 2010). On the other side of the coin, effective online learning is believed to require self-regulation skills, particularly time-management and self-efficacy (Broadbent & Poon, 2015), and students may have been similarly unprepared for the transition. This study's findings should also not be interpreted as an indictment against the university, which fittingly offered Pass grades in order to sustain students' academic progress despite the difficulties of remote instruction.
Moreover, both teachers and students had to endure more pressing challenges of the pandemic in their personal lives. In the grand scheme of things, this study views a massively catastrophic global event through the relatively narrow lens of college student learning. Our findings might be interpreted as retrospective historical observations of a specific aspect of the COVID-19 pandemic, and should not be interpreted as contributing directly to generalizable theory on online teaching or learning (Tobin, 2020). But with consideration of its context, we can still learn from history.
When instructional faculty were suddenly forced to explore the contemporary online learning toolkit, they produced a substantial volume of assignments that seemingly provided little value for student learning. In a sense, this finding aligns with broader trends in online education. Prior to COVID-19, the transition to online courses had been driven by a mix of technological availability, university initiatives, and market demand, with quality of education sometimes a "variable" rather than a fixed boundary condition (Ortagus & Tyler Derreth, 2020). Similarly, during remote instruction our contemporary educational technology toolkit provided plentiful functionality to keep our students busy, but evidently, perceived quality was in short supply. In the future, the design, development, and adoption of learning technologies might prioritize effectiveness over mere functionality, so that student engagement with this technology is less likely to be associated with reduced academic achievement.