Comer, Denise K., and Edward M. White. “Adventuring into MOOC Writing Assessment: Challenges, Results, and Possibilities.” College Composition and Communication 67.3 (2016): 318-59. Print.
Denise K. Comer and Edward M. White explore assessment in the “first-ever first-year-writing MOOC,” English Composition I: Achieving Expertise, developed under the auspices of the Bill & Melinda Gates Foundation, Duke University, and Coursera (320). Working with “a team of more than twenty people” with expertise in many areas of literacy and online education, Comer taught the course (321), which enrolled more than 82,000 students, 1,289 of whom received a Statement of Accomplishment indicating a grade of 70% or higher. Nearly 80% of the students “lived outside the United States,” and, for a majority, English was not their first language, although 59% of these reported being “proficient or fluent in written English” (320). Sixty-six percent held bachelor’s or master’s degrees.
White designed and conducted the assessment, which addressed concerns about MOOCs as educational options. The authors recognize MOOCs as “antithetical” to many accepted principles in writing theory and pedagogy, such as the importance of interpersonal instructor/student interaction (319), the imperative to meet the needs of a “local context” (Brian Huot, qtd. in Comer and White 325), and a foundation in disciplinary principles (325). Yet the authors contend that because “MOOCs are persisting,” refusing to address their implications will undermine the ability of writing studies specialists to influence practices such as Automated Essay Scoring, which has already been attempted in four MOOCs (319). Designing a valid assessment, the authors state, will allow composition scholars to determine how MOOCs affect pedagogy and learning (320) and, from those findings, to understand more fully what MOOCs can accomplish across diverse populations and settings (321).
Comer and White stress that assessment processes extant in traditional composition contexts can contribute to a “hybrid form” applicable to the characteristics of a MOOC, such as the “scale” of the project and the “wide heterogeneity of learners” (324). Models for assessment in traditional environments as well as in online contexts had to be combined with new approaches that addressed the “lack of direct teacher feedback and evaluation and limited accountability for peer feedback” (324).
For Comer and White, this hybrid approach must accommodate the degree to which the course combined the features of an “xMOOC” governed by a traditional academic course design with those of a “cMOOC,” in which learning occurs across “network[s]” through “connections” largely of the learners’ creation (322-23).
Learning objectives and assignments mirrored those familiar to compositionists, such as the ability to “[a]rgue and support a position” and “[i]dentify and use the stages of the writing process” (323). Students completed four major projects, the first three incorporating drafting, feedback, and revision (324). Instructional videos and optional workshops in Google Hangouts supported assignments such as discussion forum participation, informal contributions, self-reflection, and peer feedback (323).
The assessment itself, designed to shed light on how best to assess such contexts, consisted of “peer feedback and evaluation,” “Self-reflection,” three surveys, and “Intensive Portfolio Rating” (325-26).
The course supported both formative and evaluative peer feedback through “highly structured rubrics” and extensive modeling (326). Students who had submitted drafts each received responses from three other students, and those who submitted final drafts received evaluations from four peers on a 1-6 scale (327). The authors argue that despite the level of support peer review requires, it is preferable to more expert-driven or automated responses because they believe that
what student writers need and desire above all else is a respectful reader who will attend to their writing with care and respond to it with understanding of its aims. (327)
They find that the formative review, although taken seriously by many students, was “uneven,” with students varying in their appreciation of the process (327-29). The authors interpret the evaluative peer review, meanwhile, as indicating that “student writing overall was successful” (330). Peer grades closely matched those of the expert graders and, while marginally higher, were not inappropriately high (330).
The MOOC provided many opportunities for self-reflection, which the authors identify as “one of the richest growth areas” (332). They offer examples of student responses to these opportunities as evidence of committed engagement with the course; a strong desire for improvement; an appreciation of the value of both receiving and giving feedback; and awareness of opportunities for growth (332-35). More than 1,400 students turned in “final reflective essays” (335).
Self-efficacy measures revealed that students exhibited an unexpectedly high level of confidence in many areas, such as “their abilities to draft, revise, edit, read critically, and summarize” (337). Students’ somewhat lower confidence in their ability to give and receive feedback persuades the authors that a MOOC emphasizing peer interaction served as an “occasion to hone these skills” (337); indeed, the greatest gain occurred in this domain.
Nine “professional writing instructors” (339) assessed portfolios from the 247 students who had both completed the course and opted into the institutional review board (IRB) research component (340). This assessment confirmed that while students might not be able to “rely consistently” on formative peer review, peer evaluation could effectively supplement expert grading (344).
Comer and White stress the importance of further research in a range of areas, including how best to support effective peer response; how ESL writers interact with MOOCs; what kinds of people choose MOOCs and why; and how MOOCs might function in WAC/WID situations (344-45).
The authors caution against “extreme concluding statements” about the effectiveness of MOOCs based on findings such as theirs (346). Their study suggests that different learners valued the experience differently; those who found it useful did so for varied reasons. Repeating that writing studies must take responsibility for assessment in such contexts, they emphasize that “MOOCs cannot and should not replace face-to-face instruction” (346; emphasis original). However, they contend that even enrollees who interacted only briefly with the MOOC left with an exposure to writing practices they would not otherwise have gained, and that those who completed the MOOC satisfactorily outnumber the students Comer would reach in 53 years of teaching her regular first-year courses (346).
In designing assessments, the authors urge, compositionists should resist the impulse to focus solely on the “Big Data” produced by assessments at such scales (347-48). Such a focus can obscure the importance of individual learners who, they note, “bring their own priorities, objectives, and interests to the writing MOOC” (348). They advocate making assessment an activity for the learners as much as possible through self-reflection and through peer interaction, which, when effectively supported, “is almost as useful to students as expert response and is crucial to student learning” (349). Ultimately, while the MOOC did not succeed universally, it offered many students valuable writing experiences (346).