College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Stewart, Mary K. Communities of Inquiry in Technology-Mediated Activities. C&C, Sept. 2017. Posted 10/20/2017.

Stewart, Mary K. “Communities of Inquiry: A Heuristic for Designing and Assessing Interactive Learning Activities in Technology-Mediated FYC.” Computers and Composition 45 (2017): 67-84. Web. 13 Oct. 2017.

Mary K. Stewart presents a case study of a student working with peers in an online writing class to illustrate the use of the Community of Inquiry framework (CoI) in designing effective activities for interactive learning.

Stewart notes that writing-studies scholars have both praised and questioned the promise of computer-mediated learning (67-68). She cites scholarship contending that effective learning can take place in many different environments, including online environments (68). This scholarship distinguishes between “media-rich” and “media-lean” contexts. Media-rich environments include face-to-face encounters and video chats, where exchanges are immediate and are likely to include “divergent” ideas, whereas media-lean situations, like asynchronous discussion forums and email, encourage more “reflection and in-depth thinking” (68). The goal of an activity can determine which is the better choice.

Examining a student’s experiences in three different online environments with different degrees of media-richness leads Stewart to argue that it is not the environment or particular tool that results in the success or failure of an activity as a learning experience. Rather, in her view, the salient factor is “activity design” (68). She maintains that the CoI framework provides “clear steps” that instructors can follow in planning effective activities (71).

Stewart defined her object of study as “interactive learning” (69) and used a “grounded theory” methodology to analyze data in a larger study of several different course types. Interviews of instructors and students, observations, and textual analysis led to a “core category” of “outcomes of interaction” (71). “Effective” activities led students to report “constructing new knowledge as a result of interacting with peers” (72). Her coding led her to identify “instructor participation” and “rapport” as central to successful outcomes; reviewing scholarship after establishing her own grounded theory, Stewart found that the CoI framework “mapped to [her] findings” (71-72).

She reports that the framework involves three components: social presence, teaching presence, and cognitive presence. Students develop social presence as they begin to “feel real to one another” (69). Stewart distinguishes between social presence “in support of student satisfaction,” which occurs when students “feel comfortable” and “enjoy working” together, and social presence “in support of student learning,” which follows when students actually value the different perspectives a group experience offers (76).

Teaching presence refers to the structure or design that is meant to facilitate learning. In an effective CoI activity, social and teaching presence are required to support cognitive presence, which is indicated by “knowledge construction,” specifically “knowledge that they would not have been able to construct without interacting with peers” (70).

For this article, Stewart focused on the experiences of a bilingual Environmental Studies major, Nirmala, in an asynchronous discussion forum (ADF), a co-authored Google document, and a synchronous video webinar (72). She argues that Nirmala’s experiences reflect those of other students in the larger study (72).

For the ADF, students were asked to respond to one of three questions on intellectual property, then respond to two other students who had addressed the other questions. The prompt specifically called for raising new questions or offering different perspectives (72). Both Nirmala and Stewart judged the activity as effective even though it occurred in a media-lean environment because in sharing varied perspectives on a topic that did not have a single solution, students produced material that they were then able to integrate into the assigned paper (73):

The process of reading and responding to forum posts prompted critical thinking about the topic, and Nirmala built upon and extended the ideas expressed in the forum in her essay. . . . [She] engaged in knowledge construction as a result of interacting with her peers, which is to say she engaged in “interactive learning” or a “successful community of inquiry.” (73)

Stewart notes that this successful activity did not involve the “back-and-forth conversation” instructors often hope to encourage (74).

The co-authored paper was deemed not successful. Stewart contends that the presence of more immediate interaction did not result in more social presence and did not support cognitive presence (74). The instructions required two students to “work together” on the paper; according to Nirmala’s report, co-authoring became a matter of combining and editing what the students had written independently (75). Stewart writes that the prompt did not establish the need for exploration of viewpoints before the writing activity (76). As a result, Nirmala felt she could complete the assignment without input from her peer (76).

Though Nirmala suggested that the assignment might have worked better had she and her partner met face-to-face, Stewart argues from the findings that the more media-rich environment in which the students were “co-present” did not increase social presence (75). She states that instructors may tend to think that simply being together will encourage students to interact successfully when what is actually needed is more attention to the activity design. Such design, she contends, must specifically clarify why sharing perspectives is valuable and must require such exploration and reflection in the instructions (76).

Similarly, the synchronous video webinar failed to create productive social or cognitive presence. Students placed in groups and instructed to compose group responses to four questions again responded individually, merely “check[ing]” each other’s answers. Nirmala reports that the students actually “Googled the answer and, like, copy pasted” (Nirmala, qtd. in Stewart 77). Stewart contends that the students concentrated on answering the questions, skipping discussion and sharing of viewpoints (77).

For Stewart, these results suggest that instructors should be aware that in technology-mediated environments, students take longer to become comfortable with each other, so activity design should build in opportunities for the students to form relationships (78). Also, prompts can encourage students to share personal experiences in the process of contributing individual perspectives. Specifically, according to Stewart, activities should introduce students to issues without easy solutions and focus on why sharing perspectives on such issues is important (78).

Stewart reiterates her claim that the particular technological environment or tool in use is less important than the design of activities that support social presence for learning. Even in media-rich environments, students placed together may not effectively interact unless given guidance in how to do so. Stewart finds the CoI framework useful because it guides instructors in creating activities, for example, by determining the “cognitive goals” in order to decide how best to use teaching presence to build appropriate social presence. The framework can also function as an assessment tool to document the outcomes of activities (79). She provides a step-by-step example of CoI in use to design an activity in an ADF (79-81).




Comer and White. MOOC Assessment. CCC, Feb. 2016. Posted 04/18/2016.

Comer, Denise K., and Edward M. White. “Adventuring into MOOC Writing Assessment: Challenges, Results, and Possibilities.” College Composition and Communication 67.3 (2016): 318-59. Print.

Denise K. Comer and Edward M. White explore assessment in the “first-ever first-year-writing MOOC,” English Composition I: Achieving Expertise, developed under the auspices of the Bill & Melinda Gates Foundation, Duke University, and Coursera (320). Working with “a team of more than twenty people” with expertise in many areas of literacy and online education, Comer taught the course (321), which enrolled more than 82,000 students, 1,289 of whom received a Statement of Accomplishment indicating a grade of 70% or higher. Nearly 80% of the students “lived outside the United States” and for a majority, English was not the first language, although 59% of these said they were “proficient or fluent in written English” (320). Sixty-six percent had bachelor’s or master’s degrees.

White designed and conducted the assessment, which addressed concerns about MOOCs as educational options. The authors recognize MOOCs as “antithetical” (319) to many accepted principles in writing theory and pedagogy, such as the importance of interpersonal instructor/student interaction (319), the imperative to meet the needs of a “local context” (Brian Huot, qtd. in Comer and White 325) and a foundation in disciplinary principles (325). Yet the authors contend that as “MOOCs are persisting,” refusing to address their implications will undermine the ability of writing studies specialists to influence practices such as Automated Essay Scoring, which has already been attempted in four MOOCs (319). Designing a valid assessment, the authors state, will allow composition scholars to determine how MOOCs affect pedagogy and learning (320) and from those findings to understand more fully what MOOCs can accomplish across diverse populations and settings (321).

Comer and White stress that assessment processes extant in traditional composition contexts can contribute to a “hybrid form” applicable to the characteristics of a MOOC (324) such as the “scale” of the project and the “wide heterogeneity of learners” (324). Models for assessment in traditional environments as well as online contexts had to be combined with new approaches that addressed the “lack of direct teacher feedback and evaluation and limited accountability for peer feedback” (324).

For Comer and White, this hybrid approach must accommodate the degree to which the course combined the features of an “xMOOC” governed by a traditional academic course design with those of a “cMOOC,” in which learning occurs across “network[s]” through “connections” largely of the learners’ creation (322-23).

Learning objectives and assignments mirrored those familiar to compositionists, such as the ability to “[a]rgue and support a position” and “[i]dentify and use the stages of the writing process” (323). Students completed four major projects, the first three incorporating drafting, feedback, and revision (324). Instructional videos and optional workshops in Google Hangouts supported assignments like discussion forum participation, informal contributions, self-reflection, and peer feedback (323).

The assessment itself, designed to shed light on how best to assess such contexts, consisted of “peer feedback and evaluation,” “Self-reflection,” three surveys, and “Intensive Portfolio Rating” (325-26).

The course supported both formative and evaluative peer feedback through “highly structured rubrics” and extensive modeling (326). Students who had submitted drafts each received responses from three other students, and those who submitted final drafts received evaluations from four peers on a 1-6 scale (327). The authors argue that despite the level of support peer review requires, it is preferable to more expert-driven or automated responses because they believe that

what student writers need and desire above all else is a respectful reader who will attend to their writing with care and respond to it with understanding of its aims. (327)

They found that the formative review, although taken seriously by many students, was “uneven,” and students varied in their appreciation of the process (327-29). Meanwhile, the authors interpret the evaluative peer review as indicating that “student writing overall was successful” (330). Peer grades closely matched those of the expert graders, and, while marginally higher, were not inappropriately high (330).

The MOOC provided many opportunities for self-reflection, which the authors denote as “one of the richest growth areas” (332). They provide examples of student responses to these opportunities as evidence of committed engagement with the course; a strong desire for improvement; an appreciation of the value of both receiving and giving feedback; and awareness of opportunities for growth (332-35). More than 1400 students turned in “final reflective essays” (335).

Self-efficacy measures revealed that students exhibited an unexpectedly high level of confidence in many areas, such as “their abilities to draft, revise, edit, read critically, and summarize” (337). Somewhat lower confidence levels in their ability to give and receive feedback persuade the authors that a MOOC emphasizing peer interaction served as an “occasion to hone these skills” (337). The greatest gain occurred in this domain.

Nine “professional writing instructors” (339) assessed portfolios for 247 students who had both completed the course and opted into the IRB component (340). This assessment confirmed that while students might not be able to “rely consistently” on formative peer review, peer evaluation could effectively supplement expert grading (344).

Comer and White stress the importance of further research in a range of areas, including how best to support effective peer response; how ESL writers interact with MOOCs; what kinds of people choose MOOCs and why; and how MOOCs might function in WAC/WID situations (344-45).

The authors stress the importance of avoiding “extreme concluding statements” about the effectiveness of MOOCs based on findings such as theirs (346). Their study suggests that different learners valued the experience differently; those who found it useful did so for varied reasons. Repeating that writing studies must take responsibility for assessment in such contexts, they emphasize that “MOOCs cannot and should not replace face-to-face instruction” (346; emphasis original). However, they contend that even enrollees who interacted briefly with the MOOC left with an exposure to writing practices they would not have gained otherwise and that the students who completed the MOOC satisfactorily amounted to more students than Comer would have reached in 53 years teaching her regular FY sessions (346).

In designing assessments, the authors urge, compositionists should resist the impulse to focus solely on the “Big Data” produced by assessments at such scales (347-48). Such a focus can obscure the importance of individual learners who, they note, “bring their own priorities, objectives, and interests to the writing MOOC” (348). They advocate making assessment an activity for the learners as much as possible through self-reflection and through peer interaction, which, when effectively supported, “is almost as useful to students as expert response and is crucial to student learning” (349). Ultimately, while the MOOC did not succeed universally, it offered many students valuable writing experiences (346).



Stornaiuolo and LeBlanc. Scalar Analysis in Literacy Studies. RTE, Feb. 2016. Posted 03/20/2016.

Stornaiuolo, Amy, and Robert Jean LeBlanc. “Scaling as a Literacy Activity: Mobility and Educational Inequality in an Age of Global Connectivity.” Research in the Teaching of English 50.3 (2016): 263-87. Print.

Amy Stornaiuolo and Robert Jean LeBlanc introduce the concepts of “scales” and “scalar analysis” as tools for examining how people locate themselves in a stratified global context. Scalar analysis moves beyond the dichotomy between “local” and “global,” shedding light on the ways in which locations are constantly in flux and in interaction with each other, often shifting as a result of strategic moves to respond to asymmetries and inequalities.

Stornaiuolo and LeBlanc applied their analysis to five teachers in four countries—India, Norway, South Africa, and the United States—who worked with adolescent students on “a Space2Cre8 (S2C8) project,” which was “oriented to helping young people in challenging circumstances engage in cross-cultural communication” (270).

The five teachers worked with the S2C8 groups once or twice a week during the two-year duration of the project; students engaged in various forms of media to communicate with each other about their lives and cultures. During the project, the teachers met ten times via Skype, communicated in emails, and produced memos and notes; additional data came from interviews and classroom observations (271-72, 284-85). The teachers came from varied disciplines, such as technology, art, history, and design (271). Much of the work took place in English, which was the “only shared language” across the sites (270).

According to Stornaiuolo and LeBlanc, scalar research is useful for literacy studies because of its power to examine how meaning gets created and how it shifts as it moves into and through different contexts (267). Understanding literacy through scales “compels several shifts in literacy research” (268). These shifts revolve around moving from a sense of literacy actions and artifacts as fixed in time and space to understanding them as products of “ongoing and often contentious labor” that evolve through “the active and strategic working/reworking of texts in unequal globalized contexts” (268; emphasis original). Scalar analysis asks scholars to examine “how people are positioned and position themselves and their literate identities in and through literary practice” (269).

Such a focus on the “mobilities” of meaning, Stornaiuolo and LeBlanc contend, is necessary to understand how inequalities are created and sustained, how meaning becomes more or less “understandable” (269) as it enters different scalar levels, and how people negotiate the hierarchical contexts that characterize globalization and in which they inevitably locate themselves. The authors were specifically interested in the ways that educators functioned in an environment understood through the lens of scales; the use of scales as a heuristic can both “explain how difference is turned into inequality” and show how movement within and across scales can enhance agency for individuals and groups addressing their own marginalization (266).

Stornaiuolo and LeBlanc delineate six scalar “jumps” or “moves.” “Jumps” include “upscaling,” which involves “invok[ing] a higher scale rationale to prevail over lower-scale orders of discourse” (272); for example, institutional factors might be named as a reason for a particular choice. Kgotso, working in South Africa, refers to exam schedules and a teachers’ strike “to justify how he had been using his time” (280). Via “downscaling,” an actor asserts his or her local circumstances to validate a choice (272). Stornaiuolo and LeBlanc recount how Amit, writing from India, focused attention on how local technology limitations affected his group’s participation in the project (275). “Anchoring” privileges the actor’s location in the “here-and-now” without necessarily invoking higher or lower scales. The authors cite teacher emails in the project, in which the teachers claimed authority in reference to “an issue at hand” (273).

Other moves do not necessarily involve jumps. “Aligning” occurs when actors compare scalar locations to strengthen positions. As an example of aligning, Stornaiuolo and LeBlanc present the efforts of Kgotso in South Africa to compare his concerns about the dominance of English with similar issues he saw as affecting Amit’s work in India (278). Kgotso further engaged in “contesting,” a scalar move in which he challenged the “US-centric imprint” of the project, suggesting that the curriculum be reconsidered to address the needs of the two “linguistically disadvantaged” sets of participants (qtd. in Stornaiuolo and LeBlanc 278). Stornaiuolo and LeBlanc provide an example of “embedding” in the way that Maja, in Norway, saw the project as “nested within a number of other entities” such as school and university commitments that affected her own use of time (281).

Such examples, for Stornaiuolo and LeBlanc, indicate the usefulness of scalar analysis to illuminate “gaps” that reinforce inequality. Differences in resources, such as adequate bandwidth, affected the ease with which the teachers were able to integrate the social-media exchanges the project hoped to foster (275). Another important gap was uneven access to English, the primary language of communication among students. For example, Maja in Norway saw the need to translate S2C8 contributions into English, and Kgotso contrasted his students’ use of Afrikaans with the need to “cross over” to an outside language when working in the project. Stornaiuolo and LeBlanc consider moves like Kgotso’s and similar ones by Amit to be examples of downscaling, asserting the validity of students’ local practices and needs (277, 281).

Gaps in availability of time also figured prominently in the findings. Materials presented by Stornaiuolo and LeBlanc suggest that the teachers regularly made scalar jumps and moves to position themselves in relation to the amount of time required by the project in comparison to the demands of their local situations and of the higher-order scales in which they found themselves embedded. The teachers, Stornaiuolo and LeBlanc suggest, saw S2C8 as such a higher-order scale, one in some sense “imposed” on their immediate missions and requiring strategic negotiation of the scalar landscape (281).

Although Stornaiuolo and LeBlanc acknowledge that their account of the S2C8 project echoes “familiar narratives” about the issues that arise when the promise of digital communication across space and time is actually put into practice, they argue that these narratives “mask” what scalar analysis can illuminate: “the ongoing labor of producing texts and contexts over multiple affiliations in time and space” (283). Especially visible, they indicate, are the ways that literacy productions are valued differently as they move through different scales. The authors contend that attention to scales provides a “concrete set of tools to highlight the constructed and contingent nature of all literacy practices” (283).



Bourelle et al. Multimodal in f2f vs. online classes. C&C, Mar. 2016. Posted 01/24/2016.

Bourelle, Andrew, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong. “Sites of Multimodal Literacy: Comparing Student Learning in Online and Face-to-Face Environments.” Computers and Composition 39 (2015): 55-70. Web. 14 Jan. 2016.

Andrew Bourelle, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong report on a “small pilot study” at the University of New Mexico that compares how “multimodal literacies” are taught in online and face-to-face (f2f) composition classes (55-56). Rather than arguing for the superiority of a particular environment, the writers hope to “understand the differences” and “generate a conversation regarding what instructors of a f2f classroom can learn from the online environment, especially when adopting a multimodal curriculum” (55). The authors find that while differences in overall learning measures were slight, with a small advantage to the online classes, online students demonstrated considerably more success in the multimodal component featured in both kinds of classes (60).

They examined student learning in two online sections and one f2f section teaching a “functionally parallel” multimodal curriculum (58). The online courses were part of eComp, an online initiative at the University of New Mexico based on the Writers’ Studio program at Arizona State University, which two of the current authors had helped to develop (57). Features derived from the Writers’ Studio included the assignment of three projects to be submitted in an electronic portfolio as well as a reflective component in which the students explicated their own learning. Additionally, the eComp classes “embedded” instructional assistants (IAs): graduate teaching assistants and undergraduate tutors (57-58). Students received formative peer review and feedback from both the instructor and the IAs (57-58).

Students created multimodal responses to the three assignments—a review, a commentary, and a proposal. The multimodal components “often supplemented, rather than replaced, the written portion of the assignment” (58). Students analyzed examples from other classes and from public media through online discussions, focusing on such issues as “the unique features of each medium” and “the design features that either enhanced or stymied” a project’s rhetorical intent (58). Bourelle et al. emphasize the importance of foregrounding “rhetorical concepts” rather than the mechanics of electronic presentation (57).

The f2f class, taught by one of the authors who was also teaching one of the eComp classes, used the same materials, but the online discussion and analysis were replaced by in-class instruction and interaction, and the students received instructor and peer feedback (58). Students could consult the IAs in the campus writing center and seek other feedback via the center’s online tutorials (58).

The authors present their assessment as both quantitative, through holistic scores using a rubric that they present in an Appendix, and qualitative, through consideration of the students’ reflection on their experiences (57). The importance of including a number of different genres in the eportfolios created by both kinds of classes required specific norming on portfolio assessment for the five assessment readers (58-59). Four of the readers were instructors or tutors in the pilot, with the fifth assigned so that instructors would not be assessing their own students’ work (58). Third reads reconciled disparate scores. The readers examined all of the f2f portfolios and 21, or 50%, of the online submissions. Bourelle et al. provide statistical data to argue that this 50% sample adequately supports their conclusions at a “confidence level of 80%” (59).

The rubric assessed features such as

organization of contents (a logical progression), the overall focus (thesis), development (the unique features of the medium and how well the modes worked together), format and design (overall design aesthetics . . . ), and mechanics. . . . (60)

Students’ learning about multimodal production was assessed through the reflective component (60). The substantial difference in this score led to a considerable difference in the total scores (61).

The authors provide specific examples of work done by an f2f student and by an online student to illustrate the distinctions they felt characterized the two groups. They argue that students in the f2f classes as a group had difficulties “mak[ing] choices in design according to the needs of the audience” (61). Similarly, in the reflective component, f2f students had more trouble explaining “their choice of medium and how the choice would best communicate their message to the chosen audience” (61).

In contrast, the researchers state that the student representing the online cohort exhibits “audience awareness with the choice of her medium and the content included within” (62). Such awareness, the authors write, carried through all three projects, growing in sophistication (62-63). Based on both her work and her reflection, this student seemed to recognize what each medium offered and to make reasoned choices for effect. The authors present one student from the f2f class who demonstrated similar learning, but argue that, on the whole, the f2f work and reflections revealed less efficacy with multimodal projects (63).

Bourelle et al. do not feel that self-selection for more comfort with technology affected the results because survey data indicated that “life circumstances” rather than attitudes toward technology governed students’ choice of online sections (64). They indicate, in contrast, that the presence of the IAs may have had a substantive effect (64).

They also discuss the “archival” nature of an online environment, in which prior discussion and drafts remained available for students to “revisit,” with the result that the reflections were more extensive. Such reflective depth, Claire Lauer suggests, leads to “more rhetorically effective multimodal projects” (cited in Bourelle et al. 65).

Finally, they posit an interaction between what Rich Halverson and R. Benjamin Shapiro designate “technologies for learners” and “technologies for education.” The latter refer to the tools used to structure classrooms, while the former include specific tools and activities “designed to support the needs, goals, and styles of individuals” (qtd. in Bourelle et al. 65). The authors posit that when the tools individual students use are in fact the same as the “technologies for education” structuring the course, the resulting immersive multimodal environment encourages fuller engagement with multimodality.

This interaction, the authors suggest, is especially important given the caveat, drawn from research and from the 2013 CCCC position statement on online writing instruction, that online courses should prioritize writing and rhetorical concepts, not the technology itself (65). The authors note that online students appeared to spontaneously select more advanced technology than the f2f students, choices that Daniel Anderson argues inherently lead to more “enhanced critical thinking” and higher motivation (66).

The authors argue that their research supports two recommendations: first, the inclusion of IAs for multimodal learning; and second, the adoption by f2f instructors of multimodal activities and presentations, such as online discussion, videoed instruction, tutorials, and multiple examples. Face-to-face instructors, in this view, should try to emulate more nearly the “archival and nonlinear nature of the online course” (66). The authors call for further exploration of their contention that “student learning is indeed different within online and f2f multimodal courses,” based on their findings at the University of New Mexico (67).



T. Bourelle et al. Using Instructional Assistants in Online Classes. C&C, Sept. 2015. Posted 10/13/2015.

Bourelle, Tiffany, Andrew Bourelle, and Sherry Rankins-Robertson. “Teaching with Instructional Assistants: Enhancing Student Learning in Online Classes.” Computers and Composition 37 (2015): 90-103. Web. 6 Oct. 2015.

Tiffany Bourelle, Andrew Bourelle, and Sherry Rankins-Robertson discuss the “Writers’ Studio,” a pilot program at Arizona State University that utilized upper-level English and education majors as “instructional assistants” (IAs) in online first-year writing classes. The program was initiated in response to a request from the provost to cut budgets without affecting student learning or increasing faculty workload (90).

A solution was an “increased student-to-teacher ratio” (90). To ensure that the creation of larger sections met the goal of maintaining teacher workloads and respected the guiding principles put forward by the Conference on College Composition and Communication Committee for Best Practices in Online Writing Instruction in its March 2013 Position Statement, the team of faculty charged with developing the cost-saving measures supplemented “existing pedagogical strategies” with several innovations (91).

The writers note that one available cost-saving step was to avoid staffing underenrolled sections. To meet this goal, the team created “mega-sections” in which one teacher was assigned for every 96 students, the equivalent of a full-time load. Once the enrollment reached 96, a second teacher was assigned to the section, and the two teachers team-taught. T. Bourelle et al. give the example of a section of the second semester of the first-year sequence that enrolled 120 students and was taught by two instructors. These 120 students were assigned to 15-student subsections (91).

T. Bourelle et al. note several reasons why the new structure potentially increased faculty workload. First, they cite research by David Reinheimer to the effect that teaching writing online is inherently more time-intensive than instructors may expect (91). Second, the planned curriculum included more drafts of each paper, requiring more feedback. In addition, the course design required multimodal projects. Finally, students also composed “metacognitive reflections” to gauge their own learning on each project (92).

These factors prompted the inclusion of the IAs. One IA was assigned to each 15-student group. These upper-level students contributed to the feedback process. First-year students wrote four drafts of each paper: a rough draft that received peer feedback, a revised draft that received comments from the IAs, an “editing” draft students could complete using the writing center or online resources, and finally a submission to the instructor, who would respond by either accepting the draft for a portfolio or returning it with directions to “revise and resubmit” (92). Assigning portfolio grades fell to the instructor. The authors contend that “in online classes where students write multiple drafts for each project, instructor feedback on every draft is simply not possible with the number of students assigned to any teacher, no matter how she manages her time” (93).

T. Bourelle et al. provide extensive discussion of the ways the IAs prepared for their roles in the Writers’ Studio. A first component was an eight-hour orientation in which the assistants were introduced to important teaching practices and concepts, in particular the process of providing feedback. Various interactive exercises and discussions allowed the IAs to develop their abilities to respond to the multimodal projects required by the Studio, such as blogs, websites, or “sound portraits” (94). The instruction for IAs also covered the distinction between “directive” and “facilitative” feedback, with the latter designed to encourage “an author to make decisions and [give] the writer freedom to make choices” (94).

Continuing support throughout the semester included a “portfolio workshop” that enabled the IAs to guide students in producing the culminating eportfolio, which required methods of assessment unique to electronic texts (95). Bi-weekly meetings with the instructors of the larger sections to which their cohorts belonged also provided the IAs with the support needed to manage their own coursework while facilitating first-year students’ writing (95).

In addition, IAs enrolled in an online internship that functioned as a practicum comparable to practica taken by graduate teaching assistants at many institutions (95-97). The practicum for the Writers’ Studio internship reinforced work on providing facilitative feedback but especially incorporated the theory and practice of online instruction (96). T. Bourelle et al. argue that the effectiveness of the practicum experience was enhanced by the degree to which it “mirror[ed]” much of what the undergraduate students were experiencing in their first-year classes: “[B]oth groups of beginners are working within initially uncomfortable but ultimately developmentally positive levels of ambiguity, multiplicity, and open-endedness” (Barb Blakely Duffelmeyer, qtd. in T. Bourelle et al. 96). Still quoting Duffelmeyer, the authors contend that adding computers “both enriched and problematized” the pedagogical experience of the coursework for both groups (96), imposing the need for special attention to online environments.

Internship assignments also gave the IAs a sense of what their own students would be experiencing by requiring an eportfolio featuring what they considered their best examples of feedback to student writing as well as reflective papers documenting their learning (98).

The IAs in the practicum critiqued the first-year curriculum, for example suggesting stronger scaffolding for peer review and better timing of assignments. They wrote various instructional materials to support the first-year course activities (97).

Their contributions to the first-year course included “[f]acilitating discussion groups” (98) and “[d]eveloping supportive relationships with first-year writers” (100), but especially “[r]esponding to revised drafts” (99). T. Bourelle et al. note that the IAs’ feedback differed from that of peer reviewers in that the IAs had acquired background in composition and rhetorical theory; unlike writing-center tutors, the IAs were more versed in the philosophy and expectations embedded in the course itself (99). IAs were particularly helpful to students who had misread the assignments, and they were able to identify and mentor students who were falling behind (98, 99).

The authors respond to the critique that the IAs represented uncompensated labor by arguing that the Writers’ Studio offered a pedagogically valuable opportunity that would serve the students well if they pursued graduate or professional careers as educators, emphasizing the importance of designing such programs to benefit the students as well as the university (101). They present student and faculty testimony on the effectiveness of the IAs as a means of “supplement[ing] teacher interaction” rather than replacing it (102). While they characterize the “monetary benefit” to the university as “small” (101), they consider the project “successful” and urge other “teacher-scholars to build on what we have tried to do” (102).



Cox, Black, Heney, and Keith. Responding to Students Online. TETYC, May 2015. Posted 07/22/15.

Cox, Stephanie, Jennifer Black, Jill Heney, and Melissa Keith. “Promoting Teacher Presence: Strategies for Effective and Efficient Feedback to Student Writing Online.” Teaching English in the Two-Year College 42.4 (2015): 376-91. Web. 14 July 2015.

Stephanie Cox, Jennifer Black, Jill Heney, and Melissa Keith address the challenges of responding to student writing online. They note the special circumstances attendant on online teaching, in which students lack the cues provided by body language and verbal tone when they interpret instructor comments (376). Students in online sections, the authors write, do not have easy access to clarification and individual direction, and may not always take the initiative in following up when their needs aren’t met (377). These features of the online learning environment require teachers to develop communicative skills especially designed for online teaching.

To overcome the difficulty teachers may find in building a community among students with whom they do not interact face-to-face, the authors draw on the Community of Inquiry framework developed by D. Randy Garrison. This model emphasizes presence as a crucial rhetorical dimension in community building, distinguishing between “social presence,” “cognitive presence,” and “teacher presence” as components of a classroom in which teachers can create effective learning environments.

Social presence indicates the actions and rhetorical choices that give students a sense of “a real person online,” in the words of online specialists Rena M. Palloff and Keith Pratt (qtd. in Cox et al. 377). Moves that allow the teacher to interact socially through the response process decrease the potential for students to “experience isolation and a sense of disconnection” (377). Cognitive presence involves activities that contribute to the “creation of meaning” in the classroom as students explore concepts and ideas, both individually and as part of the community. Through teacher presence, instructors direct learning and disseminate knowledge, setting the stage for social and cognitive interaction (377).

In the authors’ view, developing effective social, cognitive, and teacher presence requires attention to the purpose of particular responses depending on the stage of the writing process, to the concrete elements of delivery, and to the effects of different choices on the instructor’s workload.

Citing Peter Elbow’s discussion of “ranking and evaluation,” the authors distinguish between feedback that assigns a number on a scale and feedback that encourages ongoing development of an idea or draft (376-79; emphasis original). Ranking during early stages may allow teachers to note completion of tasks; evaluation, conversely, involves “communication” that allows students to move forward fruitfully on a project (379).

The authors argue that instructors in digital environments should follow James E. Porter’s call for “resurrecting the neglected rhetorical canon of delivery” (379). Digital teaching materials provide opportunities like emoticons for emulating the role of the body that is important to classical theories of delivery; such tools can emphasize emotions that can be lost in online exchanges.

Finally, the authors note the tendency for responding online to grow into an overwhelming workload. “Limit[ing] their comments” is a “healthy” practice that teachers need not regret. Determining what kind of feedback is most appropriate to a given type of writing is important in setting these limits, as is making sure that students understand that different tasks will elicit different kinds of response (379-80).

The authors explore ways to address informal writing without becoming overwhelmed. They point out that teachers often don’t respond in writing to informal work in face-to-face classrooms and thus do not necessarily need to do so in online classes. They suggest that “generalized group comments” can effectively point out shared trends in students’ work, present examples, and enhance teacher presence. Such comments may be written, but can also be “audio” or “narrated screen capture” responses that supply opportunities for generating social and teacher presence while advancing cognitive goals.

They recommend making individual comments on informal work public, posting only “one formative point per student while encouraging students to read all of the class postings and the instructor responses” (382). Students thus benefit from a broader range of instruction. Individual response is important early and in the middle of the course to create and reinforce students’ connections with the instructor; it is also important during the early development of paper ideas, when some students may need “redirect[ion]” (382).

The authors also encourage “feedback-free spaces,” especially for tentative early drafting; making such spaces visible to the whole class often gives students a sense of audience while allowing them to share ideas and experience how the writing process unfolds through examples of early writing “in all its imperfection” (383).

Cox et al. suggest that feedback on formal assignments should embrace Richard Straub’s “six conversational response strategies” (383), which focus on informal language, specific connections to the student’s work, and maintaining an emphasis on “help or guidance” (384). The authors discuss five response methods for formal tasks. In their view, rubrics work best when free of complicated technical language and when integrated into a larger conversation about the student’s writing (385-86). Cox et al. recommend using the available software programs for in-text comments, which students find more legible and which allow instructors to duplicate responses when appropriate (387). The authors particularly endorse “audio in-text comments,” which not only save time but also allow the students to hear the voice of an embodied person, enhancing presence (387). Similarly, they recommend generating holistic end-comments via audio, with a highlighting system to tie the comments back to specific moments in the student’s text (387-88). Synchronous conferences, facilitated by many software options including screen-capture tools, can replace face-to-face conferences, which may not work for online students. The opportunity to talk not only about writing but also about other aspects of the student’s environment further builds social, cognitive, and teacher presence (388).

The authors offer tables delineating the benefits and limitations of responses both to informal and formal writing, indicating the kind of presence supported by each and options for effective delivery (384, 389).