College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as “awareness of one’s cognitive processes, such as prioritizing and executing tasks” (86), and survey extensive research dating to the 1970s on how this concept has been articulated and developed. They state that the literature supports the conclusion that “college students’ metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing” (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of “idea units which are specific unique ideas within each student’s response” (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into “Procedural Knowledge,” or “the knowledge necessary to carry out the procedure or process of writing,” and “Declarative Knowledge,” or statements about “the characteristics of effective writing” (89). Within the categories, responses were coded as addressing “substantive procedures” having to do with the process itself and “production procedures,” relating to the “form of writing,” e.g., spelling and grammar (89).

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process, such as that of John Hayes and Linda Flower and of M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw “Clarity and Focus” and “Audience” as important characteristics of effective writing (98). Grammar and Spelling, the “production” features, were more important than in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched that of previous scholarship on students’ metaknowledge of effective writing, which shows some concern with the finished product and a possibly “insufficient” focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).

These analyses revealed “no relationship . . . between their patterns knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research on the importance of metacognitive knowledge because their methodology did not use human raters, nor did it factor in students’ beliefs about writing or ask why they responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than “pre-established inventor[ies]” would have elicited (100). They maintain that their methods “controlled the measurement errors” better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)



Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on shorter memos on the four “linked essays” they wrote for the class. These “reflection-in-action” memos, using the terminology of Kathleen Blake Yancey, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.

In order to assess the materials, the team created a rubric judging the revisions as “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent responses, 55% that were adequate, and 26% that were inadequate (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” where students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have increased in their awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on student response to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed the student to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues called up by the teacher feedback and respond to these concerns in the light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to first person in the reflective memos paired with visible revisions as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress-narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original), and to provide students with a visual means of seeing their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).



Shepherd, Ryan. Digital Writing and Transfer. C&C, June 2018.

Shepherd, Ryan P. “Digital Writing, Multimodality, and Learning Transfer: Crafting Connections between Composition and Online Composing.” Computers and Composition 48 (2018): 103-14. Web. 4 Apr. 2018.

Ryan P. Shepherd conducted a survey and interviews to investigate the relationship between multimodal writing students did outside of school and the writing that they did for their classes. Shepherd focuses on students’ perceptions as to what constitutes “writing” and whether they see their out-of-school work as “writing.” He argues that these perceptions are important for transfer of in-school learning to new contexts (103).

He notes that scholars in the field have argued for the importance of drawing on students’ past writing experiences and their knowledge of those contexts to enhance their classroom learning (104). Some scholarship suggests that students do not see a relationship between the writing they know how to do for social media and school assignments. This scholarship indicates that one implication of this disconnect is that students may not apply the knowledge they accumulate in the classroom to the broader range of their writing activities (104).

Shepherd sent survey links to composition instructors and received 151 replies from first-year-writing students. He reports that the responses were skewed toward larger, doctoral-granting schools (104-05). In choosing 10 students from among 60 who were willing to be interviewed, Shepherd included Research 2 and Masters 1 institutions but found his population did not fully represent a diverse range of students (105). Interviews took place in Shepherd’s office or on Skype.

A principal question in both the survey and interviews was students’ definition of “writing.” Shepherd notes an emphasis on “expression” and “creativity” in these definitions, with 25% referring explicitly to the use of “paper” (105). In contrast, of the 132 definitions of writing in the surveys, only five brought up “digital” or “computer,” and all five also included the word “paper” (106). The word “digital” did not occur in the definitions provided in the interviews.

At the same time, 92% of survey responses indicated experience with social media and 99% had used email (106). Forty-six percent of survey respondents had posted on four digital platforms: Facebook, Snapchat, Instagram, and Twitter, while only 5% had not posted to any of these venues and “only one participant had not written on social media at all” (106).

Similarly, interviewees reported extensive experience with social media. Students on both the surveys and in the interviews reported that they wrote “as much or more” outside of school than in class (107). In addition, students seemed uncertain as to whether they had done multimodal writing for school, “sometimes saying they ‘might’ have used images or charts and graphs with their writing at some point” (107).

Shepherd concludes that the students he studied did not connect the multimodal writing they commonly did outside of school with their schoolwork and did not include this use of social media in their definitions of writing. However, when encouraged to think about the relationship between the two kinds of writing experiences, “students were quick to make connections without prompting” (107).

For Shepherd, these findings bear on recent discussions in composition studies about the transfer of academic knowledge to other contexts. He contends that many uses of the “transfer” metaphor do not completely or accurately capture what compositionists would like to see happen (108). This “incomplete” metaphor, he argues, implies that knowledge acquired in one place is simply carried to a new place. Thinking this way, Shepherd maintains, echoes the “banking model” of education in which knowledge is something teachers have provided that students can subsequently “withdraw” (108).

More appropriate, Shepherd writes, is the idea of transfer as a “bridge or connection between one area of knowledge and another inside of the learner’s mind” (108). He uses an analogy of knowing how to drive a car and later having to drive a “large box truck.” He posits that using prior knowledge in this new situation involves “generaliz[ing] the knowledge” by “creat[ing] a larger theory of ‘driving’” that encompasses both experiences (108-09). This re-theorization, he states, does not involve transporting any knowledge to a new place.

Shepherd reviews theories of transfer, arguing that similarity between two experiences is central to successful transfer. The comparison between driving a car and driving a truck is an example of “low-road transfer,” in which the two situations are easily seen to be similar (109).

Many kinds of transfer, in contrast, are “high-road transfer” in which the similarity is not necessarily obvious. Shepherd develops an example of relating knowing how to drive to learning how to ski. Theories suggest that in order to see connections between disparate activities like these, learners need to apply what Gavriel Salomon and David N. Perkins call “mindful abstraction” (109). According to Shepherd, related terms used by compositionists include “reflection” and “metacognition” (109). Shepherd argues that what matters is not so much whether or not the activities are clearly similar but rather the degree to which learners can come to perceive them as similar through metacognitive reflection (109).

In this reading, high-road transfer consists of “backward-” and “forward-reaching” efforts. “Backward-reaching” transfer involves drawing on past experience in new contexts; Shepherd argues that composition uses this form less than “forward-reaching” transfer, which encourages students to think of how they can use classroom learning in the future (109-10). Shepherd maintains that his study supports the claim that both kinds of transfer are “quite difficult”; students need to develop a more complex “theory of writing” to see the necessary similarities and may require guidance to do so (110).

Shepherd suggests that theory-building can begin with students’ own definitions; they can then be challenged to explain why specific modes of communication, for example in social media, do not fit their definitions (111). Teachers can also ask students to teach kinds of writing in which they may be skilled but may not recognize as writing (111). Throughout, teachers can press for “guided reflection” (111) and “mindful abstraction” (112) in order to foreground connections that students may not see as self-evident.

In introducing students to multimodal work in the classroom, Shepherd suggests, teachers can show students that these kinds of assignments are actually familiar and that the students themselves “might already be experts” (112). To design curricula that facilitate the creation of these connections across writing contexts, Shepherd writes, research needs to address “two key areas”: “what students know” and “what students need to know” (112). More attention to the kinds of literacies that students practice outside of the classroom, Shepherd concludes, can equip teachers to apply this kind of research to teaching for more productive transfer.


Kolln and Hancock. Histories of U. S. Grammar Instruction. English Teaching: Practice and Critique (NZ), 2005. Posted 04/22/2018.

Kolln, Martha, and Craig Hancock. “The Story of English Grammar in United States Schools.” English Teaching: Practice and Critique 4.3 (2005): 11-31. Web. 4 Mar. 2018.

Martha Kolln and Craig Hancock, publishing in English Teaching: Practice and Critique in 2005, respond in parallel essays to what they consider the devaluation of grammar teaching in United States schools and universities. The journal is a publication of Waikato University in New Zealand. The two essays trace historical developments in attitudes toward grammar education in U. S. English language curricula.

Kolln’s essay reports on a long history of uncertainty about teaching grammar in United States classrooms. Noting that confusion about the distinction between “grammar” and “usage” has pervaded discussions since the beginning of the twentieth century, Kolln cites studies from 1906 and 1913 to illustrate the prevalence of doubts that the time needed to teach grammar was justified in light of the many other demands upon public-school educators (13).

Citing Richard Braddock, Richard Lloyd-Jones, and Lowell Schoer’s 1963 Research in Written Composition to note that “early research in composition and grammar was not highly developed” (13), Kolln argues that the early studies were flawed (14). A later effort to address grammar teaching, An Experience Curriculum in English, was advanced by a 1936 National Council of Teachers of English (NCTE) committee; this program, Kolln writes, “recommended that grammar be taught in connection with writing, rather than as an isolated unit of study” (14). She contends that the effort ultimately failed because teachers did not accept its focus on “functional grammar” in place of “the formal method [they] were used to” (14).

In Kolln’s history, the hiatus following this abortive project ended with the advent of structural linguistics in the 1950s. This new understanding of the workings of English grammar was originally received enthusiastically; Harold B. Allen’s 1958 Readings in Applied English Linguistics drew on nearly 100 articles, including many from NCTE (12). This movement also embraced Noam Chomsky’s 1957 Syntactic Structures; the NCTE convention in 1963 featured “twenty different sessions on language, . . . with 50 individual papers” under categories like “Semantics,” “Structural Linguistics for the Junior High School,” and “the Relationship of Grammar to Composition” (14-15).

Excitement over such “new grammar” (15), however, was soon “swept aside” (12). Kolln posits that Chomsky’s complex generative grammar, which was not meant as a teaching tool, did not adapt easily to the classroom (15). She traces several other influences supporting the continued rejection of grammar instruction. Braddock et al. in 1963 cited a study by Roland Harris containing “serious flaws,” according to two critics who subsequently reviewed it (16). This study led Braddock et al. to state that grammar instruction not only failed to improve student writing but had “a harmful effect” (Braddock et al., qtd. in Kolln and Hancock 15). Kolln reports that this phrase is still referenced to argue against teaching grammar (15).

Other influences on attitudes toward grammar, for Kolln, include the advent of “student-centered” teaching after the Dartmouth seminar in 1966, the ascendancy of the process movement, and a rejection of “elitist” judgments that denigrated students’ home languages (16-17). As a result of such influences and others, Kolln writes, “By 1980, the respected position that grammar had once occupied was no longer recognized by NCTE” (17).

Addressing other publications and position statements that echo this rejection of grammar instruction, Kolln writes that teacher education, in particular, has been impoverished by the loss of attention to the structure of language (19). She contends that “[t]he cost to English education of the NCTE anti-grammar policy is impossible to calculate” (19).

She sees shifts toward an understanding of grammar that distinguishes it from rote drill on correctness in the creation of an NCTE official assembly, The Assembly for the Teaching of English Grammar (ATEG). Several NCTE publications have forwarded the views of this group, including the book Grammar Alive! A Guide for Teachers, and articles in English Journal and Language Arts (20). Kolln urges that grammar, properly understood, be “seen as a legitimate part of the Language Arts curriculum that goes beyond an aid to writing” (20).

Hancock frames his discussion with a contemporaneous article by R. Hudson and J. Walmsley about trends in grammar instruction in the U.K. He sees a consensus among educators in England that “an informed understanding of language and an appropriate metalanguage with which to discuss it” are important elements of language education (qtd. in Kolln and Hancock 21). Further, this consensus endorses a rejection of “the older, dysfunctional, error-focused, Latin-based school grammar” (21-22).

In his view, the grounds for such widespread agreement in the United States, rather than encouraging an appreciation of well-designed grammar instruction, in fact lead away from the possibility of such an appreciation (22-23). He sees a U. S. consensus through the 1960s that literature, especially as seen through New Criticism, should be the principal business of English instruction. The emphasis on form, he writes, did not embrace linguistic theory; in general, grammar was “traditional” if addressed at all, and was seen as the responsibility of elementary schools (22). Literature was displaced by Critical Theory, which challenged the claim that “there is or should be a monolithic, central culture or a received wisdom” in the valuation of texts (22).

Similarly, he maintains that the advent of composition as a distinct field with its focus on “what writers actually do when they write” led to studies suggesting that experienced writers saw writing as meaning-making while inexperienced writers were found to, in Nancy Sommers’s words, “subordinate the demands of the specific problems of the text to the demands of the rules” (qtd. in Kolln and Hancock 23). Downplaying the rules, in this view, allowed students to engage more fully with the purposes of their writing.

In Hancock’s view, language educators in the U.S. distanced themselves from grammar instruction in their focus on “‘empowerment’ in writing” in order to address the needs of more diverse students (24). This need required a new acknowledgment of the varying contexts in which language occurred and an effort to value the many different forms language might take. Recognition of the damage done by reductive testing models also drove a retreat from a grammar defined as “policing people’s mistakes” (24-25).

Hancock argues that the public arena in which students tend to be judged does not allow either correctness or grammar to “simply be wished away” (25). He suggests that the “minimalist” theories of Constance Weaver in the 1990s and linguists like Steven Pinker are attempts to address the need for students to meet some kinds of standards, even though those standards are often poorly defined. These writers, in Hancock’s reading, contend that people learn their native grammars naturally and need little intervention to achieve their communicative goals (25, 27).

Hancock responds that a problem with this approach is that students who do not rise to the expected standard are blamed for their “failure to somehow soak it up from exposure or from the teacher’s non-technical remarks” (25). Hancock laments the “progressive diminution of knowledge” that results when so many teachers themselves are taught little about grammar (25): the lack of a “deep grounding in knowledge of the language” means that “[e]diting student writing becomes more a matter of what ‘feels right’” (26).

As a result of this history, he contends, “language-users” remain “largely unconscious of their own syntactic repertoire” (26), while teachers struggle with contradictory demands with so little background that, in Hancock’s view, “they are not even well-equipped to understand the nature of the problem” (29). He faults linguists as well for debunking prescriptive models while failing to provide “a practical alternative” (26).

Hancock presents a 2004 piece by Laura Micciche as a “counter-argument to minimalist approaches” (28). Hancock reads Micciche to say that there are more alternatives to the problems posed by grammatical instruction than outright rejection. He interprets her as arguing that a knowledge of language is “essential to formation of meaning” (28):

We need a discourse about grammar that does not retreat from the realities we face in the classroom—a discourse that takes seriously the connection between writing and thinking, the interwoven relationship between what we say and how we say it. (Micciche, qtd. in Kolln and Hancock 28)

Hancock deplores the “vacuum” created by the rejection of grammar instruction, an undefended space into which he feels prescriptive edicts are able to insert themselves (28, 29). Like Kolln, he points to ATEG, which in 2005-2006 was working to shift NCTE’s “official position against the teaching of formal grammar” (28). Hancock envisions grammar education that incorporates “all relevant linguistic grammars” and a “thoughtfully selected technical terminology” (28), as well as an understanding of the value of home languages as “the foundation for the evolution of a highly effective writing voice” (29). Such a grammar, he maintains, would be truly empowering, promoting an understanding of the “connection between formal choices and rhetorical effect” (26).



Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined, as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation impacted students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed seeking feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of the writing center for drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.


Wood, Tara. Disabilities and Time Management in Writing Classes. Dec. CCC. Posted 01/18/2018.

Wood, Tara. “Cripping Time in the College Composition Classroom.” College Composition and Communication 69.2 (2017): 260-86. Print.

Tara Wood proposes that the field of writing studies can productively use the concept of “crip time” to rethink the ways in which normative assumptions underlie many routine activities in writing classrooms.

Wood’s qualitative study, conducted at a large Midwestern research university, began with twenty students with “registered disabilities” but expanded to include thirty-five students because of the interest her work generated (266). She notes that her final study population included not only students registered with the university disability office, but also students registered with other official offices who might or might not have registered at school, students who chose not to register, and students in the process of registering. Some registered students did not request accommodation (282n1).

Wood gathered more than “2,000 minutes of audio” and transcribed more than 200,000 words (267). She avoids identifying particular students by their disabilities, but her notes reveal the range of situations covered by her research (282-83n3).

The data allowed Wood to meet a primary goal of letting the students speak for themselves. She cites scholarship on the challenges of “speaking for” others, particularly groups that have traditionally been silenced or unheard; many scholars report a “crisis of representation” as they consider their own positionality in studies of such groups (265-66). Wood indicates that in some cases the wording of her interview questions shaped responses, but notes that the focus of her article, issues of “time,” was not a topic introduced by any of her questions; rather, it arose as a concern from the students’ own discussion (267).

Reviewing scholarship in composition on “the intersection of disability studies (DS) and composition studies” (261), Wood notes that writing theorists have long been concerned about access but, in some cases, may have assumed that the process- and discussion-oriented pedagogies common to most writing classes do not pose the same problems as do lecture-based classes with heavy test-taking components (261). Wood contends that such assumptions elide the myriad ways that time affects students with disabilities in composition classes (261). Wood’s premise is that “time” as structured in writing classrooms reflects largely unexamined ideologies of normativity and ableism.

Quoting Margaret Price, Wood says of “crip time” that it is “a concept in disability culture that ‘refers to a flexible approach to normative time frames’” (264). As an attitude toward time, it “avoid[s] rigidity and lower[s] the stakes of writing” (270). Wood distinguishes such an approach from the kinds of responses to disability most common in academic settings, which focus on individual and sometimes “ad hoc” solutions (263) burdened by connection with “medical and legal models” (262). Wood presents crip time as a more systemic, philosophical response to the complexities presented by disability.

For Wood, the assumption that individual fixes devised by disability-service offices are adequate is one of several flawed approaches. She found a subset of instructors who deferred to the expertise of disability professionals rather than expressing a willingness to negotiate with students (271). Similarly, she reports a “disability myth” that students given extra time for assignments will “take advantage of an accommodation,” creating a situation that isn’t “fair to other students” (263). In contrast, the study explores students’ conflicted responses to the need for accommodation and the “pedagogical fallout” that can result (269). Wood also discusses “the tacit curative imaginaries” that cast disability as a “disease or illness” (270) and its correction as “compulsory,” with “able-bodiedness as the ultimate, ever-desirable end” (264).

Wood’s account focuses specifically on two components of writing classes, timed in-class writing and time requirements for assignments. Her interviewees reported on how their disabilities made producing “spontaneous” writing within set boundaries (267) a source of serious anxiety, which, in the views of some scholars, has itself been defined as an illness that “teachers must ‘treat’” (270). Wood quotes Alison Kafer to argue that teachers must become aware that their normative expectations for “how long things take” are “based on very particular minds and bodies” (268). In Wood’s view, crip time applies a sensitivity to difference to such assumptions (264).

Wood further details how some participants’ situations affected their handling of assignment deadlines. Students with OCD, for example, might resist handing in assignments because they need to “make [them] perfect” (275). Some students reported finding it difficult to ask for extra time (274). Students recounted a range of attitudes among their instructors, with some willing to negotiate time frames and others less willing (274).

Wood cites Patricia Dunn to contend that students with disabilities often display “a sophisticated metacognitive awareness of how to navigate the strictures they face in the classroom” (272). Some students in her study explain their strategies in working with instructors to plan the timing of their assignments (276-77). Others set their own deadlines (279), while one plans for the inevitable delays of illness by trying to “get ahead on writing assignments” (qtd. in Wood 273).

Wood quotes Robert McRuer’s contention that “being able-bodied means being capable of the normal physical exertions required in a particular system of labor” (279). She argues that such links between assumptions of normativity and the power structures arising from capitalist valuations of productivity make it imperative that instructors recognize how such assumptions impede access (280-81). Wood attributes to Paul Heilker the view that subscribing to crip time is a way of promoting “Students’ Right to Their Own Language” (278), since a more flexible classroom structure permits “disabled students to compose in their own ways” (281), thus affirming important components of their personhood (278, 281).

Wood qualifies her recommendations by stating that she is not arguing against deadlines per se but rather asking that teachers be “mindful” about the power dynamic in a writing classroom and the consequences of rigid time boundaries (275). In this view, decisions about time can best be made by listening to students (281) and working collaboratively with them toward strategies that, in the case of one student, are essential to “sustain[ing] her presence in academia” (277).

Ultimately, Wood contends, awareness of the possibilities opened up by concepts like crip time enriches the democratic, inclusive environment that educators can support when they follow Tony Scott’s advice to examine the “ideological assumptions” underlying their responses to pedagogical challenges (qtd. in Wood 281).


Stewart, Mary K. Communities of Inquiry in Technology-Mediated Activities. C&C, Sept. 2017. Posted 10/20/2017.

Stewart, Mary K. “Communities of Inquiry: A Heuristic for Designing and Assessing Interactive Learning Activities in Technology-Mediated FYC.” Computers and Composition 45 (2017): 67-84. Web. 13 Oct. 2017.

Mary K. Stewart presents a case study of a student working with peers in an online writing class to illustrate the use of the Community of Inquiry framework (CoI) in designing effective activities for interactive learning.

Stewart notes that writing-studies scholars have both praised and questioned the promise of computer-mediated learning (67-68). She cites scholarship contending that effective learning can take place in many different environments, including online environments (68). This scholarship distinguishes between “media-rich” and “media-lean” contexts. Media-rich environments include face-to-face encounters and video chats, where exchanges are immediate and are likely to include “divergent” ideas, whereas media-lean situations, like asynchronous discussion forums and email, encourage more “reflection and in-depth thinking” (68). The goal of an activity can determine which is the better choice.

Examining a student’s experiences in three different online environments with different degrees of media-richness leads Stewart to argue that it is not the environment or particular tool that results in the success or failure of an activity as a learning experience. Rather, in her view, the salient factor is “activity design” (68). She maintains that the CoI framework provides “clear steps” that instructors can follow in planning effective activities (71).

Stewart defined her object of study as “interactive learning” (69) and used a “grounded theory” methodology to analyze data in a larger study of several different course types. Interviews of instructors and students, observations, and textual analysis led to a “core category” of “outcomes of interaction” (71). “Effective” activities led students to report “constructing new knowledge as a result of interacting with peers” (72). Her coding led her to identify “instructor participation” and “rapport” as central to successful outcomes; reviewing scholarship after establishing her own grounded theory, Stewart found that the CoI framework “mapped to [her] findings” (71-72).

She reports that the framework involves three components: social presence, teaching presence, and cognitive presence. Students develop social presence as they begin to “feel real to one another” (69). Stewart distinguishes between social presence “in support of student satisfaction,” which occurs when students “feel comfortable” and “enjoy working” together, and social presence “in support of student learning,” which follows when students actually value the different perspectives a group experience offers (76).

Teaching presence refers to the structure or design that is meant to facilitate learning. In an effective CoI activity, social and teaching presence are required to support cognitive presence, which is indicated by “knowledge construction,” specifically “knowledge that they would not have been able to construct without interacting with peers” (70).

For this article, Stewart focused on the experiences of a bilingual Environmental Studies major, Nirmala, in an asynchronous discussion forum (ADF), a co-authored Google document, and a synchronous video webinar (72). She argues that Nirmala’s experiences reflect those of other students in the larger study (72).

For the ADF, students were asked to respond to one of three questions on intellectual property, then respond to two other students who had addressed the other questions. The prompt specifically called for raising new questions or offering different perspectives (72). Both Nirmala and Stewart judged the activity as effective even though it occurred in a media-lean environment because in sharing varied perspectives on a topic that did not have a single solution, students produced material that they were then able to integrate into the assigned paper (73):

The process of reading and responding to forum posts prompted critical thinking about the topic, and Nirmala built upon and extended the ideas expressed in the forum in her essay. . . . [She] engaged in knowledge construction as a result of interacting with her peers, which is to say she engaged in “interactive learning” or a “successful community of inquiry.” (73)

Stewart notes that this successful activity did not involve the “back-and-forth conversation” instructors often hope to encourage (74).

The co-authored paper was deemed not successful. Stewart contends that the presence of more immediate interaction did not result in more social presence and did not support cognitive presence (74). The instructions required two students to “work together” on the paper; according to Nirmala’s report, co-authoring became a matter of combining and editing what the students had written independently (75). Stewart writes that the prompt did not establish the need for exploration of viewpoints before the writing activity (76). As a result, Nirmala felt she could complete the assignment without input from her peer (76).

Though Nirmala suggested that the assignment might have worked better had she and her partner met face-to-face, Stewart argues from the findings that the more media-rich environment in which the students were “co-present” did not increase social presence (75). She states that instructors may tend to think that simply being together will encourage students to interact successfully when what is actually needed is more attention to the activity design. Such design, she contends, must specifically clarify why sharing perspectives is valuable and must require such exploration and reflection in the instructions (76).

Similarly, the synchronous video webinar failed to create productive social or cognitive presence. Students placed in groups and instructed to compose group responses to four questions again responded individually, merely “check[ing]” each other’s answers. Nirmala reports that the students actually “Googled the answer and, like, copy pasted” (Nirmala, qtd. in Stewart 77). Stewart contends that the students concentrated on answering the questions, skipping discussion and sharing of viewpoints (77).

For Stewart, these results suggest that instructors should be aware that in technology-mediated environments, students take longer to become comfortable with each other, so activity design should build in opportunities for the students to form relationships (78). Also, prompts can encourage students to share personal experiences in the process of contributing individual perspectives. Specifically, according to Stewart, activities should introduce students to issues without easy solutions and focus on why sharing perspectives on such issues is important (78).

Stewart reiterates her claim that the particular technological environment or tool in use is less important than the design of activities that support social presence for learning. Even in media-rich environments, students placed together may not effectively interact unless given guidance in how to do so. Stewart finds the CoI framework useful because it guides instructors in creating activities, for example, by determining the “cognitive goals” in order to decide how best to use teaching presence to build appropriate social presence. The framework can also function as an assessment tool to document the outcomes of activities (79). She provides a step-by-step example of CoI in use to design an activity in an ADF (79-81).