College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



King, Emily. Student Silence in Classroom Discussion. TETYC, Mar. 2018. Posted 03/21/2018.

King, Emily. “Understanding Classroom Silence: How Students’ Perceptions of Power Influence Participation in Discussion-Based Composition Classrooms.” Teaching English in the Two-Year College 45.3 (2018): 284-305. Web. 16 Mar. 2018.

Emily King conducted a qualitative study of students’ willingness to participate in discussions in writing classrooms. She finds such exchanges essential in critical pedagogy, which, she contends, requires collaborative, dialogic engagement in order to raise student awareness of inequities and power structures “in the classroom and beyond” (284). In particular, she addresses how students’ perceptions of power differentials may influence their willingness to take part in discussion.

King reviews several decades of scholarship on student participation in critical classrooms to reveal hypotheses about the reasons students may or may not choose to speak during class. She cites scholars like Ira Shor, Paulo Freire, and Patricia Bizzell to propose that students often conclude, in Shor’s words, that their job is to “answer questions, not question answers” (qtd. in King 285), and that teachers’ efforts to make the classroom more democratic only arouse students’ suspicions because they perceive that the teacher will always retain power (285).

Other scholars reviewed by King find an explanation in students’ efforts to differentiate their identities from the institutional ones they find imposed when they enter college (285). Russel K. Durst posits that students resist the degree to which critical exploration “complicate[s] rather than simplif[ies]” the lives of students who simply want to see writing as an instrumental means to a goal (qtd. in King 286). King argues that all these explanations revolve around student responses to power relationships and that attention to this question can enhance teachers’ ability to further critical curricula (286).

The study employed “gateway research,” a six-step method related to oral history and created by Carolyn Lunsford Mears. Based on interpretation of interview data, the method allows researchers to explore “students’ individual narratives” to understand how they respond to experience (288). King observed a colleague’s first-year writing class for two weeks, taking notes on student participation, and distributed an anonymous questionnaire to several sections, eliciting 75 responses. She also conducted in-depth interviews with four students from her own and her colleague’s courses (288-89). King maintains that comparing survey and interview results yielded an informative picture of student attitudes (290).

King found that 43% of the students surveyed said they “seldom participate in class discussion,” while 35% classified themselves as “moderate” participants. Only 23% claimed to speak often (291-92). In King’s own observations of the students in her class and in the class she observed, students participated even less than their survey data indicated, with only 36% of the students falling into the “high” and “moderate” categories (292).

In both the interviews and the surveys, students insisted that “social difference” (292) had no effect on their participation while revealing in comments that they were very aware of issues of race, class, and gender (292-94):

[T]he interviewees spoke freely about social difference and injustice in the world and even on campus but were adamant about the lack of connection between those judgments and their own classroom behavior. (293)

King contends that students appeared to see the teacher’s fairness or lack of bias as the primary guarantor of equality in the classroom (294).

Examining her data on motivation for classroom choices, King finds that despite denying the influence of power and social difference, students both recognize these components of classroom behavior and work actively to respond to them. King argues that many participation choices are connected not to learning but rather to efforts to “manage reputation” and “alter or affirm social identity” in response to pressures from class, gender, and race (295).

Particularly salient, in King’s view, was the association in students’ comments between speaking in class and appearing intelligent. The two female students, who were the most vocal, noted that classmates often spoke because “they ‘wanted to seem smart’ but really ‘had nothing to say’” (296), while in one case, in King’s representation, the student specifically wanted to appear smart and engaged because “she did not believe [these traits] were generally associated with Hispanic students” (296).

Similarly, the less communicative males King interviewed expressed concerns about appearing less intelligent; in one case, the student “was very concerned about racial stereotypes against which he believed he was constantly working, even within his own family” (297). Comments quoted by King indicate that he wanted to participate more but worried, “I don’t want to seem like I’m dumb” (qtd. in King 297). This same student indicated concerns about other students’ perceptions of his social class (297).

The other male student exhibited characteristics of what Ira Shor calls “Siberian Syndrome,” casting himself as a “listener” who sat on the periphery in class (298). According to King, this student’s choices indicated an awareness that “his contributions to class discussions would be judged by his peers” (298).

King writes that the two women’s choices allowed them to establish power in the classroom (299). They connected their classroom behavior to their personas outside the classroom, with one stating that she was a “natural leader” (qtd. in King 299). Their self-described roles included a sense of responsibility to the class, part of which was to “maintain” conversations the teacher had started (299). In addition, these women suggested that such a sense of leadership and group responsibility was a gendered trait (297).

These observations lead King to note that while teachers value active participation, “very talkative students” may be motivated more by a desire to be noticed than by learning, and that they may stifle contributions from less vocal classmates (299). She presents data from one male interviewee suggesting that he did feel silenced when other students dominated the conversation (298). King writes that this reaction may be particularly prevalent in students who struggle with “Imposter Syndrome,” doubting that they actually belong in college (300).

King notes that her study may be limited by the effect on her objectivity of her involvement as researcher and by ambiguities in the definitions of words like “power” and “participation” (301). She contends that her research offers a “different lens” with which to examine student resistance to engagement in critical classrooms because of its focus on student responses (301). Her study leads her to conclude that students are alert to power issues that arise from social difference and often manage their responses to these issues without teacher intervention, even when they actively deny the influence of difference (302).

King urges more attention to student voices through qualitative research to determine how teachers can effectively develop their own roles as facilitators and co-learners in critically informed classrooms (302).


Litterio, Lisa M. Contract Grading: A Case Study. J of Writing Assessment, 2016. Posted 04/20/2017.

Litterio, Lisa M. “Contract Grading in a Technical Writing Classroom: A Case Study.” Journal of Writing Assessment 9.2 (2016). Web. 05 Apr. 2017.

In an online issue of the Journal of Writing Assessment, Lisa M. Litterio, who characterizes herself as “a new instructor of technical writing,” discusses her experience implementing a contract grading system in a technical writing class at a state university in the northeast. Her “exploratory study” was intended to examine student attitudes toward the contract-grading process, with a particular focus on how the method affected their understanding of “quality” in technical documents.

Litterio’s research into contract grading suggests that it can have the effect of supporting a process approach to writing as students consider the elements that contribute to an “excellent” response to an assignment. Moreover, Litterio contends, because it creates a more democratic classroom environment and empowers students to take charge of their writing, contract grading also supports critical pedagogy in the Freirean model. Litterio draws on research to support the additional claim that contract grading “mimic[s] professional practices” in that “negotiating and renegotiating a document” as students do in contracting for grades is a practice that “extends beyond the classroom into a workplace environment.”

Much of the research she reports dates to the 1970s and 1980s, often reflecting work in speech communication, but she cites as well models from Ira Shor, Jane Danielewicz and Peter Elbow, and Asao Inoue from the 2000s. In a common model, students can negotiate the quantity of work that must be done to earn a particular grade, but the instructor retains the right to assess quality and to assign the final grade. Litterio depicts her own implementation as a departure from some of these models in that she did make the final assessment, but applied criteria devised collaboratively by the students; moreover, her study differs from earlier reports of contract grading in that it focuses on the students’ attitudes toward the process.

Her Fall 2014 course, which she characterizes as a service course, enrolled twenty juniors and seniors representing seven majors. Neither Litterio nor any of the students were familiar with contract grading, and no students withdrew after learning of Litterio’s grading intentions from the syllabus and class announcements. At mid-semester and again at the end of the course, Litterio administered an anonymous open-ended survey to document student responses. Adopting the role of “teacher-researcher,” Litterio hoped to learn whether involvement in generating the criteria led students to a deeper awareness of the rhetorical nature of their projects, as well as to “more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.”

Litterio shares the contract options, which allowed students to agree to produce a stated number of assignments of either “excellent,” “great,” or “good” quality, an “entirely positive grading schema” that draws on Frances Zak’s claim that positive evaluations improved student “authority over their writing.”

The criteria for each assignment were developed in class discussion through an open voting process that resulted in general, if not absolute, agreement. Litterio provides the class-generated criteria for a resumé, which included length, format, and the expectation of “specific and strong verbs.” As the instructor, Litterio ultimately decided whether these criteria were met.

Mid-semester surveys indicated that students were evenly split in their preferences for traditional grading models versus the contract-grading model being applied. At the end of the semester, 15 of the 20 students expressed a preference for traditional grading.

Litterio coded the survey responses and discovered specific areas of resistance. First, some students cited the unfamiliarity of the contract model, which made it harder for them to “track [their] own grades,” in one student’s words. Second, students noted that the instructor’s role in applying the criteria did not differ appreciably from instructors’ traditional role, as it retained the “bias and subjectivity” the students associated with a single person’s definition of terms like “strong language.” Students wrote that “[i]t doesn’t really make a difference in the end grade anyway, so it doesn’t push people to work harder,” and that “it appears more like traditional grading where [the teacher] decide[s], not us.”

In addition, students resisted seeing themselves and their peers as qualified to generate valid criteria and to offer feedback on developing drafts. Students wrote of their desire for “more input from you vs. the class,” their sense that student-generated criteria were merely “cosmetics,” and their discomfort with “autonomy.” Litterio attributes this resistance to claiming expertise both to students’ actual novice status and to the nature of the course, which required students to write for different discourse communities because of their differing majors. She suggests that contract grading may be more appropriate for writing courses within majors, in which students may be more familiar with the specific nature of writing in a particular discipline.

However, students did confirm that the process of generating criteria made them more aware of the elements involved in producing exemplary documents in the different genres. Incorporating student input into the assessment process, Litterio believes, allows instructors to be more reflective about the nature of assessment in general, including the risk of creating a “yes or no . . . dichotomy that did not allow for the discussions and subjectivity” involved in applying a criterion. Engaging students throughout the assessment process, she contends, provides them with more agency and more opportunity to understand how assessment works. Student comments reflect an appreciation of having a “voice.”

This study, Litterio contends, challenges the assumption that contract grading is necessarily “more egalitarian, positive, [and] student-centered.” The process can still strike students as biased and based entirely on the instructor’s perspective, she found. She argues that the reflection on the relationship between student and teacher roles enabled by contract grading can lead students to a deeper understanding of “collective norms and contexts of their actions as they enter into the professional world.”


Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example, ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses on the basis of test scores alone, a placement that ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in their writing samples, high-school curricula, and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than does the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; the use of assessment data to design a curriculum that meets actual needs; the creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).