College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Worthy et al. Teacher Educators’ Perspectives on Dyslexia. RTE, Nov. 2018. Posted 01/05/2019.

Worthy, Jo, Catherine Lammert, Stacia L. Long, Cori Salmerón, and Vickie Godfrey. “‘What If We Were Committed to Giving Every Individual the Services and Opportunities They Need?’ Teacher Educators’ Understandings, Perspectives, and Practices Surrounding Dyslexia.” Research in the Teaching of English 53.2 (2018): 125-48. Print.

Jo Worthy, Catherine Lammert, Stacia L. Long, Cori Salmerón, and Vickie Godfrey discuss a study on approaches to dyslexia in teacher education. The authors note that while research has not been able to clearly define dyslexia or agree on an ideal intervention, many states are passing legislation that treats dyslexia as a specific condition with specific treatment protocols (125).

Worthy et al. address the discourse surrounding dyslexia through the Bakhtinian categories of “ideological becoming” and “internally persuasive discourse” as opposed to Bakhtin’s understanding of “authoritative discourse” (AD) (126). “AD” consists of dicta handed down by those claiming expertise; it tends to take over conversations and silence those it does not credential to speak (127). In the authors’ view, AD surrounding dyslexia is based on a medical model in which dyslexia is a narrowly defined “deficit,” which is described in medical terms and which can only be treated by those specifically trained to do so (127). This discourse, the authors state, views educators as inadequately informed and unqualified to deal with students diagnosed with the condition (130).

The authors, in contrast, address the issue through the “field of disability studies in education,” which sees “variation among learners as natural,” as well as “socially constructed” and influenced by “context and social interactions, as well as social, political, and historical systems and discourse” (127). “DisCrit” scholars or those practicing “disability critical race studies” further note the degree to which matters of “race, class, privilege, and power” affect how labels are assigned and addressed (126; 127-28).

Surveying research in dyslexia studies, the authors note that among the “top 10 most published authors, . . . none were educators” (126). According to Worthy et al., research has failed to find any specific causal or measurable factor that separates students believed to be dyslexic from other students in the reading continuum (128). Brain imaging studies have thus far been inconclusive (129).

Worthy et al. report consensus that “there is no best method for teaching reading” (128), yet many state legislatures have mandated specific treatments like the Orton-Gillingham program (O-G), even though its “multisensory” processes have not been shown to be effective (130). Programs that focus primarily on decoding, the authors state, also show little effect in themselves (130) and should be part of, rather than the core of, “comprehensive, meaning-based reading instruction” (129).

Worthy et al. position themselves as experienced public-school teachers and teacher-educators who began to question the current discourse on dyslexia when it failed to jibe with their own experiences. They began to find similar discomfort with the AD surrounding dyslexia among students and colleagues (130-31). For their study, they recruited 21 women and 4 men from a range of universities in Texas; the participants, who had many levels of experience both as teachers and as teacher-educators, engaged in semi-structured interviews (131). The authors explain their coding process, which yielded three “a priori” categories and three “inductive” categories (132).

“A priori” categories were “definitions and understanding about dyslexia”; “compliance with dyslexia policies”; and “confidence about dyslexia” (132). The researchers found that their interview subjects reflected the conflict between the AD of dyslexia and a more questioning stance that recognized that research did not provide the same degree of certainty as the prevalent AD (133). The participants reported increased official attention to the question of dyslexia and increased oversight of curricula (134). They reported complying with mandates but, in some cases, “present[ing] the state’s information about dyslexia with a broader discussion of struggle and literacy, where they could contextualize and complicate it” (134).

Participant response regarding “confidence about dyslexia” varied, with five of the educators “express[ing] unqualified confidence” in their ability to address the condition. The authors characterize the “remaining educators” as questioning their own experience in light of the dominant discourse (135); these teacher-educators “stopped short” of claiming they were prepared to work with students identified with the condition (135).

“Inductive analysis” of the interviews (136) led to three categories: teacher-educators’ expertise in teaching reading; their responses to AD; and their use of “critical perspectives” (132). Participants shared a belief that teaching reading should be an observation- and assessment-based, individualized process (136-37). In this view, decoding was important but only as part of a curriculum that engaged students in the whole process of reading (136). New teachers, the educators agreed, would benefit from a “more nuanced perspective” that would allow them to recognize their own ability to teach reading across many skill levels (137).

Participants challenged “the vague definition and subjective identification procedures” (137) that most felt led to “overidentification” and to early labeling that called for unnecessary interventions (138). Some felt that the dyslexia label could remove a stigma from reading difficulties; others saw being labeled as conveying a judgment of “something wrong” (138). The teacher-educators questioned the efficacy of programs like the O-G method that foreground “skill work” and interventions that remove students from classrooms to receive instruction characterized by “a lack of alignment” with classroom work (140). The authors note that these views accord with DisCrit analysis that favors “inclusion” rather than “segregation,” which AD seems to advocate (140).

Challenges to the exclusion of educator voices informed participants’ critical perspectives, with one respondent calling the medical community’s adherence to medical models “cult-like” (“Patrice,” qtd. in Worthy et al. 141). Participants noted that the problematic claim that dyslexic readers were highly creative and intelligent has actually made the label desirable for more affluent parents, with dyslexia “the socially acceptable learning disability” (141) that can shield children from “probable consequences of low achievement” (142). According to “Marty,” discrimination in labeling results in the view that “White kids are dyslexic. Black kids are stupid” (qtd. in Worthy et al. 142).

The authors argue that despite being positioned by the current AD as unqualified to teach students with identified reading disabilities, the teacher-educators they surveyed “are more than qualified—by virtue of their preparation and experience—to teach reading to all children” (142). They advocate for the role these educators can play in helping their preservice teaching students negotiate the rigid political landscape they will encounter when they take their knowledge about teaching reading into the schools (143).

Worthy et al. also recommend that proponents of critical perspectives adjust their use of jargon to communicate with wide audiences rather than falling back on a “righteous authority” of their own (144). Their hope is that research and practice in teaching reading can align more comprehensively, drawing on the contributions of classroom educators to complicate what they see as an ineffective, limited approach to the wide range of variation in children’s paths toward reading skill.



Earle, Chris S. Habermas and Religion in Public Life. CE, Nov. 2018. Posted 12/21/2018.

Earle, Chris S. “Religion, Democracy, and Public Writing: Habermas on the Role of Religion in Public Life.” College English 81.2 (2018): 133-54. Print.

Chris S. Earle discusses the issue of students’ inclusion of religion-based argument in writing classrooms. He links this concern to the problem of democratic deliberation in a diverse society in which religion plays an important role for many citizens.

He notes scholarship in composition regarding non-religious students’ resistance to argument drawn from religious belief and the concomitant problem of religious students’ need to bring their deep convictions to bear on questions of policy (134). He finds two often-used pedagogical approaches: encouraging critical thinking by asking students to recognize the existence of multiple viewpoints, and a focus on audience by developing reasons that would be persuasive to people who lack a religious commitment (134-35). In Earle’s view, these approaches do not address aspects of the problem that he considers additional “obligation[s] of democratic citizenship” (135).

To explore these obligations and suggest fruitful approaches to them, Earle proposes the “translation proviso” of Jürgen Habermas (135). In Earle’s reading, this theory recognizes the possible contradictions underlying “value pluralism.” Habermas finds religious conviction important in democratic life because it can provide “a counterweight to forces . . . that threaten to instrumentalize human life” (135). But, Earle writes, Habermas also contends that reasons given in public debate must ultimately find expression in “terms acceptable to all involved” (135). These tenets set up a tension between “inclusion and reciprocity,” concepts Earle presents as central to “translation” (136-37).

Inclusion, in this view, means that all voices are heard. Reciprocity requires all interlocutors to express these views in ways that audiences will accept. Paradoxically, Earle argues, the need for inclusion requires religious views to be honored, yet reciprocity requires religious views to be subject to “validity claims” that they may not be able to accommodate. The result can be that arguers end up bringing “private reason” to decision-making, resulting in an “irreducible moral pluralism” in which stakeholders’ insistence on being included clashes with the refusal to subject their viewpoints to full debate (137).

Earle presents John Rawls’s solution as the elimination of “reasonable comprehensive doctrines” from “public debate” (137). Citizens would be limited to arguing for their positions through “the public use of reason” (137). For Rawls, public reason is founded on widely shared democratic and constitutional principles, whereas for Habermas, public reason can include “any reason that can be ‘defended as being in the best interest of all considered as equal moral and political beings’” (Seyla Benhabib, qtd. in Earle 137).

According to Earle, both Rawls and Habermas offer the “translation proviso” as a means to overcome this problem. For Rawls, religion can enter public debate, but religion per se does not provide the kind of reasons that can be accepted across the broad audiences engaged in such debate. Religious arguers must, “over time,” produce “a public translation” that will lay out their claims in terms accessible to all (139). Earle draws on the example of Jeffrey Ringer’s student who linked his religious convictions to “the democratic principle of free will” (139).

For Habermas, Earle contends, this version of the proviso means that religious arguers may often find their positions “watered down”: was the student forced to “background his core beliefs in order to satisfy an audience or assignment requirements” (139)? If so, translation burdens religious arguers more than non-religious ones.

Earle writes that Habermas tackles this limitation of translation, first, by adding “an institutional filter” that would require public translation only in specific public settings like “courts, legislative bodies, and the discourse of elected officials and candidates” (140). Earle claims that for Habermas, this adjustment allows religion to work as a moral force in the larger public while being converted to what Habermas called “generally acceptable language” in formal policy-making environments (qtd. in Earle 141). Working with this distinction can encourage students to distinguish between claims based on doctrinaire religious authority and those appealing to a broader “moral insight” (142).

Earle recommends setting this process in motion by encouraging students to write for many different audiences, assessing how reasons may need to be translated for different contexts and genres (142-43). Still, he contends, excluding religious claims from formal decision-making contexts may cause religious students to be constrained in ways that non-religious students are not (143). As an approach to addressing this problem, Earle presents Habermas’s depiction of translation as “a cooperative task” (144). In this view, a process of “reciprocal-perspective taking” in which respondents “listen to each other, reflect upon the limits of faith and reason, and [are] willing to modify their proposals and commitments” can result in more equitable exchanges across divisions (144).

Earle cites the critique of Maeve Cooke that generating broadly accessible reasons, even through reciprocity, may prevent students from accepting reasons that do not match “what sounds familiar” or is “compatible with what they already know” (145). Reasons that embody difference, Earle notes, may often be those of “less powerful groups” (145). He posits that, responding to Habermas’s proviso, students working together to generate diverse claims may learn to hear a fuller range of voices. Instructors should especially help students locate “real opposing voices” rather than generating their arguments prior to engaging specific points of view (150; emphasis original).

To reinforce the emphasis on listening inherent in reciprocity, Earle examines Martin Luther King, Jr.’s “Letter from Birmingham Jail,” used by both Rawls and Habermas to illustrate translation (147-49). Earle illustrates the ways in which King articulated his understanding of the views of those who opposed his practice of civil disobedience before “drawing connections and identifying shared premises between God’s law and, when just, constitutional law” (148). Earle contrasts this act of translation with the rhetoric of Kim Davis, the Kentucky county clerk who refused to issue same-sex marriage licenses on the grounds of religious freedom. Paramount for Earle is the refusal of Davis and her supporters to listen to and examine in good faith the views of those she opposes, with the result that she did not try to justify her positions to those audiences as true translation and reciprocity would require (149).

In Earle’s view, Habermas’s understanding of translation would move writers away from seeking out opposing views simply to recognize or rebut them (150). He acknowledges that hoping students, regardless of their religious commitments, will truly hear views that they find unacceptable and, in the process, “critically reflect upon the partiality of their perspectives” (150) is an “ideal” rather than a common result (152). He urges accepting the role of religious as well as non-religious points of view as a crucial component of “accepting as unavoidable what Habermas refers to as the democratic confusion of voices” (152). In such an ideal, Earle writes, members of a democratic society “might find a basis for agreement and even consubstantiality on something other than the content of our beliefs” (152).



Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho, which took up the mandate to end remediation as usually defined and replaced it with a more effective placement model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements in ways that enhance student success. First, they can move to create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report place a premium on large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Pruchnic et al. cite scholars like Peggy O’Neill (2) and Richard Haswell (3) to posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in social studies to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members’ scores constituted the final score for each paper (9).

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring compared with the Regular group, while individual average scoring times for Research team members were less than half those of the Regular members (13).

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Witte, Alison. CMSs as Genres. C&C, Sept. 2018. Posted 11/20/2018.

Witte, Alison. “‘Why Won’t Moodle. . . ?’: Using Genre Studies to Understand Students’ Approaches to Interacting with User Interfaces.” Computers and Composition 49 (2018): 48-60. Web. 9 Nov. 2018.

Alison Witte addresses the difficulties her first-year students faced when they encountered the Course Management System (CMS) in use at her institution. She surveyed students in first-year courses over six semesters to discover the factors that may have caused these problems (50). Witte found that examining the CMS interface as a genre provided insights into how students interacted with the program.

The author notes that the use of a CMS has “become a normalized part of many educational institutions’ landscapes” (48). The program’s power to shape interactions between students, instructors, and the institution, she writes, can generate “tensions” (48). She describes Moodle, the CMS in place for her university, comparing its “static” features with the more interactive and responsive features of social media sites; she notes in particular the “teacher-driven design” that permits the instructor to determine what sections to create and to provide the content (49). Witte quotes a faculty mentor who supports the university’s commitment to Moodle because the students are familiar with it from high school and “like it,” even though, according to Witte, there is only “anecdotal” evidence behind this claim (49).

In Witte’s view, if students are indeed comfortable in electronic environments, they should not exhibit the level of difficulty she observes (49). Her survey investigates which kinds of interfaces students have experienced and how these experiences might influence their reactions to Moodle (50).

Drawing on genre theory, Witte proposes, highlights the ways an interface signals to users what behaviors and actions are acceptable, requiring users to determine the “appropriate response” in the rhetorical situation established by the interface (52). Citing Carolyn Miller, Witte considers genre “a way of understanding how a text responds to a particular recurring situation” (50). Just as Microsoft Word’s presentation of a blank page cues an essay-like response rather than a social-media post, the CMS signals certain kinds of “typified” actions (51).

Arguing that writing studies has not explored electronic interfaces through this theoretical lens, Witte contends that interfaces have generally been seen as tools to produce other things rather than as “text[s] with both expectations and formal conventions” of their own (50). Instructors, she proposes, are like other users of electronic environments in that their use of these familiar programs becomes “unconscious or invisible” because they are so accustomed to the process (51). Her study foregrounds the need for teachers to be more alert to the ways that their use of a CMS acts as a genre students must interpret, one that positions students in certain ways in the classroom environment (50). Teachers’ understanding of this interaction, she maintains, can help students use a CMS more effectively.

Witte notes two common models of CMS use. In many cases, the system attempts to “replicate” a classroom environment, allowing students to complete familiar academic tasks such as taking quizzes and completing assignments. A second model treats the CMS as a “repository” where students go to procure whatever they need for the class. These models share a “top-down” quality in that the teacher decides on the categories and sections and provides the material (52-53). The models limit students to responding in ways determined by the instructor and indicated by the conventions incorporated into the interface (53).

For Witte, a “guiding assumption” in the study was “that people learn unfamiliar genres by determining how they are like and unlike genres they know and by observing how the unfamiliar genre is used in context” (50). Hence, her survey asks the 68 participating students which interfaces they normally interact with (54). It also asks multiple-choice and open-ended questions about students’ experiences with Moodle, including ease of use and kinds of use across classes. Finally, students were asked what they liked about the CMS and what improvements they might suggest (54).

The majority of the participants were in their first college semesters. Witte proposes that while these students might be among the most likely to report problems with the CMS, surveying this particular population yielded good information on how best to help students navigate their early exposure to such platforms (54).

Data revealed that students used a variety of social media, Word tools for producing documents, and “Miscellaneous Web-based Interfaces” like iTunes, eBay, or YouTube (54). They most commonly relied on the CMS to “complete course work and to find the information necessary” to do so (55). All of the students used Moodle in some of their classes. Grounded-theory coding of the open-ended responses produced four categories of “likes” that focused on availability of materials and information and ease of completing tasks. Students’ suggestions for improvement addressed usability issues, “Mobile Device Compatibility,” and inconsistency in the ways teachers used the CMS (54).

Analysis of her data suggests to Witte that students receive conflicting genre cues about the function of the CMS, sometimes assuming it is more like social media sites than it is in practice and in fact asking for more interactivity with their mobile devices and other media choices (56). They may see certain cues as inviting informal, interactive responses while others require a more “school/professional response” in which they become “passive consumer[s] of information” (56). In Witte’s view, instructors do not always articulate clearly exactly what role the CMS should play in their individual courses; moreover, students may approach the CMS with a different idea about its purposes than the instructor intends (57).

Seeing a CMS as a genre, Witte contends, helps instructors think about their use of the program in terms of audience, redirecting the focus from “its technological affordances to what it does or how it is used in particular context for particular people” (57). She urges instructors to plan CMS structure in accordance with course design, for example, arranging a course built around weekly schedules by week, and arranging a course meant to provide materials without regard to due dates by topic. The survey reveals that students may need specific direction about the type of response indicated by CMS features, like text boxes or discussion forums (57). Instructors are urged to clarify their own purposes and expectations for how students use the resource and to communicate these explicitly (57-58).

Witte also argues that casting a CMS as a genre provides an opportunity to introduce students to genre theory and to understand through a concrete example how audience and purpose relate to the conventions of a particular form. In this view, students can explore how to use their exposure to other genres to situate new genres like a CMS in their contexts when they encounter them (58); they may then carry the experience of navigating a CMS into their interactions with other texts they may be called on to respond to or produce.



Corrigan, Paul. “Conclusion to Literature.” TETYC Sept. 2018. Posted 11/06/2018.

Corrigan, Paul T. “Conclusion to Literature.” Teaching English in the Two-Year College 46.1 (2018): 30-48. Print.

Paul T. Corrigan argues for a reassessment of the value and purpose of the “Introduction to Literature” course that is part of the general-education curriculum at many higher-learning institutions.

Corrigan expresses concern that the understanding of many humanities scholars and teachers that reading “literature” is an important life activity is not widely shared by the public (30). Corrigan locates twenty-four “apologias” for literature published since 2000 that argue that such texts “may help us change or understand or give meaning or perspective to our lives” (30), but notes that only people already convinced of the value of literature will read these books (31). His study of “nineteen current anthologies and eighty-two available syllabi” for the introductory college course indicates to him that students taking the course are not widely encouraged to appreciate literature as an activity that will bring meaning into their lives (31, 37).

In Corrigan’s view, students taking the college course have already been introduced to literature, and in fact have been widely exposed to such reading, throughout their elementary and high-school experiences (37). Because “Introduction to Literature” is actually the last literature course most students will take, Corrigan argues that the standard course is a “conclusion” to literature rather than a beginning (37).

Introduction to Literature, he maintains, is both among “the most commonly taught” and “most commonly taken” college courses across institutions (32). For Corrigan, that so many students take this course makes it a powerful platform for helping students see the value of literature; students who will then leave college with a positive impression of literature will far outnumber those who go on from the course to become majors and can influence public perception of humanistic learning throughout their lives (32).

To make the introductory course fulfill this purpose, Corrigan proposes shifting the focus from a preponderant review of the “means” of reading literature, such as formal elements of analysis and criticism, to attention to the “ends” of such reading (34), that is, the “why” of reading, or in the words of M. Elizabeth Sargent, “For what?” Teachers of literature, Sargent contends, should have “at least one thoughtful, evolving committed answer to this question” (qtd. in Corrigan 33).

Corrigan acknowledges that his sample permits only an “indirect peek” into the presentation of the ends of literary instruction, but characterizes his findings as “highly suggestive and instructive” (34). His analysis of the anthologies and syllabi categorizes the sample using four terms.

Materials in which attention to the ends/why issue does not appear at all fall under the classification “absent.” He gives as an example an anthology that responds to the question “Who needs it [poetry]?” with the comment that the “study of poetry” is the collection’s aim (qtd. in Corrigan 34-35; emendation in Corrigan; emphasis original). A syllabus in this category suggests that “‘an appreciation of literature’ may benefit ‘civilization’” and states that what a student will take from the class is “up to you” (qtd. in Corrigan 35). Twenty-one percent of the anthologies and 51% of the syllabi fell into this group (34).

Materials containing “nascent” references to the reason for reading literature made up 47% of the anthologies and 37% of the syllabi. These materials included short discussions or mentions of the value of literature, such as “a few paragraphs” in introductory sections or specific but short statements in course goals (35).

Corrigan placed materials in which “the question of why literature matters [is] one significant topic among others, although not a pervasive or central concern” in his category of “present” (35). Twenty-six percent (5 of 19) of the anthologies met this criterion, and 10% (8 of 82) of the syllabi did so (35). Corrigan gives examples of how these teaching artifacts explicitly invited students to connect their reading experience to their lives (35-36).

Only a single anthology and two syllabi fell into the final category, “emphasized” (36). Corrigan delineates how Literature for Life, by X. J. Kennedy, Dana Gioia, and Nina Revoyr, “foreground[s]” the purpose of reading literature as a principal focus of the text (36). A syllabus from Western Michigan University builds connections to students’ lives into its course theme of “literary representations of food” with specific assignments asking students to address the topic in their own experiences (36).

In Corrigan’s view, recognizing that a college Introduction to Literature is more likely to be the “last time [most students] will spend any serious time thinking about literature” warrants recasting the course as “Conclusion to Literature” (37). He argues that the technical disciplinary processes of literary study can still be incorporated but should be used to enhance students’ ability to relate to and connect with the texts they read (40); he maintains that using the course to develop students’ ability to value literature will equip them with more incentive to read and value it in the future “than any amount of knowledge could provide” (38).

Quoting Karen Manarin et al., Corrigan agrees that “merely telling” students how literature matters is insufficient; he calls for pedagogy actively designed to draw out applications to students’ lives. His overview of his own course includes examples of assignments, paper prompts, and activities such as visiting nature centers in conjunction with reading nature poems (39). Writing that teachers may take for granted the importance of the “ends” of literature, he argues that re-seeing the introductory course as a conclusion “attends to, rather than assumes, those ends” (38).




Sills, Ellery. Creating “Outcomes 3.0.” CCC, Sept. 2018. Posted 10/24/2018.

Sills, Ellery. “Making Composing Policy Audible: A Genealogy of the WPA Outcomes Statement 3.0.” College Composition and Communication 70.1 (2018): 57-81. Print.

Ellery Sills provides a “genealogy” of the deliberations involved in the development of “Outcomes 3.0,” the third revision of the Council of Writing Program Administrators’ Outcome Statement for First-Year Composition (58). His starting point is “Revising FYC Outcomes for a Multimodal, Digitally Composed World,” a 2014 article by six of the ten composition faculty who served on the task force to develop Outcomes (OS) 3.0 (57).

Sills considers the 2014 article a “perfectly respectable history” of the document (58), but argues that such histories do not capture the “multivocality” of any policymaking process (59). He draws on Chris Gallagher to contend that official documents like the three Outcomes Statements present a finished product that erases debates and disagreements that go into policy recommendations (59). Sills cites Michel Foucault’s view that, in contrast, a genealogy replaces “the monotonous finality” (qtd. in Sills 59) of a history by “excavat[ing] the ambiguities” that characterized the deliberative process (59).

For Sills, Outcomes 3.0 shares with previous versions of the Outcomes Statement the risk that it will be seen as “hegemonic” and that its status as an official document will constrain teachers and programs from using it to experiment and innovate (75-76). He argues that sharing the various contentions that arose as the document was developed can enhance its ability to function as, in the words of Susan Leigh Star, a document of “cooperation without consensus” (qtd. in Sills 73) that does not preclude interpretations that may not align with a perceived status quo (76). Rather, in Sills’s view, revealing the different voices involved in its production permits Outcomes 3.0 to be understood as a “boundary object,” that is, an object that is

strictly defined within a particular community of practice, but loosely defined across different communities of practice. . . . [and that] allows certain terms and concepts . . . to encompass many different things. (74)

He believes that “[k]eeping policy deliberations audible” (76) will encourage instructors and programs to interpret the document’s positions flexibly as they come to see how many different approaches were brought to bear in generating the final text.

Sills invited all ten task members to participate in “discourse-based” interviews. Five agreed: Dylan Dryer, Susanmarie Harrington, Bump Halbritter, Beth Brunk-Chavez, and Kathleen Blake Yancey (60-61). Discussion focused on deliberations around the terms “composing, technology, and genre” (61; emphasis original).

Sills’s discussion of the deliberations around “composing” focuses on the shift from “writing” as a key term to a less restrictive term that could encompass many different ways in which people communicate today (61). Sills indicates that the original Outcomes Statement (1.0) of 2000 made digital practices a “residual category” in comparison to traditional print-based works, while the 3.0 task force worked toward a document that endorsed both print and multimodal practices without privileging either (63).

Ideally, in the interviewees’ views, curricula in keeping with Outcomes 3.0 recognize composing’s “complexity,” regardless of the technologies involved (65). At the same time, in Sills’s analysis, the multiplicity of practices incorporated under composing found common ground in the view, in Dryer’s words, that “we teach writing, we’re bunch of writers” (qtd. in Sills 65).

Sills states that the “ambiguity” of terms like “composing” served not only to open the door to many forms of communicative practice but also to respond to the “kairotic” demands of a document like Outcomes 3.0. Interviewees worried that naming specific composing practices would result in guidelines that quickly fell out of date as composing options evolved (64).

According to Sills, interviews about the deliberations over genre revealed more varied attitudes than those about composing (66). In general, the responses Sills records suggest a movement away from seeing genre as fixed “static form[s]” (67) calling for a particular format toward recognizing genres as fluid, flexible, and responsive to rhetorical situations. Sills quotes Dryer’s claim that the new document depicts “students and readers and writers” as “much more agentive”; “genres change and . . . readers and writers participate in that change” (qtd. in Sills 67). Halbritter emphasizes a shift from “knowledge about” forms to a process of “experiential learning” as central to the new statement’s approach (68). For Harrington, the presentation of genre in the new document reflects attention to “habits of mind” such as rhetorical awareness and “taking responsibility for making choices” (qtd. in Sills 69).

Brunk-Chavez’s interview addresses the degree to which, in the earlier statements, technology was handled as a distinct element when genre was still equated primarily with textual forms. In the new document, whatever technology is being used is seen as integral to the genre being produced (69). Moreover, she notes that OS 3.0’s handling of genre opens it to types of writing done across disciplines (70).

She joins Yancey, however, in noting the need for the document to reflect “the consensus of the field” (72). While there was some question as to whether genre as a literary or rhetorical term should even be included in the original OS, Yancey argues that the term’s “time has come” (71). Yet the interviews capture a sense that not every practitioner in composition shares a common understanding of the term and that the document should still be applicable, for example, to instructors for whom “genre” still equates with modes (71).

In addressing this variation in the term’s function in practice, Sills notes Yancey’s desire for OS 3.0 to be a “bridging document” that does not “move too far ahead of where the discipline is,” linking scholarly exploration of genre with the many ways practitioners understand and use the term (72).

Sills considers challenges that the OS 3.0 must address if it is to serve the diverse and evolving needs of the field. Responding to concerns of scholars like Jeff Rice that the document imposes an ultimately conservative “ideology of generality” that amounts to a “rejection of the unusual” (qtd. in Sills 75), Sills acknowledges that the authority of the statement may prevent “subordinate communities of practice” like contingent faculty from “messing around with” its recommendations. But he contends that the task force’s determination to produce flexible guidelines and to foster ongoing revision can encourage “healthy resistance” to possible hegemony (76).

He further recommends specific efforts to expand participation, such as creating a Special Interest Group or a “standing institutional body” like an Outcomes Collective with rotating membership from which future task forces can be recruited on a regular timetable. Such ongoing input, he contends, can both invite diversity as teachers join the conversation more widely and ensure the kairotic validity of future statements in the changing field (77-78).