College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho, where the mandate to end remediation as it is usually defined became the occasion for replacing it with a more effective placement model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements while enhancing student success. First, they can create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as pressing when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
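
The article describes what the algorithm takes in and what it returns rather than how the matching itself is computed. Purely as an illustration of how such a matcher might work, the sketch below combines a student’s reflection and context answers into a primary and secondary recommendation; the field names, thresholds, and course labels are hypothetical and are not drawn from The Write Class.

```python
# Hypothetical sketch of a rules-based placement matcher, loosely following the
# phases Estrem et al. describe; every field name, threshold, and label here is
# invented for illustration and is not taken from The Write Class itself.
from dataclasses import dataclass

COURSES = [
    "Course A (regular first-year writing course)",
    "Course B (regular course plus a small-group workshop hour)",
    "Course C (second-semester research-based course)",
]

@dataclass
class StudentResponses:
    confidence: int           # phase-two reflection on reading/writing habits, 1 (low) to 5 (high)
    research_experience: int  # prior experience with source-based writing, 1 to 5
    outside_demands: int      # expected semester demands from family and work, 1 (light) to 5 (heavy)

def place(s: StudentResponses):
    """Return a (primary, secondary) placement recommendation."""
    # Strong self-reported preparation and a manageable semester point toward the
    # research-based course, with the regular course as the secondary option.
    if s.confidence + s.research_experience >= 8 and s.outside_demands <= 3:
        return COURSES[2], COURSES[0]
    # Lower confidence or heavy outside demands suggest the workshop-supported section.
    if s.confidence <= 2 or s.outside_demands >= 4:
        return COURSES[1], COURSES[0]
    return COURSES[0], COURSES[1]

print(place(StudentResponses(confidence=4, research_experience=2, outside_demands=2)))
```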

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Citing scholars like Peggy O’Neill (2) and Richard Haswell (3), they posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in the social sciences to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of malpractice claims against surgeons or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).
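
The authors do not report which sampling calculator or parameters were used, but the 290 figure is consistent with the standard Cochran sample-size formula at a 95% confidence level and a ±5% margin of error, adjusted with a finite-population correction for the 1,174 submitted essays. The quick check below makes those confidence and margin values explicit as assumptions.

```python
import math

N = 1174   # essays submitted
z = 1.96   # z-score for a 95% confidence level (assumed)
p = 0.5    # most conservative assumed response proportion
e = 0.05   # +/-5% margin of error (assumed)

n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size: ~384.2
n = n0 / (1 + (n0 - 1) / N)              # finite-population correction: ~289.6
print(math.ceil(n))                      # -> 290, matching the reported sample size
```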

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding,” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the averaged scores of the five team members constituted the final score for each paper (9).

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring compared with the Regular group, and the average individual scoring time for Research team members was less than half that of the Regular members (13).
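
As a rough illustration of the comparison being made (this is not the authors’ code or data), each essay’s thin-slice score can be computed as the mean of the five Research raters’ ratings and then correlated with the Regular team’s whole-essay score; the ratings below are invented.

```python
# Illustration only: invented ratings showing how averaged thin-slice scores
# can be correlated with whole-essay scores, as in Pruchnic et al.'s comparison.
import numpy as np
from scipy.stats import pearsonr

# Rows are essays; columns are five Research raters' thin-slice ratings.
research_slices = np.array([
    [3, 2, 3, 3, 2],
    [1, 2, 2, 1, 2],
    [4, 3, 4, 4, 3],
    [2, 2, 3, 2, 2],
    [3, 4, 3, 4, 4],
    [2, 3, 2, 2, 3],
])
research_scores = research_slices.mean(axis=1)  # averaged rating = final thin-slice score

# The Regular team's whole-essay scores for the same essays (also invented).
regular_scores = np.array([3, 2, 4, 2, 4, 3])

r, p_value = pearsonr(research_scores, regular_scores)
print(f"correlation between the two methods: r = {r:.2f}, p = {p_value:.3f}")
```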

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Ray et al. Rethinking Student Evaluations of Teaching. Comp Studies Spring 2018. Posted 08/25/2018.

Ray, Brian, Jacob Babb, and Courtney Adams Wooten. “Rethinking SETs: Retuning Student Evaluations of Teaching for Student Agency.” Composition Studies 46.1 (2018): 34-56. Web. 10 Aug. 2018.

Brian Ray, Jacob Babb, and Courtney Adams Wooten report a study of Student Evaluations of Teaching (SETs) across a range of institutions. The researchers collected 55 different forms, 45 of which were institutions’ generic forms, while 10 were designed specifically for writing classes. They coded 1,108 different questions from these forms in order to determine what kinds of questions were being asked (35).

The authors write that although SETs and their use, especially in personnel decisions, are of concern in rhetoric and composition, very little scholarship in the field has addressed the issue (34-35). They summarize a history of student evaluations as tools for assessment of teachers, beginning with materials from the 1920s. Early SETs focused heavily on features of personality such as “wit,” “tact,” and “popularity” (38), as well as physical appearance (39). This focus on “subjective” characteristics of teachers asked students to judge “factors that neither they nor the instructor had sole control over and that they could do little to affect” (38).

This emphasis persisted throughout the twentieth century. Herbert Marsh conducted “numerous studies” in the 1970s and 1980s and eventually created the Student Evaluation of Education Quality form (SEEQ) in 1987 (35). This instrument asked students about nine features:

[L]earning, enthusiasm, organization and clarity, group interaction, individual rapport, breadth of coverage, tests and grading, assignments, and difficulty (39)

The authors contend that these nine factors substantively guide the SETs they studied (35), and they claim that, in fact, in important ways, “current SET forms differ little from those seen in the 1920s” (40).

Some of composition’s “only published conversations about SETs” revolved around workshops conducted by the Conference on College Composition and Communication (CCCC) from 1956 through 1962 (39). The authors report that instructors participating in these discussions saw the forms as most appropriate for “formative” purposes; very few institutions used them in personnel matters (39).

Data from studies of SETs in other fields reveal some of the problems that can result from common versions of these measures (37). The authors state that studies over the last ten years have not been able to link high teacher ratings on SETs with improved student learning or performance (40). Studies point out that many of the most common categories, like “clarity and fairness,” remain subjective, and that students consistently rank instructors on personality rather than on more valid measures of effectiveness (41).

Such research documents bias related to gender and ethnicity: in one study asking students to assess “a hypothetical curriculum vitae according to teaching qualifications and expertise,” female African-American teachers were rated lowest (42). Male instructors are more commonly praised for their “ability to innovate and stimulate critical thought”; women are downgraded for failing to be “compassionate and polite” (42). Studies showed that elements like class size and workload affected results (42). Physical attractiveness continues to influence student opinion, as does the presence of “any kind of reward,” like lenient grading or even supplying candy (43).

The authors emphasize their finding that a large percentage of the questions they examined asked students about either some aspect of the teacher’s behavior (e.g., “approachability,” “open-mindedness” [42]) or what the teacher did (“stimulated my critical thinking” [45]). The teacher was the subject of nearly half of the questions (45). The authors argue that “this pattern of hyper-attention” (44) to the teacher casts the teacher as “solely responsible” for the success or failure of the course (43). As a result, in the authors’ view, students receive a distorted view of agency in a learning situation. In particular, they are discouraged from seeing themselves as having an active role in their own learning (35).

The authors contend that assigning so much agency to a single individual runs counter to “posthumanist” views of how agency operates in complex social and institutional settings (36). In this view, many factors, including not only all participants and their histories and interests but also the environment and even the objects in the space, play a part in what happens in a classroom (36). When SET questions fail to address this complexity, the authors posit, issues of validity arise when students are asked to pass judgment on subjective and ambiguously defined qualities as well as on factors beyond the control of any participant (40). Students encouraged to focus on instructor agency may also misjudge teaching that opts for modern “de-center[ed]” teaching methods rather than the lecture-based instruction they expect (44).

Ray et al. note that some programs ask students about their own level of interest and willingness to participate in class activities and advocate increased use of such questions (45). But they particularly advocate replacing the emphasis on teacher agency with questions that encourage students to assess their own contributions to their learning experience as well as to examine the class experience as a whole and to recognize the “relational” aspects of a learning environment (46). For example:

Instead of asking whether instructors stimulated critical thought, it seems more reasonable to ask if students engaged in critical thinking, regardless of who or what facilitated engagement. (46; emphasis original)

Ray et al. conclude that questions that isolate instructors’ contributions should lean toward those that can be objectively defined and rated, such as punctuality and responding to emails in a set time frame (46).

The authors envision improved SETs, like those of some programs, that are based on a program’s stated outcomes and that ask students about the concepts and abilities they have developed through their coursework (48). They suggest that programs in institutions that use “generic” evaluations for broader analysis or that do not allow individual departments to eliminate the official form should develop their own parallel forms in order to gather the kind of information that enables more effective assessment of classroom activity (48-49).

A major goal, in the authors’ view, should be questions that “encourage students to identify the interconnected aspects of classroom agency through reflection on their own learning” (49).

 



Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on shorter memos on the four “linked essays” they wrote for the class. These “reflection-in-action” memos, using the terminology of Kathleen Blake Yancey, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.

In order to assess the materials, the team created a rubric judging the revisions as either “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent responses, 55% that were adequate, and 26% that were inadequate (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” where students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have increased in their awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on student response to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed them to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues called up by the teacher feedback and respond to these concerns in the light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to first person in the reflective memos, paired with visible revisions, as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress-narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original), and to provide students with a visual means of seeing their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).

 



Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-381. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. They cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as that of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from first to final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “‘bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, collecting “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that “the portrait is largely one of stability,” but note students’ ability to vary choices “when needed” (369). Statistically significant differences showing “change[s] across time” ranged from 13% in Cohort 1 to 29% in Cohort 2 and 16% in Cohort 3. An example of a stable strategy is the use of “one explicit thesis at the beginning” of a paper (371); a strategy “rarely” used was “a thesis statement [placed] inductively at the middle or end” (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.
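
The article reports these congruences and incompatibilities statistically rather than algorithmically; as a minimal sketch of the underlying computation (with invented presence/absence codings, not the study’s data), correlating facet columns coded 0/1 across essays flags which features co-occur and which repel:

```python
# Minimal sketch of facet co-occurrence analysis on presence/absence codings.
# The facet names echo Donahue and Foster-Johnson's examples; the data are invented.
import pandas as pd

codings = pd.DataFrame({
    "purpose_analyze_object": [1, 1, 0, 1, 0, 1, 0, 0],
    "interpretive_thesis":    [1, 1, 0, 1, 0, 1, 0, 1],
    "evaluative_thesis":      [0, 0, 1, 0, 1, 0, 1, 0],
})

# Pearson correlation on 0/1 codings (the phi coefficient): positive values mark
# congruent facet pairs, negative values incompatible ones.
print(codings.corr().round(2))
```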

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming, “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberate reuse of strategies and whether or not “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new

capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis. (378)

 



Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined, as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation impacted students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed seeking feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of the writing center for drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.



Webber, Jim. Reframing vs. Artful Critique of Reform. CCC, Sept. 2017. Posted 10/31/2017.

Webber, Jim. “Toward an Artful Critique of Reform: Responding to Standards, Assessment, and Machine Scoring.” College Composition and Communication 69.1 (2017): 118-45. Print.

Jim Webber analyzes the responses of composition scholars to the reform movement promoted by entities like the Collegiate Learning Assessment (CLA) and Complete College America (CCA). He notes that the standardization agenda of such groups, intended to improve the efficiency of higher education, has suffered setbacks; for example, many states have rejected the Common Core State Standards (118-19). However, in Webber’s view, these setbacks are temporary and will be followed by renewed efforts by testing and measurement agencies to impose their own criteria for student success (119).

The standardization these groups urge on higher education will, they claim, give parents and students better information about institutions and will ultimately serve as grounds for such moves as “performance funding” (119). The overall goal of such initiatives is to move students through college as quickly as possible, especially into majors (119).

Webber recognizes two prongs of composition’s response to such pressures to portray “college students and parents as consumers” (119). One thread urges “reframing” or “redirecting” the efforts of the testing industry and groups like CLA and CCA. For Webber, this viewpoint adopts a “realist style.” Scholars who espouse reframing urge that compositionists work within the current realities created by the power of the testing and standardization apparatus to “expand” the meanings of terms like “college readiness” (120), adjusting them in ways that reflect composition’s inclusive, humanistic values (122)–that is, in Frank Farmer’s term, “insinuat[ing]” the professional ethos of composition and its authority into the standardization apparatus (qtd. in Webber 122).

Scholars who adopt this realist style, Webber claims, “figur[e] public policy as accommodation to the world” (141n5); moreover, in Webber’s view, they accept the description of “the way the world is” (133) put forward by CCA and others as “irreducibly competitive” and thus “[reduce] the scope of policy values to competition, efficiency, and instrumentality” (141n5).

Webber cites scholars in this vein who contend that the protests of scholars and writing professionals have been and will be effectively “ignored” by policymakers (137). More productive, in this view, is collaboration that will at least provide “a seat at the policy table,” giving professionals a chance to infuse the debate with their values (133).

Webber presents the 2011 Framework for Success in Postsecondary Writing as an example of how the reframing position “work[s] within the limits established by the dominant discourse of reform” (123). He notes that Bruce Comiskey was unable to discern any “apparent difference” between the aspirations of the Framework and those of the reform movement (125; emphasis original). For Webber, this approach sets up composition professionals as “competition” for the testing industry as the experts who can make sure students meet the reformers’ criteria for successful learning (124). Reframing in this way, Webber says, requires “message management” (123) to make sure that the response’s “strategic” potential is sustained (121).

Scholars who urge reframing invoke Cornel West’s “prophetic pragmatism” (122), which requires them to:

think genealogically about specific practices in light of the best available social theories, cultural critiques, and historiographic insights and to act politically to achieve certain moral consequences in light of effective strategies and tactics. (qtd. in Webber 122)

Webber contends that reframers interpret this directive to mean that “public critique” by compositionists “cannot deliver the consequences they desire” (123; emphasis original). Thus, a tactical approach is required.

The second thread in compositionists’ response to the reform movement is that of critique that insists that allowing the reform industry to set the terms and limits of the discussion is “to grant equivalence between our professional judgments and those of corporate-political service providers” (125-26). Webber quotes Judith Summerfield and Philip M. Anderson, who argue that “managing behavior and preparing students for vocations” does not accord with “a half-century (at the least) of enlightened classroom study and socio-psycholinguistic research” (qtd. in Webber 125).

In Webber’s view, the strands of reframing and critique have reached a “stalemate” (126). In response to the impasse, Webber explores the tradition of pragmatism, drawing on John Dewey and others. He argues that reframers call on the tenets of “melioration” and “prophetic critique” (127). “Meliorism,” according to Webber’s sources, is a linguistic process in that it works toward improving conditions through addressing the public discourse (127). In discussing West’s prophetic pragmatism as a form of “critical melioration,” Webber focuses on the “artfulness” of West’s concept (128).

Webber sees artfulness as critique “in particular contexts” in which ordinary people apply their own judgments of the consequences of a theory or policy based on the effects of these theories or policies on their lives (128-29). An artful critique invites public participation in the assessment of policies, an interaction that, according to West, functions as “antiprofessionalism,” not necessarily for the purpose of completely “eliminating or opposing all professional elites” but rather to “hold them to account” (qtd. in Webber 129).

Webber argues that proponents of reframing within composition have left out this aspect of West’s pragmatism (128). Webber’s own proposal for an artful critique involves encouraging such active participation by the publics actually affected by policies. He contends that policymakers will not be able to ignore students and parents as they have composition professionals (137).

His approach begins with “scaling down” by inviting public inquiry at a local level, then “scaling up” as the conversation begins to trigger broader responses (130). He presents the effects of the 2015 student protests at the University of Missouri as an example of how local action that challenges the power of elites can have far-reaching consequences (137-38). Compositionists, he maintains, should not abandon critique but should “expand our rhetoric of professionalism to engage the antiprofessional energy of local inquiry and resistance” (138).

As a specific application of his view, Webber provides examples of how composition professionals have enlisted public resistance to machine-scoring of student writing. As students experience “being read” by machines, he contends, they become aware of how such policies do not mesh with their concerns and experiences (137). This awareness engages them in critically “problematizing” their perspectives and assumptions (131). In the process, Webber argues, larger, more diverse audiences are encouraged to relate their own experiences, leading to “a broader public discussion of shared concerns” (131).

For Webber, drawing on the everyday judgments of ordinary people as to the value of policies put forward by professionals contrasts with the desire to align composition’s values with those of the standardization movement in hopes of influencing the latter from within. Opening the debate beyond strategic professionalism can generate a pragmatism that more nearly fits West’s prophetic ideals and that can “unsettle the inevitability of reform and potentially authorize composition’s professional perspectives” in ways that reframing the terms of the corporate initiatives cannot (135).