College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as “awareness of one’s cognitive processes, such as prioritizing and executing tasks” (86), and review extensive research dating to the 1970s that explores how this concept has been articulated and developed. They state that the literature supports the conclusion that “college students’ metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing” (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of “idea units which are specific unique ideas within each student’s response” (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into “Procedural Knowledge,” or “the knowledge necessary to carry out the procedure or process of writing,” and “Declarative Knowledge,” or statements about “the characteristics of effective writing” (89). Within the categories, responses were coded as addressing “substantive procedures” having to do with the process itself and “production procedures,” relating to the “form of writing,” e.g., spelling and grammar (89).
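The coding scheme lends itself to a small illustration. The sketch below is not the authors’ instrument; the four category cells come from the article, but the keyword lists and the classify() helper are hypothetical, meant only to show how an “idea unit” might be sorted into Procedural vs. Declarative Knowledge and substantive vs. production procedures.

```python
# Illustrative only -- not the study's coding protocol. The four cells mirror
# the article's categories; the keywords and matching rule are invented.

CODING_SCHEME = {
    ("Procedural", "substantive"): ["plan", "brainstorm", "draft", "revise"],
    ("Procedural", "production"): ["proofread", "check spelling", "fix grammar"],
    ("Declarative", "substantive"): ["clear", "focused", "organized", "audience"],
    ("Declarative", "production"): ["correct grammar", "correct spelling", "punctuation"],
}

def classify(idea_unit: str):
    """Return every (knowledge type, procedure type) cell whose keywords appear."""
    unit = idea_unit.lower()
    return [cell for cell, keywords in CODING_SCHEME.items()
            if any(keyword in unit for keyword in keywords)]

print(classify("Effective writers plan before they draft."))
# [('Procedural', 'substantive')]
```

In the study itself these judgments were made by the researchers, not by keyword matching; the point here is only the shape of the taxonomy.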

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process, such as that of John Hayes and Linda Flower and of M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw “Clarity and Focus” and “Audience” as important characteristics of effective writing (98). Grammar and Spelling, the “production” features, played a larger role here than they did in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched that of previous scholarship on students’ metaknowledge of effective writing, which shows some concern with the finished product and a possibly “insufficient” focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).
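Coh-Metrix itself is a proprietary online tool, but one of the variables named above, lexical diversity, can be approximated with a simple type-token ratio. The sketch below is only a rough stand-in for that single index and is not how Coh-Metrix computes it.

```python
# Rough stand-in for one Coh-Metrix-style index, lexical diversity, computed
# here as a plain type-token ratio (distinct words / total words). Coh-Metrix
# reports many more indices and uses its own formulas; this is illustrative.

import re

def lexical_diversity(text: str) -> float:
    """Type-token ratio of a text; returns 0.0 for empty input."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

sample = "Effective writing is clear writing that keeps its audience in mind."
print(round(lexical_diversity(sample), 2))  # 0.91
```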

These analyses revealed “no relationship . . . between their patterns [of] knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research suggesting the importance of metacognitive knowledge because their methodology did not use human raters and did not factor in students’ beliefs about writing or ask why they responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than “pre-established inventor[ies]” would have elicited (100). They maintain that their methods “controlled the measurement errors” better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)

 



Webber, Jim. Reframing vs. Artful Critique of Reform. Sept. CCC, 2017. Posted 10/31/2017.

Webber, Jim. “Toward an Artful Critique of Reform: Responding to Standards, Assessment, and Machine Scoring.” College Composition and Communication 69.1 (2017): 118-45. Print.

Jim Webber analyzes the responses of composition scholars to the reform movement promoted by entities like the Collegiate Learning Assessment (CLA) and Complete College America (CCA). He notes that the standardization agenda of such groups, intended to improve the efficiency of higher education, has suffered setbacks; for example, many states have rejected the Common Core State Standards (118-19). However, in Webber’s view, these setbacks are temporary and will be followed by renewed efforts by testing and measurement agencies to impose their own criteria for student success (119).

The standardization these groups urge on higher education will, they claim, give parents and students better information about institutions and will ultimately serve as grounds for such moves as “performance funding” (119). The overall goal of such initiatives is to move students through college as quickly as possible, especially into majors (119).

Webber recognizes two prongs of composition’s response to such pressures to portray “college students and parents as consumers” (119). One thread urges “reframing” or “redirecting” the efforts of the testing industry and groups like CLA and CCA. For Webber, this viewpoint adopts a “realist style.” Scholars who espouse reframing urge that compositionists work within the current realities created by the power of the testing and standardization apparatus to “expand” the meanings of terms like “college readiness” (120), adjusting them in ways that reflect composition’s inclusive, humanistic values (122); that is, in Frank Farmer’s term, “insinuat[ing]” the professional ethos of composition and its authority into the standardization apparatus (qtd. in Webber 122).

Scholars who adopt this realist style, Webber claims, “figur[e] public policy as accommodation to the world” (141n5); moreover, in Webber’s view, they accept the description of “the way the world is” (133) put forward by CCA and others as “irreducibly competitive” and thus “[reduce] the scope of policy values to competition, efficiency, and instrumentality” (141n5).

Webber cites scholars in this vein who contend that the protests of scholars and writing professionals have been and will be effectively “ignored” by policymakers (137). More productive, in this view, is collaboration that will at least provide “a seat at the policy table,” giving professionals a chance to infuse the debate with their values (133).

Webber presents the 2011 Framework for Success in Postsecondary Writing as an example of how the reframing position “work[s] within the limits established by the dominant discourse of reform” (123). He notes that Bruce Comiskey was unable to discern any “apparent difference” between the aspirations of the Framework and those of the reform movement (125; emphasis original). For Webber, this approach sets up composition professionals as “competition” for the testing industry, positioning them as the experts who can make sure students meet the reformers’ criteria for successful learning (124). Reframing in this way, Webber says, requires “message management” (123) to make sure that the response’s “strategic” potential is sustained (121).

Scholars who urge reframing invoke Cornel West’s “prophetic pragmatism” (122), which requires them to:

think genealogically about specific practices in light of the best available social theories, cultural critiques, and historiographic insights and to act politically to achieve certain moral consequences in light of effective strategies and tactics. (qtd. in Webber 122)

Webber contends that reframers interpret this directive to mean that “public critique” by compositionists “cannot deliver the consequences they desire” (123; emphasis original). Thus, a tactical approach is required.

The second thread in compositionists’ response to the reform movement is that of critique that insists that allowing the reform industry to set the terms and limits of the discussion is “to grant equivalence between our professional judgments and those of corporate-political service providers” (125-26). Webber quotes Judith Summerfield and Philip M. Anderson, who argue that “managing behavior and preparing students for vocations” does not accord with “a half-century (at the least) of enlightened classroom study and socio-psycholinguistic research” (qtd. in Webber 125).

In Webber’s view, the strands of reframing and critique have reached a “stalemate” (126). In response to the impasse, Webber explores the tradition of pragmatism, drawing on John Dewey and others. He argues that reframers call on the tenets of “melioration” and “prophetic critique” (127). “Meliorism,” according to Webber’s sources, is a linguistic process in that it works toward improving conditions through addressing the public discourse (127). In discussing West’s prophetic pragmatism as a form of “critical melioration,” Webber focuses on the “artfulness” of West’s concept (128).

Webber sees artfulness as critique “in particular contexts” in which ordinary people apply their own judgments of the consequences of a theory or policy based on the effects of these theories or policies on their lives (128-29). An artful critique invites public participation in the assessment of policies, an interaction that, according to West, functions as “antiprofessionalism,” not necessarily for the purpose of completely “eliminating or opposing all professional elites” but rather to “hold them to account” (qtd. in Webber 129).

Webber argues that proponents of reframing within composition have left out this aspect of West’s pragmatism (128). Webber’s own proposal for an artful critique involves encouraging such active participation by the publics actually affected by policies. He contends that policymakers will not be able to ignore students and parents as they have composition professionals (137).

His approach begins with “scaling down” by inviting public inquiry at a local level, then “scaling up” as the conversation begins to trigger broader responses (130). He presents the effects of student protests at the University of Missouri in 2015 as an example of how local action that challenges the power of elites can have far-reaching consequences (137-38). Compositionists, he maintains, should not abandon critique but should “expand our rhetoric of professionalism to engage the antiprofessional energy of local inquiry and resistance” (138).

As a specific application of his view, Webber provides examples of how composition professionals have enlisted public resistance to machine-scoring of student writing. As students experience “being read” by machines, he contends, they become aware of how such policies do not mesh with their concerns and experiences (137). This awareness engages them in critically “problematizing” their perspectives and assumptions (131). In the process, Webber argues, larger, more diverse audiences are encouraged to relate their own experiences, leading to “a broader public discussion of shared concerns” (131).

For Webber, drawing on the everyday judgments of ordinary people as to the value of policies put forward by professionals contrasts with the desire to align composition’s values with those of the standardization movement in hopes of influencing the latter from within. Opening the debate beyond strategic professionalism can generate a pragmatism that more nearly fits West’s prophetic ideals and that can “unsettle the inevitability of reform and potentially authorize composition’s professional perspectives” in ways that reframing the terms of the corporate initiatives cannot (135).

 

 



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as in time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).
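As a rough illustration of the LSA idea behind such summary-evaluation programs (not Pearson’s actual system), the sketch below projects texts into a reduced “semantic space” and scores a student summary by its cosine similarity to a reference passage. It assumes scikit-learn is installed; the corpus and texts are invented.

```python
# Minimal sketch of LSA-style summary scoring: build a semantic space from a
# small corpus, then compare a student summary to a reference passage.
# Illustrative only; real AEE systems use far larger corpora and rubrics.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The water cycle moves water between oceans, air, and land.",
    "Evaporation lifts water vapor into the atmosphere.",
    "Condensation forms clouds that return water as rain.",
    "Precipitation collects in rivers and flows back to the sea.",
]
reference = "Water evaporates, condenses into clouds, and falls as precipitation."
summary = "Rain comes from clouds formed when evaporated water condenses."

vectorizer = TfidfVectorizer().fit(corpus + [reference, summary])
lsa = TruncatedSVD(n_components=2, random_state=0).fit(vectorizer.transform(corpus))
ref_vec, sum_vec = lsa.transform(vectorizer.transform([reference, summary]))

score = cosine_similarity([ref_vec], [sum_vec])[0, 0]
print(f"LSA similarity: {score:.2f}")  # higher values suggest closer content overlap
```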

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that “intellimetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).
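Agreement between automated scores and human raters is typically reported as a correlation. The snippet below is a hedged illustration of how such a check could look with invented score pairs; it says nothing about IntelliMetric’s actual validation data or internal method.

```python
# Hypothetical score pairs only -- not data from MY Access! or the study.
from scipy.stats import pearsonr

human_scores   = [3, 4, 2, 5, 4, 3, 5, 2]   # invented rater scores on a 1-6 scale
machine_scores = [3, 4, 3, 5, 4, 2, 5, 2]   # invented automated scores for the same essays

r, p_value = pearsonr(human_scores, machine_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")   # a high r would indicate close agreement
```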

MY Access! provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor responds to specific errors in each text individually.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which, however, allowed access to “rich data” and “detailed description” of actual use (167). They note also that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts may also have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because the AEE scores prompt revision, such programs can effectively augment writing instruction, but they recommend that scoring track student development so that, as students near the maximum score at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).