College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as “awareness of one’s cognitive processes, such as prioritizing and executing tasks” (86), and review extensive research dating to the 1970s on how this concept has been articulated and developed. They state that the literature supports the conclusion that “college students’ metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing” (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of “idea units which are specific unique ideas within each student’s response” (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into “Procedural Knowledge,” or “the knowledge necessary to carry out the procedure or process of writing,” and “Declarative Knowledge,” or statements about “the characteristics of effective writing” (89). Within the categories, responses were coded as addressing “substantive procedures” having to do with the process itself and “production procedures,” relating to the “form of writing,” e.g., spelling and grammar (89).
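
As a rough illustration of this two-level coding scheme (knowledge type crossed with procedure type), the sketch below tallies a few hypothetical idea units; the category labels follow the article, but the example responses are invented.

```python
from collections import Counter

# Two-level scheme from Abba et al.: knowledge type crossed with procedure type.
# The example idea units below are hypothetical illustrations, not study data.
coded_idea_units = [
    ("Procedural", "substantive", "brainstorm ideas before drafting"),
    ("Procedural", "production", "check spelling after writing"),
    ("Declarative", "substantive", "effective writing stays focused on one point"),
    ("Declarative", "production", "effective writing uses correct grammar"),
]

# Tally idea units by (knowledge type, procedure type), as a coder might
# summarize frequencies for each cell of the scheme.
tally = Counter((knowledge, procedure) for knowledge, procedure, _ in coded_idea_units)

for (knowledge, procedure), count in sorted(tally.items()):
    print(f"{knowledge} / {procedure}: {count}")
```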

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process, such as that of John Hayes and Linda Flower and of M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw “Clarity and Focus” and “Audience” as important characteristics of effective writing (98). Grammar and Spelling, the “production” features, figured more prominently here than they did in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched findings of previous scholarship on students’ metaknowledge of effective writing, which shows some concern with the finished product and a possibly “insufficient” focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).
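
Coh-Metrix itself is a hosted tool, and its algorithms are not reproduced here; as a stand-in for the kind of variable it reports, the sketch below computes a simple lexical-diversity index (a type-token ratio) for a short sample.

```python
import re

def type_token_ratio(text: str) -> float:
    """Crude lexical-diversity measure: unique word types / total word tokens.

    Coh-Metrix reports more sophisticated indices, but a type-token ratio
    conveys the basic idea of a lexical-diversity variable.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = ("Effective writers plan before they write, revise what they write, "
          "and think about the readers who will read their writing.")
print(f"Lexical diversity (TTR): {type_token_ratio(sample):.2f}")
```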

These analyses revealed “no relationship . . . between their patterns [of] knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research suggesting the importance of metacognitive knowledge because their methodology did not use human raters and did not factor in students’ beliefs about writing or ask why they responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than “pre-established inventor[ies]” would have elicited (100). They maintain that their methods “controlled the measurement errors” better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)




Salig et al. Student Perceptions of “Essentialist Language” in Persuasive Writing. J of Writ. Res., 2018. Posted 05/10/2018.

Salig, Lauren K., L. Kimberly Epting, and Lizabeth A. Rand. “Rarely Say Never: Essentialist Rhetorical Choices in College Students’ Perceptions of Persuasive Writing.” Journal of Writing Research 9.3 (2018): 301-31. Web. 3 May 2018.

Lauren K. Salig, L. Kimberly Epting, and Lizabeth A. Rand investigated first-year college students’ perceptions of effective persuasive writing. Triggered by ongoing research that suggests that students struggle with the analytical and communicative skills demanded by this genre, the study focused on students’ attitudes toward “essentialist” language in persuasive discourse.

The authors cite research indicating that “one-sided” arguments are less persuasive than those that acknowledge opposing views and present more than one perspective on an issue (303); they posit that students’ failure to develop multi-sided arguments may account for assessments showing poor command of persuasive writing (303). Salig et al. argue that “the language used in one-sided arguments and the reasons students might think one-sidedness benefits their writing have not been extensively evaluated from a psychological perspective” (304). Their investigation is intended both to clarify what features students believe contribute to good persuasive writing and to determine whether students actually apply these beliefs in identifying effective persuasion (305).

The authors invoke a term, “essentialism,” to encompass different forms of language that exhibit different levels of “black-and-white dualism” (304). Such language may fail to acknowledge exceptions to generalizations; one common way it may manifest itself is the tendency to include “boosters” such as ‘“always,’ ‘every,’ and ‘prove,’” while eliminating “hedges” such as qualifiers (304). “Essentialist” thinking, the authors contend, “holds that some categories have an unobservable, underlying ‘essence’ behind them” (304). Salig et al. argue that while some subsets of “generic language” may enable faster learning because they allow the creation of useful categories, the essentialist tendency in such language to override analytical complexity can prove socially harmful (305).
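
One way to picture “boosters” and “hedges” as countable surface markers of essentialist language is sketched below; the word lists are abbreviated illustrations built around the terms quoted above, not the authors’ instrument.

```python
import re

# Small illustrative word lists: boosters signal black-and-white certainty,
# hedges qualify claims. "always", "every", and "prove" come from the summary;
# the remaining items are common examples, not the authors' full inventory.
BOOSTERS = {"always", "every", "never", "prove", "proves", "certainly", "definitely"}
HEDGES = {"may", "might", "perhaps", "often", "sometimes", "suggests", "likely"}

def essentialism_counts(text: str) -> dict:
    """Count booster and hedge tokens as a rough index of essentialist style."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "boosters": sum(token in BOOSTERS for token in tokens),
        "hedges": sum(token in HEDGES for token in tokens),
    }

print(essentialism_counts("Homework always improves learning and proves students' dedication."))
print(essentialism_counts("Homework may often support learning, and it likely helps some students."))
```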

The investigation involved two studies designed, first, to determine whether students conceptually recognized the effects of essentialist language in persuasive writing, and second, to assess whether they were able to apply this recognition in practice (306).

Study 1 consisted of two phases. In the first, students were asked to generate features that either enhanced or detracted from the quality, persuasiveness, and credibility of writing (307). Twenty-seven characteristics emerged after coding; these were later reduced to 23 by combining some factors. Features related to essentialism, Bias and One-sidedness, were listed as damaging to persuasiveness and credibility, while Refutation of Opposition and Inclusion of Other Viewpoints were seen as improving these two factors. Although, in the authors’ view, these responses aligned with educational standards such as the Common Core State Standards, students did not see these four characteristics as affecting the quality of writing (309).

In Phase 2 of Study 1, students were prompted to list “writing behaviors that indicated the presence of the specified characteristic” (310). The researchers developed the top three behaviors for each feature into sentence form; they provide the complete list of these student-generated behavioral indicators (311-14).

From the Study 1 results, Salig et al. propose that students do conceptually grasp “essentialism” as a negative feature and can name ways that it may show up in writing. Study 2 was designed to measure the degree to which this conceptual knowledge influences student reactions to specific writing in which the presence or absence of essentialist features becomes the variable under examination (314-15).

In this study, 79 psychology students were shown six matched pairs of statements, varying only in that one represented essentialist language and the other contained hedges and qualifiers (315). In each case, participants were asked to state which of the two statements was “better,” and then to give reasons for their preference by drawing on a subset of the 23 features identified in Study 1 that had been narrowed to focus on persuasiveness (316). They were asked to set aside their personal responses to the topic (318). The researchers provide the statement pairs, three of which contained citations (317-18).

In Likert-scale responses, the students generally preferred the non-essentialist samples (319), although the “driving force” for this finding was that students preferring non-essentialist samples rated the essentialist samples very low in persuasiveness (323). Further, of the 474 choices, 222 indicated that essentialist examples were “better,” while 252 chose the non-essentialist examples, a difference that the researchers report as not significant (321).
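
The summary does not specify the statistical test behind this claim; treating each of the 474 choices as an independent coin flip, a quick binomial check (purely a back-of-the-envelope illustration, not the authors’ analysis) is consistent with a non-significant split:

```python
from scipy.stats import binomtest

# 252 of 474 choices favored the non-essentialist sample. Under a simple
# binomial model with chance probability 0.5, is that split unusual?
# (Illustrative only; the authors' actual statistical test is not specified.)
result = binomtest(252, n=474, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.3f}")  # well above .05, consistent with "not significant"
```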

Salig et al. find that the reasons students chose for preferring essentialist language differed from their reasons for preferring non-essentialist examples. Major reasons for the essentialist choice were Voice/Tone, Concision, Persuasive Effectiveness, One-sidedness, and Grabs/Retains Attention. Students who chose non-essentialist samples as better cited Other Viewpoints, Argument Clarity/Consistency, Detail, Writer’s Knowledge, Word Choice/Language, and Bias (322).

Participants were divided almost equally among those who consistently chose non-essentialist options, those who consistently chose essentialist options, and those who chose one or the other half of the time (323). Correlations indicated that students who were somewhat older (maximum age was 21, with M = 18.49 years) “were associated with lower persuasiveness ratings on essentialist samples than younger students or students with less education” (324). The authors posit that the second study examined a shift from “conceptual to operational understanding” (324) and thus might indicate the effects of cognitive development, increased experience, or some combination of the two in conjunction with other factors (325).

In addition, the authors consider effects of current methods of instruction on students’ responses to the samples. They note that “concision” showed up disproportionately as a reason given by students who preferred essentialist samples. They argue that possibly students have inferred that “strong, supported, and concise arguments” are superior (326). Citing Linda Adler-Kassner, they write that students are often taught to support their arguments before they are encouraged to include counterarguments (326). The authors recommend earlier attention, even before high school, to the importance of including multiple viewpoints (328).

The study also revealed an interaction between student preferences and the particular sets, with sets 4 and 5 earning more non-essentialist votes than other sets. The length of the samples and the inclusion of citations in set 4 led the researchers to consider whether students perceived these as appropriate for “scholarly” or more formal contexts in comparison to shorter, more emphatic samples that students may have associated with “advertising” (327). Sets 4 and 5 also made claims about “students” and “everybody,” prompting the researchers to suggest that finding themselves the subjects of sweeping claims may have encouraged students to read the samples with more awareness of essentialist language (327).

The authors note that their study examined “one, and arguably the simplest, type” of essentialist language. They urge ongoing research into the factors that enable students not just to recognize but also to apply the concepts that characterize non-essentialist language (328-29).




Limpo and Alves. Effects of Beliefs about “Writing Skill Malleability” on Performance. JoWR 2017. Posted 11/24/2017.

Limpo, Teresa, and Rui A. Alves. “Relating Beliefs in Writing Skill Malleability to Writing Performance: The Mediating Roles of Achievement Goals and Self-Efficacy.” Journal of Writing Research 9.2 (2017): 97-125. Web. 15 Nov. 2017.

Teresa Limpo and Rui A. Alves discuss a study with Portuguese students designed to investigate pathways between students’ beliefs about writing ability and actual writing performance. They use measures for achievement goals and self-efficacy to determine how these factors mediate between beliefs and performance. Their study goals involved both exploring these relationships and assessing the validity and reliability of the instruments and theoretical models they use (101-02).

The authors base their approach on the assumption that people operate via “implicit theories,” and that central to learning are theories that see “ability” as either “incremental,” in that skills can be honed through effort, or as an “entity” that cannot be improved despite effort (98). Limpo and Alves argue that too little research has addressed how these beliefs about “writing skill malleability” influence learning in the specific “domain” of writing (98).

The authors report earlier research that indicates that students who see writing as an incremental skill perform better in intervention studies. They contend that the “mechanisms” through which this effect occurs have not been thoroughly examined (99).

Limpo and Alves apply a three-part model of achievement goals: “mastery” goals involve the desire to improve and increase competence; “performance-approach” goals involve the desire to do better than others in the quest for competence; and “performance-avoidance” goals manifest as the desire to avoid looking incompetent or worse than others (99-100). Mastery and performance-approach goals correlate positively because they address increased competence, but performance-approach and performance-avoidance goals also correlate because they both concern how learners see themselves in comparison to others (100).

The authors write that “there is overall agreement” among researchers in this field that these goals affect performance. Students with mastery goals display “mastery-oriented learning patterns” such as “use of deep strategies, self-regulation, effort and persistence, . . . [and] positive affect,” while students who focus on performance avoidance exhibit “helpless learning patterns” including “unwillingness to seek help, test anxiety, [and] negative affect” (100-01). Student outcomes with respect to performance-approach goals were less clear (101). The authors hope to clarify the role of self-efficacy in these goal choices and outcomes (101).

Limpo and Alves find that self-efficacy is “perhaps the most studied variable” in examinations of motivation in writing (101). They refer to a three-part model: self-efficacy for “conventions,” or “translating ideas into linguistic forms and transcribing them into writing”; for “ideation,” or finding ideas and organizing them; and for “self-regulation,” which involves knowing how to make the most of “the cognitive, emotional, and behavioral aspects of writing” (101). They report associations between self-efficacy, especially for self-regulation, and mastery goals (102). Self-efficacy, particularly for conventions, has been found to be “among the strongest predictors of writing performance” (102).

The authors predicted several “paths” that would illuminate the ways in which achievement goals and self-efficacy linked malleability beliefs and performance. They argue that their study contributes new knowledge by providing empirical data about the role of malleability beliefs in writing (103).

The study was conducted among native Portuguese speakers in 7th and 8th grades in a “public cluster of schools in Porto” that is representative of the national population (104). Students received writing instruction only in their Portuguese language courses, in which teachers were encouraged to use “a process-oriented approach” to teach a range of genres but were not given extensive pedagogical support or the resources to provide a great deal of “individualized feedback” (105).

The study reported in this article was part of a larger study; for the relevant activities, students first completed scales to measure their beliefs about writing-skill malleability and to assess their achievement goals. They were then given one of two prompts for “an opinion essay” on whether students should have daily homework or extracurricular activities (106). After the prompts were provided, students filled out a sixteen-item measure of self-efficacy for conventions, ideation, and self-regulation. A three-minute opportunity to brainstorm about their responses to the prompts followed; students then wrote a five-minute “essay,” which was assessed as a measure of performance by graduate research assistants who had been trained to use a “holistic rating rubric.” Student essays were typed and mechanical errors corrected. The authors contend that the use of such five-minute tasks has been shown to be valid (107).

The researchers predicted that they would see correlations between malleability beliefs and performance; they expected beliefs to affect goals, which would affect self-efficacy and lead to differences in performance (115). They found these associations for mastery goals. Students who saw writing as an incremental, improvable skill displayed “a greater orientation toward mastery goals” (115). The authors state that this result for writing had not been previously demonstrated. Their research reveals that “mastery goals contributed to students’ confidence” and therefore to self-efficacy, perhaps because students with this belief “actively strive” for success (115).

They note, however, that prior research correlated these results with self-efficacy for conventions, whereas their study showed that self-efficacy for self-regulation, students’ belief that “they can take control of their own writing,” was the more important contributor to performance (116); in fact, it was “the only variable directly influencing writing performance” (116). Limpo and Alves hypothesize that conventions appeared less central in their study because the essays had been typed and corrected, so that errors had less effect on performance scores (116).

Data on the relationship between malleability beliefs and performance-approach or performance-avoidance goals, the goals associated with success in relation to others, were “less clear-cut” (117). Students who saw skills as fixed tended toward performance-avoidance, but neither type of performance goal affected self-efficacy.

Limpo and Alves recount an unexpected finding that the choice of performance-avoidance goals did not affect performance scores on the essays (117). The authors hypothesize that the low-stakes nature of the task and its simplicity did not elicit “the self-protective responses” that often hinder writers who tend toward these avoidance goals (117). These unclear results lead Limpo and Alves to withhold judgment about the relationship among these two kinds of goals, self-efficacy, and performance, positing that other factors not captured in the study might be involved (117-18).

They recommend more extensive research with more complex writing tasks and environments, including longitudinal studies and consideration of such factors as “past performance” and gender (118). They encourage instructors to foster a view of writing as an incremental skill and to emphasize self-regulation strategies. They recommend “The Self-Regulated Strategy Development model” as “one of the most effective instructional models for teaching writing” (119).



Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers rated participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such binary determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
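
A median split of this kind can be sketched in a few lines; the composite scores below are invented for illustration, and the study’s actual combination of SAT scores, grades, and rated drafts was more involved.

```python
from statistics import median

# Hypothetical composite writing-ability scores for a handful of participants.
# The researchers combined SAT scores, first-year writing grades, and rated
# drafts; the numbers below are made up purely to illustrate a median split.
scores = {"s01": 61.0, "s02": 74.5, "s03": 55.0, "s04": 82.0, "s05": 68.5, "s06": 90.0}

cutoff = median(scores.values())

# Binary designation: at or above the median counts as "high", below as "low".
ability = {sid: ("high" if score >= cutoff else "low") for sid, score in scores.items()}
print(f"median = {cutoff}")
print(ability)
```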

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High-ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or number of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be limited by “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn propose that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, the finding that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as in time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).
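
Pearson’s program is proprietary, but the general LSA idea can be sketched with standard tools: weight terms, reduce the term space with truncated SVD, and score a student summary by its similarity to a reference in that reduced space. The tiny corpus below is invented, and real systems train on far larger collections.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus: a source passage, a reference summary, and a
# student summary (all invented). Real LSA systems train on large corpora.
documents = [
    "The experiment showed that sleep improves memory consolidation in students.",
    "Sleep helps students consolidate and retain what they have learned.",
    "Students remember material better after sleeping, the study found.",
]

# LSA in miniature: TF-IDF term weighting followed by truncated SVD to map
# texts into a low-dimensional "semantic" space.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents)
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Score the student summary (index 2) by cosine similarity to the reference (index 1).
score = cosine_similarity(X_lsa[2:3], X_lsa[1:2])[0, 0]
print(f"LSA similarity of student summary to reference: {score:.2f}")
```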

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that “intellimetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor responds to specific errors in each text individually.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which, however, allowed access to “rich data” and “detailed description” of actual use (167). They note also that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts also may have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because the AEE scores prompt revision, such programs can effectively augment writing instruction, but recommend that scores need to track student development so that as students score near the maximum at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).



Omizo & Hart-Davidson. Genre Signals in Academic Writing. JoWR, 2016. Posted 24 May 2016.

Omizo, Ryan, and William Hart-Davidson. “Finding Genre Signals in Academic Writing.” Journal of Writing Research 7.3 (2016): 485-509. Web. 18 May 2016.

Ryan Omizo and William Hart-Davidson, publishing in a special section on digital text analysis in the Journal of Writing Research, report on a process for investigating markers of genres, specifically in academic writing. They hope to develop a tool that will help advisors and advisees in graduate programs recognize differences between the rhetorical moves made by experienced writers in a field and those more likely to appear in the work of less experienced writers.

They draw on “rhetorical genre theory” to state that although particular kinds of text “recur” in the scholarship of a given field, simply learning patterns for these generic texts does not necessarily produce the kind of text that characterizes expert writing within the field (486). Specific instances of a particular genre vary from the “stable textual patterns” that are easy to identify (486).

As a result, the authors contend, understanding that textual patterns actually constitute rhetorical moves is a necessary component of successfully participating in a genre. Omizo and Hart-Davidson characterize the markers of a genre as “signals shared by author and reader about the social activity—the genre—they are co-negotiating” (486). Understanding the rhetorical purposes of genre features allows novice writers to use them effectively.

The authors work with 505 research articles from the SpringerOpen Journal archive. In order to determine how particular genre markers function as social signals, they begin by developing a coding scheme that mimics what human readers might do in finding clusters of words that do social work within a genre. They give the example of identifying a move essential to an article that can be labeled “science”: “propositional hedging,” in which the writer qualifies a claim to reflect stronger or weaker evidence (487). Omizo and Hart-Davidson argue that in searching for such moves, it is possible to identify a “key protein,” or crucial marker, that indicates the presence of the move (487).

After this initial coding, the authors analyze the texts and convert the markers they find to a graph that allows them to calculate “the relationships between words” (487), which in turn makes visible similarities and differences between the uses of markers in expert work and in novice work, with the intention of allowing advisors and advisees to address the reasons for differences (489).

Their study addresses citation styles in chemistry and materials science (502). They argue that citations are among important kinds of “signaling work” that “communicate something about a text’s status as a response to a familiar kind of exigency to a particular audience” (488). They hoped to find “classifiable patterns in citations moves” that varied “consistently” between experienced and novice writers (489).

They review other ways of categorizing in-text citations, some recognizing as many as twelve different uses of citations. For their own purposes, they created four categories of in-text citations that could be recognized from “premarked cue phrases” similar to those used by D. Marcu, who used phrases marked with “although” and “yet” to locate rhetorical moves (491). Omizo and Hart-Davidson’s scheme, they contend, can recognize types of citation moves and assign them rhetorical functions across disciplines, without requiring any specific knowledge of the discipline or field in which the moves occur (490). Moreover, they argue that their system can distinguish between “mentor and mentee texts” (491).

They categorize citations into four types (a rough sketch of this kind of cue-phrase matching follows the list):

  • Extractions: This term denotes “an idea paraphrased from source [sic] and attributed via a parenthetical reference” (491). In an extraction, the paraphrase itself does not reference the source. Such a rhetorical choice, they posit, “prioritize[s] the information” rather than the source author[s] as “active agents” (491).
  • Groupings: These include “3 or more sources within a parenthesis or brackets” (492). The authors see the social function of groupings as an indication of how the writer or writers locate their work on the topic in question in the larger disciplinary field. As opposed to an extraction, which notes “what particular agents are saying” about a topic, groupings indicate what “a community of scholars is saying” (493). Groupings often facilitate the groundwork laid out in research-article introductions, in particular allowing scholars to establish their ethos as knowledgeable members of the relevant community (493).
  • Author(s) as Actant(s): In this category, the author(s) of the source appear in the sentence as subjects or objects. The category also requires a publication date (493). Omizo and Hart-Davidson see this form of citation as “a qualitatively different means to engage with sourced material” (495), specifically allowing the writer of the current paper to interact directly with others in the field, whether to “affirm, extend, complicate, or challenge” (495).
  • Non-citations: This category encompasses all other sentences in an article, including references to named authors using pronouns or without specific dates (495). Recognizing that they are leaving out some moves that other coders might classify as citations, the authors argue that the limited “shallow parsing” their program uses allows them to more precisely determine “citational intrusion whereupon authors are making manifest their adherence to research conventions and signaling adjuncts to their arguments” (495). Thus, they exclude such components of a text as an extended discussion that is not marked by citation conventions.
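
A very rough approximation of this kind of cue-phrase classification is sketched below; the regular expressions and example sentences are invented stand-ins, not Omizo and Hart-Davidson’s actual patterns or parser.

```python
import re

# Invented cue-phrase patterns approximating the four categories described
# above; the authors' actual shallow parser is more careful than these regexes.
PATTERNS = [
    # Grouping: 3+ sources (approximated here as multiple separators plus a year).
    ("grouping", re.compile(r"[\(\[][^)\]]*(?:;|,)[^)\]]*(?:;|,)[^)\]]*\d{4}[^)\]]*[\)\]]")),
    # Author(s) as actant(s): named author(s) followed by a parenthetical date.
    ("author_as_actant", re.compile(r"\b[A-Z][a-z]+(?: et al\.| and [A-Z][a-z]+)? \(\d{4}\)")),
    # Extraction: a paraphrase ending in a parenthetical reference with a date.
    ("extraction", re.compile(r"\([^()]*\d{4}[^()]*\)\s*\.?$")),
]

def classify_citation(sentence: str) -> str:
    """Assign a sentence to one citation category, else 'non_citation'."""
    for label, pattern in PATTERNS:
        if pattern.search(sentence):
            return label
    return "non_citation"

examples = [
    "Peer feedback improves revision quality (Chen & Park, 2016).",
    "Several studies report similar gains (Smith, 2010; Lee, 2012; Cho, 2014).",
    "Flower and Hayes (1981) describe revision as problem solving.",
    "We then coded each draft for higher-order problems.",
]
for sentence in examples:
    print(f"{classify_citation(sentence):17s} | {sentence}")
```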

Omizo and Hart-Davidson explain in detail how they convert the citation patterns their program discovers into graphs that allow them to chart the relationships between different citations (496-501). They believe that this process allows them to detect several phenomena that may be useful to advisors aiding students in developing their “scholarly voice” (507). The data suggest that it may be possible to use a coding scheme like the one proposed in this study to amass features that characterize a body of work by experienced writers and compare analogous features of an advisee’s draft in order to detect deviations that signal “that there is something this writer does not know about the ways others in the disciplinary area use” the particular feature, in this case citations (505).

For example, the data indicate that the papers of less experienced writers vary less and adhere to conventions more insistently than do those of more experienced writers, who have been exposed to more genres and whose status allows more deviation (503-04). Advisee papers exhibit more “elaboration” than do those of their mentors; Omizo and Hart-Davidson suggest that the detection of more Author(s)-as-Actant(s) citations signals this feature. Markers at the sentence level such as words like “actually” or “better” can point to the presence of more explicit evaluative stances in the work of the less experienced writers (505).

In sum, the authors propose that digital analysis can detect patterns in the citation practices of novice scholars that point to differences between their work and the work of more established scholars and thus can allow them to focus their revision on the rhetorical moves embodied in these differences.



Geisler, Cheryl. Digital Analysis of Texts. JoWR, 2016. Posted 05/17/2016.

Geisler, Cheryl. “Current and Emerging Methods in the Rhetorical Analysis of Texts. Opening: Toward an Integrated Approach.” Journal of Writing Research 7.3 (2016): 417-24. Web. 08 May 2016.

Cheryl Geisler introduces a special section of the Journal of Writing Research focusing on the use of various digital tools to analyze texts. Noting the “rise of digital humanities,” which involves making use of the options software provides for “all sorts of rhetorical purposes,” Geisler and the authors of the articles in the special section ask two related questions: “How can we best understand the costs and benefits of adopting a particular approach? Are they simply alternatives or can they be integrated?” (418).

To experiment with different approaches, the authors of the special-section articles all worked with the same texts, a set of documents “produced by eight pairs of PhD advisors and their advisees” across the disciplines of Computer Science, Chemical Engineering, Materials Science Engineering, and Humanities and Social Sciences (418). This body of texts had been collected for a larger interview-based study of academic citation practices and source use conducted by one of the special-section authors, A. Karatsolis. For the special-section studies, Karatsolis’s coding was provided for half of the documents, and the “coding schemes” were provided for all.

Geisler’s overview of the status of digital text analysis draws on the categories of I. Pollach, who proposed three types of analysis. To those categories, Geisler added two more, hand-coding and text mining. Geisler discusses

  • Hand-coding, in which human readers assign text elements to categories developed in a coding scheme;
  • Computer-aided content analysis, which draws on “content dictionaries” to “map words and phrases onto content categories”;
  • Computer-aided interpretive textual analysis, a.k.a. computer-assisted qualitative data analysis (CAQDAS), which aids human analysts in efforts to “manage, retrieve, code, and link data”;
  • Corpus linguistics, which searches texts for “words or terms that co-occur more often that [sic] would be expected by chance”; and
  • Text mining, which finds features pre-selected by humans. (419)

Geisler explores various current uses of each process and includes a list of software that combines qualitative and quantitative analysis (420-21). Her examples suggest that approaches like hand-coding and corpus linguistics are often combined with digital approaches. For example, one study used a “concordance tool (AntConc)” to search teacher comments for traces of a “program-wide rubric” (421).
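
AntConc is a stand-alone concordancer, but the core of a concordance search, keyword-in-context (KWIC) display, can be sketched briefly; the teacher comments below are invented, and a real study would load a corpus from files.

```python
import re

def kwic(texts, keyword, width=30):
    """Yield keyword-in-context lines: the keyword with `width` characters of
    surrounding text on each side, roughly what a concordance tool displays."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    for text in texts:
        for match in pattern.finditer(text):
            left = text[max(0, match.start() - width):match.start()]
            right = text[match.end():match.end() + width]
            yield f"{left:>{width}} [{match.group()}] {right}"

# Invented teacher comments standing in for a corpus of feedback.
comments = [
    "Your thesis is clear, but the evidence for your second claim is thin.",
    "Strong evidence here; work on organization in the middle paragraphs.",
    "The conclusion restates the thesis without weighing the evidence.",
]
for line in kwic(comments, "evidence"):
    print(line)
```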

Discussing the possibility of an integrated approach, Geisler summarizes three examples. The first is from Helsinki University of Technology: the study combined “text-mining techniques with qualitative approaches” (421). A second, from 2011, is referred to as the KWALON Experiment. In this project, as in the study reported in the JoWR special section, researchers examined the same body of texts, a very large data set (421-22). Only one researcher was able to analyze the entire set, an outcome Geisler posits may stem from the use of a digital concordance tool to select the texts before the researcher hand-coded them (422).

In the third example of integrated approaches, researchers from the University of Leipzig developed “Blended Reading,” in which digital tools help readers designate appropriate texts; expert human readers use “snippets” from the “most relevant” of these documents to “manually annotate” texts; and finally, these annotations contribute to “automatic detection” over “multiple iterations” to refine the process. The resulting tool can then be applied to the entire corpus. According to Geisler, “[w]hat is intriguing” about this example “is that it seems to combine high quality hand coding with automatic methods” (422).

Geisler offers the articles in the special section as a study of how “a choice of analytic methods both invites and constrains” rhetorical examination of texts (423).