Shi, Yuchen, Flora Matos, and Deanna Kuhn. “Dialog as a Bridge to Argumentative Writing.” Journal of Writing Research 11.1 (2019): 107-29. Web. 5 June 2019.
Yuchen Shi, Flora Matos, and Deanna Kuhn report on a study of a dialogic approach to argumentative writing conducted with sixth-graders at “an urban public middle school in an underserved neighborhood in a large Northeastern city in the United States” (113). The study replicates earlier research on the same curriculum, with added components to assess whether the intervention increased “meta-level understanding of the purpose and goals of evidence in argumentative writing” (112-13).
Noting that research has documented the degree to which students struggle with the cognitive demands of argumentative writing as opposed to narration (108), the authors report that while the value of discourse as a precursor to writing an argument has been recognized, much of the discourse studied has been at the “whole-classroom level” (108). In contrast, the authors’ intervention paired students so that they could talk “directly” with others who both shared and opposed their positions (108).
In the authors’ view, this process provided students with two elements that affect the success of written communication: “a clearly defined audience and a meaningful purpose” (108). They argue that this direct engagement with the topic and with an audience over a period of time improves on reading about a topic, which they feel students may do “disinterestedly” because they do not yet have a sense of what kind of evidence they may need (110). The authors’ dialogic intervention allows students to develop their own questions as they become aware of the arguments they will have to make (110).
Further, the authors maintain, the dialogic exchange linking individual students “removes the teacher” and makes the process student-centered (109).
Claiming that the ability to produce “evidence-based claims” is central to argument, the authors centered their study on the relation between claims and evidence in students’ discussions and in their subsequent writing (110). Their model, they write, allowed them to see a developmental sequence: students were at first most likely to choose evidence that supported their own position, only later beginning to employ evidence that “weaken[s] the opposing claim” (111). Even more sophisticated approaches to evidence, which the authors label “weaken-own” and “support-other,” develop more slowly if at all (111-12).
Two classes were chosen to participate, one as the experimental group (22 students) and one as a comparison group (27 students). The curriculum was implemented in “twice-weekly 40-minute class sessions” that continued in “four cycles” throughout the school year (114). Each cycle began a new topic; the four topics were selected from a list because students seemed equally divided in their views on those issues (114).
The authors divided their process into Pregame, Game, and Endgame sections. In the Pregame, students in small groups generated reasons in support of their position. In the Game, student pairs sharing a position dialogued electronically with “a different opposing pair at each session” (115). During this section, students generated their own “evidence questions” which the researchers answered by the next session; the pairs were given other evidence in Q&A format. The Endgame consisted of a debate, which was then scored and a winning side designated (115). Throughout, students constructed reflection pieces; electronic transcripts preserved the interactions (115).
At the end of each cycle, students wrote individual papers. The comparison group also wrote an essay on the fourth topic, whether students should go directly to college from high school or work for a year. For this essay, students in both groups were provided with evidence only at the end of the cycle. This essay was used for the final assessment (116-17).
Other elements assessed included whether students could recall answers to 12 evidence questions, in order to determine if differences in the use of evidence between the two groups were a function of superior memory of the material (123). A second component was a fifth essay written by the experimental group on whether teens accused of serious crimes should be tried as adults or juveniles (118). The authors wanted to assess whether the understanding of claims and evidence cultivated during the curriculum informed writing on a topic that had not been addressed through the dialogic intervention (118).
For the assessment, the researchers considered “a claim together with any reason and/or evidence supporting it” as an “idea unit” (118). These units were subcategorized as “either evidence-based or non-evidence-based.” Analyzing only the claims that contained evidence, the researchers further distinguished between “functional” and “non-functional” evidence-based claims. Functional claims were those in which there was a clear written link between the evidence and the claim. Only the use of functional claims was assessed (118).
Results indicated that while the number of idea units and evidence-based claims did not vary significantly across the groups, the experimental group was significantly more successful in including functional evidence-based claims (120). Also, the intervention encouraged significantly more use of “weaken-other” claims, which the writers characterize as “a more demanding skill commonly neglected by novice writers” (120). Students did not show progress in using “weaken-own” or “support-other” evidence (121).
To determine the intervention’s effects on students’ meta-level awareness about evidence in arguing, the researchers found that the groups did not vary in the kind of evidence they would most like to see, with both choosing “support-own.” However, the experimental group was much more likely to state that “weaken-other” evidence was the type “they would like to see second most” (122). The groups were similar in students’ ability to recall evidence, which, in the authors’ view, indicates that superior recall in one group or the other did not explain the results (125).
Assessment of the essay on the unfamiliar topic was hampered by a smaller sample size and by the fact that the two groups wrote on different topics. The writers report that 54% of the experimental-group students made support-own or weaken-other claims, but that the number of such claims decreased to a frequency similar to that of the comparison group on the college/work topic (124).
The authors argue that increased use of more sophisticated weaken-other evidence points to higher meta-awareness of evidence as a component of argument, but that students showed room for further growth as measured by their ability to predict the kind of evidence they would need or use (125).
Noting the small sample size as a limitation, the authors suggest that both the dialogic exchange of their curriculum and the students’ “deep engagement” with topics contributed to the results they recorded. They suggest that “[a]rguing to learn” through dialogue and engagement can be an important pedagogical activity because of the discourse and cognitive skills these activities develop (126).