College Composition Weekly: Summaries of research for college writing professionals


Kitalong and Miner. Rhetorical Agency through Multimodal Composing. Mar. 2018 C&C. Posted 02/02/2018.

Kitalong, Karla Saari, and Rebecca L. Miner. “Multimodal Composition Pedagogy Designed to Enhance Authors’ Personal Agency: Lessons from Non-academic and Academic Composing Environments.” Computers and Composition 46 (2017): 39-55. Web. 21 Jan. 2018.

Karla Saari Kitalong and Rebecca L. Miner discuss the use of multimodal assignments to enhance student engagement and personal agency. They compare and contrast the responses of students working on multimodal projects in three different scenarios to argue that multimodal assignments, if well-structured, offer opportunities to move students beyond “normative reproduction of received knowledges” (52).

The authors state that even though the “current turn” to multimodality began in 1999-2000, composition is “still grappling with how to teach and engage with the many complexities of multimodal composition” (39). Kitalong and Miner see agreement among scholars that effective use of multimodality involves more than simply including a multimodal component in an assignment (40). In their view, multimodal composition, like all composition, should

allow students to practice so that they can synthesize modes, genres, ideas, and skills, and become ever more fluid and flexible composers. (40)

Such assignments, the authors argue, should also instill in students a sense that their work has value and can affect issues important to them at both the local and global levels (40, 41). Quoting Anne Wysocki, they define this component of “agency” as an awareness that

[b]ecause the structures into which we have grown up are neither necessary nor fixed, they can be changed when we forge new positions for ourselves among them, or when we construct new relations between the different structures that matter to us. (40)

Effective agency is “alert,” in Wysocki’s words, to openings for activism and change (40). Kitalong and Miner argue that their three scenarios illustrate how such alertness can result from the specific activities inherent in multimodal learning when those activities are paired with reflection and revision (40).

The first scenario involved a “front-end evaluation” for an exhibit, Water’s Journey Through the Everglades, “a collection of interactive science museum exhibits” designed to educate visitors in the Fort Lauderdale, Florida, area about the importance of water to individuals and the environment (41) as well as to encourage interest in STEM careers among middle-school children. The evaluation, conducted by Kitalong as “lead formative evaluator,” measured middle-school students’ levels of knowledge about water and its role locally and globally (41). Kitalong and Miner report data collected from 20 sixth-graders given the task of “visually depicting” their knowledge (41).

From drawings provided by the sixth-graders, the authors conclude that at the local level, the students envisioned themselves as active conservators of water, whereas, when asked to portray their role and that of other actors at the global level, they showed humans as “small and passive” (42). Some of the drawings seemed to present “distant views” that included no indication of human action, even though the sixth-graders were enrolled in a STEM magnet school (42).

Kitalong and Miner conclude that while the sixth-graders’ responses indicated that they grasped the material and would be able to learn more, they were not inspired to develop agency.

In contrast, in the second scenario, 75 late-elementary and middle-school students worked with Sketch-N-Tell, an interactive “Discovery Game” that allowed them to create images and designs from “traditional art supplies (paper, markers, crayons)” that they could then digitize and animate (45). The primary purpose of the activity was to test the usability and audience appeal of the game for Come Back to the Fair, an “immersive game-like learning environment that virtually replicates the 1964-1965 New York World’s Fair” (44). This environment was intended both to stimulate interest in STEM and to encourage participants to think more critically about the ways technology can impact lives (44).

Kitalong and Miner contend that the assignment to create their own “visions of future technologies” and the encouragement within the project to reflect on and revise their efforts quickly led these students to assume agency as actual contributors to the project (46). Hands-on multimodal participation, they maintain, sparked engagement and inspired students to modify their creations in ways that suggested attention to the global effects of their visions (47). In the authors’ view, students’ responses indicated that “[t]hey were not merely accumulating modes, but coordinating and synthesizing them” (47).

Scenario 3 took place in a sophomore-level composition course taught by Miner at a “STEM-focused school” (47). Students created “Timeline Maps” tracing the development of a product in a field they were considering as a career. The assignment, which led from the production of a multimodal exhibit to a researched argument paper, required attention to ethical issues in the field (48). Creating the Timeline Maps and the related presentations asked students to “dearticulate an assemblage of texts and rearticulate them” in new forms, in the authors’ view thereby encouraging new perspectives and new connections (48). Peer review and a reflective essay helped to generate agency by triggering questions about otherwise familiar processes and products, so that, by the time they wrote the argumentative paper, students were considering their personal positions in relation to ethical issues and taking strong, critically informed stances (49).

The authors posit that the prompt for Scenario 1 limited students’ engagement and sense of agency by asking for “depictions of the status quo” rather than solutions (52). Thus, design of prompts that “explicitly encourage students to learn something new” is one of three components that the authors recommend for making full use of the potential of multimodal assignments (53). A second component is giving students freedom to combine multiple modes; the authors contend that this freedom results in “excitement” and “engagement in their own learning,” which in itself produces the “reflectiveness and self-awareness” necessary for agency (53). In this view, the responsibility imposed by uncertainty about what the teacher expects further demonstrates to students their own ability to exert control (53).

Third, Kitalong and Miner identify time for reflection as one of the most formative elements in Scenarios 2 and 3 (53). They see the act of reassembling familiar materials into new forms as requiring extended time that allows students to find connections to their personal interests. Reconsidering their products at different stages in light of input from peers and other respondents leads students to revise their projects to strengthen their impact, in itself an exercise of rhetorical agency (53). The authors argue that multimodal composition enhanced by “the act of describing and reflecting upon their rhetorical choices . . . ultimately provoked a sense of personal agency” in the learning scenarios (54).



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as more time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).
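For readers unfamiliar with the term, a minimal sketch of how an LSA-style similarity score can be computed appears below. It is a generic illustration in Python using scikit-learn, not the Pearson program Moore and MacArthur describe; the tiny corpus, the sample texts, and the lsa_score function are hypothetical stand-ins. The idea is that documents are projected into a reduced “latent semantic” space, and a student summary is scored by its similarity to the source passage in that space.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

# Hypothetical reference corpus standing in for the texts an LSA system learns from.
corpus = [
    "Water moves through the Everglades and sustains plants, animals, and people.",
    "Conserving fresh water protects local ecosystems and human communities.",
    "Technology exhibits at the fair showed how inventions change daily life.",
    "Automated feedback can help students revise their essays more often.",
]

# Build the latent semantic space: TF-IDF weighting followed by truncated SVD.
lsa = make_pipeline(TfidfVectorizer(stop_words="english"),
                    TruncatedSVD(n_components=3, random_state=0))
lsa.fit(corpus)

def lsa_score(source_text: str, student_summary: str) -> float:
    """Cosine similarity between a source passage and a summary in the latent space."""
    vectors = lsa.transform([source_text, student_summary])
    return float(cosine_similarity(vectors[0:1], vectors[1:2])[0, 0])

# Hypothetical usage: score a short summary against its source passage.
print(round(lsa_score(corpus[0],
                      "The Everglades' water supports people, animals, and plants."), 2))
```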

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that the IntelliMetric scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access! provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor responds to specific errors in each text individually.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which, however, allowed access to “rich data” and “detailed description” of actual use (167). They also note that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts also may have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because the AEE scores prompt revision, such programs can effectively augment writing instruction, but they recommend that scores track student development so that, as students approach the maximum score at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).