College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Bourelle et al. Multimodal in f2f vs. online classes. C&C, Mar. 2016. Posted 01/24/2016.

Bourelle, Andrew, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong. “Sites of Multimodal Literacy: Comparing Student Learning in Online and Face-to-Face Environments.” Computers and Composition 39 (2015): 55-70. Web. 14 Jan. 2016.

Andrew Bourelle, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong report on a “small pilot study” at the University of New Mexico that compares how “multimodal literacies” are taught in online and face-to-face (f2f) composition classes (55-56). Rather than arguing for the superiority of a particular environment, the writers contend, they hope to “understand the differences” and “generate a conversation regarding what instructors of a f2f classroom can learn from the online environment, especially when adopting a multimodal curriculum” (55). The authors find that while differences in overall learning measures were slight, with a small advantage to the online classes, online students demonstrated considerably more success in the multimodal component featured in both kinds of classes (60).

They examined student learning in two online sections and one f2f section teaching a “functionally parallel” multimodal curriculum (58). The online courses were part of eComp, an online initiative at the University of New Mexico based on the Writers’ Studio program at Arizona State University, which two of the current authors had helped to develop (57). Features derived from the Writers’ Studio included the assignment of three projects to be submitted in an electronic portfolio as well as a reflective component in which the students explicated their own learning. Additionally, the eComp classes “embedded” instructional assistants (IAs): graduate teaching assistants and undergraduate tutors (57-58). Students received formative peer review and feedback from both the instructor and the IAs (57-58).

Students created multimodal responses to the three assignments—a review, a commentary, and a proposal. The multimodal components “often supplemented, rather than replaced, the written portion of the assignment” (58). Students analyzed examples from other classes and from public media through online discussions, focusing on such issues as “the unique features of each medium” and “the design features that either enhanced or stymied” a project’s rhetorical intent (58). Bourelle et al. emphasize the importance of foregrounding “rhetorical concepts” rather than the mechanics of electronic presentation (57).

The f2f class, taught by one of the authors who was also teaching one of the eComp classes, used the same materials, but the online discussion and analysis were replaced by in-class instruction and interaction, and the students received instructor and peer feedback (58). Students could consult the IAs in the campus writing center and seek other feedback via the center’s online tutorials (58).

The authors present their assessment as both quantitative, through holistic scores using a rubric that they present in an Appendix, and qualitative, through consideration of the students’ reflection on their experiences (57). The importance of including a number of different genres in the eportfolios created by both kinds of classes required specific norming on portfolio assessment for the five assessment readers (58-59). Four of the readers were instructors or tutors in the pilot, with the fifth assigned so that instructors would not be assessing their own students’ work (58). Third reads reconciled disparate scores. The readers examined all of the f2f portfolios and 21, or 50%, of the online submissions. Bourelle et al. provide statistical data to argue that this 50% sample adequately supports their conclusions at a “confidence level of 80%” (59).

The rubric assessed features such as

organization of contents (a logical progression), the overall focus (thesis), development (the unique features of the medium and how well the modes worked together), format and design (overall design aesthetics . . . ), and mechanics. . . . (60)

Students’ learning about multimodal production was assessed through the reflective component (60). The substantial difference in this score led to a considerable difference in the total scores (61).

The authors provide specific examples of work done by an f2f student and by an online student to illustrate the distinctions they felt characterized the two groups. They argue that students in the f2f classes as a group had difficulties “mak[ing] choices in design according to the needs of the audience” (61). Similarly, in the reflective component, f2f students had more trouble explaining “their choice of medium and how the choice would best communicate their message to the chosen audience” (61).

In contrast, the researchers state that the student representing the online cohort exhibits “audience awareness with the choice of her medium and the content included within” (62). Such awareness, the authors write, carried through all three projects, growing in sophistication (62-63). Based on both her work and her reflection, this student seemed to recognize what each medium offered and to make reasoned choices for effect. The authors present one student from the f2f class who demonstrated similar learning, but argue that, on the whole, the f2f work and reflections revealed less efficacy with multimodal projects (63).

Bourelle et al. do not believe that self-selection by students more comfortable with technology affected the results, because survey data indicated that “life circumstances” rather than attitudes toward technology governed students’ choice of online sections (64). They indicate, in contrast, that the presence of the IAs may have had a substantive effect (64).

They also discuss the “archival” nature of an online environment, in which prior discussion and drafts remained available for students to “revisit,” with the result that the reflections were more extensive. Such reflective depth, Claire Lauer suggests, leads to “more rhetorically effective multimodal projects” (cited in Bourelle et al. 65).

Finally, they posit an interaction between what Rich Halverson and R. Benjamin Shapiro designate “technologies for learners” and “technologies for education.” The latter refer to the tools used to structure classrooms, while the former include specific tools and activities “designed to support the needs, goals, and styles of individuals” (qtd. in Bourelle et al. 65). The authors suggest that when the individual tools students use are in fact the same as the “technologies for education,” students engage more fully with multimodality in such an immersive environment.

This interaction, the authors suggest, is especially important given the caveat, raised both in research and in the 2013 CCCC position statement on online writing instruction, that online courses should prioritize writing and rhetorical concepts, not the technology itself (65). The authors note that online students appeared to select more advanced technology spontaneously than the f2f students did, choices that Daniel Anderson argues inherently lead to more “enhanced critical thinking” and higher motivation (66).

The authors argue that their research supports two recommendations: first, the inclusion of IAs for multimodal learning; and second, the adoption by f2f instructors of multimodal activities and presentations, such as online discussion, videoed instruction, tutorials, and multiple examples. Face-to-face instructors, in this view, should try to emulate more nearly the “archival and nonlinear nature of the online course” (66). The authors call for further exploration of their contention that “student learning is indeed different within online and f2f multimodal courses,” based on their findings at the University of New Mexico (67).


Anderson et al. Contributions of Writing to Learning. RTE, Nov. 2015. Posted 12/17/2015.

Anderson, Paul, Chris M. Anson, Robert M. Gonyea, and Charles Paine. “The Contributions of Writing to Learning and Development: Results from a Large-Scale, Multi-institutional Study.” Research in the Teaching of English 50.2 (2015): 199-235. Print.

Note: The study referenced by this summary was reported in Inside Higher Ed on Dec. 4, 2015. My summary may add some specific details to the earlier article and may clarify some issues raised in the comments on that piece. I invite the authors and others to correct and elaborate on my report.

Paul Anderson, Chris M. Anson, Robert M. Gonyea, and Charles Paine discuss a large-scale study designed to reveal whether writing instruction in college enhances student learning. They note widespread belief both among writing professionals and other stakeholders that including writing in curricula leads to more extensive and deeper learning (200), but contend that the evidence for this improvement is not consistent (201-02).

In their literature review, they report on three large-scale studies that show increased student learning in contexts rich in writing instruction. These studies concluded that the amount of writing in the curriculum improved learning outcomes (201). However, these studies contrast with the varied results from many “small-scale, quasi-experimental studies that examine the impact of specific writing interventions” (200).

Anderson et al. examine attempts to perform meta-analyses across such smaller studies to distill evidence regarding the effects of writing instruction (202). They postulate that these smaller studies often explore such varied practices in so many diverse environments that it is hard to find “comparable studies” from which to draw conclusions; the specificity of the interventions and the student populations to which they are applied make generalization difficult (203).

The researchers designed their investigation to address the disparity among these studies by searching for positive associations between clearly designated best practices in writing instruction and validated measures of student learning. In addition, they wanted to know whether the effects of writing instruction that used these best practices differed from the effects of simply assigning more writing (210). The interventions and practices they tested were developed by the Council of Writing Program Administrators (CWPA), while the learning measures were those used in the National Survey of Student Engagement (NSSE). This collaboration resulted from a feature of the NSSE in which institutions may form consortia to “append questions of specific interest to the group” (206).

Anderson et al. note that an important limitation of the NSSE is its reliance on self-report data, but they contend that “[t]he validity and reliability of the instrument have been extensively tested” (205). Although the institutions sampled were self-selected, and women, large institutions, research institutions, and public schools were over-represented, the authors believe that the overall diversity and breadth of the population sampled by the NSSE/CWPA collaboration, encompassing more than 70,000 first-year and senior students, permits generalization that has not been possible with more narrowly targeted studies (204).

The NSSE queries students on how often they have participated in pedagogic activities that can be linked to enhanced learning. These include a wide range of practices, such as service-learning, interactive learning, and “institutionally challenging work” such as extensive reading and writing. In addition, the survey inquires about campus features such as support services and relationships with faculty, as well as students’ perceptions of the degree to which their college experience led to enhanced personal development. The survey also captures demographic information (205-06).

Chosen as dependent variables for the joint CWPA/NSSE study were two NSSE scales:

  • Deep Approaches to Learning, which encompassed three subscales, Higher-Order Learning, Integrative Learning, and Reflective Learning. This scale focused on activities related to analysis, synthesis, evaluation, combination of diverse sources and perspectives, and awareness of one’s own understanding of information (211).
  • Perceived Gains in Learning and Development, which involved subscales of Practical Competence such as enhanced job skills, including the ability to work with others and address “complex real-world problems”; Personal and Social Development, which inquired about students’ growth as independent learners with “a personal code of values and ethics” able to “contribut[e] to the community”; and General Education Learning, which includes the ability to “write and speak clearly and effectively, and to think critically and analytically” (211).

The NSSE also asked students for a quantitative estimate of how much writing they actually did in their coursework (210). These data allowed the researchers to separate the effects of simply assigning more writing from those of employing different kinds of writing instruction.

To test for correlations between pedagogical choices in writing instruction and practices related to enhanced learning as measured by the NSSE scales, the research team developed a “consensus model for effective practices in writing” (206). Eighty CWPA members generated questions that were distilled to 27 divided into “three categories based on related constructs” (206). Twenty-two of these ultimately became part of a module appended to the NSSE that, like the NSSE “Deep Approaches to Learning” scale, asked students how often their coursework had included the specific activities and behaviors in the consensus model. The “three hypothesized constructs for effective writing” (206) were

  • Interactive Writing Processes, such as discussing ideas and drafts with others, including friends and faculty;
  • Meaning-Making Writing Tasks, such as using evidence, applying concepts across domains, or evaluating information and processes; and
  • Clear Writing Expectations, which refers to teacher practices in making clear to students what kind of learning an activity promotes and how student responses will be assessed. (206-07)

They note that no direct measures of student learning are included in the NSSE, nor are such measures included in their study (204). Rather, in both the writing module and the NSSE scale addressing Deep Approaches to Learning, students are asked to report on kinds of assignments, instructor behaviors and practices, and features of their interaction with their institutions, such as whether they used on-campus support services (205-06). The scale on Perceived Gains in Learning and Development asks students to self-assess (211-12).

Despite the lack of specific measures of learning, Anderson et al. argue that the curricular content included in the Deep Approaches to Learning scale does accord with content that has been shown to result in enhanced student learning (211, 231). The researchers argue that comparisons between the NSSE scales and the three writing constructs allow them to detect an association between the effective writing practices and the attitudes toward learning measured by the NSSE.

Anderson et al. provide detailed accounts of their statistical methods. In addition to analysis for goodness-of-fit, they performed “blocked hierarchical regressions” to determine how much of the variance in responses was explained by the kind of writing instruction reported versus other factors, such as demographic differences, participation in various “other engagement variables” such as service-learning and internships, and the actual amount of writing assigned (212). Separate regressions were performed on first-year students and on seniors (221).

Results “suggest[ed] that writing assignments and instructional practices represented by each of our three writing scales were associated with increased participation in Deep Approaches to Learning, although some of that relationship was shared by other forms of engagement” (222). Similarly, the results indicate that “effective writing instruction is associated with more favorable perceptions of learning and development, although other forms of engagement share some of that relationship” (224). In both cases, the amount of writing assigned had “no additional influence” on the variables (222, 223-24).

The researchers provide details of the specific associations among the three writing constructs and the components of the two NSSE scales. Overall, they contend, their data strongly suggest that the three constructs for effective writing instruction can serve “as heuristics that instructors can use when designing writing assignments” (230), both in writing courses and courses in other disciplines. They urge faculty to describe and research other practices that may have similar effects, and they advocate additional forms of research helpful in “refuting, qualifying, supporting, or refining the constructs” (229). They note that, as a result of this study, institutions can now elect to include the module “Experiences with Writing,” which is based on the three constructs, when students take the NSSE (231).

 


Rice, Jenny. Para-Expertise in Writing Classrooms. CE, Nov. 2015. Posted 12/07/2015.

Rice, Jenny. “Para-Expertise, Tacit Knowledge, and Writing Problems.” College English 78.2 (2015): 117-38. Print.

Jenny Rice examines how views of expertise in rhetoric and composition shape writing instruction. She argues for replacing the definition of non-expertise as a lack of knowledge with expanded approaches to expertise open to what Michael Polanyi has called “tacit knowledge” (125). Rice proposes a new category of knowledge, “para-expertise,” that draws on tacit knowledge to enable students and other non-experts to take part in the activities of expertise.

Rice cites a number of approaches to expertise in rhet/comp’s disciplinary considerations. Among them is the idea that the field has content that only qualified individuals can impart (120). Further, she sees expectations in writing-across-the-curriculum and writing-in-the-disciplines, as well as the view that composition courses should inculcate students in “expert [reading and writing] practice[s]” (121), as indications of the rhetorical presence that notions of expertise have acquired in the field (120-21).

She contrasts the idea of novice practice as a deficiency with other attitudes toward expertise. Within the field of composition studies, she points to the work of Linda Flower and John Hayes. These scholars, she writes, found that the expertise of good writers consisted not of specific knowledge but rather of the ability to pose more complex problems for themselves as communicators. Whereas weaker writers “often flatline around fulfilling the details of the prompt, including word count and other conventional details,” expert writers “use the writing prompt as a way to articulate and define their own understanding of the rhetorical situation to which they are responding” (121).

This discussion leads Rice to a view of expertise as meaningful problem-posing, an activity rather than a body of knowledge. In this view, students can do the work of expertise even when they have no field-specific knowledge (122). Understanding expertise in this way leads Rice to explore categories of expertise as laid out in “the interdisciplinary field of Studies of Expertise and Experience (SEE)” (123). Scholars in this field distinguish between “contributory experts” who “have the ability to do things within the domain of their expertise” (Harry Collins and Robert Evans, qtd. in Rice 123; emphasis original); and “interactional experts,” who may not be able to actively produce within the field but who are “immersed in the language of that particular domain” (123). Rice provides the example of artists and art critics (123).

Rice emphasizes the importance of interactional expertise by noting that not all contributory experts communicate easily with each other and thus require interactional experts to “bridge the gulf” between discourse communities addressing a shared problem (124). She provides the example of “organic farmers and agricultural scholars” who function within separate expert domains yet need others to “translate” across these domains (124-25).

But Rice feels these definitions need to be augmented with another category to encompass people like students who lack the domain-specific knowledge to be contributory or interactional experts. She proposes the category “para-expertise,” in which para takes on its “older etymology” as “alongside (touching the side of) different forms of expertise” (119).

In Rice’s view, the tacit knowledge that fuels para-expertise, while usually discounted in formal contexts, arises from “embodied knowledge” gleaned from everyday living in what Debra Hawhee has called “rhetoric’s sensorium” (cited in Rice 126). In Rice’s words, this sensorium may be defined as “the participatory dimension of communication that falls outside of simple articulation without falling outside the realm of understanding” (126). She gives the example of not being able to articulate the cues that, when implicitly sensed, result in her clear knowledge that she is hearing her mother’s voice on the phone (125).

Rice’s extended example of the work of para-expertise revolves around students’ sense of the effects of campus architecture on their moods and function. Interviews with “hundreds of college students” at “four different university campuses” regarding their responses to “urban legends” about dorms and other buildings being like prisons lead Rice to argue that the students were displaying felt knowledge of the bodily and psychological effects of window and hallway dimensions even though they did not have the expert disciplinary language to convert their sensed awareness into technical architectural principles (127-31). In particular, Rice states, the students drew a sense of a problem to be addressed from their tacit or para knowledge and thus were embarking on “the activity of expertise” (131).

In Rice’s discussion, para-expertise can productively engage with other forms of expertise through the formation of “strategic expertise alliances” (131). By itself para-expertise cannot resolve a problem, but those whose tacit knowledge has led them to identify the problem can begin to address it via coalitions with those with the specific disciplinary tools to do so. As a classroom example, she explains that students on her campus had become concerned about intentions to outsource food options, thus endangering connections with local providers and reducing choices. Lacking the vocabulary to present their concerns to administrators, a group of students and faculty joined with local community organizations that were able to provide specific information and guidance in constructing arguments (132-33).

Rice’s own writing students, participating in this campus issue, were asked to gather oral histories from members of a nearby farmers’ market. The students, however, felt “intimidated and out of place” during their visits to the farmers’ market (136), partly because, as students from other areas, they had seldom had any reason to visit the market. Rice considers this tacit response to the market the opening of a problem to be addressed: “How can a community farmers market reach students who only temporarily reside in that community?” (136; emphasis original).

Rice writes:

[T]he solution calls for greater expertise than first-year students possess. Rather than asking students to (artificially) adopt the role of expertise and pose a solution, however, we turned to a discussion of expert alliances. Who were the “pivot points” in this problem? Who were the contributory experts, and who had the skills of interactional expertise? (136)

Ultimately, alliances resulting from this discussion led to the creation of a branch of the farmers’ market on campus (136).

Rice argues that this approach to expertise highlights its nature as a collaborative effort across different kinds of knowledge and activities (134). It de-emphasizes the “terribly discouraging” idea that “discovery” is the path to expertise and replaces that “myth” with an awareness that “invention and creation” and how “[e]xperts pose problems” are the keys to expert action (122; emphasis original). It also helps students understand the different kinds of expertise and how their own tacit knowledge can become part of effective action (135).

 


Addison, Joanne. Common Core in College Classrooms. Journal of Writing Assessment, Nov. 2015. Posted 12/03/2015.

Addison, Joanne. “Shifting the Locus of Control: Why the Common Core State Standards and Emerging Standardized Tests May Reshape College Writing Classrooms.” Journal of Writing Assessment 8.1 (2015): 1-11. Web. 20 Nov. 2015.

Joanne Addison offers a detailed account of moves by testing companies and philanthropists to extend the influence of the Common Core State Standards Initiative (CCSSI) to higher education. Addison reports that these entities are building “networks of influence” (1) that will shift agency from teachers and local institutions to corporate interests. She urges writing professionals to pay close attention to this movement and to work to retain and restore teacher control over writing instruction.

Addison writes that a number of organizations are attempting to align college writing instruction with the CCSS movement currently garnering attention in K-12 institutions. This alignment, she documents, is proceeding despite criticisms of the Common Core Standards for demanding skills that are “not developmentally appropriate,” for ignoring crucial issues like “the impact of poverty on educational opportunity,” and for the “massive increase” in investment in and reliance on standardized testing (1). But even if these challenges succeed in scaling back the standards, she contends, too many teachers, textbooks, and educational practices will have been influenced by the CCSSI for its effects to dissipate entirely (1). Control of professional development practices by corporations and specific philanthropies, in particular, will link college writing instruction to the Common Core initiative (2).

Addison connects the investment in the Common Core to the “accountability movement” (2) in which colleges are expected to demonstrate the “value added” by their offerings as students move through their curriculum (5). Of equal concern, in Addison’s view, is the increasing use of standardized test scores in college admissions and placement; she notes, for example, “640 colleges and universities” in her home state of Colorado that have “committed to participate” in the Partnership for Assessment of Readiness for College and Career (PARCC) by using standardized tests created by the organization in admissions and placement; she points to an additional 200 institutions that have agreed to use a test generated by the Smarter Balanced Assessment Consortium (SBAC) (2).

In her view, such commitments are problematic not only because they use single-measure tools rather than more comprehensive, pedagogically sound decision-making protocols but also because they result from the efforts of groups like the English Language Arts Work Group for CCSSI, the membership of which is composed of executives from testing companies, supplemented with only one “retired English professor” and “[e]xactly zero practicing teachers” (3).

Addison argues that materials generated by organizations committed to promoting the CCSSI show signs of supplanting more pedagogically sound initiatives like NCTE’s Read-Write-Think program (4). To illustrate how she believes the CCSSI has challenged more legitimate models of professional development, she discusses the relationship between CCSSI-linked coalitions and the National Writing Project.

She writes that in 2011, funds for the National Writing Project were shifted to the president’s Race to the Top (3). Some funding was subsequently restored, but grants from the Bill and Melinda Gates Foundation specifically supported National Writing Project sites that worked with an entity called the Literacy Design Collaborative (LDC) to promote the use of the Common Core Standards in assignment design and to require the use of a “jurying rubric” intended to measure the fit with the Standards in evaluating student work (National Writing Project, 2014, qtd. in Addison 4). According to Addison, “even the briefest internet search reveals a long list of school districts, nonprofits, unions, and others that advocate the LDC approach to professional development” (4). Addison contends that teachers have had little voice in developing these course-design and assessment tools and are unable, under these protocols, to refine instruction and assessment to fit local needs (4).

Addison expresses further concern about the lack of teacher input in the design, administration, and weight assigned to the standardized testing used to measure “value added” and thus hold teachers and institutions accountable for student success. A number of organizations largely funded by the Bill and Melinda Gates Foundation promote the use of “performance-based” standardized tests given to entering college students and again to seniors (5-6). One such test, the Collegiate Learning Assessment (CLA), is now used by “700 higher education institutions” (5). Addison notes that nine English professors were among the 32 college professors who worked on the development and use of this test; however, all were drawn from “CLA Performance Test Academies” designed to promote the “use of performance-based assessments in the classroom,” and the professors’ specialties were not provided (5-6).

A study conducted using a similar test, the Common Core State Standards Validation Assessment (CCSSAV) indicated that the test did provide some predictive power, but high-school GPA was a better indicator of student success in higher education (6). In all, Addison reports four different studies that similarly found that the predictor of choice was high-school GPA, which, she says, improves on the snapshot of a single moment supplied by a test, instead measuring a range of facets of student abilities and achievements across multiple contexts (6).

Addison attributes much of the movement toward CCSSI-based protocols to the rise of “advocacy philanthropy,” which shifts giving from capital improvements and research to large-scale reform movements (7). While scholars like Cassie Hall see some benefits in this shift, for example in the ability to spotlight “important problems” and “bring key actors together,” concerns, according to Addison’s reading of Hall, include

the lack of external accountability, stifling innovation (and I would add diversity) by offering large-scale, prescriptive grants, and an unprecedented level of influence over state and government policies. (7)

She further cites Hall’s concern that this shift will siphon money from “field-initiated academic research” and will engender “a growing lack of trust in higher education” that will lead to even more restrictions on teacher agency (7).

Addison’s recommendations for addressing the influx of CCSSI-based influences include aggressively questioning our own institutions’ commitments to facets of the initiative, using the “15% guideline” within which states can supplement the Standards, building competing coalitions to advocate for best practices, and engaging in public forums, even where such writing is not recognized in tenure-and-promotion decisions, to “place teachers’ professional judgment at the center of education and help establish them as leaders in assessment” (8). Such efforts, in her view, must serve the effort to identify assessment as a tool for learning rather than control (7-8).

Access this article at http://journalofwritingassessment.org/article.php?article=82

Combs, Frost, and Eble. Collaborative Course Design in Scientific Writing. CS, Sept. 2015. Posted 11/12/15.

Combs, D. Shane, Erin A. Frost, and Michelle F. Eble. “Collaborative Course Design in Scientific Writing: Experimentation and Productive Failure.” Composition Studies 43.2 (2015): 132-49. Web. 11 Nov. 2015.

Writing in the “Course Design” section of Composition Studies, D. Shane Combs, Erin A. Frost, and Michelle F. Eble describe a science-writing course taught at East Carolina University, “a doctoral/research institution with about 27,000 students, serv[ing] a largely rural population” (132). The course has been taught by the English department since 1967 as an upper-level option for students in the sciences, English, and business and technical communication. The course also acts as an option for students to fulfill the requirement to take two writing-intensive (WI) courses, one in the major; as a result, it serves students in areas like biology and chemistry. The two to three sections per semester offered by English are generally taught by “full-time teaching instructors” and sometimes by tenured/tenure-track faculty in technical and professional communication (132).

Combs et al. detail iterations of the course taught by Frost and Eble, who had not taught it before. English graduate student D. Shane Combs contributed as a peer mentor. Inclusion of the peer mentor as well as the incorporation of university-wide writing outcomes into the course-specific outcomes resulted from a Quality Enhancement Plan underway at the university as a component of its reaccreditation. This plan included a special focus on writing instruction, for example, a Writing Mentors program that funded peer-mentor support for WI instruction. Combs, who was sponsored by the English department, brought writing-center experience as well as learning from “a four-hour professional development session” to his role (133).

Drawing on work by Donna J. Haraway, Sandra Harding, and James C. Wilson, Frost and Eble’s collaboratively designed sections of the course were intent “on moving students into a rhetorical space where they can explore the socially constructed nature of science, scientific rhetoric, and scientific traditions” (134). In their classes, the instructors announced that they would be teaching from “an ‘apparent feminist’ perspective,” in Frost’s case, and from “a critical gender studies approach” in Eble’s (134-35). The course required three major assignments: field research on scientific writing venues in an area of the student’s choice; “a complete scientific article” for one of the journals that had been investigated; and a conversion of the scientific article into a general-audience article appropriate for CNN.com (135). A particular goal of these assignments was to provoke cognitive dissonance in order to raise questions of how scientific information can be transmitted “in responsible ways” as students struggled with the selectivity needed for general audiences (135).

Other components of students’ grades were class discussion, a “scripted oral debate completed in small groups,” and a “personal process journal.” In addition, students participated in “cross-class peer review,” in which students from Frost’s class provided feedback on the lay articles from Eble’s class and vice versa (136).

In their Critical Reflection, Combs et al. consider three components of the class that provided particular insights: the collaboration in course design; the inclusion of the peer mentor; and the cross-class peer review (137). Collaboration not only allowed the instructors to build on each other’s strengths and experiences but also helped them analyze other aspects of the class. Frost and Eble determined that differences in their own backgrounds and teaching styles affected student responses to assignments. For example, Eble’s experience on an Institutional Review Board influenced her ability to help students think beyond the perception that writing for varied audiences required them to “dumb down” their scientific findings (137).

Much discussion centers on what the researchers learned from the cross-class peer review about students’ dissonance in producing the CNN.com lay article. Students in the two classes addressed this challenge quite differently. Frost’s students resisted the complexity that Eble’s students insisted on sustaining in their revisions of their scientific article, while students in Eble’s class criticized the submissions from Frost’s students as “too simple.” The authors write that “even though students were presented with the exact same assignment prompt, they received different messages about their intended audiences” (138).

The researchers credit Combs’s presence as a peer mentor in Frost’s class for the students’ ability to revise more successfully for non-specialized audiences. They argue that he provided a more immediate outside audience at the same time that he promoted a sense of community and identification that encouraged students to make difficult rhetorical decisions (138-39). His feedback to the instructors helped them recognize the value of the cross-class peer review despite the apparent challenges it presented. In his commentary, he discusses how receiving the feedback from the other class prompted one student to achieve a “successful break from a single-form draft writing and in-class peer review” (Combs, qtd. in Combs et al. 140). He quotes the student’s perception that everyone in her own class “had the same understanding of what the paper was supposed to be” and her sense that the disruption of seeing the other class’s very different understanding fueled a complete revision that made her “happier with [her] actual article” (140). The authors conclude that both the contributions of the peer mentor and the dissonance created by the very different understandings of audience led to increased critical reflection (140), in particular, in Combs’s words, the recognition that

there are often spaces in writing not filled by right-and-wrong choices, but by creating drafts, receiving feedback, and ultimately making the decision to go in a chosen direction. (140)

In future iterations, in addition to retaining the cross-class peer review and the peer-mentor presence, the instructors propose equalizing the amount of feedback the classes receive, especially since receiving more feedback rather than less pushes students to “prioritize” and hence develop important revision strategies (141). They also plan to simplify the scientific-article assignment, which Frost deemed “too much” (141). An additional course-design revision involves creating a lay article from a previously published scientific paper in order to prepare students for the “affective impact” (141) of making radical changes in work to which they are already deeply committed. A final change involves converting the personal journal to a social-media conversation to develop awareness of the exigencies of public discussion of science (141).


Preston, Jacqueline. Composition as “Assemblage.” CCC, Sep. 2015. Posted 11/03/2015.

Preston, Jacqueline. “Project(ing) Literacy: Writing to Assemble in a Postcomposition FYW Classroom.” College Composition and Communication 67.1 (2015): 35-63. Print.

Jacqueline Preston advocates for a project-based model for composition, particularly in basic-writing classes. Such a model, she argues, benefits students in several important ways. It refuses the longstanding deficit approach that, according to Victor Villanueva, defines students who fall into the basic-writing population in terms of “illness” (qtd. in Preston 35); it allows students to draw on their histories, interests, and multiple “acquired literacies” (42) to produce writing that is rich in “complexity,” “relevancy,” and “contingency” (39); and it encourages students to view writing as an “assemblage” of many overlapping components, including personal histories; cultural, social, and political interactions; prior reading and writing; and many kinds of “rhetorical negotiation” (54).

Preston contends that composition still embraces a deficit model that sees its purpose as preparing underprepared students for future academic work. Such an approach, working with a narrow understanding of literacy, focuses on writing as a “technology of representation” (Raúl Sánchez, qtd. in Preston 38, 61n7), devoted to proficient communication that primarily serves as a “conduit” for information (43). This view requires that students’ lived literacies be dismissed as deficiencies and that composition itself be limited to fulfilling a service role within the limits of the university (36, 38).

In contrast, Preston presents a view of writing aligned with postcompositionist approaches that advocate seeing writing more expansively as the actual moment of “culture making itself” (40). She urges composition studies to embrace Kenneth Burke’s concept of “dialectical space” as the realm of the “both/and” in which “merger and division” bring together disparate assemblages to transform them into something transcendent.

Seeing writing through this lens, she argues, allows an awareness of writing as a process of “becoming,” a concept from Gilles Deleuze and Félix Guattari in which each act of assembly transforms previous knowledge and creates new realities (39-40). Drawing on Sidney Dobrin’s book Postcomposition, she argues that the view of composition engendered by the project model she describes enables engaging “the possibles” that “emerge on the edge of chaos” but that “strive toward becoming actuals” if embraced in a dialectical spirit (Dobrin, qtd. in Preston 54).

Preston presents the project-based model, which she traces to John Dewey and William Heard Kilpatrick, as a pedagogical method that can introduce students to this view of literacy. Her article is based on a twelve-month grounded-theory study examining the experiences of ten students and seven faculty (37, 61n11). In Preston’s program, basic writing is the purview of eight tenured and tenure-line faculty in “an independent basic writing unit” in which “constructivist approaches” have long been in place (41). Preston presents examples of student work in the course, focusing especially on a particular student who had entered college uncertain of his readiness but who successfully developed a fundraising and social-media plan to encourage the installation of bike racks in the city.

Her account of this student’s work contrasts his experience with the expectations he would have been asked to meet in a traditional argument curriculum (50-51). She recounts that his original proposal to “do a presentation to the Downtown Alliance . . . as a citizen” (student, qtd. in Preston 40) evolved as he learned more about previous work done on his idea and drew on his prior involvement in the bicycling community, including expertise and literacies he had developed through that background. In a more traditional approach, she argues, he would have gathered evidence and counterarguments but would never have had

a chance to come face-to-face with the inherent complexities of his writing project and to see “good writing” as a multifarious and contingent response to constantly shifting rhetorical, social, and political realities. (51)

Adoption of a project-based model, Preston writes, raises questions about the nature of “good writing” and “effective pedagogy.” The model, she states, does not completely dismiss the conventions and genre requirements common to more traditional curricula. As students compose many different kinds of texts, from a “well-researched proposal to a sponsor” to emails, interview questions, brochures, and video presentations, they not only incorporate conventions but, because of their investment in their projects, become “eager to know more about the conventions of particular genres and how best to use outside resources to appeal to specific audiences” (52). The model stresses the degree to which all writing is a situated assemblage of many different contingent components always open to revision rather than a representation of a stable truth (51).

Effective pedagogy, in this model, becomes pedagogy that resists practices that limit access; builds on and furthers students’ histories, literacies, goals, and interests; provides students with a richer sense of the possibilities writing offers; and “produc[es] writing that has consequence” (53). Important, in Preston’s view, is the model’s capacity for allowing students to “transfer from” their own experiences the material to support critical inquiry rather than insisting that the sole purpose of first-year writing is to enable students defined as underprepared to “transfer to,” that is, to tailor their work to narrow views of literacy as circumscribed by traditional notions of proficient college work (62n12; emphasis original).


Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test scores alone, which ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; use of assessment data to design a curriculum that meets actual needs; creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).

T. Bourelle et al. Using Instructional Assistants in Online Classes. C&C, Sept. 2015. Posted 10/13/2015.

Bourelle, Tiffany, Andrew Bourelle, and Sherry Rankins-Robertson. “Teaching with Instructional Assistants: Enhancing Student Learning in Online Classes.” Computers and Composition 37 (2015): 90-103. Web. 6 Oct. 2015.

Tiffany Bourelle, Andrew Bourelle, and Sherry Rankins-Robertson discuss the “Writers’ Studio,” a pilot program at Arizona State University that utilized upper-level English and education majors as “instructional assistants” (IAs) in online first-year writing classes. The program was initiated in response to a request from the provost to cut budgets without affecting student learning or increasing faculty workload (90).

A solution was an “increased student-to-teacher ratio” (90). To ensure that the creation of larger sections met the goal of maintaining teacher workloads and respected the guiding principles put forward by the Conference on College Composition and Communication Committee for Best Practices in Online Writing Instruction in its March 2013 Position Statement, the team of faculty charged with developing the cost-saving measures supplemented “existing pedagogical strategies” with several innovations (91).

The writers note that one available cost-saving step was to avoid staffing underenrolled sections. To meet this goal, the team created “mega-sections” in which one teacher was assigned per 96 students, the equivalent of a full-time load. Once the enrollment exceeded 96, a second teacher was assigned to the section, and the two teachers team-taught. T. Bourelle et al. give the example of a section of the second semester of the first-year sequence that enrolled 120 students and was taught by two instructors. These 120 students were assigned to 15-student subsections (91).

T. Bourelle et al. note several reasons why the new structure potentially increased faculty workload. First, they cite research by David Reinheimer to the effect that teaching writing online is inherently more time-intensive than instructors may expect (91). Second, the planned curriculum included more drafts of each paper, requiring more feedback. In addition, the course design required multimodal projects. Finally, students also composed “metacognitive reflections” to gauge their own learning on each project (92).

These factors prompted the inclusion of the IAs. One IA was assigned to each 15-student group. These upper-level students contributed to the feedback process. First-year students wrote four drafts of each paper: a rough draft that received peer feedback, a revised draft that received comments from the IAs, an “editing” draft students could complete using the writing center or online resources, and finally a submission to the instructor, who would respond by either accepting the draft for a portfolio or returning it with directions to “revise and resubmit” (92). Assigning portfolio grades fell to the instructor. The authors contend that “in online classes where students write multiple drafts for each project, instructor feedback on every draft is simply not possible with the number of students assigned to any teacher, no matter how she manages her time” (93).

T. Bourelle et al. provide extensive discussion of the ways the IAs prepared for their roles in the Writers’ Studio. A first component was an eight-hour orientation in which the assistants were introduced to important teaching practices and concepts, in particular the process of providing feedback. Various interactive exercises and discussions allowed the IAs to develop their abilities to respond to the multimodal projects required by the Studio, such as blogs, websites, or “sound portraits” (94). The instruction for IAs also covered the distinction between “directive” and “facilitative” feedback, with the latter designed to encourage “an author to make decisions and [give] the writer freedom to make choices” (94).

Continuing support throughout the semester included a “portfolio workshop” that enabled the IAs to guide students in their production of the culminating eportfolio requirement, which required methods of assessment unique to electronic texts (95). Bi-weekly meetings with the instructors of the larger sections to which their cohorts belonged also provided the IAs with the support needed to manage their own coursework while facilitating first-year students’ writing (95).

In addition, IAs enrolled in an online internship that functioned as a practicum comparable to practica taken by graduate teaching assistants at many institutions (95-97). The practicum for the Writers’ Studio internship reinforced work on providing facilitative feedback but especially incorporated the theory and practice of online instruction (96). T. Bourelle et al. argue that the effectiveness of the practicum experience was enhanced by the degree to which it “mirror[ed]” much of what the undergraduate students were experiencing in their first-year classes: “[B]oth groups of beginners are working within initially uncomfortable but ultimately developmentally positive levels of ambiguity, multiplicity, and open-endedness” (Barb Blakely Duffelmeyer, qtd. in T. Bourelle et al. 96). Still quoting Duffelmeyer, the authors contend that adding computers “both enriched and problematized” the pedagogical experience of the coursework for both groups (96), imposing the need for special attention to online environments.

Internship assignments also gave the IAs a sense of what their own students would be experiencing by requiring an eportfolio featuring what they considered their best examples of feedback to student writing as well as reflective papers documenting their learning (98).

The IAs in the practicum critiqued the first-year curriculum, for example suggesting stronger scaffolding for peer review and better timing of assignments. They wrote various instructional materials to support the first-year course activities (97).

Their contributions to the first-year course included “[f]acilitating discussion groups” (98) and “[d]eveloping supportive relationships with first-year writers” (100), but especially “[r]esponding to revised drafts” (99). T. Bourelle et al. note that the IAs’ feedback differed from that of peer reviewers in that the IAs had acquired background in composition and rhetorical theory; unlike writing-center tutors, the IAs were more versed in the philosophy and expectations embedded in the course itself (99). IAs were particularly helpful to students who had misread the assignments, and they were able to identify and mentor students who were falling behind (98, 99).

The authors respond to the critique that the IAs represented uncompensated labor by arguing that the Writers’ Studio offered a pedagogically valuable opportunity that would serve the students well if they pursued graduate or professional careers as educators, emphasizing the importance of designing such programs to benefit the students as well as the university (101). They present student and faculty testimony on the effectiveness of the IAs as a means of “supplement[ing] teacher interaction” rather than replacing it (102). While they characterize the “monetary benefit” to the university as “small” (101), they consider the project “successful” and urge other “teacher-scholars to build on what we have tried to do” (102).


Tarsa, Rebecca. Online Interface as Exordium. CE, Sept. 2015. Posted 09/29/2015.

Tarsa, Rebecca. “Upvoting the Exordium: Literacy Practices of the Digital Interface.” College English 78.1 (2015): 12-33. Print.

Rebecca Tarsa proposes strategies for creating an effective “exordium” for writing classrooms by examining how the digital interface works as an exordium in online participatory sites in which students voluntarily contribute writing. She draws on Teena Carnegie’s work to argue that the interface of an online site meets Cicero’s definition of the exordium as an appeal designed to “make the listener ‘well-disposed, attentive, and receptive’ to the ensuing speech” (25). In the case of an online site, the interface as exordium accomplishes this goal by “project[ing] to users the potential for interactivity within the site that matches their desired engagement while also supporting the ends of the site itself” (25-26).

To determine how interfaces affect students’ writing decisions, Tarsa drew on interviews with thirty students at two institutions, one a two-year college and the other a research university (15). The students were members of the general-education population and not necessarily advanced online writers (16). Using grounded theory methodology, Tarsa developed her observations after coding the interviews (16-17). More than three-quarters of the students voluntarily raised the issue of the effects of a site’s interface, leading Tarsa to recognize it as an important element in students’ online participation (17). She notes that her conclusions about student activities were based on self-report and cannot be considered generalizable, but argues that using “students’ own perceptions” is valuable because it provides useful additions to “our understanding of digital participatory cultures” (18).

Tarsa introduces the concept of “affordances,” which she defines as “the potential interaction offered to users by a tool or feature of a site’s interface” (18). She focuses on two kinds of affordances, “[e]ntry” and “qualitative” (18, 22). Entry affordances, she writes, affect student decisions about participation long before they have accessed any content. Such affordances involve the appearance of a site, which the students Tarsa interviewed often seemed to judge as inviting or uninviting, perhaps “boring” (student, qtd. in Tarsa 19). A second important feature of an interface that influences participation is the registration process, if one is in place. Tarsa found that students might use a site extensively yet resist the step of signing up, in some cases because they felt they already had too many accounts and passwords (20). Tarsa found that “usability” was not a determining factor in students’ decisions; rather, they were likely to judge whether or not a particular feature or requirement was “useful” (20). For example, acquiring the ability to access a site on a mobile device was useful to some of the students interviewed (20-21).

Students who ultimately decided to register, Tarsa reports, tended to do so either because they “had something in particular they wanted to contribute” or because “they wanted to customize their interface experience or vote on content” (21). In such cases, the students had regularly visited the sites before deciding to sign up. She posits that although a desire to write was not necessarily the primary motivation, having registered cleared the way for future engagement, for example writing (21).

Tarsa depicts “qualitative affordances” as invitations to interact, initially through voting on the quality of content. She writes that such judgments of quality can involve sharing, “liking” (a “one-way” judgment), or voting up or down (a “two-way” assessment) (22). Tarsa argues that the ability to vote offers users a safe, visible, easy-to-use means of becoming a contributor to an online community. Such actions by users become a form of agency, as audiences determine what content will become successful.

The existence of qualitative affordances, Tarsa posits, is one factor in overcoming users’ resistance to entry affordances, like registration (23). Eliminating this resistance positions users to take the next step of writing. Regular involvement in voting activities “create[s] higher levels of comfort with and investment in a site overall” (24), necessary components if a user is going to risk the “range of anxieties” (23) inherent in writing. Thus, the ability to vote on content drew the students Tarsa interviewed into sites where “all but one” of those who had registered for the purpose of voting “eventually went on to participate within those sites via writing” (23).

Invoking Carnegie’s theory, Tarsa proposes that the work of motivating writing begins with the features of the interface working as exordium, particularly in promising and facilitating the “interactivity” that leads to a sense of “connection” and “acceptance” (Carnegie, qtd. in Tarsa 26). Interacting with other users through the qualitative affordances enabled by the interface leads writers to an awareness of audiences beyond their immediate sphere (28). While the threat of being voted down may discourage some writing, in Tarsa’s view, the familiarity with interaction that results from these affordances is more likely to encourage writing than to “quash” it (27). She notes that a particular exordium will not appeal to every user; each online culture competes with so many others that any site seeking to prompt participation must hone its interface with careful attention to its intended audience (26-27).

Tarsa sees challenges in creating a classroom exordium that makes use of the features that interfaces provide in online cultures. She states that the ability to write on impulse with little cost or risk fuels participation in online interaction; this “spontaneity” is difficult to reproduce in the classroom (29). Options like blogging, while promising, must be designed so as to reduce entry barriers like “schedul[ing] time to write the assigned post, navigat[ing] to the site, and log[ging] in before they can write” (29). Making entry routines part of a regular class day is one possible step toward encouraging participation. Similarly, class discussion does not mimic the interactivity offered by qualitative affordances, both because speaking up poses a risk that voting does not and because discussion cannot register spontaneous reactions the way voting can.

Tarsa suggests incorporating versions of more popular qualitative affordances like “liking” or supplying links to related material into such activities as selection of material for a digital bibliography (29-30). Finally, the features of online participatory sites can play “an ongoing part in rhetorical inquiry” into “the relationship between author and audience” (30). In Tarsa’s view, such efforts to exploit the features of the online exordium that invite writing can also encourage it in classrooms.


Sullivan, Patrick. Making Room for Creativity in the Composition Class. CCC, Sept. 2015. Posted 09/15/2015.

Sullivan, Patrick. “The UnEssay: Making Room for Creativity in the Composition Classroom.” College Composition and Communication 67.1 (2015): 6-34. Print.

Patrick Sullivan urges composition scholars to embrace creativity as a fundamental component of an enriched writing curriculum. In Sullivan’s view, although researchers and scholars outside of composition have steadily moved creativity to the core of their models of cognition and of the kinds of thinking they feel are needed to meet 21st-century challenges, writing scholars have tended to isolate “creativity” in creative-writing courses. Sullivan presents a “most essential question”: “Might there be some value in embracing creativity as an integral part of how we theorize writing?” (7).

A subset of questions includes such issues as current definitions of creativity, emerging views of its contribution in myriad contexts, and the relationship between creativity and important capacities like critical thinking (7).

Sullivan surveys works by educators, psychologists, neuroscientists, and others on the value of creativity and the ways it can be fostered. This work challenges the view that creativity is the special domain of a limited number of special people; rather, the research Sullivan presents considers it a “common and shared intellectual capacity” (12) responsible for the development of culture through ongoing innovation (9) as well as essential to the flexible thinking and problem-solving ability needed beyond the classroom (8-9, 15).

Scholars Sullivan cites position creativity as an antidote to the current focus on testing and accountability that promotes what Douglas Hesse calls the “extraordinarily narrow view of writing” that results from such initiatives as the Common Core Standards (qtd. in Sullivan 18). Sullivan draws on Ken Robinson, who contends that current models of schooling have “educated out” our natural creativity: “[M]ost children think they’re highly creative; most adults think they’re not” (qtd. in Sullivan 9).

Other scholars urging the elevation of creativity as central to cognition include intelligence researcher Robert J. Sternberg, for whom creativity entails three components: “synthetic ability (generating ideas), analytical ability (evaluating ideas, critical thinking), and practical ability (translating ideas into practice and products)” (10). Sullivan compares models of “habits of mind” developed by other scholars with the habits of mind incorporated into the “Framework for Success in Postsecondary Writing” collaboratively generated by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project; he notes that many such models, including the “Framework,” consider creativity “an essential twenty-first-century cognitive aptitude” (12). He recommends to composition scholars the international view that creativity is equal in importance to literacy, a view embodied in the Finnish educational system and in the Program for International Student Assessment (PISA), which would replace testing for memorization with testing for students’ ability “to think for themselves” (Amanda Ripley, qtd. in Sullivan 13).

Importantly, Sullivan argues, incorporating creativity into classrooms has crucial implications for overall cognitive development. According to the researchers Sullivan cites, expanding the kinds of activities and the kinds of writing students do enhances overall mental function (14), leading to the “rhetorical dexterity” (Shannon Carter, qtd. in Sullivan 20) essential to negotiating today’s rapidly changing rhetorical environments (21).

As further evidence of the consensus on the centrality of creativity to learning and cognition, Sullivan presents the 2001 revision of Bloom’s 1956 Taxonomy. This revision replaces “synthesis and evaluation” at the pinnacle of cognitive growth with “creating” (19). Discussing the revised Taxonomy to which they contributed, Lorin Anderson and David Krathwohl note that the acquisition of the “deep understanding” necessary to “construction and insight” demands the components inherent in “Create” (qtd. in Sullivan 19-20).

Such deep understanding, Sullivan argues, is the goal of the writing classroom: “[I]ts connection here to creativity links this luminous human capacity to our students’ cognitive development” (20). Similarly, concern about students’ transfer of the intellectual work of academic writing to other domains and a recognition of the importance of metacognition to deep learning link the work of creativity scholars to recent composition theory and applications (20). Sullivan suggests shifting from “critical thinking” to “creative and critical thinking” because “[a]ll good thinking . . . is creative in some way” (16).

Sullivan sees the increased focus within writing studies on multimodal and other diverse uses of writing as a move toward reframing public conceptions of academic writing; he presents “desegregat[ing] creative writing” as one way of “actively expanding our definition of academic writing” (21). He lists many ways of incorporating creativity into classrooms, then provides the unit on creativity that he has embedded in his first-year writing class (22). His goal is to “provide students with an authentic experience of the joys, challenges, and rewards of college-level reading, writing, and thinking” (22-23). To this end, the course explores what Paul Hirst calls “knowledge domains,” specifically, in Sullivan’s class, “traditional assignments” examining how knowledge functions in history and the human sciences (23-24), with the unit on creativity “[s]andwiched” between them (24).

In this unit, students consider the definition of creativity and then write poems and stories. The centerpiece is an individual project in which students produce “their own work of art” such as “a sculpture, a painting, a drawing, a photograph, a collage, or a song” (24). Sullivan furnishes examples of student work, including quotes illustrating the metacognitive understanding he hopes to inculcate: “that creativity, and the arts in particular, provide a unique and important way of looking at the world and producing knowledge” (25).

The final assignment is an “unessay,” which bans standard formats and invites students to “[i]nvent a new form!” (26). Sullivan shares examples of student responses to this assignment, many involving multimodal components that gesture toward a more inclusive embrace of what Kathleen Blake Yancey calls “what our students know as writing” (qtd. in Sullivan 28). Ultimately, Sullivan contends, such diverse, creatively rich pedagogy will realize David Russell’s hope of casting writing not as “a single elementary skill” but rather “as a complex rhetorical activity embedded in the differentiated practices of academic discourse communities” (qtd. in Sullivan 29), and, importantly, Douglas Hesse’s hope of communicating to students that writing is not an isolated academic exercise but rather “a life activity with many interconnected manifestations” (qtd. in Sullivan 18).