College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Sweeney, Meghan A. Audience Awareness as a Threshold Concept. RTE, Aug. 2018. Posted 09/18/2018.

Sweeney, Meghan A. “Audience Awareness as a Threshold Concept of Reading: An Examination of Student Learning in Biochemistry.” Research in the Teaching of English 53.1 (2018): 58-79. Print.

Meghan A. Sweeney presents a case study of a basic-writing student, “Bruce,” who grapples with a composition “threshold concept,” audience awareness. The study tracks Bruce across a three-course composition sequence in his first semester, then through his second-semester work in a research-based composition course as well as biochemistry and chemistry classes in support of his planned major, anesthesiology (64). Sweeney argues that Bruce moved from a “pre-liminal” phase through a “liminal” phase to a “post-liminal” relationship with the concept of audience awareness.

The composition sequence emphasized college reading, which Sweeney finds to be undertheorized in writing instruction (58). Sweeney explores scholarship in disciplinarity to suggest that the development of effective reading practices is important to students’ ability to move beyond the writing classroom and enter “communities of practice,” which P. Prior defines as “a continual process whereby newcomers and old-timers reproduce and produce themselves, their practices, and their communities” (59).

J. Lave and E. Wenger, studying these phenomena, see them “as a set of relations among persons, activity, and world, over time and in relation to other communities of practice” (qtd. in Sweeney 61). Lave and Wenger propose the category of “legitimate peripheral participation” to characterize how students begin their acculturation into such disciplinary sociocultural environments (61).

In studying Bruce’s progress as he approaches the community of practice he intends to enter, Sweeney also draws on the “academic literacies approach,” which emphasizes the ways in which entry into a community of practice involves changes in identity as students begin to see themselves as members of new groups (60). Among the challenges this shift entails are those of transferring practices and concepts from more general academic work to the specialized requirements of the new environment (60-61).

Sweeney’s study examines how such foundational concepts function as students carry them beyond composition. She discusses “threshold concepts” as those that are “potentially transformative” in that, once students grasp them, they begin to think in new ways characteristic of the community of practice in question (63). She gives “opportunity costs” as an example of such a concept in economics (63), advocating more attention to how students introduced to composition’s threshold concepts use these concepts as they transfer their learning into new communities (63).

At the large public research university in the western U.S. where the study was conducted, students deemed underprepared take a semester-long three-course integrated reading and writing combination including “a three-unit composition intensive,” a reading course, and an editing-for-style course (64) before moving on to a second-semester composition course. Sweeney characterizes Bruce, a first-generation Korean American student from a working-class background, as “highly motivated” (64). She quotes B. Flyvbjerg to argue that an atypical subject like Bruce may “reveal more information because they activate more actors and more basic mechanisms in the situation studied” (qtd. in Sweeney 64).

Sweeney observed both Bruce’s second-semester writing course and two sessions of his lecture-style chemistry class, taking notes on Bruce’s involvement and on the ways in which the professors presented the material relevant to their fields (65). Her data collection also included “four semi-structured” interviews in which Bruce provided insights into his reading practices and use of rhetorical concepts across the different classes (65).

Data from Bruce’s work in the first-semester composition combination leads Sweeney to argue that when he entered the sequence, he limited his reading response to summary, failing to engage with audience questions (66). She writes that as the semester progressed, he encountered discussions and readings about how writers differ depending on their situated practice and membership within a field. This exposure, Sweeney writes, triggered Bruce’s deepening attention to audience, and by the end, he

had begun to visualize other readers of his texts, to expect writers to influence others through rhetorical choices, and to expect audiences to keep an open mind while still maintaining an awareness of the choices made. (67-68)

The author describes Bruce’s progress to this point as a transition from “a pre-liminal space” in which the “troublesome” threshold concept of audience awareness posed challenges (67) to a “liminal” phase (68) in which a learner recursively “engages with [the] threshold concept but oscillates between old and emergent ideas” (62).

Sweeney contends that in his second semester, Bruce’s experiences in chemistry and biochemistry classes completed his movement into a “post-liminal” engagement with audience awareness in his new community of practice and with the identity formation involved in this engagement (68). Noting that Bruce learned quickly what information was important to the professor and adjusted his reading strategies accordingly, Sweeney records such moves as the professor’s references to “we” in lecturing, inviting students to see themselves as community members (69).

In biochemistry, Bruce worked with a lab mentor; Sweeney finds it crucial that Bruce recognized that he was not the audience for the technical papers he was asked to read. That realization pushed him to do independent research on Google and other less-advanced sources to develop his acculturation into “biochem jargon” (69).

Sweeney draws on Bruce’s final paper for the course as evidence of his post-liminal growth: she indicates that his exposure to audience awareness in his composition class meant that he “expected a critical reader” (71) and paid attention to the details that would demonstrate to the professor that he had been a strong participant in the class. At the same time, Sweeney notes, Bruce saw the details as “necessary for other scientists who might want to replicate his experiment” (72). Thus he was writing for “dual audiences” but with full awareness of his own standing as a peripheral participant (72). In Sweeney’s view, Bruce’s transformational relationship with audience was further evinced by his assertion that even experts did not read as doubters when encountering new information, and that therefore his strategy of reading new material for comprehension rather than as a critic was appropriate for his early work in science (70).

Sweeney’s study suggests that for students like Bruce who have been deemed underprepared, awareness of audience may drive them to accept this designation (74). She proposes that for Bruce, his struggles to enter the biochemistry community in the light of this designation may have been “generative” because they pushed him to assert agency by developing effective personal reading strategies (76). She argues that actively teaching audience awareness in early composition courses, in contrast to models that assume students will acquire disciplinary identities through “apprenticeship,” can give students a more productive understanding of how they can begin to relate to the communities of practice they hope to enter (75).



Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-381. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. The authors cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as those of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from the first to the final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not be (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “’bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, collecting “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that “the portrait is largely one of stability,” but note students’ ability to vary choices “when needed” (369). Statistically significant differences showing “change[s] across time” appeared at rates of 13% in Cohort 1, 29% in Cohort 2, and 16% in Cohort 3. An example of a stable strategy is the use of “one explicit thesis at the beginning” of a paper (371); a strategy “rarely” used was “a thesis statement [placed] inductively at the middle or end” (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberate reuse of strategies and whether or not “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new

capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis. (378)




Grouling and Grutsch McKinney. Multimodality in Writing Center Texts. C&C, in press, 2016. Posted 08/21/2016.

Grouling, Jennifer, and Jackie Grutsch McKinney. “Taking Stock: Multimodality in Writing Center Users’ Texts.” Computers and Composition (2016): in press. http://dx.doi.org/10.1016/j.compcom.2016.04.003. Web. 12 Aug. 2016.

Jennifer Grouling and Jackie Grutsch McKinney note that the need for multimodal instruction has been accepted for more than a decade by composition scholars (1). But they argue that the scholarship supporting multimodality as “necessary and appropriate” in classrooms and writing centers has tended to be “of the evangelical vein,” consisting of “think pieces” rather than actual studies of how multimodality figures in classroom practice (2).

They present a study of multimodality in their own program at Ball State University as a step toward research that explores what kinds of multimodal writing takes place in composition classrooms (2). Ball State, they report, can shed light on this question because “there has been programmatic and curricular support here [at Ball State] for multimodal composition for nearly a decade now” (2).

The researchers focus on texts presented to the writing center for feedback. They ask three specific questions:

Are collected texts from writing center users multimodal?

What modes do students use in creation of their texts?

Do students call their texts multimodal? (2)

For two weeks in the spring semester, 2014, writing center tutors asked students visiting the center to allow their papers to be included in the study. Eighty-one of 214 students agreed. Identifying information was removed and the papers stored in a digital folder (3).

During those two weeks as well as the next five weeks, all student visitors to the center were asked directly if their projects were multimodal. Students could respond “yes,” “no,” or “not sure” (3). The purpose of this extended inquiry was to ensure that responses to the question during the first two “collection” weeks were not in some way unrepresentative. Grouling and Grutsch McKinney note that the question could be answered online or in person; students were not provided with a definition of “multimodal” even if they expressed confusion but only told to “answer as best they could” (3).

The authors decided against basing their study on the argument advanced by scholars like Jody Shipka and Paul Prior that “all communication practices have multimodal components” because such a definition did not allow them to see the distinctions they were investigating (3). Definitions like those presented by Tracey Bowen and Carl Whithaus that emphasize the “conscious” use of certain components also proved less helpful because students were not interviewed and their conscious intent could not be accessed (3). However, Bowen and Whithaus also offered a “more succinct definition” that proved useful: “multimodality is the ‘designing and composing beyond written words'” (qtd. in Grouling and Grutsch McKinney 3).

Examination of the papers led the researchers to code for a “continuum” of multimodality rather than a present/not-present binary (3-4). Fifty-seven of the papers, or 74%, were composed only in words and were coded as zero or “monomodal” (4). Some papers occupied a “grey area” because of elements like bulleted lists and tables. The researchers coded texts using bullets as “1” and those using lists and tables as “2”; these categories shared the designation “elements of graphic design,” which 16 papers, or 19.8%, met. Codes “3” and “4” indicated one or more modes beyond text and thus designated “multimodal” work. No paper received a “4”; only eight, or 9.9%, received a “3,” indicating inclusion of one mode beyond words (4). Thus, the study materials exhibited little use of multimodal elements (4).

In answer to the second question, findings indicated that modes used even by papers coded “3” included only charts, graphs, and images. None used audio, video, or animation (4). Grouling and Grutsch McKinney posit that the multimodal elements were possibly not “created by the student” and that the instructor or template may have prompted the inclusion of such materials (5).

They further report that they could not tell whether any student had “consciously manipulated” elements of the text to make it multimodal (5). They observe that in two cases, students used visual elements apparently intended to aid in development of a paper in progress (5).

The “short answer” to the third research question, whether students saw their papers as multimodal, was “not usually” (5; emphasis original). Only 6% of students across 637 appointments, and 6% of the writers of the 81 collected texts, answered yes. In only one case in which the student identified the paper as multimodal did the coders agree. Two of the five texts called multimodal by students received a code of 0 from the raters (5). Students were more able to recognize when their work was not multimodal; 51 of 70 texts coded by the raters as monomodal were also recognized as such by their authors (5).

Grouling and Grutsch McKinney express concern that students seem unable to identify multimodality, given that such work is required in both first-year courses; even taking transfer students into account, they note, “the vast majority” of undergraduates will have taken a relevant course (6). They state that they would be less concerned that students do not use the term if the work produced exhibited multimodal features, but this was not the case (6).

University system data indicated that a plurality of writing center attendees came from writing classes, but students from other courses produced some of the few multimodal pieces, though they did not use the term (7).

Examining program practices, Grouling and Grutsch McKinney determined that often only one assignment was designated “multimodal”—most commonly, presentations using PowerPoint (8). The authors advocate for “more open” assignments that present multimodality “as a rhetorical choice, and not as a requirement for an assignment” (8). Such emphasis should be accompanied by “programmatic assessment” to determine what students are actually learning (8-9).

The authors also urge more communication across the curriculum about the use of multiple modes in discipline-specific writing. While noting that advanced coursework in a discipline may have its own vocabulary and favored modes, Grouling and Grutsch McKinney argue that sharing the vocabulary from composition studies with faculty across disciplines will help students see how concepts from first-year writing apply in their coursework and professional careers (9).

The authors contend that instructors and tutors should attend to “graphic design elements” like “readability and layout” (10). In all cases, they argue, students should move beyond simply inserting illustrations into text to a better “integration” of modes to enhance communication (10). Further, incorporating multimodal concepts in invention and composing can enrich students’ understanding of the writing process (10). Such developments, the authors propose, can move the commitment to multimodality beyond the “evangelical phase” (11).




Anson, Chris M. Expert Writers and Genre Transfer. CCC, June 2016. Posted 07/09/2016.

Anson, Chris M. “The Pop Warner Chronicles: A Case Study in Contextual Adaptation and the Transfer of Writing Ability.” College Composition and Communication 67.4 (2016): 518-49. Print.

Chris Anson presents a case study of an expert writer, “Martin,” attempting to “transfer” his extensive writing experience to the production of seventy-five-word “game summaries” for his son’s Pop Warner football team. The study leads Anson to argue that current theory on transfer does not fully account for Martin’s experiences working in a new genre and advocates for a “more nuanced understanding of existing ability, disposition, context, and genre in the deployment of knowledge for writing” (520).

Martin wrote the summaries to fulfill a participation requirement for families of Pop Warner players (522). He believed that the enormous amount of writing he did professionally and his deep understanding of such concepts as rhetorical strategies and composing processes made the game-summary assignment an appropriate choice (522). The summary deadline was the evening of the Sunday after each Saturday game; the pieces appeared in a local newspaper each Thursday (523).

Martin logged his writing activities during a twelve-week period, noting that he wrote multiple genres, both formal and informal, for his academic job (520). For the game summaries, he received verbal and emailed guidance from the team coordinator. This guidance allowed him to name the genre, define an audience (principally, team families), and recognize specific requirements, such as including as many players as possible each week and mentioning every player at least once, always in a positive light, during the season (523-24). Martin learned that the team coordinator would do a preliminary edit, then pass the summaries on to the newspaper editors (524).

Anson writes that Martin’s first challenge was to record the games through extensive notes on a legal pad, matching players against a team roster. When Martin sat down on the Sunday following the game to write his first summary, he was surprised to find himself “paralyzed” (526). The effort to be accurate while making the brief account “interesting and punchy” took much longer than Martin had anticipated (526-27). Moreover, it earned only derision from his two sons, primarily for its “total English professor speak”: long sentences and “big words” (528).

On advice from his wife, Martin tightened the draft, in his view “[taking] the life completely out of it” (528). When the summary appeared in the newspaper, it had been further shortened and edited, in ways that made no sense to Martin, for example, word substitutions that sometimes opted for “plain[er]” language but other times chose “fancier” diction (530). He notes that he was offered no part in these edits and received no feedback beyond seeing the final published version (529).

Martin experienced similar frustration throughout the season, struggling to intuit and master the conventions of the unfamiliar genre. His extensive strengths were “beside the point” (531); faced with this new context, a “highly successful writer” became “a ‘struggling’ or ‘less effective’ writer” (531-32).

Anson draws on Anne Beaufort’s model of discourse knowledge to analyze Martin’s struggles. He reports that Beaufort lists five “knowledge domains” that affect the ability to write in a particular context:

writing process knowledge, subject matter knowledge, rhetorical knowledge, and genre knowledge, all of which are enveloped and informed by knowledge of the discourse community. (532; italics original)

In his analysis of Martin’s situation, Anson contends that Martin possessed the kind of reflective awareness of both writing process knowledge and rhetorical knowledge that theoretically would allow him to succeed in the new context (533). He notes that some scholarship suggests that such knowledge developed over years of practice can actually impede transfer because familiar genres are in fact “overpracticed,” resulting in “discursive entrenchment,” for example when students cannot break free of a form like the five-paragraph theme (533). Anson argues, however, that because of his “meta-level awareness” of the new situation, Martin was able to make deliberate decisions about how to address the new exigencies (533-34).

Anson further maintains that, as a reasonably attentive sports fan, Martin possessed sufficient subject-matter knowledge to comprehend the broad genre of sports reporting into which the game summaries fell (534-35).

Anson finds genre knowledge and knowledge of the discourse community central to Martin’s challenge. Martin had to accommodate the “unique variation” on sports reporting that the summaries imposed with their focus on children’s activities and their attention to the specific expectations of the families and the team coordinator (535).

Moreover, Anson cites scholarship challenging the notion that any genre can be permanently “stabilized” by codified, uniformly enforced rules (536). On the contrary, this scholarship posits, genres are “ever changing sets of socially acceptable strategies that participants can use to improvise their responses to a particular situation” (Catherine E. Schryer, qtd. in Anson 536), thus underscoring Beaufort’s claim that the nature of the relevant discourse community “subsumes” all other aspects of transfer, including genre knowledge (536).

In Anson’s analysis, the discourse community within which Martin functioned was complex and problematic. Far from unifying around accepted norms, the community consisted of a number of “transient” groups of families and officials who produced unstable “traditions”; moreover, Anson posits that the newspaper editors’ priorities differed from those of the team coordinator and families (537).

The study leads Anson to propose that external factors will usually override the individual strengths writers bring to new tasks. He notes agreement among scholars that “[t]ransfer theories are always ‘negative’,” recognizing that transfer always requires “significant cognitive effort and some degree of training” (539). Anson argues that Martin’s experiences align with theories of “strong negative transfer,” which state that writers will always struggle to adjust to new tasks and contexts (539-40).

Anson urges scholarship on transfer to apply a “principle of uniqueness” that recognizes that each situation brings together a unique set of exigencies and abilities. While noting that Martin is “qualitatively different” from writers in composition classrooms (541), Anson contends that students face similar struggles when they are constantly routed across contexts where genre rules change radically, often because of the preferences of individual instructors (541-42). A foundational course alone, he states, cannot adequately nurture the flexibility students need to navigate these landscapes, nor is there adequate articulation and conceptual consensus across the different disciplines in which students must perform (541). Moreover, he claims, students seldom receive the kind of mentoring that will enable success even when they import strong skills.

In a twist at the conclusion of the article, Anson reveals that he is “Martin” (544). The existence of such a genre-resistant article itself, he suggests, illustrates that his full understanding of the discourse community engaged with a composition journal like College Composition and Communication provided him with “the confidence and authority” to “strategically deviate from the expectations of a genre” in which he was an expert (544). In contrast, in his role as “Martin,” interacting with the Pop Warner community, he lacked this confidence and authority and therefore felt unable “to bend the Pop Warner summary genre to fit his typical flexibility and creativity” (543-44). This sense of constraint, he suggests, drove his/Martin’s search for the “genre stability” (543) that would provide the guidance a writer new to a discourse community needs to succeed.

Thus the ability to mesh a writer’s own practices with the requirements of a genre, he argues, demands more than rhetorical, genre, subject-matter, and procedural knowledge; it demands an understanding of the specific, often unique, discourse community, knowledge which, as in the case of the Pop Warner community, may be unstable, contradictory, or difficult to obtain (539).




Anderson et al. Contributions of Writing to Learning. RTE, Nov. 2015. Posted 12/17/2015.

Anderson, Paul, Chris M. Anson, Robert M. Gonyea, and Charles Paine. “The Contributions of Writing to Learning and Development: Results from a Large-Scale, Multi-institutional Study.” Research in the Teaching of English 50.2 (2015): 199-235. Print.

Note: The study referenced by this summary was reported in Inside Higher Ed on Dec. 4, 2015. My summary may add some specific details to the earlier article and may clarify some issues raised in the comments on that piece. I invite the authors and others to correct and elaborate on my report.

Paul Anderson, Chris M. Anson, Robert M. Gonyea, and Charles Paine discuss a large-scale study designed to reveal whether writing instruction in college enhances student learning. They note widespread belief both among writing professionals and other stakeholders that including writing in curricula leads to more extensive and deeper learning (200), but contend that the evidence for this improvement is not consistent (201-02).

In their literature review, they report on three large-scale studies that show increased student learning in contexts rich in writing instruction. These studies concluded that the amount of writing in the curriculum improved learning outcomes (201). However, these studies contrast with the varied results from many “small-scale, quasi-experimental studies that examine the impact of specific writing interventions” (200).

Anderson et al. examine attempts to perform meta-analyses across such smaller studies to distill evidence regarding the effects of writing instruction (202). They postulate that these smaller studies often explore such varied practices in so many diverse environments that it is hard to find “comparable studies” from which to draw conclusions; the specificity of the interventions and the student populations to which they are applied make generalization difficult (203).

The researchers designed their investigation to address the disparity among these studies by searching for positive associations between clearly designated best practices in writing instruction and validated measures of student learning. In addition, they wanted to know whether the effects of writing instruction that used these best practices differed from the effects of simply assigning more writing (210). The interventions and practices they tested were developed by the Council of Writing Program Administrators (CWPA), while the learning measures were those used in the National Survey of Student Engagement (NSSE). This collaboration resulted from a feature of the NSSE in which institutions may form consortia to “append questions of specific interest to the group” (206).

Anderson et al. note that an important limitation of the NSSE is its reliance on self-report data, but they contend that “[t]he validity and reliability of the instrument have been extensively tested” (205). Although the institutions sampled were self-selected and women, large institutions, research institutions, and public schools were over-represented, the authors believe that the overall diversity and breadth of the population sampled by the NSSE/CWPA collaboration, encompassing more than 70,000 first-year and senior students, permits generalization that has not been possible with more narrowly targeted studies (204).

The NSSE queries students on how often they have participated in pedagogic activities that can be linked to enhanced learning. These include a wide range of practices such as service-learning, interactive learning, and “institutionally challenging work” such as extensive reading and writing; in addition, the survey inquires about campus features such as support services and relationships with faculty, as well as students’ perceptions of the degree to which their college experience led to enhanced personal development. The survey also captures demographic information (205-06).

Chosen as dependent variables for the joint CWPA/NSSE study were two NSSE scales:

  • Deep Approaches to Learning, which encompassed three subscales, Higher-Order Learning, Integrative Learning, and Reflective Learning. This scale focused on activities related to analysis, synthesis, evaluation, combination of diverse sources and perspectives, and awareness of one’s own understanding of information (211).
  • Perceived Gains in Learning and Development, which involved subscales of Practical Competence such as enhanced job skills, including the ability to work with others and address “complex real-world problems”; Personal and Social Development, which inquired about students’ growth as independent learners with “a personal code of values and ethics” able to “contribut[e] to the community”; and General Education Learning, which included the ability to “write and speak clearly and effectively, and to think critically and analytically” (211).

The NSSE also asked students for a quantitative estimate of how much writing they actually did in their coursework (210). These data allowed the researchers to separate the effects of simply assigning more writing from those of employing different kinds of writing instruction.

To test for correlations between pedagogical choices in writing instruction and practices related to enhanced learning as measured by the NSSE scales, the research team developed a “consensus model for effective practices in writing” (206). Eighty CWPA members generated questions that were distilled to 27 divided into “three categories based on related constructs” (206). Twenty-two of these ultimately became part of a module appended to the NSSE that, like the NSSE “Deep Approaches to Learning” scale, asked students how often their coursework had included the specific activities and behaviors in the consensus model. The “three hypothesized constructs for effective writing” (206) were

  • Interactive Writing Processes, such as discussing ideas and drafts with others, including friends and faculty;
  • Meaning-Making Writing Tasks, such as using evidence, applying concepts across domains, or evaluating information and processes; and
  • Clear Writing Expectations, which refers to teacher practices in making clear to students what kind of learning an activity promotes and how student responses will be assessed. (206-07)

They note that no direct measures of student learning are included in the NSSE, nor are such measures included in their study (204). Rather, in both the writing module and the NSSE scale addressing Deep Approaches to Learning, students are asked to report on kinds of assignments, instructor behaviors and practices, and features of their interaction with their institutions, such as whether they used on-campus support services (205-06). The scale on Perceived Gains in Learning and Development asks students to self-assess (211-12).

Despite the lack of specific measures of learning, Anderson et al. argue that the curricular content included in the Deep Approaches to Learning scale does accord with content that has been shown to result in enhanced student learning (211, 231). The researchers argue that comparisons between the NSSE scales and the three writing constructs allow them to detect an association between the effective writing practices and the attitudes toward learning measured by the NSSE.

Anderson et al. provide detailed accounts of their statistical methods. In addition to analysis for goodness-of-fit, they performed “blocked hierarchical regressions” to determine how much of the variance in responses was explained by the kind of writing instruction reported versus other factors, such as demographic differences, participation in various “other engagement variables” such as service-learning and internships, and the actual amount of writing assigned (212). Separate regressions were performed on first-year students and on seniors (221).

Results “suggest[ed] that writing assignments and instructional practices represented by each of our three writing scales were associated with increased participation in Deep Approaches to Learning, although some of that relationship was shared by other forms of engagement” (222). Similarly, the results indicate that “effective writing instruction is associated with more favorable perceptions of learning and development, although other forms of engagement share some of that relationship” (224). In both cases, the amount of writing assigned had “no additional influence” on the variables (222, 223-24).

The researchers provide details of the specific associations among the three writing constructs and the components of the two NSSE scales. Overall, they contend, their data strongly suggest that the three constructs for effective writing instruction can serve “as heuristics that instructors can use when designing writing assignments” (230), both in writing courses and courses in other disciplines. They urge faculty to describe and research other practices that may have similar effects, and they advocate additional forms of research helpful in “refuting, qualifying, supporting, or refining the constructs” (229). They note that, as a result of this study, institutions can now elect to include the module “Experiences with Writing,” which is based on the three constructs, when students take the NSSE (231).


Rice, Jenny. Para-Expertise in Writing Classrooms. CE, Nov. 2015. Posted 12/07/2015.

Rice, Jenny. “Para-Expertise, Tacit Knowledge, and Writing Problems.” College English 78.2 (2015): 117-38. Print.

Jenny Rice examines how views of expertise in rhetoric and composition shape writing instruction. She argues for replacing the definition of non-expertise as a lack of knowledge with expanded approaches to expertise open to what Michael Polanyi has called “tacit knowledge” (125). Rice proposes a new category of knowledge, “para-expertise,” that draws on tacit knowledge to enable students and other non-experts to engage in activities related to expertise.

Rice cites a number of approaches to expertise in rhet/comp’s disciplinary considerations. Among them is the idea that the field has content that only qualified individuals can impart (120). Further, she sees expectations in writing-across-the-curriculum and writing-in-the-disciplines, as well as the view that composition courses should inculcate students in “expert [reading and writing] practice[s]” (121), as indications of the rhetorical presence that notions of expertise acquire in the field (120-21).

She contrasts the idea of novice practice as a deficiency with other attitudes toward expertise. Within the field of composition studies, she points to the work of Linda Flower and John Hayes. These scholars, she writes, found that the expertise of good writers consisted not of specific knowledge but rather of the ability to pose more complex problems for themselves as communicators. Whereas weaker writers “often flatline around fulfilling the details of the prompt, including word count and other conventional details,” expert writers “use the writing prompt as a way to articulate and define their own understanding of the rhetorical situation to which they are responding” (121).

This discussion leads Rice to a view of expertise as meaningful problem-posing, an activity rather than a body of knowledge. In this view, students can do the work of expertise even when they have no field-specific knowledge (122). Understanding expertise in this way leads Rice to explore categories of expertise as laid out in “the interdisciplinary field of Studies of Expertise and Experience (SEE)” (123). Scholars in this field distinguish between “contributory experts” who “have the ability to do things within the domain of their expertise” (Harry Collins and Robert Evans, qtd. in Rice 123; emphasis original); and “interactional experts,” who may not be able to actively produce within the field but who are “immersed in the language of that particular domain” (123). Rice provides the example of artists and art critics (123).

Rice emphasizes the importance of interactional expertise by noting that not all contributory experts communicate easily with each other and thus require interactional experts to “bridge the gulf” between discourse communities addressing a shared problem (124). She provides the example of “organic farmers and agricultural scholars” who function within separate expert domains yet need others to “translate” across these domains (124-25).

But Rice feels these definitions need to be augmented with another category to encompass people like students who lack the domain-specific knowledge to be contributory or interactional experts. She proposes the category “para-expertise,” in which para takes on its “older etymology” as “alongside (touching the side of) different forms of expertise” (119).

In Rice’s view, the tacit knowledge that fuels para-expertise, while usually discounted in formal contexts, arises from “embodied knowledge” gleaned from everyday living in what Debra Hawhee has called “rhetoric’s sensorium” (cited in Rice 126). In Rice’s words, this sensorium may be defined as “the participatory dimension of communication that falls outside of simple articulation without falling outside the realm of understanding” (126). She gives the example of not being able to articulate the cues that, when implicitly sensed, result in her clear knowledge that she is hearing her mother’s voice on the phone (125).

Rice’s extended example of the work of para-expertise revolves around students’ sense of the effects of campus architecture on their moods and function. Interviews with “hundreds of college students” at “four different university campuses” regarding their responses to “urban legends” about dorms and other buildings being like prisons lead Rice to argue that the students were displaying felt knowledge of the bodily and psychological effects of window and hallway dimensions even though they did not have the expert disciplinary language to convert their sensed awareness into technical architectural principles (127-31). In particular, Rice states, the students drew a sense of a problem to be addressed from their tacit or para knowledge and thus were embarking on “the activity of expertise” (131).

In Rice’s discussion, para-expertise can productively engage with other forms of expertise through the formation of “strategic expertise alliances” (131). By itself para-expertise cannot resolve a problem, but those whose tacit knowledge has led them to identify the problem can begin to address it via coalitions with those with the specific disciplinary tools to do so. As a classroom example, she explains that students on her campus had become concerned about intentions to outsource food options, thus endangering connections with local providers and reducing choices. Lacking the vocabulary to present their concerns to administrators, a group of students and faculty joined with local community organizations that were able to provide specific information and guidance in constructing arguments (132-33).

Rice’s own writing students, participating in this campus issue, were asked to gather oral histories from members of a nearby farmers’ market. The students, however, felt “intimidated and out of place” during their visits to the farmers’ market (136), partly because, as students from other areas, they had seldom had any reason to visit the market. Rice considers this tacit response to the market the opening of a problem to be addressed: “How can a community farmers market reach students who only temporarily reside in that community?” (136; emphasis original).

Rice writes:

[T]he solution calls for greater expertise than first-year students possess. Rather than asking students to (artificially) adopt the role of expertise and pose a solution, however, we turned to a discussion of expert alliances. Who were the “pivot points” in this problem? Who were the contributory experts, and who had the skills of interactional expertise? (136)

Ultimately, alliances resulting from this discussion led to the creation of a branch of the farmers’ market on campus (136).

Rice argues that this approach to expertise highlights its nature as a collaborative effort across different kinds of knowledge and activities (134). It de-emphasizes the “terribly discouraging” idea that “discovery” is the path to expertise and replaces that “myth” with an awareness that “invention and creation” and how “[e]xperts pose problems” are the keys to expert action (122; emphasis original). It also helps students understand the different kinds of expertise and how their own tacit knowledge can become part of effective action (135).


Combs, Frost, and Eble. Collaborative Course Design in Scientific Writing. CS, Sept. 2015. Posted 11/12/15.

Combs, D. Shane, Erin A. Frost, and Michelle F. Eble. “Collaborative Course Design in Scientific Writing: Experimentation and Productive Failure.” Composition Studies 43.2 (2015): 132-49. Web. 11 Nov. 2015.

Writing in the “Course Design” section of Composition Studies, D. Shane Combs, Erin A. Frost, and Michelle F. Eble describe a science-writing course taught at East Carolina University, “a doctoral/research institution with about 27,000 students, serv[ing] a largely rural population” (132). The course has been taught by the English department since 1967 as an upper-level option for students in the sciences, English, and business and technical communication. The course also acts as an option for students to fulfill the requirement to take two writing-intensive (WI) courses, one in the major; as a result, it serves students in areas like biology and chemistry. The two to three sections per semester offered by English are generally taught by “full-time teaching instructors” and sometimes by tenured/tenure-track faculty in technical and professional communication (132).

Combs et al. detail iterations of the course taught by Frost and Eble, who had not taught it before. English graduate student D. Shane Combs contributed as a peer mentor. Inclusion of the peer mentor as well as the incorporation of university-wide writing outcomes into the course-specific outcomes resulted from a Quality Enhancement Plan underway at the university as a component of its reaccreditation. This plan included a special focus on writing instruction, for example, a Writing Mentors program that funded peer-mentor support for WI instruction. Combs, who was sponsored by the English department, brought writing-center experience as well as learning from “a four-hour professional development session” to his role (133).

Drawing on work by Donna J. Haraway, Sandra Harding, and James C. Wilson, Frost and Eble’s collaboratively designed sections of the course were intent “on moving students into a rhetorical space where they can explore the socially constructed nature of science, scientific rhetoric, and scientific traditions” (134). In their classes, the instructors announced that they would be teaching from “an ‘apparent feminist’ perspective,” in Frost’s case, and from “a critical gender studies approach” in Eble’s (134-35). The course required three major assignments: field research on scientific writing venues in an area of the student’s choice; “a complete scientific article” for one of the journals that had been investigated; and a conversion of the scientific article into a general-audience article appropriate for CNN.com (135). A particular goal of these assignments was to provoke cognitive dissonance in order to raise questions of how scientific information can be transmitted “in responsible ways” as students struggled with the selectivity needed for general audiences (135).

Other components of students’ grades were class discussion, a “scripted oral debate completed in small groups,” and a “personal process journal.” In addition, students participated in “cross-class peer review,” in which students from Frost’s class provided feedback on the lay articles from Eble’s class and vice versa (136).

In their Critical Reflection, Combs et al. consider three components of the class that provided particular insights: the collaboration in course design, the inclusion of the peer mentor, and the cross-class peer review (137). Collaboration not only allowed the instructors to build on each other’s strengths and experiences; it also helped them analyze other aspects of the class. Frost and Eble determined that differences in their own backgrounds and teaching styles affected student responses to assignments. For example, Eble’s experience on an Institutional Review Board influenced her ability to help students think beyond the perception that writing for varied audiences required them to “dumb down” their scientific findings (137).

Much discussion centers on what the researchers learned from the cross-class peer review about students’ dissonance in producing the CNN.com lay article. Students in the two classes addressed this challenge quite differently. Frost’s students resisted the complexity that Eble’s students insisted on sustaining in their revisions of their scientific article, while students in Eble’s class criticized the submissions from Frost’s students as “too simple.” The authors write that “even though students were presented with the exact same assignment prompt, they received different messages about their intended audiences” (138).

The researchers credit Combs’s presence as a peer mentor in Frost’s class for the students’ ability to revise more successfully for non-specialized audiences. They argue that he provided a more immediate outside audience at the same time that he promoted a sense of community and identification that encouraged students to make difficult rhetorical decisions (138-39). His feedback to the instructors helped them recognize the value of the cross-class peer review despite the apparent challenges it presented. In his commentary, he discusses how receiving the feedback from the other class prompted one student to achieve a “successful break from a single-form draft writing and in-class peer review” (Combs, qtd. in Combs et al. 140). He quotes the student’s perception that everyone in her own class “had the same understanding of what the paper was supposed to be” and her sense that the disruption of seeing the other class’s very different understanding fueled a complete revision that made her “happier with [her] actual article” (140). The authors conclude that both the contributions of the peer mentor and the dissonance created by the very different understandings of audience led to increased critical reflection (140), in particular, in Combs’s words, the recognition that

there are often spaces in writing not filled by right-and-wrong choices, but by creating drafts, receiving feedback, and ultimately making the decision to go in a chosen direction. (140)

In future iterations, in addition to retaining the cross-class peer review and the peer-mentor presence, the instructors propose equalizing the amount of feedback the classes receive, especially since receiving more feedback rather than less pushes students to “prioritize” and hence develop important revision strategies (141). They also plan to simplify the scientific-article assignment, which Frost deemed “too much” (141). An additional course-design revision involves creating a lay article from a previously published scientific paper in order to prepare students for the “affective impact” (141) of making radical changes in work to which they are already deeply committed. A final change involves converting the personal journal to a social-media conversation to develop awareness of the exigencies of public discussion of science (141).