College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Gold et al. A Survey of Students’ Online Practices. CCC, Sept. 2020. Posted 10/19/2020.

Gold, David, Jathan Day, and Adrienne E. Raw. “Who’s Afraid of Facebook? A Survey of Students’ Online Writing Practices.” College Composition and Communication 72.1 (2020): 4-30. Print.

David Gold, Jathan Day, and Adrienne E. Raw contend that qualitative research on students’ online writing practices could fruitfully be supplemented with quantitative studies of these practices. They argue that such research is needed to fill gaps in teachers’ knowledge of where students write online, for whom and for what purpose, and what rhetorical challenges they face in these spaces (7).

In fall 2018, the authors conducted a twenty-eight-item survey at a large public Midwestern university (7). They sent the survey to a random sample of students, then followed up by enlisting the help of writing instructors in both first-year and upper-level courses. Respondents numbered 803, with 58.5% female, 18.3% first-generation college students, and, of 687 responses, 66.2% white, 16.9% Asian American, 4.4% Black, 3.6% Latinx, 0.6% Native American or Pacific Islander, and 8.3% reporting two or more categories. In line with the university’s general population, 73.1% reported family income above the 2017 U.S. median (7).

The authors maintain that their survey provides more fine-grained information than is usual in national surveys, which they state do not investigate the “myriad writing activities for multiple purposes” in which students may take part (4). They also write that their survey extends language arts research, which tends to focus on a few of the better-known sites, by asking about eleven different venues: Facebook, Snapchat, Instagram, YouTube, Twitter, LinkedIn, blogs, discussion forums, news/magazine sites, Wikipedia, and user review sites (8).

The information they gather, in their view, is important to writing teachers because it offers insight into potential misconceptions that may guide assignment decisions. Beyond lack of knowledge as to where students actually participate, assignments may incorrectly assume student familiarity with certain sites (8), or teachers may assume students have more expertise than they actually have (12). The authors note that students are often asked to write on blogs, but very few of their respondents report having an account on a blog (9). Assignments, the authors state, make little use of more widely used sites like Snapchat, perhaps assuming they are “mere photo-sharing tool[s],” raising the possibility that composition should address the rhetorical aspects of such activities (9).

The authors also contend that more specific knowledge of how and why students do or do not write online can advance what they see as a goal of composition as a field: furthering participation in civil or public rhetoric, including engagement on controversial topics (13, 15). Their results show that while instructors encourage contributions to blogs, they make little use of Snapchat and Instagram, which at the time of the research were “extremely popular” (9). Awareness of such disparities, in the authors’ view, can aid teachers making assignment decisions.

Gold et al. provide tables showing the data from their analyses. Examining “Spaces for Writing (and Not Writing)” (8), the authors find that although most of their respondents had accounts at multiple sites, they wrote less on these sites than might be expected (8): “[D]igital ‘participatory’ culture may not be as participatory as we imagine” (11). Students were much more likely to read than to write, with “responding” as a “middle ground” (11). Snapchat elicited the most writing, with sites like blogs and discussion forums the least. Gold et al. suggest increased attention to both photo-sharing and the process of responding to understand the rhetorical environment offered by these activities (11-12).

Results for “Purposes and Audiences for Writing” (12) indicate that students most commonly use online communication to “maintain relationships with family and friends.” A second fairly common purpose was “developing personal or professional identity” (14). Most students surveyed “never” share creative work or “information or expertise,” and never enter into debates on controversial subjects (14).

Analyzing audiences, the authors propose four categories: family and friends; “members of an affinity space” like one designed to share recreational, political, or cultural activities; “members of a professional community,” which might include networking; and “fellow citizens or the general public” (13-14). The authors found that majorities of the students in their sample “never” wrote for any of the last three audiences (15).

The authors found that the more platforms students frequented, the more likely they were to write, suggesting that supporting the use of a wider range of sites might lead to greater proficiency across genres and audiences (16). Students exhibited a definite sense of what different sites were suited for, agreeing that blogs and discussion forums were appropriate for debate on controversies, but also almost never contributing to such sites (17).

Gold et al. write that while there has been much discourse about how students are presumed to write online, there has been less attention to the reasons they do not write (19). Noting the problems often associated with posting on public sites like Facebook and Twitter, such as bullying and shaming (19), Gold et al. focus on five reasons for resistance to writing that have emerged in research, the one most commonly indicated being concern over how “intended readers” might react (20).

Sizable majorities also resisted posting because of fear their contributions might reach unintended audiences; fear that posts would be online “forever”; worry that they lacked the authority to contribute; and “lack of skill” in a given venue (21). Students’ degree of “platform expertise” did not affect these responses.

Pointing out that all writers, including teaching professionals, make choices as to whether to edit or simply delete a drafted post, the authors posit that for students, the preferred decision to delete may represent “lost opportunities to engage with an interlocutor or audience” (21). Suggesting that these “affective components” militating against increased engagement may be “persistent features” of online writing in general, the authors urge teachers to consider these disincentives in designing online assignments (21).

The authors argue for the value of quantitative research both for the detailed information it can provide and for its potential to generate qualitative inquiry (22). They acknowledge limitations of any instrument, including the problem of capturing change, noting that as they wrote, TikTok was emerging to compete with other popular sites (22). They advocate more detailed quantitative research with larger and more varied samples to explore such findings from their study as a lack of correlation between demographic variables and responses to their questions (23). They cite ongoing work on what constitutes “publics” as beneficial to students, who, they maintain, “have much to gain from writing in a wider variety of spaces for a richer range of purposes and audiences” (24).


Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. They cite scholars such as Peggy O’Neill (2) and Richard Haswell (3) to posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in the social sciences to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric used by both the Research and Regular teams directly from the essay prompt (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers, producing a final sample of 291 essays (7-8).
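The article does not say which calculator or formula the authors used, but the reported figure is consistent with Cochran’s standard sample-size formula at a 95% confidence level and a 5% margin of error, adjusted with a finite-population correction. A minimal sketch under those assumptions:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with a finite-population correction.

    Assumes a 95% confidence level (z = 1.96), a 5% margin of error,
    and maximum variability (p = 0.5); the article does not state
    which parameters or calculator the authors actually used.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # adjust for the finite pool of essays

print(sample_size(1174))  # -> 290, matching the representative sample the authors report
```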

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding,” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members’ scores constituted the final score for each paper (9).

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring that the Regular group did, while the average scoring time for individual Research team members was less than half that of the Regular members (13).
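As a rough illustration of the comparison described above, the sketch below averages five hypothetical thin-slice ratings into a composite score for each essay and correlates those composites with hypothetical Regular-team scores. The 1-4 rubric scale, the invented numbers, and the use of Pearson’s r are assumptions for illustration only, not the authors’ reported data or procedure.

```python
# Illustration only: invented scores on an assumed 1-4 rubric, not the study's data.
import numpy as np
from scipy.stats import pearsonr

# Five Research-team raters each score thin slices of the same eight essays.
research_raters = np.array([
    [3, 2, 4, 3, 2, 3, 1, 4],
    [3, 3, 4, 2, 2, 3, 2, 4],
    [2, 2, 3, 3, 1, 4, 2, 3],
    [3, 2, 4, 3, 2, 3, 2, 4],
    [3, 3, 3, 2, 2, 4, 1, 4],
])
research_composite = research_raters.mean(axis=0)   # average of the five raters per essay

# Scores assigned to the same essays by the Regular team's paired readers.
regular_scores = np.array([3, 2, 4, 2, 1, 3, 2, 4])

r, p = pearsonr(research_composite, regular_scores)  # agreement between the two methods
print(f"r = {r:.2f}, p = {p:.3f}")
```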

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as “awareness of one’s cognitive processes, such as prioritizing and executing tasks” (86), and review extensive research dating to the 1970s that traces how this concept has been articulated and developed. They state that the literature supports the conclusion that “college students’ metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing” (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of “idea units which are specific unique ideas within each student’s response” (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into “Procedural Knowledge,” or “the knowledge necessary to carry out the procedure or process of writing,” and “Declarative Knowledge,” or statements about “the characteristics of effective writing” (89). Within the categories, responses were coded as addressing “substantive procedures” having to do with the process itself and “production procedures,” relating to the “form of writing,” e.g., spelling and grammar (89).

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process such as that of John Hayes and Linda Flower and M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw “Clarity and Focus” and “Audience” as important characteristics of effective writing (98). Grammar and Spelling, the “production” features, were more important than in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched that of previous scholarship on students’ metaknowledge of effective writing, which shows some concern with the finished product and a possibly “insufficient” focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).

These analyses revealed “no relationship . . . between their patterns knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research suggesting the importance of metacognitive knowledge because their methodology did not use human raters and did not factor in student beliefs about writing or questions addressing why they responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than “pre-established inventor[ies]” would have (100). They maintain that their methods “controlled the measurement errors” better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)

 


Ray et al. Rethinking Student Evaluations of Teaching. Comp Studies Spring 2018. Posted 08/25/2018.

Ray, Brian, Jacob Babb, and Courtney Adams Wooten. “Rethinking SETs: Retuning Student Evaluations of Teaching for Student Agency.” Composition Studies 46.1 (2018): 34-56. Web. 10 Aug. 2018.

Brian Ray, Jacob Babb, and Courtney Adams Wooten report a study of Student Evaluations of Teaching (SETs) across a range of institutions. The researchers collected 55 different forms, 45 of which were institutions’ generic forms, while 10 were designed specifically for writing classes. They coded 1,108 different questions from these forms in order to determine what kinds of questions were being asked (35).

The authors write that although SETs and their use, especially in personnel decisions, are of concern in rhetoric and composition, very little scholarship in the field has addressed the issue (34-35). They summarize a history of student evaluations as tools for assessment of teachers, beginning with materials from the 1920s. Early SETs focused heavily on features of personality such as “wit,” “tact,” and “popularity” (38), as well as physical appearance (39). This focus on “subjective” characteristics of teachers asked students to judge “factors that neither they nor the instructor had sole control over and that they could do little to affect” (38).

This emphasis persisted throughout the twentieth century. Herbert Marsh conducted “numerous studies” in the 1970s and 1980s and eventually created the Student Evaluation of Education Quality (SEEQ) form in 1987 (35). This instrument asked students about nine features:

[L]earning, enthusiasm, organization and clarity, group interaction, individual rapport, breadth of coverage, tests and grading, assignments, and difficulty (39)

The authors contend that these nine factors substantively guide the SETs they studied (35), and they claim that, in fact, in important ways, “current SET forms differ little from those seen in the 1920s” (40).

Some of composition’s “only published conversations about SETs” revolved around workshops conducted by the Conference on College Composition and Communication (CCCC) from 1956 through 1962 (39). The authors report that instructors participating in these discussions saw the forms as most appropriate for “formative” purposes; very few institutions used them in personnel matters (39).

Data from studies of SETs in other fields reveal some of the problems that can result from common versions of these measures (37). The authors state that studies over the last ten years have not been able to link high teacher ratings on SETs with improved student learning or performance (40). Studies point out that many of the most common categories, like “clarity and fairness,” remain subjective, and that students consistently rank instructors on personality rather than on more valid measures of effectiveness (41).

Such research documents bias related to gender and ethnicity, with female African-American teachers rated lowest in one study asking students to assess “a hypothetical curriculum vitae according to teaching qualifications and expertise” (42). Male instructors are more commonly praised for their “ability to innovate and stimulate critical thought”; women are downgraded for failing to be “compassionate and polite” (42). Studies showed that elements like class size and workload affected results (42). Physical attractiveness continues to influence student opinion, as does the presence of “any kind of reward,” like lenient grading or even supplying candy (43).

The authors emphasize their finding that a large percentage of the questions they examined asked students about either some aspect of the teacher’s behavior (e.g., “approachability,” “open-mindedness” [42]) or what the teacher did (“stimulated my critical thinking” [45]). The teacher was the subject of nearly half of the questions (45). The authors argue that “this pattern of hyper-attention” (44) to the teacher casts the teacher as “solely responsible” for the success or failure of the course (43). As a result, in the authors’ view, students receive a distorted view of agency in a learning situation. In particular, they are discouraged from seeing themselves as having an active role in their own learning (35).

The authors contend that assigning so much agency to a single individual runs counter to “posthumanist” views of how agency operates in complex social and institutional settings (36). In this view, many factors, including not only all participants and their histories and interests but also the environment and even the objects in the space, play a part in what happens in a classroom (36). When SET questions fail to address this complexity, the authors posit, issues of validity arise when students are asked to pass judgment on subjective and ambiguously defined qualities as well as on factors beyond the control of any participant (40). Students encouraged to focus on instructor agency may also misjudge teaching that opts for modern “de-center[ed]” teaching methods rather than the lecture-based instruction they expect (44).

Ray et al. note that some programs ask students about their own level of interest and willingness to participate in class activities and advocate increased use of such questions (45). But they particularly advocate replacing the emphasis on teacher agency with questions that encourage students to assess their own contributions to their learning experience as well as to examine the class experience as a whole and to recognize the “relational” aspects of a learning environment (46). For example:

Instead of asking whether instructors stimulated critical thought, it seems more reasonable to ask if students engaged in critical thinking, regardless of who or what facilitated engagement. (46; emphasis original)

Ray et al. conclude that questions that isolate instructors’ contributions should lean toward those that can be objectively defined and rated, such as punctuality and responding to emails in a set time frame (46).

The authors envision improved SETs, like those of some programs, that are based on a program’s stated outcomes and that ask students about the concepts and abilities they have developed through their coursework (48). They suggest that programs in institutions that use “generic” evaluations for broader analysis or that do not allow individual departments to eliminate the official form should develop their own parallel forms in order to gather the kind of information that enables more effective assessment of classroom activity (48-49).

A major goal, in the authors’ view, should be questions that “encourage students to identify the interconnected aspects of classroom agency through reflection on their own learning” (49).

 


Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-381. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. They cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as that of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from first to final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “’bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, collecting “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that “the portrait is largely one of stability,” but note students’ ability to vary choices “when needed” (369). Statistically significant differences showing “change[s] across time” appeared at rates of 13% in Cohort 1, 29% in Cohort 2, and 16% in Cohort 3. An example of a stable strategy is the use of “one explicit thesis at the beginning” of a paper (371); a strategy “rarely” used was “a thesis statement [placed] inductively at the middle or end” (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).
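The summary does not name the significance test behind these figures. For paired presence/absence codes on the same students’ papers at two time points, one common choice would be McNemar’s test; the sketch below uses that test with invented counts purely for illustration, not the authors’ data or necessarily their method.

```python
# Hypothetical example: invented presence/absence counts for one facet
# (say, "explicit thesis at the beginning") in the same students' papers
# at two time points; neither the counts nor the test choice come from the study.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: facet present / absent at time 1; columns: present / absent at time 2.
table = np.array([[35, 4],
                  [10, 3]])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant cells
print(result.statistic, result.pvalue)
```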

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberate reuse of strategies and whether or not “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new

capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis. (378)

 


Limpo and Alves. Effects of Beliefs about “Writing Skill Malleability” on Performance. JoWR 2017. Posted 11/24/2017.

Limpo, Teresa, and Rui A. Alves. “Relating Beliefs in Writing Skill Malleability to Writing Performance: The Mediating Roles of Achievement Goals and Self-Efficacy.” Journal of Writing Research 9.2 (2017): 97-125. Web. 15 Nov. 2017.

Teresa Limpo and Rui A. Alves discuss a study with Portuguese students designed to investigate pathways between students’ beliefs about writing ability and actual writing performance. They use measures for achievement goals and self-efficacy to determine how these factors mediate between beliefs and performance. Their study goals involved both exploring these relationships and assessing the validity and reliability of the instruments and theoretical models they use (101-02).

The authors base their approach on the assumption that people operate via “implicit theories,” and that central to learning are theories that see “ability” as either “incremental,” in that skills can be honed through effort, or as an “entity” that cannot be improved despite effort (98). Limpo and Alves argue that too little research has addressed how these beliefs about “writing skill malleability” influence learning in the specific “domain” of writing (98).

The authors report earlier research that indicates that students who see writing as an incremental skill perform better in intervention studies. They contend that the “mechanisms” through which this effect occurs have not been thoroughly examined (99).

Limpo and Alves apply a three-part model of achievement goals: “mastery” goals involve the desire to improve and increase competence; “performance-approach” goals involve the desire to do better than others in the quest for competence; and “performance-avoidance” goals manifest as the desire to avoid looking incompetent or worse than others (99-100). Mastery and performance-approach goals correlate positively because they address increased competence, but performance-approach and performance-avoidance goals also correlate because they both concern how learners see themselves in comparison to others (100).

The authors write that “there is overall agreement” among researchers in this field that these goals affect performance. Students with mastery goals display “mastery-oriented learning patterns” such as “use of deep strategies, self-regulation, effort and persistence, . . . [and] positive affect,” while students who focus on performance avoidance exhibit “helpless learning patterns” including “unwillingness to seek help, test anxiety, [and] negative affect” (100-01). Student outcomes with respect to performance-approach goals were less clear (101). The authors hope to clarify the role of self-efficacy in these goal choices and outcomes (101).

Limpo and Alves find that self-efficacy is “perhaps the most studied variable” in examinations of motivation in writing (101). They refer to a three-part model: self-efficacy for “conventions,” or “translating ideas into linguistic forms and transcribing them into writing”; for “ideation,” or finding ideas and organizing them; and for “self-regulation,” which involves knowing how to make the most of “the cognitive, emotional, and behavioral aspects of writing” (101). They report associations between self-efficacy, especially for self-regulation, and mastery goals (102). Self-efficacy, particularly for conventions, has been found to be “among the strongest predictors of writing performance” (102).

The authors predicted several “paths” that would illuminate the ways in which achievement goals and self-efficacy linked malleability beliefs and performance. They argue that their study contributes new knowledge by providing empirical data about the role of malleability beliefs in writing (103).
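The summary does not detail the statistical machinery behind these predicted paths. A chain running from malleability beliefs to achievement goals to self-efficacy to performance is often estimated as a series of regressions or a path model; the sketch below illustrates that general idea with simulated data. The variable names, effect sizes, and use of ordinary least squares are assumptions for illustration, not the authors’ analysis.

```python
# Illustration only: simulated data for a mediation-style chain
# (malleability beliefs -> mastery goals -> self-efficacy -> performance).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
beliefs = rng.normal(size=n)                        # incremental-belief score
goals = 0.5 * beliefs + rng.normal(size=n)          # mastery goals
efficacy = 0.6 * goals + rng.normal(size=n)         # self-efficacy for self-regulation
performance = 0.4 * efficacy + rng.normal(size=n)   # holistic essay score

def regress(y, predictors):
    """OLS of y on the given predictors plus an intercept."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit()

print(regress(goals, [beliefs]).params)                         # beliefs -> goals
print(regress(efficacy, [beliefs, goals]).params)               # goals -> efficacy
print(regress(performance, [beliefs, goals, efficacy]).params)  # efficacy -> performance
```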

The study was conducted among native Portuguese speakers in 7th and 8th grades in a “public cluster of schools in Porto” that is representative of the national population (104). Students received writing instruction only in their Portuguese language courses, in which teachers were encouraged to use “a process-oriented approach” to teach a range of genres but were not given extensive pedagogical support or the resources to provide a great deal of “individualized feedback” (105).

The study reported in this article was part of a larger study; for the relevant activities, students first completed scales to measure their beliefs about writing-skill malleability and to assess their achievement goals. They were then given one of two prompts for “an opinion essay” on whether students should have daily homework or extracurricular activities (106). After the prompts were provided, students filled out a sixteen-item measure of self-efficacy for conventions, ideation, and self-regulation. A three-minute opportunity to brainstorm about their responses to the prompts followed; students then wrote a five-minute “essay,” which was assessed as a measure of performance by graduate research assistants who had been trained to use a “holistic rating rubric.” Student essays were typed and mechanical errors corrected. The authors contend that the use of such five-minute tasks has been shown to be valid (107).

The researchers predicted that they would see correlations between malleability beliefs and performance; they expected to see beliefs affect goals, which would affect self-efficacy and lead to differences in performance (115). They found these associations for mastery goals. Students who saw writing as an incremental, improvable skill displayed “a greater orientation toward mastery goals” (115). The authors state that this result for writing had not been previously demonstrated. Their research reveals that “mastery goals contributed to students’ confidence” and therefore to self-efficacy, perhaps because students with this belief “actively strive” for success (115).

They note, however, that prior research correlated these results with self-efficacy for conventions, whereas their study showed that self-efficacy for self-regulation, students’ belief that “they can take control of their own writing,” was the more important contributor to performance (116); in fact, it was “the only variable directly influencing writing performance” (116). Limpo and Alves hypothesize that conventions appeared less central in their study because the essays had been typed and corrected, so that errors had less effect on performance scores (116).

Data on the relationship between malleability beliefs and performance-approach or performance-avoidance goals, the goals associated with success in relation to others, were “less clear-cut” (117). Students who saw skills as fixed tended toward performance-avoidance, but neither type of performance goal affected self-efficacy.

Limpo and Alves recount an unexpected finding that the choice of performance-avoidance goals did not affect performance scores on the essays (117). The authors hypothesize that the low-stakes nature of the task and its simplicity did not elicit “the self-protective responses” that often hinder writers who tend toward these avoidance goals (117). These unclear results lead Limpo and Alves to withhold judgment about the relationship among these two kinds of goals, self-efficacy, and performance, positing that other factors not captured in the study might be involved (117-18).

They recommend more extensive research with more complex writing tasks and environments, including longitudinal studies and consideration of such factors as “past performance” and gender (118). They encourage instructors to foster a view of writing as an incremental skill and to emphasize self-regulation strategies. They recommend “The Self-Regulated Strategy Development model” as “one of the most effective instructional models for teaching writing” (119).



Bastian, Heather. Affect and “Bringing the Funk” to First-Year Writing. CCC, Sept. 2017. Posted 10/05/2017.

Bastian, Heather. “Student Affective Responses to ‘Bringing the Funk’ in the First-Year Writing Classroom.” College Composition and Communication 69.1 (2017): 6-34. Print.

Heather Bastian reports a study of students’ affective responses to innovative assignments in a first-year writing classroom. Building on Adam Banks’s 2015 CCCC Chair’s Address, Bastian explores the challenges instructors may face when doing what Banks called “bring[ing] the funk” (qtd. in Bastian 6) by asking students to work in genres that do not conform to “academic convention” (7).

According to Bastian, the impetus for designing such units and assignments includes the need to “prepare students for uncertain futures within an increasingly technological world” (8). Bastian cites scholarship noting teachers’ inability to forecast exactly what will be demanded of students as they move into professions; this uncertainty, in this view, means that the idea of what constitutes writing must be expanded and students should develop the rhetorical flexibility to adapt to the new genres they may encounter (8).

Moreover, Bastian argues, citing Mary Jo Reiff and Anis Bawarshi, that students’ dependence on familiar academic formulas means that their responses to rhetorical situations can become automatic and unthinking, with the result that they do not question the potential effects of their choices or explore other possible solutions to rhetorical problems. This automatic response limits “their meaning-making possibilities to what academic convention allows and privileges” (8-9).

Bastian contends that students not only fall back on traditional academic genres but also develop “deep attachments” to the forms they find familiar (9). The field, she states, has little data on what these attachments are like or how they guide students’ rhetorical decisions (9, 25).

She sees these attachments as a manifestation of “affect”; she cites Susan McLeod’s definition of affect as “noncognitive phenomena, including emotions but also attitudes, beliefs, moods, motivations, and intuitions” (9). Bastian cites further scholarship that indicates a strong connection between affect and writing as well as emotional states and learning (9-10). In her view, affect is particularly important when teachers design innovative classroom experiences because students’ affective response to such efforts can vary greatly; prior research suggests that as many as half the students in a given situation will resist moving beyond the expected curriculum (10).

Bastian enlisted ten of twenty-two students in a first-year-writing class at a large, public midwestern university in fall 2009 (11). She used “multiple qualitative research methods” to investigate these first-semester students’ reactions to the third unit in a four-unit curriculum intended to meet the program’s goals of “promot[ing] rhetorical flexibility and awareness”; the section under study explored genre from different perspectives (11). The unit introduced “the concept of genre critique,” as defined by the course textbook, Amy J. Devitt et al.’s Scenes of Writing: “questioning and evaluating to determine the strengths and shortcomings of a genre as well as its ideological import” (12).

Bastian designed the unit to “disrupt” students’ expectation of a writing class on the reading level, in that she presented her prompt as a set of “game rules,” and also on the “composing” level, as the unit did not specify what genre the students were to critique nor the form in which they were to do so (12). Students examined a range of genres and genre critiques, “including posters, songs, blogs, . . . artwork, poems, . . . comics, speeches, creative nonfiction. . . .” (13). The class developed a list of the possible forms their critiques might take.

Bastian acted as observer, recording evidence of “the students’ lived experiences” as they negotiated the unit. She attended all class sessions and made notes of “physical reactions” and “verbal reactions” (13). Further data consisted of one-hour individual interviews and a set of twenty-five questions. For this study, she concentrated on questions that asked about students’ levels of comfort with various stages of the unit (13).

Like other researchers, Bastian found that students asked to create innovative projects began with “confusion”; her students also displayed “distrust” (14) in that they were not certain that the assignment actually allowed them to choose their genres (19). All students considered “the essay” the typical genre for writing classes; some found the familiar conventions a source of confidence and comfort, while for others the sense of routine was “boring” (student, qtd. in Bastian 15).

Bastian found that the degree to which students expressed “an aversion” to the constraints of “academic convention” affected their responses, particularly the kinds of genres they chose and their levels of comfort with the unusual assignment.

Those who said that they wanted more freedom in classroom writing chose what the students as a whole considered “atypical” genres for their critiques, such as recipes, advertisements, or magazine covers (16-17). Students who felt safer within the conventions preferred more “typical” choices such as PowerPoint presentations and business letters (16, 22). The students who picked atypical genres claimed that they appreciated the opportunity to experience “a lot more chance to express yourself” (student, qtd. in Bastian 22), and possibly discover “hidden talents” (22).

The author found, however, that even students who wanted more freedom did not begin the unit with high levels of comfort. She found that the unusual way the assignment was presented, the “concept of critique,” and the idea that they could pick their own genres concerned even the more adventurous students (18). In Bastian’s view, the “power of academic convention” produced a forceful emotional attachment: students “distrusted the idea that both textual innovation and academic convention is both valid and viable in the classroom” (20).

Extensive exposure to critiques and peer interaction reduced discomfort for all students by the end of the unit (19), but those who felt least safe outside the typical classroom experience reported less comfort (23). One student expressed a need to feel safe, yet, after seeing his classmates’ work, chose an atypical response, encouraging Bastian to suggest that with the right support, “students can be persuaded to take risks” (23).

Bastian draws on research suggesting that what Barry Kroll calls “intelligent confusion” (qtd. in Bastian 26) and “cognitive disequilibrium” can lead to learning if supported by appropriate activities (26). The students reported gains in a number of rhetorical dimensions and specifically cited the value of having to do something that made them uncomfortable (25). Bastian argues that writing teachers should not be surprised to encounter such resistance, and can prepare for it with four steps: “openly acknowledge and discuss” the discomfort students might feel; model innovation; design activities that translate confusion into learning; and allow choice (27-28). She urges more empirical research on the nature of students’ affective responses to writing instruction (29).

 


Bailey & Bizzaro. Research in Creative Writing. August RTE. Posted 08/25/2017.

Bailey, Christine, and Patrick Bizzaro. “Research in Creative Writing: Theory into Practice.” Research in the Teaching of English 52.1 (2017): 77-97. Print.

Christine Bailey and Patrick Bizzaro discuss the disciplinarity of creative writing and its place in relation to the discipline of composition. They work to establish an aesthetic means of interpreting and representing data about creative writing in the belief that, in order to emerge as a discipline in its own right, creative writing must arrive at a set of shared values and understandings as to how research is conducted.

Bailey and Bizzaro’s concerns derive from their belief that creative writing must either establish itself as a discipline or it will be incorporated into composition studies (81). They contend that creative writing studies, like other emerging disciplines, must account for, in the words of Timothy J. San Pedro, “hierarchies of power” within institutions (qtd. in Bailey and Bizzaro 78) such that extant disciplines control or oppress less powerful disciplines, much as “teaching practices and the texts used in schools” oppress marginal student groups (78). A decision to use the methodologies of the “dominant knowledges” thus accedes to “imperial legacies” (San Pedro, qtd. in Bailey and Bizzaro 78).

Bailey and Bizzaro report that discussion of creative writing by compositionists such as Douglas Hesse and Wendy Bishop has tended to address how creative writing can be appropriately positioned as part of composition (79). Drawing on Bishop, the authors ascribe anxiety within some English departments over the role of creative writing to “genre-fear,” that is, “the belief that two disciplines cannot simultaneously occupy the same genre” (79).

They recount Bishop’s attempt to resolve the tension between creative writing studies and composition by including both under what she called a de facto “ready-made synthesis” that she characterized as the “study of writers writing” (qtd. in Bailey and Bizzaro 80). In the authors’ view, this attempt fails because the two fields differ substantially: “what one values as the basis for making knowledge differs from what the other values” (80).

The authors see creative writing studies itself as partially responsible for the difficulties the field has faced in establishing itself as a discipline (79, 80-81). They draw on Stephen Toulmin’s approach to disciplinarity: “a discipline exists ‘where men’s [sic] shared commitment to a sufficiently agreed set of ideals leads to the development of an isolable and self-defining repertory of procedures” (qtd. In Bailey and Bizzaro 80). The authors elaborate to contend that in a discipline, practitioners develop shared views as to what counts as knowledge and similarly shared views about the most appropriate means of gathering and reporting that knowledge (80).

Creative writing studies, they contend, has not yet acted on these criteria (81). Rather, they state, creative writers seem to eschew empirical research in favor of “craft interviews” consisting of “writers’ self-reports”; meanwhile, compositionists have undertaken to fill the gap by applying research methodologies appropriate to composition but not to creative writing (81). The authors’ purpose, in this article, is to model a research methodology that they consider more in keeping with the effort to define and apply the specific values accruing to creative writing.

The methodology they advance involves gathering, interpreting, and representing aesthetic works via an aesthetic form, in this case, the novel. Students in nine sections of first-year-writing classes in spring and fall 2013 responded to a “creative-narrative” prompt: “How did you come to this place in your life? Tell me your story” (84). Students were asked to respond with “a creative piece such as a poem, screenplay, or graphic novel” (84). All students were invited to participate with the understanding that their work would be confidential and might be represented in published research that might take on an alternative form such as a novel; the work of students who signed consent forms was duplicated and analyzed (84-85).

Data ultimately consisted of 57 artifacts, 55 of which were poems (85). Coding drew on the work of scholars like K. M. Powell, Elspeth Probyn, and Roz Ivanič to examine students’ constructions of self through the creative-narrative process, and on that of James E. Seitz to consider how students’ use of metaphor created meaning (85, 86). Further coding was based on Kara P. Alexander’s 2011 study of literacy narratives (86).

This analysis was combined with the results of a demographic survey to generate six groups revolving around “[c]ommon threads” in the data (86); “personas” revealed through the coded characteristics divided students into those who, for example, “had a solid identity in religion”; “were spiritually lost”; were “uncertain of identity [and] desiring change”; were “reclusive” with “strong family ties”; were interested in themes of “redemption or reformation”; or “had lived in multiple cultures” (86). This list, the authors state, corresponds to “a standard analysis” that they contrast with their alternative creative presentation (86).

In their methodology, Bailey and Bizzaro translate the “composites” identified by the descriptors into six characters for a young-adult novel Bailey developed (88). Drawing on specific poems by students who fell into each composite as well as on shared traits that emerged from analysis of identity markers and imagery in the poems, the authors strove to balance the identities revealed through the composites with the individuality of the different students. They explore how the characters of “Liz” and “Emmy” are derived from the “data” provided by the poems (89-90), and offer an excerpt of the resulting novel (90-92).

They present examples of other scholars who have “used aesthetic expressions in the development of research methods” (88). Such methods include ethnography, a form of research that the authors consider “ultimately a means of interpretive writing” (93). Thus, in their view, creating a novel from the data presented in poems is a process of interpreting those data; the novel performs the same kind of “storytell[ing]” (93) through which ethnography gathers data and then uses it to represent, interpret, and preserve individuals and their larger cultures (92-93).

They contend further that embracing research methods that value aesthetic response is essential if creative writing is to establish itself as a discipline (93). These methodologies, they argue, can encourage teachers both to value the aesthetic elements of student work and to use their own aesthetic responses to enhance teaching, particularly as these methods of gathering and representing data result in “aesthetic objects” that are “evocative, engage readers’ imaginations, and resonate with the world we share not only with our students but also with our colleagues in creative writing” (94). They argue that “when the ‘literariness’ of data reports [becomes] a consideration in the presentation of research,” composition and creative writing will have achieved “an equitable relationship in writing studies” (95).

 


Gallagher, Chris W. Behaviorism as Social-Process Pedagogy. CCC, Dec. 2016. Posted 01/12/2017.

Gallagher, Chris W. “What Writers Do: Behaviors, Behaviorism, and Writing Studies.” College Composition and Communication 68.2 (2016): 238-65. Web. 12 Dec. 2016.

Chris W. Gallagher provides a history of composition’s relationship with behaviorism, arguing that this relationship is more complex than commonly supposed and that writing scholars can use the connections to respond to current pressures imposed by reformist models.

Gallagher notes the efforts of many writing program administrators (WPAs) to articulate professionally informed writing outcomes to audiences in other university venues, such as general-education committees (238-39). He reports that such discussions often move quickly from compositionists’ focus on what helps students “writ[e] well” to an abstract and universal ideal of “good writing” (239).

This shift, in Gallagher’s view, encourages writing professionals to get caught up in “the work texts do” in contrast to the more important focus on “the work writers do” (239; emphasis original). He maintains that “the work writers do” is in fact an issue of behaviors writers exhibit and practice, and that the resistance to “behaviorism” that characterizes the field encourages scholars to lose sight of the fact that the field is “in the behavior business; we are, and should be, centrally concerned with what writers do” (240; emphasis original).

Gallagher points to “John Watson’s behavioral ‘manifesto’—his 1913 paper, ‘Psychology as the Behaviorist Views It’” (241) as capturing the “general consensus” of the time and a defining motivation for behaviorism: a shift away from “fuzzy-headed . . . introspective analysis” toward the more productive process of “study[ing] observable behaviors” (241). He identifies many different types of behaviorism, ranging from those designed to control behavior outright to those that seek to understand “inner states” through their observable manifestations (242).

One such productive model of behaviorism, in Gallagher’s view, is that of B. F. Skinner in the 1960s and 1970s. Gallagher argues that Skinner emphasized not “reflex behaviors” like those associated with Pavlov but rather “operant behaviors,” which Gallagher, citing psychologist John Staddon, characterizes as concerned with “the ways in which human (and other animal) behavior operates in its environment and is guided by its consequences” (242).

Gallagher contends that composition’s resistance to work like Skinner’s was influenced by views like that of James A. Berlin, for whom behaviorism was aligned with “current-traditional rhetoric” because it was deemed an “objective rhetoric” that assumed that writing was merely the process of conveying an external reality (243). The “epistemic” focus and “social turn” that emerged in the 1980s, Gallagher writes, generated resistance to “individualism and empiricism” in general, leading to numerous critiques of what were seen as behaviorist impulses.

Gallagher attributes much tension over behaviorism in composition to the influx of government funding in the 1960s designed to “promote social efficiency through strategic planning and accountability” (248). At the same time that this funding rewarded technocratic expertise, composition focused on “burgeoning liberation movements”; in Gallagher’s view, behaviorism had the misfortune of falling on the “wrong” or “science side” of this divide (244). Gallagher chronicles efforts by the National Council of Teachers of English and various scholars to arrive at a “détente” that could embrace forms of accountability fueled by behaviorism, such as “behavioral objectives” (248), while allowing the field to “hold on to its humanist core” (249).

In Gallagher’s view, scholars who struggled to address behaviorism, such as Lynn Z. and Martin Bloom, moved beyond mechanistic models of learning to advocate many features of effective teaching recognized today, such as resistance to error-oriented pedagogy; attention to process, purposes, and audiences; and provision of “regular, timely feedback” (245-46). Negative depictions of behaviorism, Gallagher argues, in fact neglect the degree to which, in such scholarship, behaviorism becomes “a social-process pedagogy” (244; emphasis original).

In particular, Gallagher argues that “the most controversial behaviorist figure in composition history,” Robert Zoellner (246), has been underappreciated. According to Gallagher, Zoellner’s “talk-write” pedagogy was a corrective for “think-write” models that assumed that writing merely conveyed thought, ignoring the possibility that writing and thinking could inform each other (246). Zoellner rejected reflex-driven behaviorism that predetermined stimulus-response patterns, opting instead for an operant model in which objectives followed from rather than controlled students’ behaviors. In this model, writing behaviors should be “freely emitted” (Zoellner, qtd. in Gallagher 250) and should emerge from “transactional” relationships among teachers and students in a “collaborative,” lab-like setting in which teachers interacted with students and modeled writing processes (247).

The goal, according to Gallagher, was consistently to “help students develop robust repertoires of writing behaviors to help them adapt to the different writing situations in which they would find themselves” (247). Gallagher contends that Zoellner advocated teaching environments in which

[behavioral objectives] are not codified before the pedagogical interaction; . . . are rooted in the transactional relationship between teachers and students; . . . are not required to be quantifiably measurable; and . . . operate in a humanist idiom. (251)

Rejected in what Martin Nystrand termed “the social 1980s” (qtd. in Gallagher 251), as funding for accountability initiatives withered (249), behaviorism nonetheless attracted the attention of Mike Rose. His chapter in Why Writers Can’t Write and that of psychology professor Robert Boice attended to the ways in which writers relied on specific behaviors to overcome writer’s block; in Gallagher’s view, Rose’s understanding of the shortcomings of overzealous behaviorism did not prevent him from taking “writers’ behaviors qua behaviors extremely seriously” (253).

The 1990s, Gallagher reports, witnessed a moderate revival of interest in Zoellner, who became one of the “unheard voices” featured in new histories of the field (254). Writers of these histories, however, struggled to dodge behaviorism itself, hoping to develop an empiricism that would not insist on “universal laws and objective truth claims” (255). After these efforts, Gallagher writes, the term faded from view, re-emerging only recently, in Maja Joiwind Wilson’s 2013 dissertation, as a “repressive” methodology exercised as a form of power (255).

In contrast to these views, Gallagher argues that “behavior should become a key term in our field” (257). Current pressures to articulate ways of understanding learning that will resonate with reformers and those who want to impose rigid measurements, he contends, require a vocabulary that foregrounds what writers actually do and frames the role of teachers as “help[ing] students expand their behavioral repertoires” (258; emphasis original). This vocabulary should emphasize the social aspects of all behaviors, thereby foregrounding the fluid, dynamic nature of learning.

In his view, such a vocabulary would move scholars beyond insisting that writing and learning “operate on a higher plane than that of mere behaviors”; instead, it would generate “better ways of thinking and talking about writing and learning behaviors” (257; emphasis original). He recommends, for example, creating “learning goals” instead of “outcomes” because such a shift discourages efforts to reduce complex activities to pre-determined, reflex-driven steps toward a static result (256). Scholars accustomed to a vocabulary of “processes, practices, and activities” can also benefit from learning to discuss “specific, embodied, scribal behaviors” and the environments necessary if the benefits accruing to these behaviors are to be realized (258).

 


Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR, 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback leads instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers measured writing ability using SAT scores and grades in participants’ two first-year writing courses (236); graduate rhetoric students also rated the first drafts. The protocol then used a “median split” to designate writers in binary fashion as either high- or low-ability, and “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that writer ability actually ranged widely and acknowledge that this “design decreases the power of this study,” but they argue that such binary determinations were needed given the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
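For readers unfamiliar with the procedure, a median split simply ranks participants on a measure and divides them into two groups at the median value. The short sketch below is a hypothetical illustration in Python, using invented composite ability scores rather than the authors’ materials or data, to show the basic arithmetic involved.

    # Hypothetical illustration of a median split (not the study's actual data).
    from statistics import median

    # Invented composite ability scores (e.g., combining test scores and grades).
    ability_scores = {"A": 71.0, "B": 58.5, "C": 90.0, "D": 64.0, "E": 82.5, "F": 47.0}

    cutoff = median(ability_scores.values())  # 67.5 for these six scores

    # Writers at or above the median are labeled "high"; the rest are "low."
    groups = {writer: ("high" if score >= cutoff else "low")
              for writer, score in ability_scores.items()}

    print(groups)  # {'A': 'high', 'B': 'low', 'C': 'high', 'D': 'low', 'E': 'high', 'F': 'low'}

Collapsing a continuous measure into two categories in this way is what the authors acknowledge when they say the design reduces the statistical power of the study.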

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High-ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest to peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or the number of comments on which students chose to act, as well as “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be subject to limits or “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They also suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn propose that feedback may be most effective when it falls within a student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, the finding that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion of the study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).