College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Anderson et al. Contributions of Writing to Learning. RTE, Nov. 2015. Posted 12/17/2015.

Anderson, Paul, Chris M. Anson, Robert M. Gonyea, and Charles Paine. “The Contributions of Writing to Learning and Development: Results from a Large-Scale, Multi-institutional Study.” Research in the Teaching of English 50.2 (2015): 199-235. Print.

Note: The study referenced by this summary was reported in Inside Higher Ed on Dec. 4, 2015. My summary may add some specific details to the earlier article and may clarify some issues raised in the comments on that piece. I invite the authors and others to correct and elaborate on my report.

Paul Anderson, Chris M. Anson, Robert M. Gonyea, and Charles Paine discuss a large-scale study designed to reveal whether writing instruction in college enhances student learning. They note a widespread belief among both writing professionals and other stakeholders that including writing in curricula leads to more extensive and deeper learning (200), but contend that the evidence for this improvement is not consistent (201-02).

In their literature review, they report on three large-scale studies that show increased student learning in contexts rich in writing instruction. These studies concluded that the amount of writing in the curriculum improved learning outcomes (201). These findings, however, contrast with the varied results of many “small-scale, quasi-experimental studies that examine the impact of specific writing interventions” (200).

Anderson et al. examine attempts to perform meta-analyses across such smaller studies to distill evidence regarding the effects of writing instruction (202). They postulate that these smaller studies often explore such varied practices in so many diverse environments that it is hard to find “comparable studies” from which to draw conclusions; the specificity of the interventions and the student populations to which they are applied make generalization difficult (203).

The researchers designed their investigation to address the disparity among these studies by searching for positive associations between clearly designated best practices in writing instruction and validated measures of student learning. In addition, they wanted to know whether the effects of writing instruction that used these best practices differed from the effects of simply assigning more writing (210). The interventions and practices they tested were developed by the Council of Writing Program Administrators (CWPA), while the learning measures were those used in the National Survey of Student Engagement (NSSE). This collaboration resulted from a feature of the NSSE in which institutions may form consortia to “append questions of specific interest to the group” (206).

Anderson et al. note that an important limitation of the NSSE is its reliance on self-report data, but they contend that “[t]he validity and reliability of the instrument have been extensively tested” (205). Although the institutions sampled were self-selected, and although women, large institutions, research institutions, and public schools were over-represented, the authors believe that the overall diversity and breadth of the population sampled by the NSSE/CWPA collaboration, encompassing more than 70,000 first-year and senior students, permits generalization that has not been possible with more narrowly targeted studies (204).

The NSSE queries students on how often they have participated in pedagogic activities that can be linked to enhanced learning. These include a wide range of practices such as service-learning, interactive learning, and “institutionally challenging work” such as extensive reading and writing. In addition, the survey inquires about campus features such as support services and relationships with faculty, as well as students’ perceptions of the degree to which their college experience led to enhanced personal development. The survey also captures demographic information (205-06).

Chosen as dependent variables for the joint CWPA/NSSE study were two NSSE scales:

  • Deep Approaches to Learning, which encompassed three subscales, Higher-Order Learning, Integrative Learning, and Reflective Learning. This scale focused on activities related to analysis, synthesis, evaluation, combination of diverse sources and perspectives, and awareness of one’s own understanding of information (211).
  • Perceived Gains in Learning and Development, which involved subscales of Practical Competence such as enhanced job skills, including the ability to work with others and address “complex real-world problems”; Personal and Social Development, which inquired about students’ growth as independent learners with “a personal code of values and ethics” able to “contribut[e] to the community”; and General Education Learning, which includes the ability to “write and speak clearly and effectively, and to think critically and analytically” (211).

The NSSE also asked students for a quantitative estimate of how much writing they actually did in their coursework (210). These data allowed the researchers to separate the effects of simply assigning more writing from those of employing different kinds of writing instruction.

To test for correlations between pedagogical choices in writing instruction and practices related to enhanced learning as measured by the NSSE scales, the research team developed a “consensus model for effective practices in writing” (206). Eighty CWPA members generated questions that were distilled to 27, divided into “three categories based on related constructs” (206). Twenty-two of these ultimately became part of a module appended to the NSSE that, like the NSSE “Deep Approaches to Learning” scale, asked students how often their coursework had included the specific activities and behaviors in the consensus model. The “three hypothesized constructs for effective writing” (206) were

  • Interactive Writing Processes, such as discussing ideas and drafts with others, including friends and faculty;
  • Meaning-Making Writing Tasks, such as using evidence, applying concepts across domains, or evaluating information and processes; and
  • Clear Writing Expectations, which refers to teacher practices in making clear to students what kind of learning an activity promotes and how student responses will be assessed. (206-07)

They note that no direct measures of student learning are included in the NSSE, nor are such measures included in their study (204). Rather, in both the writing module and the NSSE scale addressing Deep Approaches to Learning, students are asked to report on kinds of assignments, instructor behaviors and practices, and features of their interaction with their institutions, such as whether they used on-campus support services (205-06). The scale on Perceived Gains in Learning and Development asks students to self-assess (211-12).

Despite the lack of specific measures of learning, Anderson et al. argue that the curricular content included in the Deep Approaches to Learning scale does accord with content that has been shown to result in enhanced student learning (211, 231). The researchers argue that comparisons between the NSSE scales and the three writing constructs allow them to detect an association between the effective writing practices and the attitudes toward learning measured by the NSSE.

Anderson et al. provide detailed accounts of their statistical methods. In addition to analysis for goodness-of-fit, they performed “blocked hierarchical regressions” to determine how much of the variance in responses was explained by the kind of writing instruction reported versus other factors, such as demographic differences, participation in various “other engagement variables” such as service-learning and internships, and the actual amount of writing assigned (212). Separate regressions were performed on first-year students and on seniors (221).
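Although Anderson et al. report their models in conventional statistical tables rather than code, the logic of a blocked hierarchical regression can be illustrated with a brief sketch. The Python/statsmodels example below uses entirely hypothetical column names and a hypothetical data file; it shows only the general procedure of entering predictor blocks in stages and noting how much additional variance each block explains, not the authors’ actual specification.

```python
# Hypothetical sketch of a blocked hierarchical regression (Python/statsmodels).
# File name, column names, and block contents are illustrative only; this is
# not the authors' code or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nsse_responses.csv")  # hypothetical student-level responses

blocks = [
    "gender + enrollment_status + major_category",                 # demographics
    "service_learning + internship",                               # other engagement
    "pages_of_writing_assigned",                                   # amount of writing
    "interactive_writing + meaning_making + clear_expectations",   # writing constructs
]

formula = "deep_approaches ~ 1"
previous_r2 = 0.0
for i, block in enumerate(blocks, start=1):
    formula += " + " + block
    model = smf.ols(formula, data=df).fit()
    print(f"Block {i}: R^2 = {model.rsquared:.3f} "
          f"(added = {model.rsquared - previous_r2:.3f})")
    previous_r2 = model.rsquared
```

Read this only as a schematic: the published analysis runs separate regressions for first-year students and seniors and attends to goodness-of-fit in ways this toy example does not.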

Results “suggest[ed] that writing assignments and instructional practices represented by each of our three writing scales were associated with increased participation in Deep Approaches to Learning, although some of that relationship was shared by other forms of engagement” (222). Similarly, the results indicate that “effective writing instruction is associated with more favorable perceptions of learning and development, although other forms of engagement share some of that relationship” (224). In both cases, the amount of writing assigned had “no additional influence” on the variables (222, 223-24).

The researchers provide details of the specific associations among the three writing constructs and the components of the two NSSE scales. Overall, they contend, their data strongly suggest that the three constructs for effective writing instruction can serve “as heuristics that instructors can use when designing writing assignments” (230), both in writing courses and courses in other disciplines. They urge faculty to describe and research other practices that may have similar effects, and they advocate additional forms of research helpful in “refuting, qualifying, supporting, or refining the constructs” (229). They note that, as a result of this study, institutions can now elect to include the module “Experiences with Writing,” which is based on the three constructs, when students take the NSSE (231).

 


Vidali, Amy. Disabling Writing Program Administration. WPA, Sept. 2015. Posted 10/28/2015.

Vidali, Amy. “Disabling Writing Program Administration.” Journal of the Council of Writing Program Administrators 38.2 (2015): 32-55. Print.

Amy Vidali examines the narratives of writing program administrators (WPAs) from the standpoint of disability studies. She argues that the way in which these narratives frame the WPA experience excludes instructive considerations of the intersections between WPA work and disability even though disability functions metaphorically in these texts. Her analysis explores the degree to which “these narratives establish normative expectations of who WPAs are and can be” (33).

Drawing on disability scholars Jay Dolmage and Carrie Sandahl (48n3, 49n4), Vidali proposes “disabling writing program work” (33; emphasis original). Similar to “crip[ping]” an institution or activity, disabling brings to the fore “able-bodied assumptions and exclusionary effects” (Sandahl, qtd. in Vidali 49n4) and tackles the disabling/enabling binary (49). Vidali’s examination of the WPA literature addresses its tendency to privilege ableist notions of success, to exclude access to disabled individuals, and to ignore the insights offered by the lens of disability.

In Vidali’s view, the WPA accounts she extracts from many sources focus on disabilities like depression and anxiety, generally positing that WPA work causes such disabilities and that they are an inevitable part of the WPA landscape that must be managed or “escaped” (37, 39). She uses her own experience with depression to discuss how attributing the mental and physical manifestations of depression solely to the stresses of WPA work impoverishes the field’s understanding of “how anxiety might be produced in the interaction of bodies and environments” (40), an interaction that occurs in any complex group configuration. Recognizing this interaction removes the responsibility for the disability and its effects from “particular problem bodies” and locates it in the larger set of relationships, including inequities, among people and institutions (42). In other words, for Vidali, acknowledging that disabilities exist outside of and prior to WPA work, and that they exert an embodied influence within that work, can allow scholars to “reframe WPA narratives in more productive ways” (41).

Vidali writes that the failure to recognize disability as an embodied human state interacting with the WPA environment is exacerbated by the lack of data on the number of WPAs with disabilities and on the kinds of disabilities they bring to the task. Vidali examines surveys in which researchers shied away from asking questions about disability for fear respondents might not feel comfortable answering, especially since revealing disability can lead to discrimination (44, 47).

Particularly damaging, she argues, are narratives often critiqued within the disability-studies community, for example, accounts of “overcoming” the burdens of disability, hero-narratives, and equations between “health” and “success.” Drawing on Paul Longmore and Simi Linton, Vidali writes that narratives of overcoming demand that individuals deal with the difficulties created by their interaction with environments in an effort to accommodate themselves to normal expectations, but these narratives refuse to acknowledge “the power differential” involved and increase the pressure to make do with non-inclusive situations rather than advocate for change (42).

Similarly, in Vidali’s view, hero narratives suggest that only the “hyper-able” are qualified to be WPAs; images of the WPA as miraculous and unflappable problem-solver deny the possibility that people “who may work at different paces and in different manners” can be equally effective (43). Such narratives risk “reifying unreasonable job expectations” that may further exclude disabled individuals as well as reinforcing the assumption that candidates for WPA work “all enter WPA positions with the same abilities, tools, and goals” (43). Vidali argues that such views of the ideal WPA coincide with a model in which health is a necessity for success and ultimately “only the fittest survive as WPAs” (40).

Vidali proposes alternatives to extant WPA narratives that open the door to more “interdependent” interaction that permits individuals to care for themselves and each other (40-41). Changes to the expectations WPAs have for themselves and each other can value such qualities as “productive designation of tasks to support teams” and acceptance of a wider range of communication options (43). Moving away from the WPA as hyper-able hero can also permit reflection on failure and an effective response to its inevitability (42). Vidali notes how her own depression served as a catalyst for increased attention to inclusiveness and access in her program, and how its intersection with her WPA work alerted her to the ways that treating disability as a metaphor for something that must be disguised, rather than as an embodied reality experienced by many, limits WPAs’ options. She stresses her view that

disabling writing program administration isn’t only about disabled WPAs telling their stories: It’s about creating inclusive environments for all WPAs, not only at the time they are hired, but in ways that account for the embodied realities that come with time. (47)



Hansen et al. Effectiveness of Dual Credit Courses. WPA Journal, Spring 2015. Posted 08/12/15.

Hansen, Kristine, Brian Jackson, Brett C. McInelly, and Dennis Eggett. “How Do Dual Credit Students Perform on College Writing Tasks After They Arrive on Campus? Empirical Data from a Large-Scale Study.” Journal of the Council of Writing Program Administrators 38.2 (2015): 56-92. Print.

Kristine Hansen, Brian Jackson, Brett C. McInelly, and Dennis Eggett conducted a study at Brigham Young University (BYU) to determine whether students who took a dual-credit/concurrent-enrollment writing course (DC/CE) fared as well on the writing assigned in a subsequent required general-education course as students who took or were taking the university’s first-year-writing course. With few exceptions, Hansen et al. concluded that the students who had taken the earlier courses for their college credit performed similarly to students who had not. However, the study raised questions about the degree to which taking college writing in high school, or for that matter, in any single class, adequately meets the needs of maturing student writers (79).

The exigence for the study was the proliferation of efforts to move college work into high schools, presumably to allow students to graduate faster and thus lower the cost of college, with some jurisdictions allowing students as young as fourteen to earn college credit in high school (58). Local, state, and federal policy makers all support and even “mandate” such opportunities (57), with rhetorical and financial backing from organizations and non-profits promoting college credit as a boon to the overall economy (81). Hansen et al. express concern that no uniform standards or qualifications govern these initiatives (58).

The study examined writing in BYU’s “American Heritage” (AH) course. In this course, which in September 2012 enrolled approximately half of the first-year class, students wrote two 900-word papers involving argument and research. They wrote the first paper in stages with grades and TA feedback throughout, while they relied on peer feedback and their understanding of an effective writing process, which they had presumably learned in the first assignment, for the second paper (64). Hansen et al. provide the prompts for both assignments (84-87).

The study consisted of several components. Students in the AH course were asked to sign a consent form; those who did so were emailed a survey about their prior writing instruction. Of these, 713 took the survey, and from these 713 students, 189 were selected (60-61). Trained raters using a holistic rubric with a 6-point scale read both essays submitted by these 189 students. The rubric addressed seven traits: “thesis, critical awareness, evidence, counter-arguments, organization, grammar and style, sources and citations” (65). A follow-up survey assessed students’ experiences writing the second paper, while focus groups provided additional qualitative information. Hansen et al. note that although only eleven students participated in the focus groups, the discussion provided “valuable insights into students’ motivations for taking pre-college credit options and the learning experiences they had” (65).

The 189 participants fell into five groups: those whose “Path to FYW Credit” consisted of AP scores; those who received credit for a DC/CE option; those planning to take FYW in the future; those taking it concurrently with AH; and those who had taken BYU’s course, many of them in the preceding summer (61, 63). Analysis reveals that the students studied were a good match for the full BYU first-year population in such categories as high-school GPA and ACT scores (62). However, strong high-school GPAs and ACT scores and evidence of regular one-on-one interaction with instructors (71), coupled with the description of BYU as a “private institution” with “very selective admission standards” (63), indicate that the students studied, while coming from many geographic regions, were especially strong students whose experiences could not be generalized to different populations (63, 82).
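The statistical details of the group comparisons appear in the article itself; as a rough, hypothetical illustration of the kind of comparison involved, rubric scores for the five credit-path groups might be summarized and tested along the following lines. The file and column names are invented, and the authors’ actual tests may well differ.

```python
# Hypothetical sketch comparing mean holistic rubric scores (1-6 scale) across
# the five paths to FYW credit; not the study's actual data or analysis.
import pandas as pd
from scipy import stats

scores = pd.read_csv("ah_essay_scores.csv")  # invented file: one row per student essay

# Mean, spread, and count for each credit path (AP, DC/CE, future, concurrent, completed)
print(scores.groupby("credit_path")["holistic_score"].agg(["mean", "std", "count"]))

# One-way ANOVA across the groups as a first check for differences
groups = [g["holistic_score"].to_numpy() for _, g in scores.groupby("credit_path")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```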

Qualitative results indicated that, for the small sample of students who participated in the focus group, the need to “get FYW out of the way” was not the main reason for choosing AP or DC/CE options. Rather, the students wanted “a more challenging curriculum” (69). These students reported good teaching practices; in contrast to the larger group taking the earlier survey, who reported writing a variety of papers, the students in the focus group reported a “literature[-]based” curriculum with an emphasis on timed essays and fewer research papers (69). Quotes from the focus-group students who took the FYW course from BYU reveal that they found it “repetitive” and “a good refresher,” not substantially different despite their having reported an emphasis on literary analysis in the high-school courses (72). The students attested that the earlier courses had prepared them well, although some expressed concerns about their comfort coping with various aspects of the first-year experience (71-72).

Three findings invited particular discussion (73):

  • Regardless of the writing instruction they had received, the students differed very little in their performance in the American Heritage class;
  • In general, although their GPAs and test scores indicated that they should be superior writers, the students scored in the center of the 6-point rubric scale, below expectations;
  • Scores were generally higher for the first essay than for the second.

The researchers argue that the first finding does not provide definitive evidence as to whether “FYW even matters” (73). They cite research by numerous scholars that indicates that the immediate effects of a writing experience are difficult to measure because the learning of growing writers does not exhibit a “tidy linear trajectory” (74). The FYW experience may trigger “steps backward” (Nancy Sommers, qtd. in Hansen et al. 72). The accumulation of new knowledge, they posit, can interfere with performance. Therefore, students taking FYW concurrently with AH might have been affected by taking in so much new material (74), while those who had taken the course in the summer had significantly lower GPAs and ACT scores (63). The authors suggest that these factors may have skewed the performance of students with FYW experience.

The second finding, the authors posit, similarly indicates students in the early-to-middle stages of becoming versatile, effective writers across a range of genres. Hansen et al. cite research on the need for a “significant apprenticeship period” in writing maturation (76). Students in their first year of college are only beginning to negotiate this developmental stage.

The third finding may indicate a difference in the demands of the two prompts, a difference in the time and energy students could devote to later assignments, or, the authors suggest, the difference in the feedback built into the two papers (76-77).

Hansen et al. recommend support for the NCTE position that taking a single course, especially at an early developmental stage, does not provide students an adequate opportunity for the kind of sustained practice across multiple genres required for meaningful growth in writing (77-80). Decisions about DC/CE options should be based on individual students’ qualifications (78); programs should work to include additional writing courses in the overall curriculum, designing these courses to allow students to build on skills initiated in AP, DC/CE, and FYW courses (79).

They further recommend that writing programs shift from promising something “new” and “different” to an emphasis on the recursive, nonlinear nature of writing, clarifying to students and other stakeholders the value of ongoing practice (80). Additionally, they recommend attention to the motives and forces of the “growth industry” encouraging the transfer of more and more college credit to high schools (80). The organizations sustaining this industry, they write, hope to foster a more literate, capable workforce. But the authors contend that speeding up and truncating the learning process, particularly with regard to a complex cognitive task like writing, undercut this aim (81-82) and do not, in fact, guarantee faster graduation (79). Finally, citing Richard Haswell, they call for more empirical, replicable studies of phenomena like the effects of DC/CE courses in order to document their impact across broad demographics (82).



Dryer and Peckham. Social Context of Writing Assessment. WPA, Fall 2014. Posted 3/24/2015.

Dryer, Dylan B., and Irvin Peckham. “Social Contexts of Writing Assessment: Toward an Ecological Construct of the Rater.” WPA: Writing Program Administration 38.1 (2014): 12-41.

Dylan B. Dryer and Irvin Peckham argue for a richer understanding of the factors affecting the validity of writing assessments. A more detailed picture of how the assumptions of organizers and raters as well as the environment itself drive results can lead to more thoughtful design of assessment processes.

Drawing on Stuart MacMillan’s “model of Ecological Inquiry,” Dryer and Peckham conduct an “empirical, qualitative research study” (14), becoming participants in a large-scale assessment organized by a textbook publisher to investigate the effectiveness of the textbook. Nineteen raters, including Dryer and Peckham, all experienced college-writing teachers, examined reflective pieces from the portfolios of more than 1800 composition students using the criteria of Rhetorical Knowledge; Critical Thinking, Reading, and Writing; Writing Processes; and Knowledge of Conventions from the WPA Outcomes Statement 1.0 (15). In addition to scores on each of these criteria, raters assigned holistic scores that served as the primary data for Dryer and Peckham’s study. Raters were introduced to the purpose of the assessment and to the criteria and benchmark papers by a “chief reader” characterized by the textbook publisher as “a national leader in writing assessment research” (19). The room set-up consisted of four tables, each with three to four raters and a table leader charged with maintaining the scoring protocols presented by the chief reader. Dryer and Peckham augmented their observations and assessment data with preliminary questionnaires, interviews, exit surveys, and focus groups.

Dryer and Peckham adapted MacMillan’s four-level model by dividing the environment into “social contexts” of field, room, table, and rater (14). Field variables involved the extent to which the raters were attuned to general assumptions common to composition scholarship, such as definitions of concepts and how to prioritize the four criteria (19-22). The “room” system consisted of the expectations established by the chief reader and the degree to which raters worked within those expectations as they applied the criteria (22-24). Table-specific variables were based on the recognition that each table operated with its own microecology growing out of such components as interpersonal interactions among the raters and the interventions of the table leaders (25-30). Finally, the individual-rater system encompassed factors such as how each rater negotiated the space between his or her own responses to the process and the expectations and pressures of the field, room, and table (30-33).

Field-level findings included the observation that most of the raters agreed with the ordered ranking of the criteria that had been chosen by the WPA team that developed the outcomes (20-21). The authors maintain that their study’s ability to identify the outliers (three of seventeen raters) who considered Writing Processes and Knowledge of Conventions most important offers a sense of how widely the field’s values have spread throughout the profession (22). Collecting a complete “scoring history” for each rater, including the number of “overturned” scores, or scores that deviated from the room consensus, revealed that raters who ranked the four criteria differently from the field consensus produced a high percentage of such overturned scores (21).

The room-level findings exposed the assumption that there actually is a “real” or correct score for each paper and that the benchmark papers adequately represent how a paper measured against the selected criteria can earn this score. This assumption, the authors argue, tends to pervade assessment environments (23). Raters were praised for bringing “professional” expertise to the process (19); however, raters whose scores deviated too far from the correct score were judged “out of line” (22). Interviews and surveys revealed that raters were concerned with the fit between their own judgments and the “room” score and sometimes struggled to adjust their scores to match the room consensus more closely (e.g., 23-24).

At the table level, observations and interviews revealed the degree to which some raters’ behavior and perceived attitudes influenced other raters’ decisions (e.g., 28). Table leaders’ ability to keep the focus on the specific criteria, redirecting raters away from other, more individual criteria, affected overall table results (25-27). A comparison of the range of table scores with the overall room score enabled the authors to designate some tables as “timid,” unwilling to risk awarding high or low scores, and others as “bold,” able to assign scores across the entire 1-6 range (25). Dryer and Peckham note that some raters consciously opted for a 2 rather than a 1, for example, because they felt that the 2 would be “adjacent” to either a 1 or a 3 and thus “safe” from being declared incorrect (28).
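As a purely hypothetical sketch of the table-level comparison described above, one could flag “timid” and “bold” tables by comparing each table’s score range with the room-wide range. The data file, column names, and the simple min/max rule below are invented for illustration and do not reproduce the authors’ procedure.

```python
# Hypothetical sketch: classify scoring tables as "bold" or "timid" by comparing
# each table's range of holistic scores with the room-wide range on the 1-6 scale.
import pandas as pd

ratings = pd.read_csv("portfolio_ratings.csv")  # invented file: one row per rating

room_min = ratings["holistic"].min()
room_max = ratings["holistic"].max()

for table_id, table in ratings.groupby("table"):
    t_min, t_max = table["holistic"].min(), table["holistic"].max()
    label = "bold" if (t_min == room_min and t_max == room_max) else "timid"
    print(f"Table {table_id}: scores {t_min}-{t_max} ({label})")
```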

Discussion of the rater-level social system focused on the “surprising degree” to which raters did not actually conform to the approved rubric to make their judgments (31). For example, raters responded to the writer’s perceived gender as well as to suppositions about the English program from which particular papers had been drawn (30-31). Similarly, at the table level, raters veered toward criteria not on the rubric, such as “voice” or “engagement” (24). These raters’ resistance to the room expectations showed up overtly in the exit surveys and interviews but not in the data from the assessment itself.

Dryer and Peckham recommend four adjustments to standard procedures for such assessments. First, awareness of the “ecology of scoring” can suggest protocols to head off the most likely deviations from consistent use of the rubric (33-34). Second, this same awareness can prevent overconfidence in the power of calibration and norming to disrupt individual preconceptions about what constitutes a good paper (34-35). Third, the authors recommend more opportunities to discuss the meaning and value of key terms and to air individual concerns with room and field expectations (35). Fourth, the collection of data like individual and table-level scoring as well as measures of overall and individual alignment with the field should become standard practice. Rather than undercutting the validity of assessments, the authors argue, such data would underscore the complexity of the process and accentuate the need for care and expertise both in evaluating student writing and in applying the results, heading off the assumption that writing assessment is a simple or mechanical task that can easily be outsourced (36).