College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from a PDF generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Citing scholars like Peggy O'Neill (2) and Richard Haswell (3), they posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell's article "Fighting Number with Number" proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in the social sciences to analyze "lengthy face-to-face social and institutional interactions" (5). In a "thin-slice" methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).
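The article does not report the parameters behind the "sampling calculator," but the 290 figure is consistent with a standard finite-population sample-size formula at an assumed 95% confidence level and ±5% margin of error. The sketch below is an illustration under those assumed parameters, not a reconstruction of the authors' actual tool.

```python
import math

def sample_size(population: int, z: float = 1.96, margin: float = 0.05,
                proportion: float = 0.5) -> int:
    """Cochran's sample-size formula with a finite-population correction."""
    n0 = (z ** 2) * proportion * (1 - proportion) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 1,174 submitted essays -> 290 at the assumed 95% confidence, +/-5% margin.
print(sample_size(1174))
```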

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): "[E]ach essay was read and scored by only one two-member team" (9). The authors used "double coding" in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from "the beginning, middle, and end" of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members' scores constituted the final score for each paper (9).
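As a rough illustration of the thin-slice scoring described above, the sketch below assembles three slices of a hypothetical essay and averages five raters' rubric scores into a final score. The middle-slice selection rule and all names here are simplifications of my own, not the authors' procedure or code.

```python
from statistics import mean

def thin_slices(paragraphs: list[str]) -> list[str]:
    """Return beginning, middle, and end slices: the first paragraph,
    a paragraph from the middle of the essay, and the final paragraph."""
    return [paragraphs[0], paragraphs[len(paragraphs) // 2], paragraphs[-1]]

def final_score(rater_scores: list[float]) -> float:
    """Average the individual raters' scores into the essay's final score."""
    return mean(rater_scores)

# Hypothetical rubric scores from five raters for one essay's slices.
print(final_score([3, 4, 3, 3, 4]))  # -> 3.4
```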

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research Team's IRR was "one full classification higher" than that of the Regular readers (12). Scores correlated at the "low positive" level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent "a little more than half the time" scoring compared with the Regular group, while the average individual scoring time for Research team members was less than half that of the Regular members (13).
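The article does not name the specific statistics behind its reliability and correlation figures. The sketch below assumes two common choices, Cohen's kappa for agreement on categorical rubric scores and Pearson's r for the cross-team correlation, applied to invented scores for illustration only; it is not the study's analysis or data.

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Invented rubric scores (1 = Poor ... 4 = Good); not the study's data.
reader_1 = [2, 3, 4, 3, 2, 4, 3, 1, 4, 2]
reader_2 = [2, 3, 3, 3, 2, 4, 2, 1, 4, 3]
kappa = cohen_kappa_score(reader_1, reader_2)  # agreement within a two-member team

regular_scores = [2, 3, 4, 3, 2, 4, 3, 1, 4, 2]                       # full-essay readings
research_scores = [2.4, 2.8, 3.6, 2.6, 2.2, 3.8, 2.6, 1.8, 3.2, 2.6]  # thin-slice averages
r, p = pearsonr(regular_scores, research_scores)                      # cross-team correlation

print(f"kappa = {kappa:.2f}, r = {r:.2f}, p = {p:.3f}")
```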

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as "awareness of one's cognitive processes, such as prioritizing and executing tasks" (86), and review extensive research dating to the 1970s on how this concept has been articulated and developed. They state that the literature supports the conclusion that "college students' metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing" (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of "idea units which are specific unique ideas within each student's response" (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into "Procedural Knowledge," or "the knowledge necessary to carry out the procedure or process of writing," and "Declarative Knowledge," or statements about "the characteristics of effective writing" (89). Within these categories, responses were coded as addressing either "substantive procedures," having to do with the writing process itself, or "production procedures," relating to the "form of writing," e.g., spelling and grammar (89).

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process such as that of John Hayes and Linda Flower and M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw "Clarity and Focus" and "Audience" as important characteristics of effective writing (98). Grammar and Spelling, the "production" features, figured more prominently here than in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched the findings of previous scholarship on students' metaknowledge of effective writing, which shows some concern with the finished product and a possibly "insufficient" focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).

These analyses revealed “no relationship . . . between their patterns knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research suggesting the importance of metacognitive knowledge because their methodology did not use human raters and did not factor in student beliefs about writing or questions addressing why students responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than "pre-established inventor[ies]" would have (100). They maintain that their methods "controlled the measurement errors" better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)

 



Ray et al. Rethinking Student Evaluations of Teaching. Comp Studies Spring 2018. Posted 08/25/2018.

Ray, Brian, Jacob Babb, and Courtney Adams Wooten. “Rethinking SETs: Retuning Student Evaluations of Teaching for Student Agency.” Composition Studies 46.1 (2018): 34-56. Web. 10 Aug. 2018.

Brian Ray, Jacob Babb, and Courtney Adams Wooten report a study of Student Evaluations of Teaching (SETs) across a range of institutions. The researchers collected 55 different forms, 45 of which were institutions’ generic forms, while 10 were designed specifically for writing classes. They coded 1,108 different questions from these forms in order to determine what kinds of questions were being asked (35).

The authors write that although SETs and their use, especially in personnel decisions, are of concern in rhetoric and composition, very little scholarship in the field has addressed the issue (34-35). They summarize a history of student evaluations as tools for assessment of teachers, beginning with materials from the 1920s. Early SETs focused heavily on features of personality such as "wit," "tact," and "popularity" (38), as well as physical appearance (39). This focus on "subjective" characteristics of teachers asked students to judge "factors that neither they nor the instructor had sole control over and that they could do little to affect" (38).

This emphasis persisted throughout the twentieth century. Herbert Marsh conducted "numerous studies" in the 1970s and 1980s and eventually created the Student Evaluation of Education Quality form (SEEQ) in 1987 (35). This instrument asked students about nine features:

[L]earning, enthusiasm, organization and clarity, group interaction, individual rapport, breadth of coverage, tests and grading, assignments, and difficulty (39)

The authors contend that these nine factors substantively guide the SETs they studied (35), and they claim that, in fact, in important ways, “current SET forms differ little from those seen in the 1920s” (40).

Some of composition’s “only published conversations about SETs” revolved around workshops conducted by the Conference on College Composition and Communication (CCCC) from 1956 through 1962 (39). The authors report that instructors participating in these discussions saw the forms as most appropriate for “formative” purposes; very few institutions used them in personnel matters (39).

Data from studies of SETs in other fields reveal some of the problems that can result from common versions of these measures (37). The authors state that studies over the last ten years have not been able to link high teacher ratings on SETs with improved student learning or performance (40). Studies point out that many of the most common categories, like “clarity and fairness,” remain subjective, and that students consistently rank instructors on personality rather than on more valid measures of effectiveness (41).

Such research documents bias related to gender and ethnicity, with female African-American teachers rated lowest in one study asking students to assess "a hypothetical curriculum vitae according to teaching qualifications and expertise" (42). Male instructors are more commonly praised for their "ability to innovate and stimulate critical thought"; women are downgraded for failing to be "compassionate and polite" (42). Studies showed that elements like class size and workload affected results (42). Physical attractiveness continues to influence student opinion, as does the presence of "any kind of reward," like lenient grading or even supplying candy (43).

The authors emphasize their finding that a large percentage of the questions they examined asked students about either some aspect of the teacher’s behavior (e.g., “approachability,” “open-mindedness” [42]) or what the teacher did (“stimulated my critical thinking” [45]). The teacher was the subject of nearly half of the questions (45). The authors argue that “this pattern of hyper-attention” (44) to the teacher casts the teacher as “solely responsible” for the success or failure of the course (43). As a result, in the authors’ view, students receive a distorted view of agency in a learning situation. In particular, they are discouraged from seeing themselves as having an active role in their own learning (35).

The authors contend that assigning so much agency to a single individual runs counter to “posthumanist” views of how agency operates in complex social and institutional settings (36). In this view, many factors, including not only all participants and their histories and interests but also the environment and even the objects in the space, play a part in what happens in a classroom (36). When SET questions fail to address this complexity, the authors posit, issues of validity arise when students are asked to pass judgment on subjective and ambiguously defined qualities as well as on factors beyond the control of any participant (40). Students encouraged to focus on instructor agency may also misjudge teaching that opts for modern “de-center[ed]” teaching methods rather than the lecture-based instruction they expect (44).

Ray et al. note that some programs ask students about their own level of interest and willingness to participate in class activities and advocate increased use of such questions (45). But they particularly advocate replacing the emphasis on teacher agency with questions that encourage students to assess their own contributions to their learning experience as well as to examine the class experience as a whole and to recognize the “relational” aspects of a learning environment (46). For example:

Instead of asking whether instructors stimulated critical thought, it seems more reasonable to ask if students engaged in critical thinking, regardless of who or what facilitated engagement. (46; emphasis original)

Ray et al. conclude that questions that isolate instructors’ contributions should lean toward those that can be objectively defined and rated, such as punctuality and responding to emails in a set time frame (46).

The authors envision improved SETs, like those of some programs, that are based on a program’s stated outcomes and that ask students about the concepts and abilities they have developed through their coursework (48). They suggest that programs in institutions that use “generic” evaluations for broader analysis or that do not allow individual departments to eliminate the official form should develop their own parallel forms in order to gather the kind of information that enables more effective assessment of classroom activity (48-49).

A major goal, in the authors’ view, should be questions that “encourage students to identify the interconnected aspects of classroom agency through reflection on their own learning” (49).

 



Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-381. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. They cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as that of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from first to final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “’bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, collecting “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that "the portrait is largely one of stability," but note students' ability to vary choices "when needed" (369). Statistically significant differences showing "change[s] across time" appeared at rates of 13% in Cohort 1, 29% in Cohort 2, and 16% in Cohort 3. An example of a stable strategy is the use of "one explicit thesis at the beginning" of a paper (371); a strategy "rarely" used was "a thesis statement [placed] inductively at the middle or end" (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming, “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberate reuse of strategies and whether or not “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new

capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis. (378)

 



Limpo and Alves. Effects of Beliefs about “Writing Skill Malleability” on Performance. JoWR 2017. Posted 11/24/2017.

Limpo, Teresa, and Rui A. Alves. “Relating Beliefs in Writing Skill Malleability to Writing Performance: The Mediating Roles of Achievement Goals and Self-Efficacy.” Journal of Writing Research 9.2 (2017): 97-125. Web. 15 Nov. 2017.

Teresa Limpo and Rui A. Alves discuss a study with Portuguese students designed to investigate pathways between students’ beliefs about writing ability and actual writing performance. They use measures for achievement goals and self-efficacy to determine how these factors mediate between beliefs and performance. Their study goals involved both exploring these relationships and assessing the validity and reliability of the instruments and theoretical models they use (101-02).

The authors base their approach on the assumption that people operate via “implicit theories,” and that central to learning are theories that see “ability” as either “incremental,” in that skills can be honed through effort, or as an “entity” that cannot be improved despite effort (98). Limpo and Alves argue that too little research has addressed how these beliefs about “writing skill malleability” influence learning in the specific “domain” of writing (98).

The authors report earlier research that indicates that students who see writing as an incremental skill perform better in intervention studies. They contend that the “mechanisms” through which this effect occurs have not been thoroughly examined (99).

Limpo and Alves apply a three-part model of achievement goals: “mastery” goals involve the desire to improve and increase competence; “performance-approach” goals involve the desire to do better than others in the quest for competence; and “performance-avoidance” goals manifest as the desire to avoid looking incompetent or worse than others (99-100). Mastery and performance-approach goals correlate positively because they address increased competence, but performance-approach and performance-avoidance goals also correlate because they both concern how learners see themselves in comparison to others (100).

The authors write that “there is overall agreement” among researchers in this field that these goals affect performance. Students with mastery goals display “mastery-oriented learning patterns” such as “use of deep strategies, self-regulation, effort and persistence, . . . [and] positive affect,” while students who focus on performance avoidance exhibit “helpless learning patterns” including “unwillingness to seek help, test anxiety, [and] negative affect” (100-01). Student outcomes with respect to performance-approach goals were less clear (101). The authors hope to clarify the role of self-efficacy in these goal choices and outcomes (101).

Limpo and Alves find that self-efficacy is "perhaps the most studied variable" in examinations of motivation in writing (101). They refer to a three-part model: self-efficacy for "conventions," or "translating ideas into linguistic forms and transcribing them into writing"; for "ideation," finding ideas and organizing them; and for "self-regulation," which involves knowing how to make the most of "the cognitive, emotional, and behavioral aspects of writing" (101). They report associations between self-efficacy, especially for self-regulation, and mastery goals (102). Self-efficacy, particularly for conventions, has been found to be "among the strongest predictors of writing performance" (102).

The authors predicted several “paths” that would illuminate the ways in which achievement goals and self-efficacy linked malleability beliefs and performance. They argue that their study contributes new knowledge by providing empirical data about the role of malleability beliefs in writing (103).

The study was conducted among native Portuguese speakers in 7th and 8th grades in a “public cluster of schools in Porto” that is representative of the national population (104). Students received writing instruction only in their Portuguese language courses, in which teachers were encouraged to use “a process-oriented approach” to teach a range of genres but were not given extensive pedagogical support or the resources to provide a great deal of “individualized feedback” (105).

The study reported in this article was part of a larger study; for the relevant activities, students first completed scales to measure their beliefs about writing-skill malleability and to assess their achievement goals. They were then given one of two prompts for "an opinion essay" on whether students should have daily homework or extracurricular activities (106). After the prompts were provided, students filled out a sixteen-item measure of self-efficacy for conventions, ideation, and self-regulation. A three-minute opportunity to brainstorm about their responses to the prompts followed; students then wrote a five-minute "essay," which was assessed as a measure of performance by graduate research assistants who had been trained to use a "holistic rating rubric." Student essays were typed and mechanical errors corrected. The authors contend that the use of such five-minute tasks has been shown to be valid (107).

The researchers predicted that they would see correlations between malleability beliefs and performance; they expected to see beliefs affect goals, which would affect self-efficacy, and lead to differences in performance (115). They found these associations for mastery goals. Students who saw writing as an incremental, improvable skill displayed "a greater orientation toward mastery goals" (115). The authors state that this result for writing had not been previously demonstrated. Their research reveals that "mastery goals contributed to students' confidence" and therefore to self-efficacy, perhaps because students with this belief "actively strive" for success (115).

They note, however, that prior research correlated these results with self-efficacy for conventions, whereas their study showed that self-efficacy for self-regulation, students’ belief that “they can take control of their own writing,” was the more important contributor to performance (116); in fact, it was “the only variable directly influencing writing performance” (116). Limpo and Alves hypothesize that conventions appeared less central in their study because the essays had been typed and corrected, so that errors had less effect on performance scores (116).

Data on the relationship between malleability beliefs and performance-approach or performance-avoidance goals, the goals associated with success in relation to others, were “less clear-cut” (117). Students who saw skills as fixed tended toward performance-avoidance, but neither type of performance goal affected self-efficacy.

Limpo and Alves recount an unexpected finding that the choice of performance-avoidance goals did not affect performance scores on the essays (117). The authors hypothesize that the low-stakes nature of the task and its simplicity did not elicit “the self-protective responses” that often hinder writers who tend toward these avoidance goals (117). These unclear results lead Limpo and Alves to withhold judgment about the relationship among these two kinds of goals, self-efficacy, and performance, positing that other factors not captured in the study might be involved (117-18).

They recommend more extensive research with more complex writing tasks and environments, including longitudinal studies and consideration of such factors as “past performance” and gender (118). They encourage instructors to foster a view of writing as an incremental skill and to emphasize self-regulation strategies. They recommend “The Self-Regulated Strategy Development model” as “one of the most effective instructional models for teaching writing” (119).



Bastian, Heather. Affect and “Bringing the Funk” to First-Year Writing. CCC, Sept. 2017. Posted 10/05/2017.

Bastian, Heather. “Student Affective Responses to ‘Bringing the Funk’ in the First-Year Writing Classroom.” College Composition and Communication 69.1 (2017): 6-34. Print.

Heather Bastian reports a study of students’ affective responses to innovative assignments in a first-year writing classroom. Building on Adam Banks’s 2015 CCCC Chair’s Address, Bastian explores the challenges instructors may face when doing what Banks called “bring[ing] the funk” (qtd. in Bastian 6) by asking students to work in genres that do not conform to “academic convention” (7).

According to Bastian, the impetus for designing such units and assignments includes the need to “prepare students for uncertain futures within an increasingly technological world” (8). Bastian cites scholarship noting teachers’ inability to forecast exactly what will be demanded of students as they move into professions; this uncertainty, in this view, means that the idea of what constitutes writing must be expanded and students should develop the rhetorical flexibility to adapt to the new genres they may encounter (8).

Moreover, Bastian argues, citing Mary Jo Reiff and Anis Bawarshi, that students' dependence on familiar academic formulas means that their responses to rhetorical situations can become automatic and unthinking, with the result that they do not question the potential effects of their choices or explore other possible solutions to rhetorical problems. This automatic response limits "their meaning-making possibilities to what academic convention allows and privileges" (8-9).

Bastian contends that students not only fall back on traditional academic genres but also develop “deep attachments” to the forms they find familiar (9). The field, she states, has little data on what these attachments are like or how they guide students’ rhetorical decisions (9, 25).

She sees these attachments as a manifestation of “affect”; she cites Susan McLeod’s definition of affect as “noncognitive phenomena, including emotions but also attitudes, beliefs, moods, motivations, and intuitions” (9). Bastian cites further scholarship that indicates a strong connection between affect and writing as well as emotional states and learning (9-10). In her view, affect is particularly important when teachers design innovative classroom experiences because students’ affective response to such efforts can vary greatly; prior research suggests that as many as half the students in a given situation will resist moving beyond the expected curriculum (10).

Bastian enlisted ten of twenty-two students in a first-year-writing class at a large, public midwestern university in fall 2009 (11). She used “multiple qualitative research methods” to investigate these first-semester students’ reactions to the third unit in a four-unit curriculum intended to meet the program’s goals of “promot[ing] rhetorical flexibility and awareness”; the section under study explored genre from different perspectives (11). The unit introduced “the concept of genre critique,” as defined by the course textbook, Amy J. Devitt et al.’s Scenes of Writing: “questioning and evaluating to determine the strengths and shortcomings of a genre as well as its ideological import” (12).

Bastian designed the unit to “disrupt” students’ expectation of a writing class on the reading level, in that she presented her prompt as a set of “game rules,” and also on the “composing” level, as the unit did not specify what genre the students were to critique nor the form in which they were to do so (12). Students examined a range of genres and genre critiques, “including posters, songs, blogs, . . . artwork, poems, . . . comics, speeches, creative nonfiction. . . .” (13). The class developed a list of the possible forms their critiques might take.

Bastian acted as observer, recording evidence of "the students' lived experiences" as they negotiated the unit. She attended all class sessions, making notes of "physical reactions" and "verbal reactions" (13). Further data consisted of one-hour individual interviews and a set of twenty-five questions. For this study, she concentrated on questions that asked about students' levels of comfort with various stages of the unit (13).

Like other researchers, Bastian found that students asked to create innovative projects began with “confusion”; her students also displayed “distrust” (14) in that they were not certain that the assignment actually allowed them to choose their genres (19). All students considered “the essay” the typical genre for writing classes; some found the familiar conventions a source of confidence and comfort, while for others the sense of routine was “boring” (student, qtd. in Bastian 15).

Bastian found that the degree to which students expressed "an aversion" to the constraints of "academic convention" affected their responses, particularly the kinds of genres they chose for their critiques and their levels of comfort with the unusual assignment.

Those who said that they wanted more freedom in classroom writing chose what the students as a whole considered “atypical” genres for their critiques, such as recipes, advertisements, or magazine covers (16-17). Students who felt safer within the conventions preferred more “typical” choices such as PowerPoint presentations and business letters (16, 22). The students who picked atypical genres claimed that they appreciated the opportunity to experience “a lot more chance to express yourself” (student, qtd. in Bastian 22), and possibly discover “hidden talents” (22).

The author found, however, that even students who wanted more freedom did not begin the unit with high levels of comfort. She found that the unusual way the assignment was presented, the “concept of critique,” and the idea that they could pick their own genres concerned even the more adventurous students (18). In Bastian’s view, the “power of academic convention” produced a forceful emotional attachment: students “distrusted the idea that both textual innovation and academic convention is both valid and viable in the classroom” (20).

Extensive exposure to critiques and peer interaction reduced discomfort for all students by the end of the unit (19), but those who felt least safe outside the typical classroom experience reported less comfort (23). One student expressed a need to feel safe, yet, after seeing his classmates’ work, chose an atypical response, encouraging Bastian to suggest that with the right support, “students can be persuaded to take risks” (23).

Bastian draws on research suggesting that what Barry Kroll calls "intelligent confusion" (qtd. in Bastian 26) and "cognitive disequilibrium" can lead to learning if supported by appropriate activities (26). The students reported gains in a number of rhetorical dimensions and specifically cited the value of having to do something that made them uncomfortable (25). Bastian argues that writing teachers should not be surprised to encounter such resistance, and can prepare for it with four steps: "openly acknowledge and discuss" the discomfort students might feel; model innovation; design activities that translate confusion into learning; and allow choice (27-28). She urges more empirical research on the nature of students' affective responses to writing instruction (29).

 



Bailey & Bizzaro. Research in Creative Writing. August RTE. Posted 08/25/2017.

Bailey, Christine, and Patrick Bizzaro. “Research in Creative Writing: Theory into Practice.” Research in the Teaching of English 52.1 (2017): 77-97. Print.

Christine Bailey and Patrick Bizzaro discuss the disciplinarity of creative writing and its place in relation to the discipline of composition. They work to establish an aesthetic means of interpreting and representing data about creative writing in the belief that, in order to emerge as a discipline in its own right, creative writing must arrive at a set of shared values and understandings as to how research is conducted.

Bailey and Bizzaro’s concerns derive from their belief that creative writing must either establish itself as a discipline or it will be incorporated into composition studies (81). They contend that creative writing studies, like other emerging disciplines, must account for, in the words of Timothy J. San Pedro, “hierarchies of power” within institutions (qtd. in Bailey and Bizzaro 78) such that extant disciplines control or oppress less powerful disciplines, much as “teaching practices and the texts used in schools” oppress marginal student groups (78). A decision to use the methodologies of the “dominant knowledges” thus accedes to “imperial legacies” (San Pedro, qtd. in Bailey and Bizzaro 78).

Bailey and Bizzaro report that discussion of creative writing by compositionists such as Douglas Hesse and Wendy Bishop has tended to address how creative writing can be appropriately positioned as part of composition (79). Drawing on Bishop, the authors ascribe anxiety within some English departments over the role of creative writing to “genre-fear,” that is, “the belief that two disciplines cannot simultaneously occupy the same genre” (79).

They recount Bishop’s attempt to resolve the tension between creative writing studies and composition by including both under what she called a de facto “ready-made synthesis” that she characterized as the “study of writers writing” (qtd. in Bailey and Bizzaro 80). In the authors’ view, this attempt fails because the two fields differ substantially: “what one values as the basis for making knowledge differs from what the other values” (80).

The authors see creative writing studies itself as partially responsible for the difficulties the field has faced in establishing itself as a discipline (79, 80-81). They draw on Stephen Toulmin’s approach to disciplinarity: “a discipline exists ‘where men’s [sic] shared commitment to a sufficiently agreed set of ideals leads to the development of an isolable and self-defining repertory of procedures” (qtd. In Bailey and Bizzaro 80). The authors elaborate to contend that in a discipline, practitioners develop shared views as to what counts as knowledge and similarly shared views about the most appropriate means of gathering and reporting that knowledge (80).

Creative writing studies, they contend, has not yet acted on these criteria (81). Rather, they state, creative writers seem to eschew empirical research in favor of “craft interviews” consisting of “writers’ self-reports”; meanwhile, compositionists have undertaken to fill the gap by applying research methodologies appropriate to composition but not to creative writing (81). The authors’ purpose, in this article, is to model a research methodology that they consider more in keeping with the effort to define and apply the specific values accruing to creative writing.

The methodology they advance involves gathering, interpreting, and representing aesthetic works via an aesthetic form, in this case, the novel. Students in nine sections of first-year-writing classes in spring and fall 2013 responded to a “creative-narrative” prompt: “How did you come to this place in your life? Tell me your story” (84). Students were asked to respond with “a creative piece such as a poem, screenplay, or graphic novel” (84). All students were invited to participate with the understanding that their work would be confidential and might be represented in published research that might take on an alternative form such as a novel; the work of students who signed consent forms was duplicated and analyzed (84-85).

Data ultimately consisted of 57 artifacts, 55 of which were poems (85). Coding drew on the work of scholars like K. M. Powell, Elspeth Probyn, and Roz Ivanič to examine students’ constructions of self through the creative-narrative process, and on that of James E. Seitz to consider how students’ use of metaphor created meaning (85, 86). Further coding was based on Kara P. Alexander’s 2011 study of literacy narratives (86).

This analysis was combined with the results of a demographic survey to generate six groups revolving around “[c]ommon threads” in the data (86); “personas” revealed through the coded characteristics divided students into those who, for example, “had a solid identity in religion”; “were spiritually lost”; were “uncertain of identity [and] desiring change”; were “reclusive” with “strong family ties”; were interested in themes of “redemption or reformation”; or “had lived in multiple cultures” (86). This list, the authors state, corresponds to “a standard analysis” that they contrast with their alternative creative presentation (86).

In their methodology, Bailey and Bizzaro translate the “composites” identified by the descriptors into six characters for a young-adult novel Bailey developed (88). Drawing on specific poems by students who fell into each composite as well as on shared traits that emerged from analysis of identity markers and imagery in the poems, the authors strove to balance the identities revealed through the composites with the individuality of the different students. They explore how the characters of “Liz” and “Emmy” are derived from the “data” provided by the poems (89-90), and offer an excerpt of the resulting novel (90-92).

They present examples of other scholars who have “used aesthetic expressions in the development of research methods” (88). Such methods include ethnography, a form of research that the authors consider “ultimately a means of interpretive writing” (93). Thus, in their view, creating a novel from the data presented in poems is a process of interpreting those data, and the novel is similar to the kind of “storytell[ing]” (93) in which ethnography gathers data, then uses it to represent, interpret, and preserve individuals and their larger cultures (92-93).

They continue to contend that embracing research methods that value aesthetic response is essential if creative writing is to establish itself as a discipline (93). These methodologies, they argue, can encourage teachers both to value aesthetic elements of student work and to use their own aesthetic responses to enhance teaching, particularly as these methods of gathering and representing data result in "aesthetic objects" that are "evocative, engage readers' imaginations, and resonate with the world we share not only with our students but also with our colleagues in creative writing" (94). They argue that "when the 'literariness' of data reports [becomes] a consideration in the presentation of research," composition and creative writing will have achieved "an equitable relationship in writing studies" (95).