College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even in smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Citing scholars like Peggy O’Neill (2) and Richard Haswell (3), they posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in social studies to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved, to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing, from the precise prompt for the essay, the rubric used by both the Research and Regular teams (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers, creating a final sample of 291 essays (7-8).
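The article does not name the formula behind the “sampling calculator,” but the widely used Cochran estimate with a finite-population correction (assuming 95% confidence and a ±5% margin of error) reproduces the 290-essay figure. A minimal Python sketch, offered as an illustration rather than as the study’s actual tool:

    import math

    def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
        # Cochran's formula with a finite-population correction; the confidence
        # level and margin of error are assumptions, not reported by the study.
        n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(required_sample_size(1174))  # -> 290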

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding,” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members’ scores constituted the final score for each paper (9).
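In outline, the slicing-and-averaging logic works as in the sketch below (the function names and the middle-paragraph selection rule are illustrative assumptions, not the study’s code):

    def thin_slice(paragraphs):
        # Take the beginning, middle, and end of an essay; the study drew the
        # middle paragraph from the middle page or pages, depending on length.
        return [paragraphs[0], paragraphs[len(paragraphs) // 2], paragraphs[-1]]

    def final_score(rater_scores):
        # A paper's final score is the mean of the five raters' slice scores.
        return sum(rater_scores) / len(rater_scores)

    essay = ["intro paragraph", "p2", "p3", "p4", "closing paragraph"]
    print(thin_slice(essay))             # -> ['intro paragraph', 'p3', 'closing paragraph']
    print(final_score([3, 4, 3, 3, 4]))  # -> 3.4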

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring that the Regular group required, while the average individual scoring time for Research team members was less than half that of the Regular members (13).
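A minimal sketch of how such a comparison might be computed, using made-up paired scores (the article does not publish its data set, and the statistics shown here are stand-ins for whatever measures the authors actually ran):

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical final scores for the same ten essays from each team
    regular = np.array([2, 3, 4, 3, 2, 4, 3, 1, 4, 3])
    research = np.array([2, 4, 4, 3, 3, 4, 2, 2, 4, 3])

    r, p = pearsonr(regular, research)        # correlation between the two teams
    agreement = np.mean(regular == research)  # crude exact-agreement rate
    print(f"r = {r:.2f}, p = {p:.3f}, exact agreement = {agreement:.0%}")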

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Witte, Alison. CMSs as Genres. C&C, Sept. 2018. Posted 11/20/2018.

Witte, Alison. “‘Why Won’t Moodle. . . ?’: Using Genre Studies to Understand Students’ Approaches to Interacting with User Interfaces.” Computers and Composition 49 (2018): 48-60. Web. 9 Nov. 2018.

Alison Witte addresses the difficulties her first-year students faced when they encountered the Course Management System (CMS) in use at her institution. She surveyed students in first-year courses over six semesters to discover the factors that may have caused these problems (50). Witte found that examining the CMS interface as a genre provided insights into how students interacted with the program.

The author notes that the use of a CMS has “become a normalized part of many educational institutions’ landscapes” (48). The program’s power to shape interactions between students, instructors, and the institution, she writes, can generate “tensions” (48). She describes Moodle, the CMS in place for her university, comparing its “static” features with the more interactive and responsive features of social media sites; she notes in particular the “teacher-driven design” that permits the instructor to determine what sections to create and to provide the content (49). Witte quotes a faculty mentor who supports the university’s commitment to Moodle because the students are familiar with it from high school and “like it,” even though, according to Witte, there is only “anecdotal” evidence behind this claim (49).

In Witte’s view, if students are indeed comfortable in electronic environments, they should not exhibit the level of difficulty she observes (49). Her survey investigates which kinds of interfaces students have experienced and how these experiences might influence their reactions to Moodle (50).

Drawing on genre theory, Witte proposes, highlights the ways an interface cues users, controlling what behaviors and actions are acceptable and requiring users to determine the “appropriate response” in the rhetorical situation established by the interface (52). Citing Carolyn Miller, Witte considers genre “a way of understanding how a text responds to a particular recurring situation” (50). Just as Microsoft Word’s presentation of a blank page cues an essaylike response rather than a social-media post, the CMS signals certain kinds of “typified” actions (51).

Arguing that writing studies has not explored electronic interfaces through this theoretical lens, Witte contends that interfaces have generally been seen as tools to produce other things rather than as “text[s] with both expectations and formal conventions” of their own (50). Instructors, she proposes, are like other users of electronic environments in that their use of these familiar programs becomes “unconscious or invisible” because they are so accustomed to the process (51). Her study foregrounds the need for teachers to be more alert to the ways that their use of a CMS acts as a genre students must interpret and positions them in certain ways in the classroom environment (50). Teachers’ understanding of this interaction, she maintains, can help students use a CMS more effectively.

Witte notes two common models of CMS use. In many cases, the system attempts to “replicate” a classroom environment, allowing students to complete familiar academic tasks such as taking quizzes and completing assignments. A second model treats the CMS as a “repository” where students go to procure whatever they need for the class. These models share a “top-down” quality in that the teacher decides on the categories and sections and provides the material (52-53). The models limit students to responding in ways determined by the instructor and indicated by the conventions incorporated into the interface (53).

For Witte, a “guiding assumption” in the study was “that people learn unfamiliar genres by determining how they are like and unlike genres they know and by observing how the unfamiliar genre is used in context” (50). Hence, her survey asks the 68 participating students which interfaces they normally interact with (54). It also asks multiple-choice and open-ended questions about students’ experiences with Moodle, including ease of use and kinds of use across classes. Finally, students were asked what they liked about the CMS and what improvements they might suggest (54).

The majority of the participants were in their first college semesters. Witte proposes that while these students might be among the most likely to report problems with the CMS, surveying this particular population yielded good information on how best to help students navigate their early exposure to such platforms (54).

Data revealed that students used a variety of social media, Word tools for producing documents, and “Miscellaneous Web-based Interfaces” like iTunes, eBay, or YouTube (54). They most commonly relied on the CMS to “complete course work and to find the information necessary” to do so (55). All of the students used Moodle in some of their classes. Grounded-theory coding of the open-ended responses produced four categories of “likes” that focused on availability of materials and information and ease of completing tasks. Students’ suggestions for improvement addressed usability issues, “Mobile Device Compatibility,” and inconsistency in the ways teachers used the CMS (54).

Analysis of her data suggests to Witte that students receive conflicting genre cues about the function of the CMS, sometimes assuming it is more like social media sites than it is in practice and in fact asking for more interactivity with their mobile devices and other media choices (56). They may see certain cues as inviting informal, interactive responses while others require a more “school/professional response” in which they become “passive consumer[s] of information” (56). In Witte’s view, instructors do not always articulate clearly exactly what role the CMS should play in their individual courses; moreover, students may approach the CMS with a different idea about its purposes than the instructor intends (57).

Seeing a CMS as a genre, Witte contends, helps instructors think about their use of the program in terms of audience, redirecting the focus from “its technological affordances to what it does or how it is used in particular context for particular people” (57). She urges instructors to plan CMS structure in accordance with course design, for example, arranging a course built around weekly schedules by week and a course meant to provide materials without regard to due dates by topic. The survey reveals that students may need specific direction about the type of response indicated by CMS features, like text boxes or discussion forums (57). Instructors are urged to clarify their own purposes and expectations for how students use the resource and to communicate these explicitly (57-58).

Witte also argues that casting a CMS as a genre provides an opportunity to introduce students to genre theory and to understand through a concrete example how audience and purpose relate to the conventions of a particular form. In this view, students can explore how to use their exposure to other genres to situate new genres like a CMS in their contexts when they encounter them (58); they may then carry the experience of navigating a CMS into their interactions with other texts they may be called on to respond to or produce.



Corrigan, Paul. “Conclusion to Literature.” TETYC, Sept. 2018. Posted 11/06/2018.

Corrigan, Paul T. “Conclusion to Literature.” Teaching English in the Two-Year College 46.1 (2018): 30-48. Print.

Paul T. Corrigan argues for a reassessment of the value and purpose of the “Introduction to Literature” course that is part of the general-education curriculum at many higher-learning institutions.

Corrigan expresses concern that the understanding of many humanities scholars and teachers that reading “literature” is an important life activity is not widely shared by the public (30). Corrigan locates twenty-four “apologias” for literature published since 2000 that argue that such texts “may help us change or understand or give meaning or perspective to our lives” (30), but notes that only people already convinced of the value of literature will read these books (31). His study of “nineteen current anthologies and eighty-two available syllabi” for the introductory college course indicates to him that students taking the course are not widely encouraged to appreciate literature as an activity that will bring meaning into their lives (31, 37).

In Corrigan’s view, students taking the college course have already been introduced to literature, and in fact have been widely exposed to such reading, throughout their elementary and high-school experiences (37). Because “Introduction to Literature” is actually the last literature course the majority of students will take, Corrigan argues that the standard course is a “conclusion” to literature rather than a beginning (37).

Introduction to Literature, he maintains, is both among “the most commonly taught” and “most commonly taken” college courses across institutions (32). For Corrigan, that so many students take this course makes it a powerful platform for helping students see the value of literature; students who will then leave college with a positive impression of literature will far outnumber those who go on from the course to become majors and can influence public perception of humanistic learning throughout their lives (32).

To make the introductory course fulfill this purpose, Corrigan proposes shifting the focus from a preponderant review of the “means” of reading literature, such as formal elements of analysis and criticism, to attention to the “ends” of such reading (34), that is, the “why” of reading, or in the words of M. Elizabeth Sargent, “For what?” Teachers of literature, Sargent contends, should have “at least one thoughtful, evolving committed answer to this question” (qtd. in Corrigan 33).

Corrigan acknowledges that his sample permits only an “indirect peek” into the presentation of the ends of literary instruction, but characterizes his findings as “highly suggestive and instructive” (34). His analysis of the anthologies and syllabi categorizes the sample using four terms.

Materials in which attention to the ends/why issue does not appear at all fall under the classification “absent.” He gives as an example an anthology that responds to the question “Who needs it [poetry]?” with the comment that the “study of poetry” is the collection’s aim (qtd. in Corrigan 34-35; emendation in Corrigan; emphasis original). A syllabus in this category suggests that “‘an appreciation of literature’ may benefit ‘civilization’” and states that what a student will take from the class is “up to you” (qtd. in Corrigan 35). Twenty-one percent of the anthologies and 51% of the syllabi fell into this group (34).

Materials containing “nascent” references to the reason for reading literature made up 47% of the anthologies and 37% of the syllabi. These materials included short discussions or mentions of the value of literature, such as “a few paragraphs” in introductory sections or specific but short statements in course goals (35).

Corrigan placed materials in which “the question of why literature matters [is] one significant topic among others, although not a pervasive or central concern” in his category of “present” (35). Twenty-six percent (5 of the 19) of the anthologies met this criterion, and 10% (8 of 82) of the syllabi did so (35). Corrigan gives examples of how these teaching artifacts explicitly invited students to connect their reading experience to their lives (35-36).

Only a single anthology and two syllabi fell into the final category, “emphasized” (36). Corrigan delineates how Literature for Life, by X. J. Kennedy, Dana Gioia, and Nina Revoyr, “foreground[s]” the purpose of reading literature as a principal focus of the text (36). A syllabus from Western Michigan University builds connections to students’ lives into its course theme of “literary representations of food” with specific assignments asking students to address the topic in their own experiences (36).

In Corrigan’s view, recognizing that a college Introduction to Literature is more likely to be the “last time [most students] will spend any serious time thinking about literature” warrants recasting the course as “Conclusion to Literature” (37). He argues that the technical disciplinary processes of literary study can still be incorporated but should be used to enhance students’ ability to relate to and connect with the texts they read (40); he maintains that using the course to develop students’ ability to value literature will equip them with more incentive to read and value it in the future “than any amount of knowledge could provide” (38).

Quoting Karen Manarin et al., Corrigan agrees that “merely telling” students how literature matters is insufficient; he calls for pedagogy actively designed to draw out applications to students’ lives. His overview of his own course includes examples of assignments, paper prompts, and activities such as visiting nature centers in conjunction with reading nature poems (39). Writing that teachers may take for granted the importance of the “ends” of literature, he argues that re-seeing the introductory course as a conclusion “attends to, rather than assumes, those ends” (38).


Sills, Ellery. Creating “Outcomes 3.0.” CCC, Sept. 2018. Posted 10/24/2018.

Sills, Ellery. “Making Composing Policy Audible: A Genealogy of the WPA Outcomes Statement 3.0.” College Composition and Communication 70.1 (2018): 57-81. Print.

Ellery Sills provides a “genealogy” of the deliberations involved in the development of “Outcomes 3.0,” the third revision of the Council of Writing Program Administrators’ Outcomes Statement for First-Year Composition (58). His starting point is “Revising FYC Outcomes for a Multimodal, Digitally Composed World,” a 2014 article by six of the ten composition faculty who served on the task force to develop Outcomes (OS) 3.0 (57).

Sills considers the 2014 article a “perfectly respectable history” of the document (58), but argues that such histories do not capture the “multivocality” of any policymaking process (59). He draws on Chris Gallagher to contend that official documents like the three Outcomes Statements present a finished product that erases debates and disagreements that go into policy recommendations (59). Sills cites Michel Foucault’s view that, in contrast, a genealogy replaces “the monotonous finality” (qtd. in Sills 59) of a history by “excavat[ing] the ambiguities” that characterized the deliberative process (59).

For Sills, Outcomes 3.0 shares with previous versions of the Outcomes Statement the risk that it will be seen as “hegemonic” and that its status as an official document will constrain teachers and programs from using it to experiment and innovate (75-76). He argues that sharing the various contentions that arose as the document was developed can enhance its ability to function as, in the words of Susan Leigh Star, a document of “cooperation without consensus” (qtd. in Sills 73) that does not preclude interpretations that may not align with a perceived status quo (76). Rather, in Sills’s view, revealing the different voices involved in its production permits Outcomes 3.0 to be understood as a “boundary object,” that is, an object that is

strictly defined within a particular community of practice, but loosely defined across different communities of practice. . . . [and that] allows certain terms and concepts . . . to encompass many different things. (74)

He believes that “[k]eeping policy deliberations audible” (76) will encourage instructors and programs to interpret the document’s positions flexibly as they come to see how many different approaches were brought to bear in generating the final text.

Sills invited all ten task force members to participate in “discourse-based” interviews. Five agreed: Dylan Dryer, Susanmarie Harrington, Bump Halbritter, Beth Brunk-Chavez, and Kathleen Blake Yancey (60-61). Discussion focused on deliberations around the terms “composing, technology, and genre” (61; emphasis original).

Sills’s discussion of the deliberations around “composing” focuses on the shift from “writing” as a key term to a less restrictive term that could encompass many different ways in which people communicate today (61). Sills indicates that the original Outcomes Statement (1.0) of 2000 made digital practices a “residual category” in comparison to traditional print-based works, while the 3.0 task force worked toward a document that endorsed both print and multimodal practices without privileging either (63).

Ideally, in the interviewees’ views, curricula in keeping with Outcomes 3.0 recognizes composing’s “complexity,” regardless of the technologies involved (65). At the same time, in Sills’s analysis, the multiplicity of practices incorporated under composing found common ground in the view, in Dryer’s words, that “we teach writing, we’re bunch of writers” (qtd. in Sills 65).

Sills states that the “ambiguity” of terms like “composing” served not only to open the door to many forms of communicative practice but also to respond to the “kairotic” demands of a document like Outcomes 3.0. Interviewees worried that naming specific composing practices would result in guidelines that quickly fell out of date as composing options evolved (64).

According to Sills, interviews about the deliberations over genre revealed more varied attitudes than those about composing (66). In general, the responses Sills records suggest a movement away from seeing genre as fixed “static form[s]” (67) calling for a particular format toward recognizing genres as fluid, flexible, and responsive to rhetorical situations. Sills quotes Dryer’s claim that the new document depicts “students and readers and writers” as “much more agentive”; “genres change and . . . readers and writers participate in that change” (qtd. in Sills 67). Halbritter emphasizes a shift from “knowledge about” forms to a process of “experiential learning” as central to the new statement’s approach (68). For Harrington, the presentation of genre in the new document reflects attention to “habits of mind” such as rhetorical awareness and “taking responsibility for making choices” (qtd. in Sills 69).

Brunk-Chavez’s interview addresses the degree to which, in the earlier statements, technology was handled as a distinct element when genre was still equated primarily with textual forms. In the new document, whatever technology is being used is seen as integral to the genre being produced (69). Moreover, she notes that OS 3.0’s handling of genre opens it to types of writing done across disciplines (70).

She joins Yancey, however, in noting the need for the document to reflect “the consensus of the field” (72). While there was some question as to whether genre as a literary or rhetorical term should even be included in the original OS, Yancey argues that the term’s “time has come” (71). Yet the interviews capture a sense that not every practitioner in composition shares a common understanding of the term and that the document should still be applicable, for example, to instructors for whom “genre” still equates with modes (71).

In addressing this variation in the term’s function in practice, Sills notes Yancey’s desire for OS 3.0 to be a “bridging document” that does not “move too far ahead of where the discipline is,” linking scholarly exploration of genre with the many ways practitioners understand and use the term (72).

Sills considers challenges that the OS 3.0 must address if it is to serve the diverse and evolving needs of the field. Responding to concerns of scholars like Jeff Rice that the document imposes an ultimately conservative “ideology of generality” that amounts to a “rejection of the unusual” (qtd. in Sills 75), Sills acknowledges that the authority of the statement may prevent “subordinate communities of practice” like contingent faculty from “messing around with” its recommendations. But he contends that the task force’s determination to produce flexible guidelines and to foster ongoing revision can encourage “healthy resistance” to possible hegemony (76).

He further recommends specific efforts to expand participation, such as creating a Special Interest Group or a “standing institutional body” like an Outcomes Collective with rotating membership from which future task forces can be recruited on a regular timetable. Such ongoing input, he contends, can both invite diversity as teachers join the conversation more widely and assure the kairotic validity of future statements in the changing field (77-78).


Leave a comment

Gindlesparger, Kathryn Johnson. Ethical Representation in the “Study-Abroad Blog.” CE, Sept. 2018. Posted 10/15/2018.

Gindlesparger, Kathryn Johnson. “‘Share Your Awesome Time with Others’: Interrogating Privilege and Identification in the Study-Abroad Blog.” College English 81.1 (2018): 7-26. Print.

Kathryn Johnson Gindlesparger analyzes the ethical dimensions of “study-abroad blogs” that students produce to document their trips. In Gindlesparger’s view, such blogs as currently constructed by study-abroad planning agencies like International Student Exchange Programs (ISEP) enable problematic representations and identifications. She argues for a more thoughtful, ethically aware approach to such responses to study-abroad experiences.

Gindlesparger’s analysis focuses on three of thirteen first- and second-year students enrolled in her 2012 “Contemporary Europe” class; the class addressed “tensions that may go unnoticed” if courses are “less inclusive of internationally traumatic subject matter” (8). Students recorded their experiences during a three-week trip that included two Holocaust sites and one “youth center for Bosnian refugees in Berlin” (8). The three students gave permission for their materials to be included in the study and participated in reflective interviews five years later (9).

The study-abroad industry, Gindlesparger writes, is experiencing an “explosion,” with shorter trips now the more common format (9). She reports that institutions find the trips to be revenue-generating vehicles; she sees the student blogs not only as ways to share experiences with home audiences but also as marketing tools (9).

Gindlesparger’s first object of analysis is an ISEP “advice column,” “How to Write a Study Abroad Blog: 5 Tips for Success” (11). She contends that the genre as constructed by this document and others like it, including her own assignment sheet, positions students to respond to exposure to others’ trauma in troubling ways.

The five tips reported by Gindlesparger are “Write,” “Reflect on your experience,” “Share photos,” “Keep it short,” and “Be honest” (12). Essential to the tip advice, she states, is the emphasis on “positive experience” that can be depicted as “action”: the advice sheet instructs students to “keep your content to what is most exciting and noteworthy” (qtd. in Gindlesparger 12). Examples in the sheet, in Gindlesparger’s view, suggest that for U. S. students, a study-abroad experience allows them to act as “conquerors of a passive world” that is their “playground” and to consider their trip as “a vacation-oriented experience” (12).

This configuration of the rhetorical situation inherent in a study-abroad trip, Gindlesparger writes, turns the experience into a means by which the students focus on their own “personal growth and development” (Talya Zemach-Bersin, qtd. in Gindlesparger 10). In this view, growth that results from encountering less affluent cultures or sites of trauma can translate into the accumulation of “cultural capital” (9), such that students may “use the misfortune of others to explore their own privilege” (8).

Gindlesparger finds that directing students to make connections between what they encounter and their own experiences contributes to problematic representation and appropriation of cultures and historical trauma. In particular, she argues, the exhortation to relate personally to what study-abroad students observe creates problems because questions about “what surprised you or what you have learned” are “arhetorical tools that can be applied to any situation” (13). The blog tips, as well as the perceived need to allow students freedom to choose their own subjects, make no rhetorical or ethical distinction between visits to a concentration camp and a beach day (14).

The blog entries and later interviews of Gindlesparger’s three study subjects explore the genre demands of the blogs. In Gindlesparger’s analysis, “Eric” responded to a meeting with a Holocaust survivor by “positioning her life experience as entertainment for Eric’s gain” (15) as he casts her history as a “tragic masterpiece” and a vivid “painting” for his consumption (qtd. in Gindlesparger 15). Eric has difficulty moving beyond his earlier school readings on the Holocaust as he tries to relate to an individual whose experiences may not have been captured in those readings (16). In his interview, Eric notes his earlier urge to handle the experience by “tying a bow on it” (qtd. in Gindlesparger 16).

According to Gindlesparger, “Emily” “overidentifi[es]” with Nazis assembled in a Nuremberg stadium used for rallies when she imagines that she can put herself in the Nazis’ shoes and assigns her own values to their response to Hitler (17), contending that they might have felt “helpless” before Hitler’s tactics. Gindlesparger argues that the blog genre insists that the “complex intellectual task of trying to understand” Nazis must be “‘exciting,’ ‘awesome,’ or at least show how [Emily] is bettered” (17).

Gindlesparger writes that Alyssa’s response to the Mauthausen Concentration Camp is the “inciting incident” for her study (18). Alyssa’s blog entry attempts to relate the experiences of the camp victims to her own ROTC basic training (18). Getting up early and the arrangement of the camp trigger identification with the prisoners (18), to the point that “[t]he gas chamber experience was something I could somewhat relate to” (qtd. in Gindlesparger 18). In her interview, Gindlesparger recounts, Alyssa focused on the blog’s mandate to keep her report “awesome” by writing something “readable and enjoyable” (19), with the result that she was discouraged from dealing with the emotional experience of the concentration camp.

From the interviews, Gindlesparger concludes that students resist addressing discomforting experiences, choosing instead the tactic encouraged by the blog genre, “identifying from similarity” (20). This kind of identification glosses over differences that might challenge students’ complacency or comfort. Gindlesparger turns to Krista Ratcliffe’s concept of “rhetorical listening,” in which participating in what Ratcliffe calls a “genuine conversation” can allow “working through their own discomfort” to become “the students’ end goal” (20). Gindlesparger proposes Dominick LaCapra’s “empathetic unsettlement” as a way to undercut inappropriate closure and resist the temptation to see others’ horrific experiences as somehow accruing to an observer’s spiritual gain (20).

Noting that the three students were “genuine, caring sympathetic people” who did their best to respond to expectations as they understood them (19), and that two of the three found it hard to explain their blog entries (21), Gindlesparger suggests more attention to the rhetorical demands of the genre itself as part of the “predeparture preparation” (21). She also recommends calling attention to the time-intensive nature of working through unsettlement, in contrast to the genre’s demands for fast, brief responses, as well as asking for revision after contemplative work in order to allow students to reevaluate “tidy” responses (22). Similarly, exploring students’ own positionality in preparation for exposure to others’ trauma and creating opportunities for more extensive interaction with difference during the trip can enable students to “identify from difference rather than similarity” (23). Gindlesparger finds these pedagogical choices important as composition increasingly engages with audiences and experiences outside of the classroom (23).



Cox, Anicca. Full-Time Lecturers and Academic Freedom. Forum, Fall 2018. Posted 10/05/2018.

Cox, Anicca. “Collaboration and Resistance: Academic Freedom and Non-Tenured Labor.” Forum: Issues about Part-Time and Contingent Faculty 22.1 (2018): A4-A13. Web. 01 Oct. 2018.

Anicca Cox, in the Fall 2018 issue of Forum: Issues about Part-Time and Contingent Faculty, discusses a case study of her institution’s decision to replace non-tenure-track part-time lecturers (PTLs) with full-time, non-tenure-track lecturers (FTLs) on two-year contracts. She interviewed three of the ten new full-time hires and three part-time instructors who taught in the program (A6).

Noting that the percentage of FTLs in higher education is increasing, Cox reports that this change has entailed better working conditions, more access to benefits, and more job security, among other positive effects (A5, A7). She suggests that this trend may reflect institutions’ “response to the increasingly publicized problems of an outsized reliance” on contingent labor that constitutes a “seemingly altruistic move” (A5). She writes that the more stable teaching force provides institutions with more predictable costs than hiring based on shifting enrollments (A5).

Cox focuses on how the PhDs most likely to be preferred for such positions negotiate possible constraints on their academic freedom and professional identifications. The program she studied hired ten new FTLs, nine of whom were either literature PhDs or were completing doctorates, as well as a new tenure-track writing program administrator (WPA) to implement a revised first-year writing program (A6). Part-time instructors who had previously taught at the institution were not hired for the new lines.

The new WPA “designed a heavily scripted curriculum” in which all components, including textbooks, were prescribed (A6). The full-time instructors were given office space and professional development specific to the program; they were evaluated much more broadly than the part-time faculty and often included ongoing research in the evaluation dossiers they prepared (A7).

Cox’s study asked how these instructors

perceived themselves fitting into the institution and department relative to their own sense of professional identity, and how those feelings shaped and otherwise intersected with their work as instructors both inside and outside classroom. (A6)

Her study, part of a larger analysis, emphasized both the effects on professional identity of the new context and the question of how collaboration among teaching professionals was impacted by the new alignment (A7).

Interviews with FTLs revealed that they “did not feel like hired mercenaries” but did not feel fully integrated into the department (A8). A focus of their concern was the sense that they were not considered “intellectual contributors” and were enlisted to perform a “role” that did not jibe with their professional preparation (A8). One respondent expressed concern about being issued a “teacher proof” curriculum dismissive of her scholarship and expertise (A8). In comparison, the PTLs, while accustomed to being given scripted curricula, expressed concern that the new program materials were not appropriate for the actual student population they were used to teaching (A9). These teachers felt less conflicted over identity issues because they saw themselves primarily as teachers, not researchers (A9-10).

Tensions in the FTL position also affected collaboration in that the new lecturers felt constrained from “simply asserting their purported academic freedom” and, rather than challenging the program structure, began devising ways to adjust the curriculum without “getting caught” (qtd. in Cox A10-11). Collaboration, in this study, became a way of “spread[ing] the blame” so that renewal at the end of the two-year contract would be less likely to be threatened (A11). Part-time lecturers, in contrast, relied on long-standing patterns of “informal collaborations,” sometimes making “radical changes” in the prescribed teaching materials (A11), despite having lost the opportunity to share practices with many of their colleagues in the new configuration. These teachers posited that the failure to hire from within their ranks reflected a desire on the part of administrators to eliminate “the baggage they carried over from previous iterations of the first-year writing program” (A11); Cox posits that they acted to modify the curriculum despite recognizing the precarity of their situation in the new program (A11).

Cox supports the shift toward more full-time positions but notes that the particulars of the arrangement she studied drove instructors to invest energy in sustaining a coherent professional identity rather than working together to improve student outcomes (A12). She writes that the benefits of the full-time jobs were “not enough to neutralize the frustrations” engendered by the lecturers’ compromised fit within the department (A12). She recommends that should these kinds of readjustments become more common, they be constructed

in a way that recognizes and honors the laboriously forged and deeply felt professional identities of workers by supporting continued professional development and encouraging autonomy in curricular design. (A12)



Abba et al. Students’ Metaknowledge about Writing. J of Writing Res., 2018. Posted 09/28/2018.

Abba, Katherine A., Shuai (Steven) Zhang, and R. Malatesha Joshi. “Community College Writers’ Metaknowledge of Effective Writing.” Journal of Writing Research 10.1 (2018): 85-105. Web. 19 Sept. 2018.

Katherine A. Abba, Shuai (Steven) Zhang, and R. Malatesha Joshi report on a study of students’ metaknowledge about effective writing. They recruited 249 community-college students taking courses in Child Development and Teacher Education at an institution in the southwestern U.S. (89).

All students provided data for the first research question, “What is community-college students’ metaknowledge regarding effective writing?” The researchers used data only from students whose first language was English for their second and third research questions, which investigated “common patterns of metaknowledge” and whether classifying students’ responses into different groups would reveal correlations between the focus of the metaknowledge and the quality of the students’ writing. The authors state that limiting analysis to this subgroup would eliminate the confounding effect of language interference (89).

Abba et al. define metaknowledge as “awareness of one’s cognitive processes, such as prioritizing and executing tasks” (86), and review extensive research dating to the 1970s on how this concept has been articulated and developed. They state that the literature supports the conclusion that “college students’ metacognitive knowledge, particularly substantive procedures, as well as their beliefs about writing, have distinctly impacted their writing” (88).

The authors argue that their study is one of few to focus on community college students; further, it addresses the impact of metaknowledge on the quality of student writing samples via the “Coh-Metrix” analysis tool (89).

Students participating in the study were provided with writing prompts at the start of the semester during an in-class, one-hour session. In addition to completing the samples, students filled out a short biographical survey and responded to two open-ended questions:

What do effective writers do when they write?

Suppose you were the teacher of this class today and a student asked you “What is effective writing?” What would you tell that student about effective writing? (90)

Student responses were coded in terms of “idea units which are specific unique ideas within each student’s response” (90). The authors give examples of how units were recognized and selected. Abba et al. divided the data into “Procedural Knowledge,” or “the knowledge necessary to carry out the procedure or process of writing,” and “Declarative Knowledge,” or statements about “the characteristics of effective writing” (89). Within the categories, responses were coded as addressing “substantive procedures” having to do with the process itself and “production procedures,” relating to the “form of writing,” e.g., spelling and grammar (89).
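One way to picture the two-dimensional coding scheme (the category labels come from the article; the data structure and example unit are illustrative assumptions, not the authors’ instrument):

    # The study's two coding dimensions: knowledge type and procedure type
    CODES = {
        "Procedural": ["substantive", "production"],   # process of writing vs. its form
        "Declarative": ["substantive", "production"],  # traits of effective writing
    }

    # A hypothetical coded idea unit
    unit = {
        "text": "Effective writers plan before they draft.",
        "knowledge": "Procedural",
        "procedure": "substantive",
    }
    assert unit["procedure"] in CODES[unit["knowledge"]]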

Analysis for the first research question regarding general knowledge in the full cohort revealed that most responses about Procedural Knowledge addressed “substantive” rather than “production” issues (98). Students’ Procedural Knowledge focused on “Writing/Drafting,” with “Goal Setting/Planning” in second place (93, 98). Frequencies indicated that while revision was “somewhat important,” it was not as central to students’ knowledge as indicated in scholarship on the writing process such as that of John Hayes and Linda Flower and M. Scardamalia and C. Bereiter (96).

Analysis of Declarative Knowledge for the full-cohort question showed that students saw “Clarity and Focus” and “Audience” as important characteristics of effective writing (98). Grammar and Spelling, the “production” features, were more important than in Procedural Knowledge. The authors posit that students were drawing on their awareness of the importance of a polished finished product for grading (98). Overall, data for the first research question matched that of previous scholarship on students’ metaknowledge of effective writing, which shows some concern with the finished product and a possibly “insufficient” focus on revision (98).

To address the second and third questions, about “common patterns” in student knowledge and the impact of a particular focus of knowledge on writing performance, students whose first language was English were divided into three “classes” in both Procedural and Declarative Knowledge based on their responses. Classes in Procedural Knowledge were a “Writing/Drafting oriented group,” a “Purpose-oriented group,” and the largest, a “Plan and Review oriented group” (99). Responses regarding Declarative Knowledge resulted in a “Plan and Review” group, a “Time and Clarity oriented group,” and the largest, an “Audience oriented group.” One hundred twenty-three of the 146 students in the cohort belonged to this group. The authors note the importance of attention to audience in the scholarship and the assertion that this focus typifies “older, more experienced writers” (99).

The final question about the impact of metaknowledge on writing quality was addressed through the Coh-Metrix “online automated writing evaluation tool” that assessed variables such as “referential cohesion, lexical diversity, syntactic complexity and pattern density” (100). In addition, Abba et al. used a method designed by A. Bolck, M. A. Croon, and J. A. Hagenaars (“BCH”) to investigate relationships between class membership and writing features (96).

These analyses revealed “no relationship . . . between their patterns knowledge and the chosen Coh-Metrix variables commonly associated with effective writing” (100). The “BCH” analysis revealed only two significant associations among the 15 variables examined (96).

The authors propose that their findings did not align with prior research suggesting the importance of metacognitive knowledge because their methodology did not use human raters and did not factor in student beliefs about writing or questions addressing why they responded as they did. Moreover, the authors state that the open-ended questions allowed more varied responses than did responses to “pre-established inventor[ies]” (100). They maintain that their methods “controlled the measurement errors” better than often-used regression studies (100).

Abba et al. recommend more research with more varied cohorts and collection of interview data that could shed more light on students’ reasons for their responses (100-101). Such data, they indicate, will allow conclusions about how students’ beliefs about writing, such as “whether an ability can be improved,” affect the results (101). Instructors, in their view, can more explicitly address awareness of strategies and effective practices and can use discussion of metaknowledge to correct “misconceptions or misuse of metacognitive strategies” (101):

The challenge for instructors is to ascertain whether students’ metaknowledge about effective writing is accurate and support students as they transfer effective writing metaknowledge to their written work. (101)