Composition Forum 37, Fall 2017
http://compositionforum.com/issue/37/

Cultivating Change from the Ground Up: Developing a Grassroots Programmatic Assessment

Maria Conti, Rachel LaMance, and Susan Miller-Cochran

Abstract: To address the needs and interests of primary stakeholders in a writing program, this article presents a model of “grassroots” assessment that involves instructors from all ranks as well as students in the development, facilitation, and interpretation of assessment results. The authors describe two assessment plans that measured student and instructor perceptions about curricular changes, the individual results of those assessments, and the conclusions the authors came to about the importance and challenges of involving a range of stakeholders in assessment.

1. Introduction

Seventeen years ago, Joseph Harris urged Writing Program Administrators (WPAs) to involve faculty at all ranks in teaching the first-year course. We argue that not only should WPAs involve all faculty--non-tenure-track (NTT), graduate instructors, and tenure-track--in teaching, but those faculty should also be involved in developing, conducting, and interpreting assessments of those courses. Such involvement fosters “authentic assessment,” in the spirit of Daniel J. Royer and Roger Gilles, by avoiding a top-down model. And as Edward White, Norbert Elliot, and Irvin Peckham point out, when done well, this kind of assessment development can be a community-building enterprise.

Programmatic assessment in writing programs is often governed by institutional processes, accreditation requirements, and administrative interests. Assessments are frequently mandated with specific time constraints and a quick turnaround in an effort to produce data that demonstrate a program’s effectiveness. Yet these exigencies do not arise from, and often do not even align with, the needs and interests of the primary stakeholders in the assessments: instructors and students. Instead, these exigencies often produce monolithic programmatic assessment approaches that are not always responsive to local contexts.

To address the needs and interests of local stakeholders, we present an approach to curricular assessment that involves instructors from all ranks along with their students in the development, facilitation, and interpretation of assessment results. We call such an approach a “grassroots curricular assessment model,” and it presents a way to develop what Cindy Moore, Peggy O’Neill, and Brian Huot describe as “assessment cultures” that move away from the common top-down approach to assessment. Instead, a grassroots model moves toward fostering the kind of professional development that invites all teachers--including graduate teaching assistants and faculty of all ranks and tracks--to have a meaningful role in shaping the direction of a program. Ideally, such involvement also offers the kinds of opportunities Ann Penrose identified that include NTT faculty as professionals in program-wide conversations. Likewise, grassroots models include the voices of undergraduate student writers as essential partners in understanding the effectiveness and impact of curricular choices.

The approach we propose does not solely focus on achievement of learning outcomes or direct assessment of student learning, although our broader assessment plan certainly involves such measures. Rather, the two examples we describe involve measuring students’ and instructors’ perceptions of the effectiveness of curricular approaches. Both assessment examples provide snapshots of what is happening in the program, and together, multiple snapshots provide a larger picture of the effectiveness of various curricular and pedagogical approaches in a writing program.

A grassroots approach to programmatic assessment responds to two recommendations made by the Conference on College Composition and Communication (CCCC) in Writing Assessment: A Position Statement. First, the statement recommends that “[a]ssessments of written literacy should be designed and evaluated by well-informed current or future teachers of the students being assessed...” (Introduction). In other words, the teachers teaching the courses--as the people most knowledgeable about what is happening in the classroom--should help shape, implement, score, and interpret the assessment. Additionally, the statement recommends that “[e]ven when external forces require assessment, the local community must assert control of the assessment process” (Guiding Principle 1B). We agree that when outside forces mandate the development of an assessment, maintaining involvement of and control by teachers and the program itself is essential.

For a program embarking on a grassroots assessment project, it is important to have a solid foundation for the reasons and methods for the assessment and to consider reasons and methods from the perspectives of all stakeholders. We recommend thinking through the heuristic in Table 1, which we used in developing our assessment plan:

Table 1. Heuristic for UA WP Assessment

Reasons for conducting the assessment:

1. What do you want to know?

2. Who wants to know it?

3. Why?

Methods for conducting the assessment:

1. What kinds of data will you need to collect?

2. How will you collect data?

3. What data do you already have that might be useful?

2. Programmatic Context

The assessments described in this article were developed in the Writing Program (WP) at the University of Arizona (UA). The WP resides in the Department of English, which houses four graduate programs: Creative Writing; English Applied Linguistics; Literature; and Rhetoric, Composition, and the Teaching of English. The WP has a distributed administrative structure with a Director, three Associate Directors, and eight Assistant Directors, and all administrators share responsibility for directing courses and responding to administrative issues that arise. The WP staffs classes with approximately 45 NTT faculty members and 125 graduate students, and the graduate students come from each of the four graduate programs in the department as well as the interdisciplinary Second Language Acquisition and Teaching PhD program.

The WP is large in both size and scope, offering approximately 12,500 seats in writing courses each year in seven different 100-level Foundations Writing courses. Students can meet the Foundations Writing requirement at UA through several paths and combinations of courses, depending on their placement:

  • ENGL 101 and ENGL 102, usually taken in the first year

  • ENGL 107 and ENGL 108, recommended for international L2 writers

  • ENGL 101A and ENGL 102, offering an additional 1-credit-hour studio for students needing additional support during the first semester

  • ENGL 106, ENGL 107, and ENGL 108, recommended for international L2 writers needing an additional semester of support

  • ENGL 109H for students placing into honors writing

Uniformity across the many course sections in the WP has historically been achieved by using common textbooks and following a common sequence of major projects that are guided by student learning outcomes (SLOs), with instructors varying their approaches to those projects. In fall 2016, the WP began using the common SLOs as a point of reference while giving instructors more autonomy in determining their assignments and choosing textbooks to meet the SLOs. This variation in assignments and approaches, though, introduces an exigence for programmatic assessment. When sections of a course vary in the range of approaches used, it is important to understand whether students are achieving SLOs at a satisfactory rate across an entire program.

In addition to introducing more variation between sections of Foundations Writing, the WP has undergone several curricular revisions recently to incorporate more of a focus on understanding genres of writing in academic and professional contexts. The first change occurred in fall 2015, when the WP introduced a new genre-focused pilot ENGL 101 curriculum. While implementing that course, the WP also assessed the needs of L2 writers and determined that additional support was needed within the broader program curriculum for many international students. In fall 2016, the WP introduced a new course, ENGL 106, that precedes the typical two-semester writing requirement. For students who place into ENGL 106, the writing requirement now takes three semesters.

Involving all instructors in the development of assessment, rather than handing down an assessment mandate to them, is an essential move in fulfilling the criteria Penrose discussed in her 2012 WPA: Writing Program Administration article, Professional Identity in a Contingent-Labor Profession. Penrose describes three criteria that must be met for NTT faculty to perceive themselves as professionals in a programmatic and disciplinary context. They must see themselves as:

  • Expert: NTT faculty must feel that they have access to the resources needed to develop expertise, and they must also have opportunities to share their knowledge with others in the program.

  • Autonomous Agents: NTT faculty must have the opportunity to impact decision-making in a program and have options for what kinds of professional development they would like to participate in.

  • Community Members: Finally, NTT faculty should be an integral part of the knowledge-making and identity of the program, and they should be invited to participate in ways that do not create unrealistic expectations on their time, given their other responsibilities.

Penrose’s framework provides a mandate for designing an assessment project with NTT faculty instead of for them, acknowledging and drawing on the considerable teaching expertise and knowledge of the discipline and of the program that they bring to their work. Given the makeup of the teaching faculty at UA, it was essential that both NTT faculty and graduate students be involved as equal partners in that process.

As the WP began the process of designing an inclusive, dynamic approach to assessment, we found that the questions driving assessment, and the reasons for pursuing it, were different for the two curricular innovations we wanted to assess. Even though in both cases we were interested in whether students were achieving SLOs at a satisfactory rate, the specific overarching questions were different:

  • ENGL 101/101A/107: As the WP moves toward more variation between courses, how do students perceive their mastery of SLOs? We wanted to pilot a method of measuring student achievement of SLOs. In addition, we wanted to know how well students understood the SLOs themselves.

  • ENGL 106: As we introduced a new course to help L2 writers develop confidence in their language abilities in English, we wanted to measure whether the class was doing what we intended it to do. To what extent did students and instructors perceive that the course was achieving its objectives?

The design of our assessment plan was shaped by three different external forces:

  • The annual assessment requirements for programs at the university, which are completely SLO-based and which draw on the models of the Association of American Colleges and Universities (AAC&U) VALUE rubrics.

  • A five-year longitudinal study by Aimee Mapes and Amy Kimme Hea, which provides thick description of student writers’ metacognitive practices and affective relationships to writing during their time in higher education and into their first year after college.

  • A Consultant-Evaluator Service external review by the Council of Writing Program Administrators (CWPA), which took place in Susan’s second year at UA and which gave us the opportunity to understand the state of the program on several levels. This process required the WP to prepare a written self-study before the reviewers’ visit.

The self-study and Consultant-Evaluator review revealed important pressure points, so we embarked on several assessment activities as a program, including a version of Bob Broad’s Dynamic Criteria Mapping. We offered opportunities for all teachers to be involved on the committee shaping the assessment, and we also had a committee working to clarify our SLOs for teachers and students.

Through our mapping exercises (a small portion of which are included in Table 2), we began to respond to the questions identified in the introduction (Table 1):

Table 2. Dynamic Mapping Exercise

What do we want to know? What questions could we answer? | Why do we want to know this? Why would those data be interesting? | What kinds of data would answer the question? What do we have access to?

How well are students achieving SLOs across the Writing Program? | Understanding whether our approach to teaching writing is a good match for our student population and our SLOs | Portfolio assessment? (outcomes 1, 2, 4)

What are our values? How can we rearticulate assessments to determine if we are teaching toward our values or teaching toward disconnected ideals? | To help define/narrow our values and create a solid mission for FYC | Assignment sheets, lesson plans

Are students guaranteed to learn the same concepts across several sections of FYC? How consistent is our curriculum across courses? | In/consistency across various sections of the same FYC course | Course calendars, instructor-assigned readings, assignment sheets, syllabi

What do faculty value in student writing? | To understand if faculty value the same things, if their values are aligned with institutional and program values | Teacher comments on student writing, interviews, surveys

How are 106/107 instructors assessing students’ preparedness for their courses? | Consistency or inconsistency of 106/107 placements or placement change recommendations (is writing similar?) | Diagnostic writing (ENGL 106/107)

As a starting point for our assessment efforts, Maria and Rachel measured students’ and teachers’ perceptions of learning in two of the curricular spaces undergoing revision{1}. These two assessment projects were made possible through newly created Graduate Assistant Director (GAD) positions, which were first implemented in the 2016-17 academic year. In these positions, Rachel and Maria were tasked with proposing research projects that aligned with their own interests and with WP research initiatives. O’Neill, Moore, and Huot note that while it is common for graduate students to “take part in large-scale assessments by reading portfolios or submitting sample first-year composition papers to the WPA, they aren’t often asked to help design such assessments” (5). By contrast, the research work of these positions allowed Maria and Rachel, as major stakeholders in the curricula they were assessing, not only to participate in the assessment projects but also to guide their design and implementation. The GAD position also provided an opportunity for Maria and Rachel to interact on a regular basis with WPAs as they learned to integrate administrative work into their own scholarship. This interaction contributed to their professional development by fostering “expertise and experience” in the assessment work of a WPA (O’Neill, Moore, and Huot 5). As researchers, they were able to benefit not only from their own classroom experiences with the curriculum but also from camaraderie with fellow instructors. Graduate students, stakeholders who are rarely given this kind of authority, were able to make key assessment decisions that impacted the WP as a whole. This distributed decision-making promotes a culture of assessment throughout the program.

Maria’s and Rachel’s projects are narrated below in the first person, giving voice to their own subject positions as researchers and investment as stakeholders in the WP. They each give an overview of the programmatic exigence for their studies, describe the methods they employed, and provide insight gained from the assessment process.

3. Assessment Projects

3.1 Rooted in the Outcomes: Maria’s Assessment of Student Learning Outcomes and Curriculum

If decisions based on an assessment must promote teaching and learning, as current validity theory dictates, then we must be accountable to those people who are most expert about teaching and learning—students and teachers. (Huot 179)

Exigence for the Assessment

A key feature of a grassroots curricular assessment model is that it emphasizes the involvement of students and instructors throughout a writing assessment. Those closest to the assessment or most likely to be impacted by its results should have opportunities to become involved with the assessment as it takes shape within a specific, local context. According to O’Neill, Moore, and Huot, writing assessments that are “context-specific” are those that seek to honor and understand “the values, beliefs, and perceptions that characterize a particular institution, department, or program” (11). As a GAD, I created a research project that offers a model for how to incorporate the perceptions of local stakeholders, undergraduate students in particular, into writing program curricular assessment.

As my project began to take shape in fall 2016, the UA WP’s Assessment Committee began setting long-term goals for program-wide portfolio assessment of the Foundations Writing courses based on the program’s SLOs. Faculty and graduate students from all ranks serve on this committee. Yet we were still missing the voices of undergraduate students. Huot suggests that writing programs should ask themselves how involving undergraduates in assessment practices can serve the teaching and learning needs of the program: “How might student identities, experiences, and attitudes shape assessment?” (65). In order for a large-scale portfolio assessment to measure SLOs in a meaningful way, the WP first needed a better understanding of students’ perceptions of their own learning in relationship to the SLOs as well as knowledge about how well students understood them.

To address this, I designed a large-scale quantitative assessment for students at the end of the first course in the two-course Foundations Writing sequence: ENGL 101, ENGL 101A (which includes a studio component for students needing additional support), and ENGL 107 (ENGL 101 for international students). I wanted to know the extent to which students felt they had or had not reached the SLOs by the end of each of these courses. The study was guided by the following research question: “Which, if any, differences are significant between courses (English 101, 101A, and 107) regarding students’ perceptions of their own learning in relation to the SLOs?”

In the following sections, I describe how this grassroots assessment project involved the input of undergraduate students across three courses. Additionally, I provide an overview of key findings as well as a reflection on how this process could be adapted by other institutions in their local contexts.

Methods

The UA WP SLOs are modeled very closely on the latest version of the CWPA Outcomes Statement. I selected the eight outcomes most closely related to the first course in our two-course Foundations Writing sequence. I then converted the WP SLOs into can-do Likert-scale statements using student-friendly language. This shift included removing unnecessary jargon and defining key terms in parentheses. Answering the call of Susan Miller-Cochran and Rochelle Rodrigo for more usability studies in rhetoric and composition teaching and scholarship, I conducted usability tests of the survey with undergraduates from ENGL 101, 101A, and 107. Undergraduate student feedback from these tests helped me to revise the survey before distributing it to a total of 4,787 students enrolled in the three courses in the fall 2016 semester. The involvement of undergraduate students in the usability phase of the project was an integral part of creating an assessment that accounted for its stakeholders’ local perspectives.

Undergraduate Students as Partners in Usability Testing

The usability tests helped me to identify the best options in student-centered language choices. To recruit usability test participants across the three courses, I created a two-minute video introducing the study for instructors to play in their classes. Students were given a $10 Amazon gift card for one hour of their time. During the usability test, I asked student participants to take the survey. The survey consisted of a 6-point Likert scale, with “0” being “strongly disagree” and “5” being “strongly agree.”

I then asked participants to identify words, phrases, or questions they found confusing or difficult. I also asked about prior experiences or knowledge that they drew on to answer each question. From their responses, I revised several areas to make the survey questions clearer. Here is one example:

Initial: I can explain the effects of rhetorical features in a text, such as word choice, organization, and use of persuasive appeals (logic, emotion, and credibility).

Across the board, students had difficulty with this question because it asked about several things at once. The usability test participants offered suggestions for splitting the question into two, with one question asking about word choice and organization and another asking about the use of persuasive appeals. Several participants asked if I meant “logos, pathos, and ethos” by the phrase “logic, emotion, and credibility,” and still others did not make the connection from the English words to the Greek terms on their own. I was surprised that the student participants were more familiar with the Greek terms than with their English counterparts, but together we revised the question to include both. The question was also split into two.

Revised: I can explain the effects of word choice and organization in a text. I can explain the effects of persuasion in a text, using logos (logic), pathos (emotion), and ethos (credibility).

The usability test also revealed an unanticipated issue with the question about which class (ENGL 101, 101A, or 107) students were taking. It was crucial that students understand the question and answer it correctly as it was the mechanism by which responses would be sorted for data analysis. Many students commented that they were not sure which class they were in, and they had to check UA’s course management system to verify. In revisions, I added descriptors to the classes (in italics) to eliminate student error and confusion in this area.

Revised: Which class are you taking now?
  1. English 101 (3-credit class)
  2. English 101A (4-credit class with 1 hour of studio per week)
  3. English 107 (3-credit class for international students)

Without these descriptors, many students would have selected the wrong course, which would have undermined any conclusions drawn from comparisons between classes.

In these and other questions, students’ input figured heavily in the revision of the survey before it was distributed via Qualtrics to all students enrolled in the three courses. Because the feedback of students in each of these courses was explicitly sought, this study was able to respond effectively to the needs of participants in the local context.

Instructors as Partners in Data Collection

Just as undergraduate students had a key role to play in this grassroots assessment project, the project would not have been successful without the support of faculty from all ranks. I developed an email for all 122 instructors of the three courses that introduced them to the survey and asked them to play a two-minute recruitment video about the study in their classes. Susan sent the email invitation as the Writing Program Director, and students’ participation was voluntary.

Because there were fewer students enrolled in ENGL 101A and 107 than in ENGL 101 in the fall 2016 semester, the number of responses from these classes was markedly lower. To mitigate this, I sought an even higher level of support from instructors in these courses. I asked them if I could personally visit their classes to make a final pitch to their students, and many of them agreed. This gave the 101A and 107 groups more statistical power, even though the final number of participants was still far lower than for ENGL 101 (see Table 3). A total of 1,474 responses were collected.

Table 3. Survey Responses by Course

Course | Responded to Survey | Registered for Course
101 | 1,144 | 4,011
101A (Studio) | 256 | 661
107 (International) | 74 | 115
Total | 1,474 | 4,787
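
As a quick arithmetic companion to Table 3, the short Python sketch below (our own illustration, not part of the study) computes per-course and overall response rates from the counts reported above; the course labels and variable names are ours.

```python
# Response rates by course, using the counts reported in Table 3.
responses = {
    "ENGL 101": (1144, 4011),           # (responded, registered)
    "ENGL 101A (Studio)": (256, 661),
    "ENGL 107 (International)": (74, 115),
}

for course, (responded, registered) in responses.items():
    rate = responded / registered
    print(f"{course}: {responded}/{registered} responded ({rate:.1%})")

total_responded = sum(r for r, _ in responses.values())
total_registered = sum(n for _, n in responses.values())
print(f"Total: {total_responded}/{total_registered} ({total_responded / total_registered:.1%})")
```

Run as written, the sketch shows that ENGL 107 had the highest response rate even though it contributed the fewest responses overall.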

Overview of Key Findings

Working with two statisticians, I conducted t-tests to determine which differences in responses between ENGL 101, 101A, and 107 were statistically significant. We used a 99% confidence level (p<.01) for determining statistical significance. For several questions, students’ perceptions of their own learning differed significantly across groups, and two findings stood out as particularly interesting.
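
To illustrate the kind of pairwise comparison described above, the following Python sketch runs a two-sample t-test on hypothetical Likert ratings coded 0-5, as in the survey. It is a minimal sketch under assumed data, not the statisticians’ actual analysis of the full dataset.

```python
# Illustrative sketch only: a two-sample t-test comparing Likert ratings (0-5)
# from two course groups. The ratings below are invented for demonstration.
from scipy import stats

engl_101_ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 4, 5]  # hypothetical ENGL 101 responses
engl_107_ratings = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4]  # hypothetical ENGL 107 responses

t_stat, p_value = stats.ttest_ind(engl_101_ratings, engl_107_ratings)

alpha = 0.01  # the study's 99% confidence level
if p_value < alpha:
    print(f"Significant difference: t = {t_stat:.2f}, p = {p_value:.4f}")
else:
    print(f"No significant difference: t = {t_stat:.2f}, p = {p_value:.4f}")
```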

Audience: ENGL 101 vs. 107 Students

One of the most interesting significant differences between groups appeared in the question regarding audience: “I can identify a text’s intended audience (person or group the writer wishes to address).” ENGL 107 students rated themselves significantly lower than ENGL 101 students in this area. While the results are preliminary and the data are limited, one possible explanation lies in the kinds of assignments taught in the course. Paul Kei Matsuda and Ryan Skinnell note that many first-year writing assignments are geared toward a U.S.-centric audience (237). If the intended audience of much first-year writing is presumed to be an American one, this may explain why international students, having less practice in this area, rated themselves lower. Additional data collected over time would help determine whether these trends hold for a broader student population.

Lack of Significant Differences between ENGL 101A and 107 Students

Another result I found highly interesting was the lack of significant differences between 101A (studio writing) and 107 (international students). One possible explanation is that both groups of writers were placed into courses other than the default ENGL 101, which may create a self-fulfilling prophecy in which students rate themselves lower on the basis of their placement.

Further, UA’s placement mechanism for domestic students at the time of this study provides a possible explanation. Only international (visa) students are placed into 107, while other types of L2 writers, such as resident immigrant students, are often placed into 101A. In many institutions, multilingual students are placed into basic writing or support courses at the secondary and postsecondary levels (Gilliland 198). Since UA’s placement mechanism at the time of this survey did not include asking students about prior educational experiences, the extent to which resident immigrants or other L2 writers comprise the population of 101A courses is not known, although anecdotal evidence reveals a high proportion of these writers. To some extent, the lack of significant differences between 101A and 107 can be attributed to the fact that many L2 students in UA’s WP end up in 101A. The WP is in the process of implementing directed self-placement for both domestic and international students so that placement can better take into account students’ prior educational and language experiences.

Challenges, Reflections, and Future Growth

One of the most significant contributions that this project makes to our program’s assessment practices is an understanding of how students interpret the SLOs. Unless WP courses can provide ways for students to become more familiar with the SLOs through course materials and assignments, student reflections on learning outcomes in our upcoming large-scale portfolio assessment will not be valid indicators of their learning. Students are key partners in context-specific assessment, and involving them as such greatly strengthens assessment initiatives. Each programmatic context is slightly different, and students at different institutions bring a range of experiences and backgrounds that inform assessment. This survey helped us identify some of the needs and interpretations of our students, and we recommend such a model to other programs as they build assessments that draw on the experiences of multiple stakeholders.

Our assessment would have been more helpful had we asked students why they rated their abilities the way that they did in each category. In the future, and for other programs embarking on such an assessment, we recommend follow-up questions that help the program come to more nuanced conclusions. Additionally, as the UA WP further revises its placement mechanism for both domestic and international student writers, it would be interesting to replicate this study to see whether significant differences between ENGL 101A and 107 emerge. Since the usability testing portion of this study was highly generative, it will also be important to incorporate this type of piloting in future writing program research.

3.2 Planting the Seeds for Linguistic and Cultural Success: Rachel’s Assessment of an L2 Curriculum

Exigence for the Assessment

In recent years, the UA WP, like writing programs at many universities in the U.S., has welcomed a growing number of international students (Matsuda; Preto-Bay and Hansen). Our program was underprepared to handle this rapid influx, and existing courses were not sufficiently supporting the needs of the majority of our international student population. In order to prepare students linguistically and academically for subsequent courses, both composition and otherwise, we introduced ENGL 106, a new course that combined content-based instruction with English for academic purposes and focused on themes of language variation and world Englishes.

The new ENGL 106 course adds an additional semester of Foundations Writing instruction and could be perceived by some entities within the university as slowing down international students’ progress toward degree. For this reason, we sought to demonstrate that the course successfully introduced students to academic writing and, more broadly, the U.S. academic environment, thus making it a worthwhile addition to the Foundations lineup. Our second purpose stemmed from the preliminary nature of the course; like all pilots, it would need to be tweaked and adapted after being put into practice in the classroom environment.

So, we wanted to know if the course was doing what we intended it to do and if it sufficiently prepared students linguistically and culturally for ENGL 107 and subsequent university courses. We also wanted to know which aspects of the curriculum design were successful and which needed to be revised. Those aims led us to ask the following research questions:

  1. Are students in ENGL 106 achieving the proposed student learning outcomes?

  2. Is ENGL 106 sufficiently preparing students, both linguistically and culturally, for ENGL 107?

  3. What are the strengths and limitations of ENGL 106, and how can they inform future iterations of the curriculum?

Methods

This study took place during the 2016-17 academic year, which included ENGL 106’s first implementation in the fall and the second implementation in the spring. I had determined early in designing the project to examine both student and instructor perspectives to gain a more well-rounded view of the new course. In addition, I wanted quantitative and qualitative data, so that we would have strength in numbers but also strength through personal voices. I felt that it was especially necessary to listen to the voices of our international students, given that in her review of existing literature on the experience of second language speakers in writing courses, Ilona Leki found that most studies focused on teachers, tutors, and documents rather than the voices of the L2 students themselves. To collect this range of perspectives, I distributed surveys to instructors and students, conducted focus group interviews with students, and observed collaborative meetings with instructors who were teaching the course.

Student and Instructor Surveys

I distributed surveys to students and instructors at three points during the year: at the beginning and end of the fall semester and at the end of the spring semester. In each student survey, I asked respondents to rate their agreement with can-do statements on a five-point Likert scale, from “strongly agree” to “strongly disagree.” Students also had a sixth option, “I don’t know what this is.” These can-do statements consisted of SLOs re-shaped into student-friendly language (e.g., “I can recognize the differences between spoken and written English”; “I can use examples to support the ideas in my writing”) as well as general writing skills (e.g., “I can create outlines for my writing”; “I feel comfortable writing a thesis statement”). These can-do statements allowed me to gauge changes in students’ confidence ratings over time. In addition to measuring perceived achievement of SLOs, the surveys also asked about several additional areas of interest:

  • Students’ general experiences in the course

  • Students’ use of course materials and perceived usefulness of the course textbook (a grammar tutorial handbook)

  • Course assignments and activities that were most useful or least useful for students

  • Skills students learned during the course

  • Students’ perceptions of their preparedness for ENGL 107

  • Instructors’ perceptions of the SLOs

  • Instructors’ perceptions of student preparedness for ENGL 107

  • Instructors’ thoughts on course pacing, readings, and assignments

  • Instructors’ suggestions for future iterations of the curriculum

Focus Group Interviews

I conducted three one-hour, semi-structured focus group interviews over the course of the year: one mid-fall, one at the end of fall, and one mid-spring. During the focus group interviews, participants interacted with each other, asking questions, affirming, and disagreeing. They sometimes steered the conversation, and at times I served more as a facilitator than an interviewer. Through this somewhat relaxed format, the three students’ voices emerged, occasionally quite forcefully.

In the first interview, we discussed the students’ impressions of ENGL 106 up to that point, which was at the beginning of the second unit. By the second interview, the students were finishing up the course and completing the final portfolio. We discussed their overall impressions of the course, which included reflections on the second unit. In the third interview, we discussed the participants’ experiences in their current classes (two were in ENGL 107, while one was in 101). They led the discussion toward a comparison of ENGL 106 and ENGL 101/107, and we also discussed the skills they felt they had gained in ENGL 106. As a sort of member checking exercise in the second and third interviews, I presented participants with quotes from previous interviews as well as preliminary trends I had identified. I asked students to comment on the discrepancies I perceived and to clarify my understandings.

Collaborative Meeting Observations

The course designers had already envisioned monthly meetings with ENGL 106 instructors to provide guidance on course themes and assignments, collaboratively troubleshoot problems emerging from classroom interactions, and share teaching materials. As an instructor of the course, I would already be attending these meetings. Most of my fellow instructors were graduate teaching assistants, but a tenured professor, two NTT professors, and the program’s placement coordinator also served as instructors. There were six meetings over the course of the fall semester and five during the spring, and instructors were required to attend around half of them. In addition, other writing program instructors who were not current ENGL 106 instructors occasionally joined our conversation. Two of the curriculum developers, who also served as instructors during the first semester, led these meetings and set the starting topics of conversation. For example, at the end of the first unit, we discussed our impressions of the first project and our plans for the following unit. In addition to these planned discussion topics, the floor was left open for instructors to share their experiences with ENGL 106, both rewarding and challenging. By taking notes and recording our discussions, I was able to present more formal recommendations for curriculum revisions based on the informal collaboration among these instructors from multiple ranks within the WP.

Overview of Key Findings

The success of our new curriculum was measured in various ways, including interest and relevance of the topic to students and instructors, students’ perceived confidence in SLOs and writing skills, perceived transfer of skills, and the effectiveness of course materials provided to instructors. The findings below are but a taste of the information gathered through this stakeholder-centered assessment.

Course topics were interesting to the vast majority of instructors and students. All instructors had a background in second language acquisition or Teaching English to Speakers of Other Languages (TESOL), so the topics of world Englishes and language variation were particularly interesting to them. This enjoyment of course topics and readings was indicated in instructor surveys as well as throughout collaborative meetings. Of the second student survey’s 72 respondents, 93% indicated that the course content was interesting to them.

From students’ responses to the can-do statements, it is clear that they perceived themselves as making progress toward the SLOs as well as the more general writing skills. In the first survey, an average of 62% of respondents (n=142) agreed or strongly agreed with the items, compared to an average of 88% on the second survey.
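
As a minimal sketch of how such an average agreement figure can be computed, the Python example below tallies the share of “agree”/“strongly agree” responses per can-do item and averages across items. The items, ratings, and 1-5 coding (5 = strongly agree) are hypothetical illustrations, not the study’s data.

```python
# Hypothetical sketch: average percent agreement across can-do items for one survey wave.
from typing import Dict, List

# Invented Likert responses per can-do item, coded 1-5 (5 = strongly agree).
survey_wave: Dict[str, List[int]] = {
    "recognize spoken vs. written English": [5, 4, 3, 4, 2, 5, 4],
    "use examples to support my ideas":     [4, 4, 5, 3, 4, 2, 5],
    "create outlines for my writing":       [3, 4, 4, 5, 2, 4, 3],
}

def percent_agree(ratings: List[int]) -> float:
    """Share of respondents who chose agree (4) or strongly agree (5)."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

per_item = {item: percent_agree(r) for item, r in survey_wave.items()}
average = sum(per_item.values()) / len(per_item)

for item, share in per_item.items():
    print(f"{item}: {share:.0%} agree or strongly agree")
print(f"Average across items: {average:.0%}")
```

Comparing this average across survey waves is what allows changes in students’ confidence ratings over time to be reported, as above.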

Some student-survey responses and focus-group discussions indicated that the role and benefit of ENGL 106 was not clear to all students. When asked on the third survey what was the most important thing they learned in ENGL 106, one student responded, “I do not know, and I do not know why I should take 106.” While doubting the necessity of ENGL 106 to their own course of study, focus-group participants also acknowledged its role as a stepping stone. One student likened the class to a “warm up” that would “stretch your English a little bit,” adding, “It’s just a refresher course for some of us.” Another student claimed, “So 106 was like the first step in writing, and now we are writing more professional.”

Through instructor collaborative meetings and focus-group interviews, the first unit and major project were identified as areas for improvement. The topic of language variation (mostly within the U.S.) and related readings were perceived as less relevant to students than the topic and readings in the subsequent unit on world Englishes. One of the readings in particular was identified as being too difficult and was therefore eliminated by multiple instructors during the second semester of implementation. Another issue we faced was confusion over the major assignment’s goals and the names of its parts. Instructors discussed this issue in the excerpt below from the collaborative meeting just after the first unit.

Instructor A: Part of what my students got confused about was the title of both parts: Description and Explanation. They described the original text and explained the ideas in it. And so next time around I would like to call it a reflection...

Instructor B: Comparative...

Instructor A: Something else...

Instructor C: Once I started saying the word “compare”.... It was just the way it was described [that wasn’t working].

Instructor A: And also having it be a [Description] and Explanation, it made them think it was two different things because they have the description, the explanation, and the re-write. So having one title and having the first part not be part of the project per se.

Instructor D: I think that will help a lot... and also then it will help us keep the focus...

Instructor E: So not only was “description” and “explanation” problematic words in my class, but so was the word “re-write” because when I would say, “Rewrite your essay,” they would say, “Make my essay an email?”... One student turned in a 1,400-word WeChat conversation.

The instructors not only shared the struggles they faced in the classroom but also collaboratively worked to solve these immediate problems while suggesting improvements for future iterations, many of which were implemented in the spring semester.

Challenges, Reflections, and Future Growth

Collaborative spaces not only provided moral support among instructors but also served as highly generative environments where curricular problems could be addressed through the involvement of multiple voices. The collaborative nature of the instructor meetings encouraged thorough and meaningful feedback on the curriculum, and it also provided a unique space that fostered the development of the surveys. This second and originally unintended purpose of the meetings emerged through our discussions. Sometimes these discussions would inspire me to add new questions to the surveys under development, and other times I would directly ask my fellow instructors about their concerns or curiosities. These multiple perspectives from those directly involved in the curriculum helped me refine my original research project and design more comprehensive surveys to collect more meaningful data. This type of flexibility in research and instrument design likely would have been impossible in a traditional top-down assessment. The bottom-up grassroots approach we adopted left room for the voices of major stakeholders not only in the feedback but also in the research implementation.

The collaborative and generative nature of the focus group interviews was so informative that I have decided to continue in this methodological direction in a similar project. Talking to students about their experiences in our classes began to make invisible aspects of their lives more visible. One of the major challenges I faced in my project, however, was stakeholder buy-in. Encouraging participation from students and instructors who were already busy with their own work and lives outside the university was not an easy task. ENGL 106 instructors were aware of my research project because I had introduced it at a beginning-of-year faculty meeting, discussed it at collaborative meetings, provided preliminary data at the end of the fall semester, and emailed multiple times about surveys. ENGL 107 instructors were not as aware of my project because they did not attend the 106 collaborative meetings or receive survey emails in the fall. Because ENGL 106 instructors had frequent reminders about the research being conducted and could see the immediate relevance of my work, they were more likely to respond. In the end, stakeholders should be made aware of projects and kept apprised of updates to increase buy-in.

4. What We Learned through Grassroots Assessment

Through our ongoing assessment projects, we are seeing the benefit of including a broad range of stakeholders in the process. In particular, we came to understand more deeply the symbiotic value of having graduate students lead parts of the assessment process. These assessment projects not only contributed to our programmatic growth but also contribute to the growing body of research on Graduate Writing Program Administrator (gWPA) positions. Although not all gWPA positions involve empirical research, this component has many benefits for graduate students as new researchers. In their article, Suzan Aiken, Emily J. Beard, David R. E. McClure, and Lee Nickoson describe the “micro studies,” or small-scale research projects, that they incorporated into a graduate research methods class. These low-stakes studies helped them gain practical experience, take risks, and navigate failure (147). Similarly, the research-oriented gWPA position gave Maria and Rachel an in-depth look at empirical studies in general and programmatic research in particular. Their projects allowed them to see the different forms research can take within writing studies, the immediate effects of their assessment projects, and the practical challenges of implementing semi-large-scale research efforts. Because Maria’s project introduced her to quantitative study, she enrolled in a statistics course the following semester, which greatly impacted the way she reads and conducts research.

As in Aiken et al., the projects served as sites of enculturation into the discourse of rhetoric and composition research, to which Maria and Rachel were both newcomers. For Rachel, this was an especially important introduction given that she comes from an applied linguistics background. Additionally, she gained first-hand experience in managing a longitudinal research project that involved multiple populations and multiple instruments. While the research projects were not conducted to fulfill course requirements, Maria and Rachel still benefitted from the hands-on and guided experience of dipping their toes into the sometimes intimidating research waters. The stakes were lower because they were supported in their endeavors by the administrative team of the UA WP.

In the early stages of developing the UA WP’s programmatic assessment plan, we found that Broad’s process of Dynamic Criteria Mapping helped us to include a range of perspectives and approaches. Although Maria and Rachel had already begun the process of data gathering while the WP was conducting assessment planning sessions, we immediately found that their projects provided the kinds of robust, context-specific assessment data that we wanted to have beyond our broader portfolio-based assessment of SLOs. While portfolios give us one kind of focused understanding of students’ and teachers’ perceptions of achievement of SLOs, these focused assessment projects gave us the opportunity to understand students’ own interpretations of the SLOs and to take a deeper dive into their experiences as students in WP classes. We didn’t want Maria’s and Rachel’s assessment projects to be isolated instances, though. We wanted our assessment process to be coordinated and ongoing across the program.

Involving a range of voices, from graduate and undergraduate students to faculty of all ranks and levels of seniority in a program, is a highly generative approach for writing program assessment. Our project offers a reimagining of how writing programs might do meaningful, ongoing programmatic assessments that are specific to the needs of the students and instructors as well as to particular curricula. Writing professionals seeking the “best way” to conduct various assessments should remember that “what is ‘best’ depends not only on the particular purpose of the assessment but on the specific context (e.g., institutional mission, students, faculty) and the potential impact on teaching and learning” (O’Neill, Moore, and Huot 10). Further, we argue that having a range of stakeholders take leadership roles in assessment is integral; having teachers and students guide assessment allows for necessary redirection, and it creates an authenticity and investment in the assessment that is not achievable when an administrator (even a WPA) mandates a top-down assessment.

Our assessment work also made strikingly evident the amount of time stakeholders must put into projects such as these. How do we ensure that stakeholders are sufficiently compensated for their time? Maria and Rachel each received a course release as part of their GAD positions. Students who completed surveys received no compensation for their time, but many completed the surveys during class time, when they would already have been doing work for their writing classes. Focus group and usability participants received refreshments and gift cards as compensation. Instructors, on the other hand, did not receive such direct or indirect compensation. They completed surveys, which were far more in-depth than the student versions, on their own time. Although the collaborative meeting requirement for ENGL 106 was made clear to instructors when they applied to teach the course, the time they spent in the meetings remained uncompensated. Lecturers in the program can count that participation as part of their service to the program, but graduate students cannot. However, the meetings were viewed as such important sites of professional development that, as mentioned earlier, members of the WP not teaching ENGL 106 frequently attended. In addition, instructors collectively agreed to maintain the meeting requirement in the second year of implementation. In all circumstances, we must remain ethical and mindful about how stakeholders are compensated for participation in assessment projects, especially those employing qualitative, ongoing, or longitudinal assessment methods.

When assessment is guided by local stakeholders, the questions asked and the interpretations of those answers are designed to impact student learning directly. And because each assessment project really just gives us a snapshot--a particular point-in-time perspective on a program--ongoing assessment is necessary to develop a culture of assessment. WPAs need to remember, however, that many stakeholders in writing programs have been historically disenfranchised and not compensated for this type of labor, so it is of the utmost importance that we think ethically about involvement, compensation, and reciprocity.

Notes

  1. For both studies, we consulted with IRB before proceeding with the assessment projects. At our institution, IRB generally determines that approval is not required for programmatic assessment, even when the results are reported externally, as long as they are not described as generalizable.

Works Cited

Aiken, Suzan, Emily J. Beard, David R.E. McClure, and Lee Nickoson. An Introduction to the Work (and Play) of Writing Studies Research Methods Through Micro Study. The CEA Forum, vol. 42, no. 1, 2013, pp. 127-154.

Broad, Bob. Organic Matters: In Praise of Locally Grown Writing Assessment. Organic Writing Assessment: Dynamic Criteria Mapping in Action, edited by Bob Broad, Linda Adler-Kassner, Barry Alford, and Jane Detweiler, Utah State University Press, 2009, pp. 1-13.

Conference on College Composition and Communication. Writing Assessment: A Position Statement. NCTE, Nov. 2014, www.ncte.org/cccc/resources/positions/writingassessment.

Council of Writing Program Administrators. WPA Outcomes Statement for First-Year Composition (3.0). July 2014, wpacouncil.org/positions/outcomes.

Gilliland, Elizabeth A. Talking about Writing: Culturally and Linguistically Diverse Adolescents’ Socialization into Academic Literacy. 2012. University of California, Davis, PhD dissertation. ProQuest, search-proquest-com.ezproxy3.library.arizona.edu/docview/1234551468?pq-origsite=summon.

Harris, Joseph. Meet the New Boss, Same as the Old Boss: Class Consciousness in Composition. College Composition and Communication, vol. 52, no. 1, 2000, pp. 42-68.

Huot, Brian. (Re)articulating Writing Assessment for Teaching and Learning. Utah State University Press, 2002.

Leki, Ilona. Hearing Voices: L2 Students’ Experiences in L2 Writing Courses. On Second Language Writing, edited by Tony Silva and Paul Kei Matsuda, Lawrence Erlbaum Associates, 2001, pp. 17-28.

Matsuda, Paul Kei. The Myth of Linguistic Homogeneity in U.S. College Composition. College English, vol. 68, no. 6, 2006, pp. 637-651.

Matsuda, Paul Kei, and Ryan Skinnell. Considering the Impact of the WPA Outcomes Statement on Second Language Writers. The WPA Outcomes Statement: A Decade Later, edited by Nicholas N. Behm, Gregory R. Glau, Deborah H. Holdstein, Duane Roen, and Edward M. White, Southern Illinois University Press, 2012, pp. 230-241.

Miller-Cochran, Susan K., and Rochelle L. Rodrigo. Introduction: Developing Connections Among Rhetoric, Usability, and Writing. Rhetorically Rethinking Usability: Theories, Practices, Methodologies, edited by Susan Miller-Cochran and Rochelle L. Rodrigo, Hampton Press, 2009, pp. 1-8.

Moore, Cindy, Peggy O’Neill, and Brian Huot. Creating a Culture of Assessment in Writing Programs and Beyond. College Composition and Communication, vol. 61, no. 1, 2009, pp. W107-W132.

O’Neill, Peggy, Cindy Moore, and Brian Huot. A Guide to College Writing Assessment. Utah State University Press, 2009.

Penrose, Ann M. Professional Identity in a Contingent-Labor Profession: Expertise, Autonomy, Community in Composition Teaching. WPA: Writing Program Administration, vol. 35, no. 2, 2012, pp. 108-126.

Preto-Bay, Ana M., and Kristine Hansen. Preparing for the Tipping Point: Designing Writing Programs to Meet the Needs of the Changing Population. WPA: Writing Program Administration, vol. 30, no. 1-2, 2006, pp. 37-57.

Royer, Daniel J., and Roger Gilles. Directed Self-Placement: An Attitude of Orientation. College Composition and Communication, vol. 50, no. 1, 1998, pp. 54-70.

White, Edward M., Norbert Elliot, and Irvin Peckham. Very Like a Whale: The Assessment of Writing Programs. Utah State University Press, 2015.
