PAB #6 – Hansen et al.

Hansen, K., et al. (2006). Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students. Research in the Teaching of English, 40(4), 461–501.


“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but rather, ‘hmm… that’s funny…’” – Isaac Asimov

“This lengthy introduction to our study gives a broad context for our question about the equivalence of high school AP course and/or exams and college FYC courses.  If these two paths lead to the same educational outcomes, perhaps our concern that AP students are being deprived of something when they skip FYC is baseless.  If the two paths do not produce the same results, however, then perhaps it is time to question the practice of assuming equivalence between FYC and high school AP courses and/or tests” (473 [emphasis added]).  


Introduction

For this PAB, I will briefly investigate the article as an object of study, attempting to recap, review, and categorize its RAD value according to Driscoll and Perdue’s metric for RAD research from their 2012 “Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980–2009.”


Object of Study

“Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students,” by Brigham Young University’s Kristine Hansen, Gary L. Hatch, Patricia Esplin, Richard R. Sudweeks, and William S. Bradshaw, in coordination with the University of Maryland’s Jennifer Gonzales and the University of Washington’s Suzanne Reeves, outlines the methodologies and findings of a study conducted specifically “to inform policy” – both at BYU and in general – regarding FYC exemptions for incoming freshmen based upon AP exam scores (a widely accepted standard for granting pass-through credit for introductory composition courses nationwide) (461).

After significant theoretical grounding and a description of both BYU’s FYC program and the AP English exams and pass-through/crediting processes, the researchers note that – counterintuitively – students who met AP pass-through standards with the minimum score of 3 were actually 10% less likely to elect to enroll in FYC coursework than higher-scoring 4 and 5 students (approximately 30%, 40%, and 40%, respectively).  Most higher-scoring students who registered for FYC content elected to join honors sections and coursework (468).  Based in part upon this information, protocols were designed to investigate not only AP pass-through vs. FYC-only outcomes, but also to compare both of these results against combined AP/FYC outcomes.  The researchers similarly note that honors coursework specifically may account for a portion of the reported uptick in writing competencies for combination students.  I note this small fact because it indicates adherence to one of the primary standards of consistent, careful RAD methodology: the flagging and signposting of data points and factors which may appear inconsistent for further analysis, consideration, and explanation.

Two separate researchers each holistically scored two pieces of equal length from 497 total students, including a set-aside sample of 182 sophomore-status students, in a history of civilization course, with the averaged scores used to measure and index writing competencies against a set scoring rubric (474-76).  Based upon these samples and scores, the researchers made a final determination that FYC and AP students’ performance was comparatively equal, but below expected quality.  In the categories of self-efficacy, academic characteristics, and specific essay scoring, FYC-only students scored slightly below their AP-only counterparts.  Combination (AP+FYC) student writers outperformed both groups and exceeded general course outcome and composition expectations.  However, before drawing conclusions from this data, the authors also carefully studied the variance introduced into the data by raters, by grading consistency over time, by differences in assignment prompts between sections, and by unexplained/undetected sources of variability.  These facets of variation are carefully documented, combinatorially investigated, and represented through a single coefficient of generalizability (0.58, within the acceptable range for single-facet protocols) (476-79).
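Because the “coefficient of generalizability” may be the least familiar part of that reporting, below is a minimal sketch (in Python, with invented scores) of how such a coefficient can be estimated in the simplest one-facet case, a fully crossed students-by-raters design. This is an illustration of the general technique only, not a reconstruction of Hansen et al.’s actual analysis, which involved more facets and a more complex design.

```python
import numpy as np

# Minimal sketch (NOT the authors' analysis): estimate a one-facet
# generalizability coefficient from a persons x raters score matrix.

# Hypothetical holistic scores: rows = students, columns = 2 raters.
scores = np.array([
    [4, 3],
    [5, 5],
    [2, 3],
    [4, 4],
    [3, 2],
], dtype=float)

n_p, n_r = scores.shape            # persons, raters
grand = scores.mean()
person_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

# Sums of squares for a fully crossed p x r design, one score per cell.
ss_p = n_r * np.sum((person_means - grand) ** 2)
ss_r = n_p * np.sum((rater_means - grand) ** 2)
ss_total = np.sum((scores - grand) ** 2)
ss_res = ss_total - ss_p - ss_r

# Mean squares and estimated variance components.
ms_p = ss_p / (n_p - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))
var_p = max((ms_p - ms_res) / n_r, 0.0)   # true-score (person) variance
var_res = ms_res                          # person-by-rater error variance

# Relative generalizability coefficient for the mean of n_r raters.
g_coefficient = var_p / (var_p + var_res / n_r)
print(round(g_coefficient, 2))
```

With additional facets (prompts, occasions, and so on), more error variance components enter the denominator, which is why a figure like 0.58 has to be read against the specific design that produced it.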

Considering this recognition of limitations, the sample size, the reporting, the theoretical backgrounding, and the transparent selection protocols, the study’s methodology was significantly empirical, likely scoring an 11-12 (14 being a “perfect” score) on Driscoll and Perdue’s scoring metric for RAD categorization (see Appendix).  The only likely reductions for the work on Driscoll and Perdue’s scale would be related to (1) limited participant sampling, since the only coursework sampled was situated within three sections of a specific (and primarily elective) course; (2) only a cursory exploration of how representative their selected sample is of the student body in general (see Fig. 1 and caption); and (3) concerns about applicability of assessment and discussion, which I will expand upon below.

Figure 1 – Sample representation (document Appendix A).  Note that, while the researchers do report the sample mean GPA and ACT scores, including standard deviations, in comparison to the general first-year student body means (implementing careful data representation practices), they do not indicate how representative the disciplinary/college breakdown is of the 497 total sampled students, nor of the 182 sophomore-status set-asides.  Additionally, they do not separate these two sample groups for comparison in their scoring means.  These values are discussed elsewhere in the study as individual data points, but are not assembled with or compared to the demographic data.
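As a purely hypothetical aside, the kind of representativeness check the caption praises can be summarized as a standardized mean difference between the sample and the wider first-year population; the numbers below are placeholders for illustration, not values from the study.

```python
# Hypothetical sketch of a representativeness check: how far does the
# sample mean sit from the population mean, in population SD units?
def standardized_difference(sample_mean: float,
                            population_mean: float,
                            population_sd: float) -> float:
    """Cohen's d-style gap between a sample mean and a population mean."""
    return (sample_mean - population_mean) / population_sd

# Placeholder ACT-style numbers (not the study's): a gap of ~0.15 SD
# would suggest the sample is roughly representative on this measure.
print(round(standardized_difference(27.1, 26.5, 3.9), 2))
```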

In the end, for a cursory view of a specific program, the researchers’ methodology and reporting are sound.  Their sample representation is not optimal, but it is far superior to most sampling protocols within humanities research (especially ethnographic research, which accounted for a portion of the students’ self-reporting of their writing processes).  However, there is one moderately damning moment in the document, as noted and emphasized in the pull-quote passage at the beginning of this analysis and in the following:

“The fact that performance of students in the AP/no FYC group was not significantly different from those in the no AP/+FYC group suggests that the outcomes produced by the high school AP experience are roughly parallel to those produced by taking a first-year composition course.  But does this mean that AP test scores should immediately translate into credit hours?  It may not when one considers that the mean scores of both these groups indicated that the students had only ‘limited’ or ‘uneven’ writing skills.  In other words, though the outcomes were fairly similar, the standards achieved were not satisfactory” (484 [emphasis added]).

The researchers provided a clear standard for falsification of their hypothesis – a standard that was clearly and unequivocally met.  Although the researchers did their level best to salvage this claim by noting the inherent performance benefit of the AP+FYC group, they had already confounded the results of that group by noting that it was largely comprised of honors students who were exposed to a different and augmented composition curriculum.  Attempts through several sections of the document to reclaim this data as a positive trend, especially given the viable – but weak – coefficient and the confounding factors noted above, are largely for naught.  Their criticisms of AP English testing and curriculum as generally focused “on product rather than process” (487-88) are similarly laudable, and no doubt mostly true, but they are not founded within the research provided, nor are they borne out by the writing skills demonstrated by the AP-only students within their study.  Additionally, their assignment prompts and protocols were not well designed to measure process-based writing, except in the pre-writing time allotted to students; there were no opportunities for feedback, either directive or facilitative, nor were there revision possibilities after initial submission.  Students were given severely limited time to produce the final essay, and several reported that they would have availed themselves of additional resources and feedback had they been given more time to compose (486).  Perhaps if the researchers’ protocols had been process-centric, given that this is the philosophical basis of their FYC programming, the data would have expressed a more positive trend for BYU-trained student writers over AP pass-through students.  However, we cannot know without replication and revision of these protocols, so the data must instead be analyzed as presented.

It must be mentioned that the researchers also discovered through this study, despite its falsification, that students scoring a 3, rather than a 4 or 5, consistently underperformed compared to FYC-only sophomores, forming the basis for their revised recommendation – “namely, that students entering BYU with AP English scores of 3 should not be allowed to bypass the first-year composition requirement because these students may benefit from completing the FYC requirement through an honors or advanced option” (484).  This is an astute recommendation based largely on their failure to find a meaningful performance difference between BYU FYC and AP results.  This embrace of falsification as a route to new insights is not only praiseworthy, but one of the great strengths of data-driven RAD research in general, as data often tells us more than we expect.

All in all, this is an excellent study, well reported, and with a strong conclusion that demonstrates how conscientious empirical research – disinterested in proving presupposed notions of pedagogy and theory – can provide new insights, alter scholars’ understanding and appreciation of facets of their teaching and their students’ capabilities, and locate new sites of study through the embracing of “failure.”


Questions for consideration:

Okay, what the heck just happened?

This summary and analysis of the article is, essentially, how I would analytically (with some commentary) apply a notion of use value and RAD quality to an OoS that came in the form of an article.  What you just read wasn’t so much a PAB written about an article, but rather a PAB about a “text,” in the same way you might dissect Pride and Prejudice differently than you would dissect a scholarly article about it.

This is my Pride and Prejudice.

Anyhow, as an OoS, this is really an exploration of a moment in time in a research project, and how that moment is represented through the end document.  At some point, in a room, somebody said to somebody else “Oh, shoot” (they may have substituted a vowel there in “shoot”).  I’m looking at this text, quite simply, as the aftermath of “oh, shoot,” as an example of what happens, and what is possible, when RAD research violates expected results in a big way.

So, what you’re saying is they failed?

Oh, yeah.  They may or may not feel they did, because their research made a formative contribution (which is good!), but by the standards set by many in the academy today (who are less celebratory of the research process as generating rather than confirming knowledge – which is bad!), yes.  They failed.

Sometimes, it’s most daring not to set out where there are no trails, but rather to recapitulate and retread the trails that previously defeated you.  Hansen et al. discovered a newfound value in the most basic literacy components of FYC curriculum, powerful as a supplement to preexisting pedagogical structures, and they did it because they were willing to perform research that violated every presupposition they had.  And when those suppositions were defeated, they didn’t walk away, tempting though it likely was.  They went even deeper into their analysis.  That’s exciting.

Then they went ahead and published that failure after the fact.  Their failure, and their willingness to see that failure through to the end, provides new knowledge and new information for the discourse.  That’s even more exciting.

Failure is so cool.  RAD loves failure.  RAD is cool.

Suddenly RAD is cool?  Like, a week ago, tops, you said “RAD Research isn’t sexy.”

RAD is the cool discipline for cool people who are too cool to let something uncool like success get in their way.  They don’t have time for success.  They’re too busy being cool, and learning from how cool they are, and how cool it is to fail.

I said it, and I’m standing by it, both as a philosophy and an academic position.  RAD is rad.  Not sexy, sure.  But COOL.  One day, I’m going to write and publish an academic article called “COOL RAD FOR COOL KIDS” (all caps, yes) just so I can put it at the top of my CV.


References

Driscoll, D., & Perdue, S. (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980–2009. The Writing Center Journal, 32(1), 11–39.

Hansen, K., et al. (2006). Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students. Research in the Teaching of English, 40(4), 461–501.


APPENDIX

Appendix A: Driscoll & Perdue’s complete RAD Research Rubric from “Theory, Lore, and More” (2012).