Paper #4 – Theories and Methods

Dr. Daniel Richards, Old Dominion University (Link to Personal Site)

“I have always been worried about positivism Trojan Horsing its way into English from other disciplines on campus because that is the institutional capital that allows funding to get directed our way.  We can’t make institutional arguments about our value anymore based on unquantifiable assertions about opaque notions of critical thinking and citizenship-cultivating.  So, to ‘play the game,’ we’re borrowing some empirical methods to help support ourselves, taking the hit for the long game of remaining in existence.  Positivism is alive and well in the academy, and in English (see genre theorists, like Dylan Dryer), and it is because data is the capital that gets things done.  I don’t think we need to ‘bring’ RAD in, or be fearful of RAD impinging its ugly scientific face into our meetings—I think we need to concern ourselves with thinking about how to best and most responsibly use it” (D. Richards, Personal Communication, Oct. 27, 2015).

Introduction – Dr. Richards

Last week for my second PAB, I said that I intended to do quite a bit this week (link).  I still intend to discuss all of those issues; however, I’m going to postpone them for a couple of weeks so I can a) speak more about these issues within the contexts of epistemological alignment in paper #5, and b) speak instead to the responses I received from Dr. Daniel Richards in an email interview we conducted earlier this week.

This may be a bit off-topic, but I’ve liked Dan Richards immensely since the moment I met him as he stepped into a conversation I was having at a barbecue and immediately and publicly disagreed with me.  I asked Dr. Richards to comment on the work I’d been doing here and he smacked me around a bit in some personal comments about why I always insist on doing everything the hard/thickheaded way, what with my overreaching and all.  Fair.

Dr. Richards describes himself as an academic pragmatist, and I describe myself as a pragmatic academic cynic.  One would assume we agree on much – and we do – but I find our points of disagreement to be most interesting – for a pragmatist, Dr. Richards is ever the optimist, and carefully conservative with academic criticism. Because of this, although I have selected quotes from our interview which most coincide with this week’s paper theme, I have included the full interview sequence in a link below in the references for those who are interested (edited to remove personal comments).

Ethics as theory

Dr. Richards’ responses are, to a large extent, focused on approaching the theories and methods of RAD research from the perspective of Technical Communication scholarship and pedagogy.  I think this is an excellent contrast to the feminist research theory and feminist epistemology discussed in last week’s PAB entries: if feminist research is beginning to track towards questions of research ethics and the nature of data presentation, the resurgence of Technical Communication as a discipline is due in no small part to its close-knit relationship with empirical practices and ideology.  “Empirical researchers in English alone aren’t even a cohesive group.  From my stance in Tech Comm,” Dr. Richards notes, “empirical methods are on the rise and part of the re-establishment of the discipline” (Richards, 2015) – and I tend to agree.  Tech Comm is certainly a model I’ve looked to often in preparing my readings for this course of study.

As noted in previous PABs, especially those dealing with Barton, Driscoll, Haswell, and Witte, RAD empiricism seems to be a “big tent” ideological/epistemological practice.  What RAD contributes – beyond methodology – are frames and practices for understanding what research is and does, and who it serves, in the modern academy.  When Dr. Richards notes that “writing programs and WPA work as a whole are embracing empirical methods as they seek to gather big data on writing skills/literacy skills in freshman cohorts, which helps them build arguments for institutional funding based on performance and student ‘excellence,’” I immediately think back to Matthew Bodie’s comment dialogue with me in my third paper (link), where Matthew brought up ethical concerns about the role that such “big data” plays in perpetuating the corporatization of the modern academy: “I cannot help but to be struck by your categorization of assessment scores and administrative data as metadata and part of RAD research […] in your thinking, I would be curious to know what you think about the ethics of this type of metadata filling future publications” (Bodie, 2015).

I think that Matthew’s question is a fair one, and one that has plagued empirical research for some time now.  There is an ethical concern when one begins to conflate “data” with “information.”  There is even more concern when one begins to conflate “information” with “truth.”  All of this gets me to the primary theoretical interest that I think makes RAD truly disciplinary – and more than merely empirical – a theoretical lens which filters and generates the most meaningful knowledge in the field.  It’s not just research ethics, but the ethics of research.  It’s not enough to know how to accurately present information, or morally acquire data – we must develop ethical knowledge of research and communication ecologies… and hierarchies.  Dr. Richards shares an ethical concern regarding the capitalist notion of institutional research and scholarly research (see the pull quote above), but it’s his conclusion I find most interesting of all: “I think we need to concern ourselves with thinking about how to best and most responsibly use [RAD].  This means negotiating the values we have as humanists with the harsh data-driven realities of the posthistorical University” [emphasis added].  Empiricism as an ideology has an ethic, but it’s an ethic that informs empirical values, not empirical research.  One thing that RAD does is present empirical research as demanding careful ethical consideration, and institutional and peer checks on the moral and epistemological challenges of producing meaning through data.

Empiricism is an epistemological value.  RAD is a humane value.  I encourage interested parties to google “RAD research” and try to locate references to RAD external to the humanities (or even English Studies, more specifically).  RAD was brought into being by folks like Haswell precisely because they saw an ethical context lost in the assessment of “Big Data.”

Ethics as Methodology

Let’s start here: ethics is not a methodology.  But rarely does it come closer to being one than within RAD, where the categorization and recommendation of best practices (“ethics”) is the direct antecedent to the development of more highly ethical, “humanistic” protocols for the execution of RAD research (“methods”).  When asked whether RAD empiricism and the “postmodern notions of the humanities today” are inherently in conflict, Richards argues that “that’s the rub.  But no, I don’t think they’re mutually exclusive domains of thinking/thought. I do think they’re two separate desires […] I might even call them epistemological attitudes, and these attitudes are often in conflict.  But you can accumulate quantitative data in a ‘postmodern’ way.  Positivism got in trouble because of the assumptions made about what the data said, not how the data were accumulated.”

What the “new” empiricism of RAD within English offers “methodologically,” then, may be a “postmodern way” to accumulate data – and to be more open and intersectional about what assumptions are made in its presentation.

It is through this context that I think I first found RAD – and Driscoll – especially appealing.  Richards points out, when dealing with the questions of “big data,” that these concerns have already existed for some time in fields that embraced RAD research ethics early – like Technical Communication, where scholars like Paul Dombrowski have offered thorough explorations of information, communication, and research ecologies for a quarter-century – and that these fields have begun to see the next frontier of ethical concerns for research well ahead of the rest of the academy: “I can’t help but think that student voices and individuality will continue to be drowned out with this big data approach,” Richards notes, “so ethical stances [to come] will most likely revolve around retaining and re-capturing student voices and the eclectic and diverse student bodies we serve.”

Of course, Technical Communication is a difficult case study to apply to the field in general: in plenty of English departments, it already comfortably enjoys outsider status, derided as commercial, sell-out, and product-oriented.  Dr. Richards also feels that empiricism means different things in different sub-disciplines, and sets aside TC as different within English Studies: “I think the interdisciplinary origins and composition of TC lends itself to a lack of bemoaning of empirical work, as you’d see in literature or rhetoric proper, but as a whole TC needs empirical work to find out what it wants to know: how do people and technology interact as it pertains to communication.  We will always need empirical work to help us answer that question.”


This is where I’d leave things off for now.  There are trends in empirical acceptance by sub-disciplines – as Dr. Richards’ argument suggests, they are often as much a matter of history and disciplinary origins as of epistemology.  The major questions of RAD are mostly ethical in nature, and the answer to those questions is largely a combination of concerted efforts to create an overarching ethical narrative, of striving to be transparent, pragmatic, and effective, and of reaching out across disciplines to provide value through a strongly ethical understanding of data that can synthesize many kinds of knowledge to justify its own existence.

I dream of a world where RAD scholars can convince people they are more than just researchers – that they are philosophers, ethicists, and journalists inscribing the questions of the age into history.  But – at least in the meantime – I’ll settle for being consistent, and believing there are ethical theories out there that haven’t been fully resolved yet.


Bodie, M. (2015, October 24). Re: Paper #3 – Objects of Study [Weblog comment].  Retrieved from

Dombrowski, P. M. (2000). Ethics and technical communication: The past quarter century. Journal of Technical Writing and Communication, 30(1), 3-29.

Dombrowski, P. M. (1995). Post‐modernism as the resurgence of humanism in technical communication studies. Technical Communication Quarterly, 4(2), 165-185.

McNely, B., Spinuzzi, C., & Teston, C. (2015). Contemporary Research Methodologies in Technical Communication. Technical Communication Quarterly, 24(1), 1-13.

Richards, D. (2015, October 27). Email interview [Publicly archived]. Retrieved from


PAB #8 – Cushman

Cushman, E., Powell, K., & Takayoshi, P. (2004). Response to “Accepting the Roles Created for Us: The Ethics of Reciprocity.” College Composition and Communication, 56(1), 150-156.

Ellen Cushman (Profile)

“My last question there reveals another trouble that I’m having with the disclosure of the rhetorical navigations of researchers and participants […] to what extent is self-reflexive writing in qualitative studies a ‘show,’ a performance of exotic moments of trial, distress, or anxiety?  My question stems from a concern over the tenor of some […] self-reflexive writing that sensationalizes tense moments or researchers’ personal lives, or miscommunications and misunderstandings, or conflicts” (152).

So this seems to be Ellen’s main concern: “Is it the data collection process that is worthy of study, reflection, and, ultimately, a place in scholarly journals?  We think so because it is the minutiae of the research process that can, in fact, lead to our overall findings.  We believe this process needs to be revealed.  Our findings do not magically appear or ‘reveal’ themselves as if they were already there for us to find them.  Our findings are directly influenced by who we are, what we know, and what decisions we make at the research site as participants ask certain things of us” (154).


It’s not been my style to use book reviews or article responses in my PABs thus far, because I tend to find them one of the less rhetorically interesting forms of scholarship when discussing research.  Too often, they can be parsed down to something along the lines of (cynically, as is my mode) “the authors are very smart, and I can tell you this is true because I’m also very smart, and I understood their article/book.  As proof of how smart I am, look – I understood their article/book.  QED.  I have now participated in the published scholarly discourse.”

Every once in a while, an article response pops up that “brings its own drinks to the party,” so to speak.  Ellen Cushman’s response to Powell and Takayoshi’s article did an excellent job of raising the general research concern, and eliciting real, productive discourse about what “Accepting the Roles Created for Us” contributes to our understanding, and presentation, of research.

The other reason I tend to avoid responses/reviews is they tend to be short and light on content – that is to say: not optimal for assignments such as this.  Because of this, my review of the literature will be significantly shorter in this entry, though I will endeavor to provide significantly more analysis in the Q&A section below.


Let’s start here: Dr. Cushman did not write a tear-down piece.  In general, her response was inquisitive and supportive.

However, Cushman’s response raises several valid concerns, both ethical and methodological:

1.) While the research case studies the originating authors provide present situations where the researchers were well aware of specific hierarchy-based concerns, that is not the case for most ethnographic research.
2.) When using research of student writing/progress within the research-site classroom, the very nature of the study can be an element in the “strivings of participants” (151) to improve, and this can yield the positive benefits and desires the authors advocate for – but it means that the research environment is no longer “observationally sterile,” which invalidates any replicability of the results.  It becomes research which is only purely useful within the context of the active research window.
3.) Whether we want them to be there or not, research/social hierarchies do exist.  We need to more fully engage with the implications of that if we’re to “enact equitable and hierarchical power relations” (152).
4.)  Although the disclosure of the specific moments in the authors’ article were viable, useful, and purposeful, Cushman believes that in the vast majority of cases such disclosure would have “detracted from the overall findings of the work,” especially in her own experiences, and “would have risked revealing more about the minutiae of data collection than it would have [about] the goal and focus of the study” (152).
5.)  Similarly, Cushman fears that having researchers insert themselves into the research narrative risks detracting from “the report of participants’ lived realities” (152).
6.)  Finally, as noted in the pull-quote above, the author expresses concerns that such disclosure could be in vogue, conspicuous, or “showy”; that is to say, there is a danger of the act of disclosure becoming a process of proving one’s progressive research credentials, more than a process of demonstrating confounding factors in one’s more data-centric analysis (152-53).

In their response to this line of questioning and list of concerns, the original authors affirm that these concerns are valid, even if they do dismiss them slightly through both rhetorical and theoretical positioning.  For now, I will note that their responses are fair, involved, and welcoming of criticism.

What’s next?

Next week during my paper on the intersection of outside theory and RAD research, I will address more about the way in which these texts function both as performative scholarship and as scholarly performance, and how the response from the authors reflects the ethos they similarly express in their theoretical contributions and methods in the first article.

From there, I will attempt to demonstrate that – both as a rhetorical and research concern – this type of theory and practice (both feminist theory specifically and external theories in general) is beginning in recent years to express significant awareness of the importance of legitimizing research protocols.  I will explore how we can validate and make authoritative subjective forms of social theory for RAD research, how embracing such theoretical forms has been, and will continue to be, an essential part of the resurgence of empirical study in the humanities, and how these theories may intersect with the methods and Objects of Study discussed in previous weeks.

Questions for consideration:

So you view Powell and Takayoshi’s proposal as unethical?

Let me put it another way.  Power dynamics exist in research, and that’s absolutely, 100% unavoidable in most cases.  That’s one of the reasons I’m really ruminating over the feminist ethics of reporting “dissensus” in research.  These unplanned moments are provoked by the research environment or relationships between participants, and thus cannot be accounted for within the ethical models prepared in order to engage in formative research.

So, let’s ask this question: if the power dynamic exists, and is moderately unavoidable (as Powell and Takayoshi demonstrate), was it ethical for Powell to presume upon the “reciprocal” relationship to ask “Andy” permission to publish his remarkably vulnerable and emotional outburst in her auto-ethnography?  Can we imagine, really, that collaborating undergraduates in an ethnographic study would have the wherewithal to resist the hierarchical concern of the mentor-as-researcher and make an informed, ethically-guided decision to allow or disallow its inclusion?

And, if not, Cushman makes a key point here in a growing discourse on feminist epistemology and research disclosure – Powell’s inclusion of this dissensus is a “show”: of how attuned she is to her participants’/students’ needs, how accessible (and accessed) she may be, how compassionately ethical her research protocols are, etc.

So Powell and Takayoshi are wrong? 

I don’t think they’re wrong.  This is why – as a RAD advocate – I want to study these questions.  I think the authors are right when they raise a very valid concern about the ethical structures of research design through the framework of feminist epistemology – and then I think they come to exactly the wrong solution to that concern in the form of disclosure.  As a RAD advocate, this is my position: we study people, as Powell and Takayoshi point out, and that study causes relationships to form.  We study data we gain from studying people, but that does not make our relationships with those people data, nor does it make their relationships with us data.

We should be honest about data.  We should be open about relationships.  We should be careful with people.  But we should not conflate the three, not in our lives, and not in our scholarship.  The authors would doubtless claim that they are doing the opposite, but as an academic equally interested in the history of research ethics and research itself, I’m not so sure that’s true.

What’s the problem with their research, then? 

The human relations of empirical research (at the personal level) inevitably lead to the observer’s paradox.  What the authors propose – and model – doesn’t eliminate the effects of the paradox from the data.  Rather, it accentuates them.  This is “bad research” from a RAD perspective, in that it can yield bad or nonsensical results, and, due to the methodologies used and the lack of rigor, it would be almost impossible to identify the problems inherent to such a study. However – and more importantly – it is poor stewardship of our moral obligations to student/community research participants. In my estimation, making the specific human element the primary interest does three things, one good, one bad, and one mixed:

1.) Makes the work feel more accessible to lay readers (good!),
2.) diverts attention away from the general applicability of the work (bad!), and
3.) makes the researcher, and not the participant or data, “the story” (mixed).

The feminist theory backing this research, and the epistemological assumptions behind that theory, do not privilege RAD methodology, nor do they privilege “traditional” research ethics.  And that’s fine!  But at the end of the day, we need to ask two questions in RAD – “why are we doing research?” and “how does theory change our research when the rubber meets the road?”  This may have value to feminist/gender theory and scholarship that bases its research on the theories inherent to those disciplines.  However, it may not be cross-applicable.

For the RAD researcher, Powell and Takayoshi (and Cushman’s response) raise serious concerns for doing our own research “back home,” and questions of reciprocity need to be brought to the forefront of RAD studies in general English Studies and in pedagogy.  But that doesn’t mean that it would have to take the form that it takes in feminist scholarship – and I doubt, honestly, that it would work if it did.  This is qualitative, non-RAD study, undeniably.  The methodologies these scholars are grappling with are missing a degree of rigor necessary for RAD study; moreover, there is no way to take these purely subjective, impromptu experiences and in any way replicate or aggregate them.

Finally, I’d note that there’s a lot of talk about research “agendas” within this literature, both in the response and the original article.  That’s a language choice that makes me a little jumpy – we should never enter into research with a set agenda that research will “serve.”  Either we are exploring hypotheses, or we are exploring data, but neither should be (optimally) a means to a predetermined end.  That just leads to shoehorning, and the validation of invalid research protocols (i.e., experimental contamination).


Cushman, E., Powell, K., & Takayoshi, P. (2004). Response to “Accepting the Roles Created for Us: The Ethics of Reciprocity.” College Composition and Communication, 56(1), 150-156.

Powell, K. M., & Takayoshi, P. (2003). Accepting Roles Created for Us: The Ethics of Reciprocity. College Composition and Communication, 54(3), 394-422.

PAB #7 – Powell & Takayoshi

Powell, K. M., & Takayoshi, P. (2003). Accepting Roles Created for Us: The Ethics of Reciprocity. College Composition and Communication, 54(3), 394-422.

Katrina M. Powell (Profile)


Pamela Takayoshi (Profile)

“At the heart of calls for reciprocity in research is a recognition/assertion/insistence that research involves building relationships among humans.  At a basic level, research is about understanding other people, their lives, and their experiences.  As researchers, we asked for admittance to our participants’ lives, thoughts, and experiences, and our participants opened their lives to us in sometimes surprisingly intimate detail.  Thinking about our relationship with participants only within the bounds of our study would have limited the relationships we might have built with our participants.  Indeed, as we relate in this article, our experiences suggest that participants’ desires often fell outside the “researcher” roles we had constructed for ourselves when we built our methodologies” (399).


So, my feeling is that the main problem with my argument – that RAD might be seen as its own growing/nascent discipline with foundational literature and scholarship within the field of English Studies – is that some of the definitions are a little… blurry.  Which is to say, last week I talked about OoSs that were largely meta-analytic.  RAD scholars study data, often data they don’t produce; they study the study of that data, and it just gets more complex from there.  When scholars study how we all study, it becomes difficult to extricate their theory and methods from their literature and data.

What this means in application is that some things that may be OoSs are also methods within RAD.  In the same way that – in education – pedagogy may be theory, source, OoS, data, and method, there is significant crossover in RAD.  The desire to know how we research and learn better often treats methodology itself as a legitimate Object of Study.

That said, I don’t want to retread old ground and say nothing new this week, so I’ll be looking elsewhere for answers to the question of what theories are used in the field, and I’ll largely be downplaying “methods” since that already significantly intersected with my OoS findings.

Let me put this in more personal terms.  I don’t – at this point in my academic trajectory – want to be what would traditionally be labelled a “Writing Center Studies scholar.”  I don’t want to be a Technical Communications guru.  I don’t want to be a WPA/FYC wonk.  I don’t want to be a Lacanian post-modern Marxist what-have-you.  I do want to be a researcher and a research advocate.  And I want to be able to use the tools of WCS, TC, WPA, (and, yes, Lacanian Postmodern Marxism) to contribute meaningfully to those discourses while still maintaining an identity as someone who values how we get to truths as much as what those truths are.  I just wanted to be transparent about that up front.

So, in talking about theory, I think it might behoove us to prove I can do just that.  I’m looking at theory and epistemology that I don’t personally use, in contexts I don’t personally study – then I’m talking about whether or not that “works” within the context of my greater RAD argument.

Maybe I’ll prove myself wrong.  After last week, wouldn’t that be ironic?

Let’s do feminism.


Feminism.  Hey, I’m as surprised as anybody, because I thought that feminist theory was about as far from RAD models of research as one could get – seeing as much of feminist epistemology deals with the situated nature of knowledge within research, and works to overcome the limitations of RAD approaches by essentially torpedoing the researcher/subject relationship, “negotiating” protocols, “intervening” in power dynamics of objectivity, etc.  However, it turns out that there’s a huge body of theory on feminist research out there right now – much of it dealing with empiricism, and a good portion addressing the ethical questions at the forefront of the RAD concern today.  And they’re doing some really cool work that pushes the definitions of the field and gets rid of a lot of the cobwebs in the attic of STEM’s influence on humanities-based RAD work.

Article Review – “Accepting Roles Created for Us: The Ethics of Reciprocity”

In their 2003 “Accepting Roles Created for Us: The Ethics of Reciprocity,” authors Powell and Takayoshi investigate how the concept of researcher-subject (or, in the language of feminist epistemology, “participant”) reciprocity functions to create more dynamic research.  From this, they call for the disclosure of what they at points label “moments of dissensus,” or those instances that are counter-indicative of the intent or direction of a study, but equally informative in different ways.  Citing Sue Wilkinson (1992) and Patricia A. Sullivan (1992), who in no small way built the modern field of feminist research, the authors establish that feminist research (especially ethical concerns within feminist studies) is making explicit the implicit design philosophies of research in general: transparent protocols, open communication between subject and researcher, objectivity protocols that are aware of the research power dynamic, awareness of the risk of speaking for “the other” ex officio, maintaining the rights and humanity of the studied and the sanctity of the data, etc. (394-95).

However, the authors have an ethical concern.  Much as the RAD empiricist risks imposing power upon the research relationship, as previously discussed, the feminist researcher risks engaging in what Ellen Cushman refers to as “‘missionary activism’: intervention without invitation [which] slips into paternalistic activism” (395).  By contrast, purely “activist” research sets protocols and expectations of both researcher-“participant” and subject-“participant” and foundationally includes both in the design, execution, collection, and interpretation of research.

That said, Powell and Takayoshi work to distinguish between what might be collaborative – that is to say, within the framework of preexisting power structures but towards a common goal – and what might be defined as “reciprocal,” or serving the specific needs and desires of each party in exchange towards the research process. “Studies can be collaborative without being mutually beneficial,” the authors warn; “researchers can construct methodological frameworks in which knowledge is collaboratively developed […] but when the roles of participants are confined to the research project […] the research relationship may benefit only the researcher, and, thus, not be reciprocal at all” (396).

This is all very terrifying.  This does not sound like the general bailiwick of things that RAD scholars write about.  This sounds, for lack of a better word, very feminist-y. It’s all relational, and relationships, and a bunch of other stuff people who crunch numbers don’t do well.  But, far more importantly, it’s a way of thinking about research that is quite novel, and would need to be modeled and executed to provide guidelines for newly ethical research before it would be useful for all RAD research in the humanities.

An excerpt from the article, showing conversation between the researcher (Powell) and a student, "Andy."
A sample of something that I would not precisely label “ethically empirical” ethnography – (Powell & Takayoshi 409)

The good news is that the authors do exactly that, providing two qualitative studies which experienced these “moments of dissensus” – both of which demonstrate these methods and provide viable, real-world models of the research ethic described – and, more importantly, the pitfalls the researchers fell into in their execution.  I will return to these specific research findings in my paper next week.  For now, suffice it to say that they are… unorthodox.  However, the originating protocols are carefully and well designed.  And they do great work giving the lie to the notion that research – empirical or not – has to be unemotional or disinterested, for better or worse.

Questions for consideration:

This sounds all very risky for the researcher.  It sounds like what the authors are proposing is giving up design control of experiments and studies to non-expert non-researchers.

It is.  And research is inherently risk-averse, in that it attempts to control variables.  I would never say that all empirical RAD research should be reciprocal, nor that feminist epistemology has a role in all research, honestly.  However, as counter-intuitive as this notion is, and as much as it pushes the definitions of empirical study far left (it would not, for instance, score very well on Driscoll’s analytic scale for RAD status – trending towards, essentially, auto-ethnography), it also allows for the discovery of new lines of inquiry and new types of knowledge which may only be located within the participatory model of research outlined.

Isn’t there an entirely separate ethical concern to getting so involved in the private lives and thoughts of research participants? 

Oh, yeah.  I have no idea how you get this kind of thing past an IRB (not because it’s unethical, but because it’s unpredictable to the point where ethics cannot be determined), and that’s part of what my research will be going into next week.  At one point in their research reporting, one of the authors notes that a student was having an emotional breakdown and crying in her office based on personal counseling issues that came up while discussing content from the ethnographic study (a roommate’s suicide).  That’s… let’s just say “not ideal.”  I think this doesn’t provide a perfect model for how we should be compassionate and reciprocal in research design – in fact, it is highly problematic – but I think it does prove that we need to keep thinking about these issues.


Powell, K. M., and P. Takayoshi (2003). Accepting Roles Created for Us: The Ethics of Reciprocity. College Composition and Communication, Vol. 54 No. 3, Feb. 2003, pp. 394-422.

Paper #3 – Objects of Study

A general sense of what the typical RAD researcher hears most days in the modern academy.  Illustration by Peter C. Vey, The New Yorker, May 18, 2009.

“Certainly, he that hath a satirical vein, as he maketh others afraid of his wit, so he had need be afraid of others’ memory. He that questioneth much, shall learn much, and content much; but especially, if he apply his questions to the skill of the persons whom he asketh; for he shall give them occasion, to please themselves in speaking, and himself shall continually gather knowledge. But let his questions not be troublesome; for that is fit for a poser. And let him be sure to leave other men their turns to speak. […] If you dissemble, sometimes, your knowledge of that you are thought to know, you shall be thought, another time, to know that you know not. Speech of a man’s self ought to be seldom, and well chosen. I knew one, was wont to say in scorn, He must needs be a wise man, he speaks so much of himself: and there is but one case, wherein a man may commend himself with good grace; and that is in commending virtue in another; especially if it be such a virtue, whereunto himself pretendeth” – “Of Discourse,” The Essays or Counsels Civil and Moral of Francis Bacon (1601).


In my previous posts, I have discussed the history, major questions, ideologies, and academic concerns raised by empirical/RAD researchers in the various subfields of English Studies.  In addition, I have made an argument for considering RAD research as more than simply a methodology: as a specific field of study in its own right, and as a scholarly ideology which allows for inquiry and discovery not always possible through other practices.

In this post, I will attempt to define what some of the primary objects of study in RAD research are, both in terms of quantitative and qualitative data, and in the description of these Objects, explore their role, their appeal, the challenges of them, and – where applicable – their history.  Following that, I will discuss the collaborative, supportive role of RAD OoSs in modern English Studies.

Objects of Study

What follows is a discussion and definition – in brief – of the various types of Objects of Study typically useful to the RAD researcher in English Studies.

Data in general

It may seem obvious, and it is, but it warrants a brief reiteration: RAD researchers look at data (sometimes even data for its own sake), especially that data which can be replicated and aggregated to provide new insights and confirmation of previous contributions (the “D” in RAD, after all, does stand for “data-supported”).  The focus on data analysis may be specific, as I will note in the following OoSs, but it may also be general, interpreting trends in research throughout the field.

Publications and Publishing Trends

As demonstrated by works from Stephen North, Richard Haswell, Dana Driscoll, Sherry Wynn Perdue, and others, RAD researchers frequently study research trends, including the nature, location, and tenor of publications that either preference RAD research or tend to avoid it.  Richard Haswell, in coordination with Glenn Blalock, was instrumental in the establishment of CompPile, a keyword-searchable bibliographic index of over 100,000 writing studies publications from 1939 to the present, which focuses heavily upon data-driven indexing of research in various English Studies fields.

Part of this interest is certainly self-interest.  If nobody is publishing RAD research, it certainly makes it difficult to maintain an academic career as a RAD researcher.  However, there is also a strong ideological and disciplinary interest in these questions, as the limitation of RAD research likewise limits the type and scope of inquiry possible within the general field of English Studies, and these restrictive publishing realities raise serious questions about accessibility, ethics, and economics of the academy.

Program/Course/Assignment Objectives & Outcomes

Course and assignment objectives and outcomes, especially within the field of WPA, are a primary Object of Study for many RAD researchers.  Because they allow for data-driven quantitative (and often qualitative) evaluation of teaching strategies, technologies, policies, programs, and assessment protocols, student outcomes are often a high-water mark of RAD research – and these outcomes can be measured and reported through many discrete methodologies.

Whether looking at specific grading data to determine changes in student qualification over time relating to stated objectives, or examining set learning outcomes for programs, courses, and assignments in order to assess viability or measurability, RAD researchers tend to preference course outcomes as highly informative, measurable, and (when so designed) objective.

Survey Data

One of the more methodologically-centered Objects of Study, survey data – whether of students, faculty, or the professional and civic communities – provides a meaningful combination of qualitative and quantitative responses.  Surveys have several benefits, including set protocols for determining confidence intervals, selecting representative samples, and reporting results.  They are often flexible, easily anonymized, and typically (especially in the modern digital environment) remarkably low-cost ways to collect data about outcomes, assessment, experiences, skills, and attitudes.  They are also a commonly accepted and generally familiar form of research, which allows RAD researchers to communicate their results not only to academics but also to administrators and the community at large.

Survey data (along with assessment results – see below) has a strong historical foundation in the origins of quantitative research in the humanities.  Experimental protocols can prove costly and prone to significant data management challenges; moreover, because writing comes from people, practically all writing experimentation qualifies as human-subjects research and faces significant challenges, both in terms of ethics generally and of IRB review and approval specifically.  Survey data, by contrast, tends towards simpler anonymization and more straightforwardly ethical application, which has long made it a preferred tool and object of study for the humanities researcher.  Its low costs have similarly long appealed to RAD researchers, who often struggle alongside their English department colleagues to locate funding in the modern STEM-centered academy.
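Since surveys’ appeal rests partly on their “set protocols for determining confidence intervals,” here is a minimal sketch of one such protocol – the normal-approximation confidence interval for a survey proportion.  The function name and the example numbers are my own illustrative assumptions, not drawn from any study cited here.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a survey proportion.

    z = 1.96 corresponds to the conventional 95% confidence level.
    """
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical survey: 120 of 300 students report using the writing center.
low, high = proportion_ci(120, 300)  # margin of error ≈ ±5.5 percentage points
```

Protocols like this are part of what makes survey results aggregable across studies: any reader can recompute the interval from the reported counts.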

Assessment Results and Protocols

Writing skills assessment provides a serious challenge for the RAD researcher.  As numerical, replicable, aggregable data which can be anonymized and compiled, assessment results – in the form of testing outcomes, graded writing, portfolio scoring, and so on – appear at first blush to be the perfect Object of Study for the RAD researcher.  Researchers approaching assessment data in recent decades, however, have been keenly aware of the qualitative challenges of assessment values.  As Cherry and Meyer note in their 1993 “Reliability Issues in Holistic Assessment,” a heavily research-guided analysis of the challenges of unbiased and rational assessment, “like all things human, measurement is not a perfect business,” affected by subjective factors such as assessor biases and instrument (e.g., assignment prompt) quality (29-33).

As such, much research using assessment results is heavily qualified and restricted in terms of confidence, replicability, and applicability.  However, it is also a significant point of research interest, as demonstrated by significant, continuing RAD research about assessment (for examples, see Haswell and Haswell, “Gender Bias”; Freedman; Ball).


Metadata and Meta-Analysis

Perhaps one of the most valuable Objects of Study for RAD researchers (and often the least appreciated by non-RAD colleagues) is metadata and meta-analysis: of the field of English Studies in general through previous literature and research, or of specific subdisciplines, especially composition, technical communication, discourse analysis, and rhetoric.  Metadata may include hundreds of different data sources, such as assessment scores; administrative data including transfer credits, retention rates, and funding; demographic data relating to learning communities and student bodies; technology availability and usage; or academic policies and curricula and their effects on learning outcomes.

Meta-analytic RAD research allows for the compilation and interpretation of a broad spectrum of both RAD and non-RAD qualitative and quantitative findings in order to provide specific interpretations of trends within the fields in question, to make institution- or program-specific research valuable and applicable to other institutions and communities, or to make novel discoveries based upon pre-existing scholarship.  For examples of current meta-analytic research on various fields, see Koster, Tribushinina, de Jong, and Van den Bergh (2015); Clayson (2009); Roksa (2009); and Bangert-Drowns, Hurley, and Wilkinson (2004).

Meta-analysis is not, however, without its inherent pitfalls.  Due to a publication bias towards positive findings, metadata from existing literature tends to skew towards optimistic interpretations of policies and results.  Also, because the originating research was not written with broad applicability to meta-analysis in mind, compiling this data is challenging and prone to bias on the part of even well-intentioned RAD researchers – the ease of shoehorning disparate research and literature into a specific, presupposed result through study selection is significantly pronounced.  Additionally, these selection issues – along with the need to express standard deviations, standard error, and observational error for quantitative values (especially pronounced given the various qualitative methodologies used in English Studies to produce the originating literature) – can lead to the rejection of valuable research in order to maintain good statistical controls.
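The weighting problem described above can be made concrete with a small sketch of the standard inverse-variance (fixed-effect) pooling step.  A study published without a usable standard error simply cannot enter this calculation, which is one way valuable research gets rejected.  The function and the numbers are illustrative assumptions, not data from the meta-analyses cited here.

```python
def fixed_effect_pooled(effects, std_errors):
    """Inverse-variance (fixed-effect) pooled effect size and standard error.

    Each study is weighted by 1/SE^2, so imprecise studies count for less;
    a study reported without a standard error cannot be weighted at all.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies of a writing intervention:
est, se = fixed_effect_pooled([0.30, 0.45, 0.10], [0.10, 0.15, 0.20])
```

Note that nothing in this arithmetic corrects for publication bias: if null results never reach print, the pooled estimate inherits the skew.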


In general, because RAD research originates in general research methodology, and because RAD as a disciplinary approach (or even subdiscipline of writing studies) is founded upon traditional empirical values, these Objects of Study are inherently reflective of the history of empirical research and the scientific method as a whole.  Also, as noted in previous posts, the outsider status of RAD researchers in many literature/rhetoric-focused traditional English departments means that many RAD researchers are essentially interacting with these Objects of Study as outsiders, often from general Education Research programs and education colleges.

What this means, in part, is that the history of RAD research (and general empirical study as its precursor) is not necessarily the history of research within English Studies.  This reflects back to the major questions I discussed previously – the natural track of RAD research is one of intersection with English Studies, rather than one of parallel development.  Thus, we must examine these objects of study and their value to English Studies in terms of how they can support discipline-specific scholarly discourses, and we should advocate the transition of these Objects of Study into the fields and disciplines in question, while promoting the ethical use of their benefits in future studies.


I’ve done a lot of moralizing lately about virtue ethics and epistemology and the nature of truth and a million other things that probably don’t need to be rehashed further.  Instead, I’d like to briefly speak towards the way in which these Objects of Study demonstrate the nuance and complexity of RAD research “as discipline.”

One of the ways we can tell a discipline is valid, vibrant, and productive is by studying the complexities inherent to its practice and attempting to balance the claims of that discipline’s theory with the “boots on the ground” realities of execution.  If this catalog of some of RAD’s Objects of Study (along with my previous posts) demonstrates anything, I hope it is that I am cognizant of the serious ethical, procedural, methodological, and ideological debates happening within RAD studies about best practices and beliefs – and that RAD researchers are aware of these challenges and attempt to address them through discipline-guided discourse and formative scholarship.

It’s for this reason as much as any that I hope RAD research can one day be viewed as a subdiscipline of writing studies.  The reflective nature of scholarly practice is a hallmark of a “true” discipline, and RAD is reflective, discursive, and multi-faceted.  We have certainly granted that designation to areas of study far less disciplined and self-reflective than empirical RAD research.  More importantly, though, legitimizing RAD as “more-than” gives scholars who wish to do this kind of work, but who fear the repercussions of being viewed as positivist by their departments, the opportunity to claim that focus as a facet of their expertise and scholarship in a way that RAD-as-methodology likely never can.

Questions for Consideration

1.) So that’s what RAD does?

Not even close!  If I’d listed all the primary, “methodological” Objects of Study alone that are available to RAD researchers, this would have gone 175,000 words over length instead of just 1750!  Once you count the fact that almost every Object of Study available to any other English Studies scholar is also available for RAD researchers to support, supplement, analyze, problematize, or falsify, the possibilities are effectively infinite!  I’m doing a thing with exclamation points here, I just noticed.  I’ll stop now.

Anyhow, RAD “does” what “needs doing,” and that’s one of the things I love about it.  In the same way that Gender Studies has become almost entirely intersectional at this point, RAD-for-RAD’s-sake has basically vanished as RAD research scholars and specialists find new ways to support their programs and create meaningful, analyzable data for their colleagues.

2.) Can [my Object of Study] be supplemented by RAD?

Well, I don’t know what you’re working on, but I’ll eat my hat if the answer is no!  Ask me in the comments below; let me know what you’re working on and I’d be happy to take a swing at finding a RAD approach to supplement your scholarship.

Consider the incisive, productive guidance that a RAD researcher can provide your scholarship! Illustration by Bruce Eric Kaplan, The New Yorker, August 1, 2005.


Bacon, F. (1601). Of Discourse. Renascence Editions. Accessed Oct 15, 2015.

Ball, A. (1997). Expanding the Dialogue on Culture as a Critical Component When Assessing Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 357-386.

Bangert-Drowns, R., Hurley, M., and Wilkinson, B. (2004). The Effects of School-Based Writing-to-Learn Interventions on Academic Achievement: A Meta-Analysis. Review of Educational Research, Vol. 74 No. 1, 29-58.

Cherry, R. and Meyer, P. (1993). Reliability Issues in Holistic Assessment. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 29-56.

Clayson, D.E. (2009). Student Evaluations of Teaching: Are They Related to What Students Learn?: A Meta-Analysis and Review of the Literature. Journal of Marketing Education, Vol. 31 No. 3, 16-30.

CompPile (2004). Eds. Blalock, G., & Haswell, R. Retrieved October 15, 2015.

Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39 No. 2, 195-205.

Driscoll, D. and S. Perdue (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32 No. 1, 11-39.

Freedman, S. (1981). Influences on Evaluators of Expository Essays: Beyond the Text. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 289-300.

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22 No. 2, 198-223.

Haswell, R. and Haswell, J. (1996). Gender Bias and Critique of Student Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 387-434.

Kaplan, B.E. (2005). If It Made Sense, That Would Be a Very Powerful Idea. The New Yorker, August 1, 2005.

Koster, M., Tribushinina, E., de Jong, P.F., and Van den Bergh, H. (2015). Teaching Children to Write: A Meta-Analysis of Writing Intervention Research. Journal of Writing Research, Vol. 7 No. 2, 249-274.

North, S.M. (1987). The Making of Knowledge in Composition: Portrait of an Emerging Field, Boynton/Cook, Upper Montclair, New Jersey.

Roksa, J. (2009). Building Bridges for Student Success: Are Higher Education Articulation Policies Effective? Teachers College Record, Vol. 111 No. 10, 2444-2478.

Vey, P.C. (2009). Sometimes I think the collaborative process would work better without you. The New Yorker, May 18, 2009, 65.

PAB #6 – Hansen et al.

Hansen, K. et al. (2006). Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students. Research in the Teaching of English, Vol. 40 No. 4, May 2006, pp. 461-501.


“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but rather, ‘hmm… that’s funny…’” – Isaac Asimov

“This lengthy introduction to our study gives a broad context for our question about the equivalence of high school AP course and/or exams and college FYC courses.  If these two paths lead to the same educational outcomes, perhaps our concern that AP students are being deprived of something when they skip FYC is baseless.  If the two paths do not produce the same results, however, then perhaps it is time to question the practice of assuming equivalence between FYC and high school AP courses and/or tests” (473 [emphasis added]).  


For this PAB, I will be briefly investigating the article as an object of study, attempting to recap, review, and categorize its RAD value according to Driscoll and Perdue’s metric for RAD research from their 2012 “Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009” (link).

Object of Study

“Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students,” from Brigham Young University’s Kristine Hansen, Gary L. Hatch, Patricia Esplin, Richard R. Sudweeks, and William S. Bradshaw, in coordination with University of Maryland’s Jennifer Gonzales and University of Washington’s Suzanne Reeves, outlines the methodologies and findings of a study conducted specifically “to inform policy” – both at BYU and in general – regarding FYC exemptions for incoming freshmen based upon performance from AP exam scores (a fairly widely-accepted standard for pass-through for introductory composition courses nation-wide) (461).

After significant theoretical grounding and a description of both BYU’s FYC program and the AP English exams and pass-through/crediting processes, the researchers note that – counterintuitively – students who met AP pass-through standards with the minimum score of 3 were actually 10% less likely to elect to enroll in FYC coursework than higher-scoring 4 and 5 students (approximately 30%, 40%, and 40%, respectively).  Most higher-scoring students who registered for FYC content elected to join honors sections and coursework (468).  Based in part upon this information, protocols were designed to investigate not only AP pass-through vs. FYC-only outcomes, but also to compare both of these results against combined AP/FYC outcomes.  The researchers similarly note that honors coursework specifically may account for a portion of the reported uptick in writing competencies for combination students.  I note this small fact because it indicates adherence to one of the primary standards of consistent, careful RAD methodology: the flagging and signposting of data points and factors which may appear inconsistent for further analysis, consideration, and explanation.

Two separate researchers each holistically scored two pieces of equal length from 497 total students, including a sample set-aside of 182 sophomore-status students, in a history of civilization course, with scoring averages used to measure and index writing competencies against a set scoring rubric (474-76).  Based upon these samples and scores, the researchers made a final determination that FYC and AP students’ performance was comparatively equal, but below expected quality.  In categories of self-efficacy, academic characteristics, and specific essay scoring, FYC-only students scored slightly below their AP-only counterparts.  Combination (AP+FYC) student writers outperformed both groups as well as general course outcome and composition expectations.  However, before drawing conclusions from this data, the authors also carefully studied the variance introduced into the data by raters, grading consistency over time, differences in assignment prompts between sections, and unexplained/undetected sources of variability.  These facets of variation are carefully documented, combinatorially investigated, and represented through a single coefficient of generalizability (0.58, well within the realm of correlation for single-facet protocols) (476-79).
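As a rough illustration of the simplest slice of this reliability work – agreement between two raters’ holistic scores – here is a Pearson correlation sketch.  The scores below are invented for illustration, and the study itself reports a generalizability coefficient, which partitions several variance facets (raters, occasions, prompts) rather than measuring rater agreement alone.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two raters' holistic scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented holistic scores (1-6 scale) from two raters on six essays:
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [3, 3, 4, 2, 5, 3]
r = pearson_r(rater_a, rater_b)  # ≈ 0.74
```

A single correlation like this answers only one question (do the raters rank essays similarly?); the generalizability approach asks how much of the total score variance each facet contributes, which is why it is the more defensible choice for multi-facet protocols like Hansen et al.’s.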

Considering this recognition of limitations, sample size, reporting, theoretical backgrounding, and transparent selection protocols, the study’s methodology was significantly empirical, likely scoring an 11-12 (14 being a “perfect” score) on Driscoll and Perdue’s scoring metric for RAD categorization (see Appendix).  The only likely reductions for the work on Driscoll and Perdue’s scale would relate to (1) limited participant sampling, since the only coursework sampled was situated within three sections of a specific (and primarily elective) course; (2) only a cursory exploration of the representativeness of the selected sample relative to the student body in general (see Fig. 1 and caption); and (3) concerns about the applicability of assessment and discussion, which I will expand upon below.

Hansen, Figure 1
Figure 1 – Sample representation (document Appendix A).  Note that, while the researchers do note the sample mean GPA & ACT scores including standard deviations in comparison to the general first year student body means (implementing careful data representation practices), they do not indicate how representative the disciplinary/college breakdown is of the 497 total sampled students, nor of the 182 sophomore-status set-asides.  Additionally, they do not separate these two sample groups for comparison in their scoring means.  Note these values are discussed elsewhere in the study as individual data points, but are not assembled with or compared to the demographic data.

In the end, for a cursory view of a specific program, the researchers’ methodology and reporting are sound.  Their sample representation is not optimal, but also far superior to most sampling protocols within humanities research (especially ethnographic research, which was a portion of the students’ self-reporting of writing process).  However, there is one moderately damning moment in the document, as noted and emphasized in the pull-quote passage at the beginning of this analysis, and the following:

“The fact that performance of students in the AP/no FYC group was not significantly different from those in the no AP/+FYC group suggests that the outcomes produced by the high school AP experience are roughly parallel to those produced by taking a first-year composition course.  But does this mean that AP test scores should immediately translate into credit hours?  It may not when one considers that the mean scores of both these groups indicated that the students had only ‘limited’ or ‘uneven’ writing skills.  In other words, though the outcomes were fairly similar, the standards achieved were not satisfactory” (484 [emphasis added]).

The researchers provided a clear standard for falsification of their data – a standard that was clearly and unequivocally met.  Although the researchers did their level best to salvage this claim by noting the inherent performance benefit of the AP+FYC group, they had already confounded the results of that group by noting that it was largely comprised of honors students who were exposed to a different and augmented composition curriculum.  Attempts through several sections of the document to reclaim this data as a positive trend, especially given the viable – but weak – coefficient produced by confounding factors, are largely for naught.  Their critique of AP English testing and curriculum as generally focused “on product rather than process” (487-88) is similarly laudable, and no doubt mostly true, but it is neither founded in the research provided nor reflected in the writing skills displayed by AP-only students within their study.  Additionally, their assignment prompts and protocols were not well designed to measure process-based writing, except in the pre-writing process allotted to students; there were no opportunities for feedback, either directive or facilitative, nor were there revision possibilities after initial submission.  Students were given severely limited time to produce the final essay, and several reported that they would have availed themselves of additional resources and feedback had they been given additional time to compose (486).  Perhaps if the researchers’ protocols had been process-centric – given that process is the philosophical basis of their FYC programming – the data would have expressed a more positive trend for BYU-trained student writers over AP pass-through students.  However, we cannot know without replication and revision of these protocols; the data must instead be analyzed as presented.

It must be mentioned that the researchers also discovered through this study, despite its falsification, that students scoring a 3, rather than a 4 or 5, consistently underperformed compared to FYC-only sophomores, forming the basis for their revised recommendation – “namely, that students entering BYU with AP English scores of 3 should not be allowed to bypass the first-year composition requirement because these students may benefit from completing the FYC requirement through an honors or advanced option” (484).  This is an astute recommendation born largely of their failure to find a meaningful performance difference between BYU FYC and AP results.  This embrace of falsification as a route to new insights is not only praiseworthy but one of the great strengths of data-driven RAD research in general, as data often tells us more than we expect.

All in all, this is an excellent study, well reported, and with a strong conclusion that demonstrates how conscientious empirical research – disinterested in proving presupposed notions of pedagogy and theory – can provide new insights, alter scholars’ understanding and appreciation of facets of their teaching and their students’ capabilities, and locate new sites of study through the embracing of “failure.”

Questions for consideration:

Okay, what the heck just happened?

This summary and analysis of the article is, essentially, how I would analytically (with some commentary) apply a notion of use value and RAD quality to an OoS that came in the form of an article.  What you just read wasn’t so much a PAB written about an article, but rather a PAB about a “text,” in the same way you might dissect Pride and Prejudice differently than you would dissect a scholarly article about it.

This is my Pride and Prejudice.

Anyhow, as an OoS, this is really an exploration of a moment in time in a research project, and how that moment is represented through the end document.  At some point, in a room, somebody said to somebody else “Oh, shoot” (they may have substituted a vowel there in “shoot”).  I’m looking at this text, quite simply, as the aftermath of “oh, shoot,” as an example of what happens, and what is possible, when RAD research violates expected results in a big way.

So, what you’re saying is they failed?

Oh, yeah.  They may or may not feel they did, because their research made a formative contribution (which is good!), but by the standards set by many in the academy today (who are less celebratory of the research process as generating rather than confirming knowledge – which is bad!), yes.  They failed.

Sometimes, it’s most daring not to set out where there are no trails, but rather to recapitulate and retread the trails that previously defeated you.  Hansen et al. discovered a newfound value in the most basic literacy components of FYC curriculum, powerful as a supplement to preexisting pedagogical structures, and they did it because they were willing to perform research that violated every presupposition they had.  And when those suppositions were defeated, they didn’t walk away, tempting though it likely was.  They went even deeper into their analysis.  That’s exciting.

Then they went ahead and published that failure after the fact.  Their failure, and their willingness to see that failure through to the end, provides new knowledge and new information for the discourse.  That’s even more exciting.

Failure is so cool.  RAD loves failure.  RAD is cool.

Suddenly RAD is cool?  Like, a week ago, tops, you said “RAD Research isn’t sexy.”

RAD is the cool discipline for cool people who are too cool to let something uncool like success get in their way.  They don’t have time for success.  They’re too busy being cool, and learning from how cool they are, and how cool it is to fail.

I said it, and I’m standing by it, both as a philosophy and an academic position.  RAD is rad.  Not sexy, sure.  But COOL.  One day, I’m going to write and publish an academic article called “COOL RAD FOR COOL KIDS” (all caps, yes) just so I can put it at the top of my CV.


Driscoll, D. and S. Perdue (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32 No. 1, 11-39.

Hansen, K. et al. (2006). Are Advanced Placement English and First-Year College Composition Equivalent? A Comparison of Outcomes in the Writing of Three Groups of Sophomore College Students. Research in the Teaching of English, Vol. 40 No. 4, May 2006, pp. 461-501.


Appendix A: Driscoll & Perdue’s complete RAD Research Rubric from “Theory, Lore, and More” (2012).

PAB #5 – Thompson

Thompson, G. (2014). Moving Online: Changing the Focus of a Writing Center. SiSAL Journal, Vol. 5 No. 2, Jun. 2014, pp. 127-142.

Gene Thompson (link to profile)

 “Overall, the survey data suggested that the peer advising service was not positioned to appeal to students effectively.  To compound the problem, the leader of the writing center was informed that room availability would be severely limited for the writing center in the following semester […] On the other hand, the online lab had been used by nearly a quarter of students in the first cycle.  Accordingly, two primary actions were decided by the BBL team for cycle two: (1) the peer advising service would be suspended due to the room availability issue, and because the considerable use of budget and resources in providing peers in time slots on multiple days of the week was not deemed to be justified by the usage data; and (2) resources would instead be used to expand the center online by retaining one peer, part time, to assist with developing resources for the writing center online.  As a result, the writing center would become only an online ‘lab’ for the second cycle.” (134).  

A (not so) Brief Aside – The Discipline Question

In previous PAB and Paper entries, I have operated on what may seem to some to be a false assumption – that RAD research is not merely a methodology, but should be recognized as a subdiscipline in its own right within composition studies, the equal of WCS, FYC, or WPA.  I recognize this is an inherently contentious statement; even the scholars I have referenced previously have made no such claim.  Research serves the greater purposes of the field.  RAD researchers come from all walks of life and scholarship.  RAD is a production- and product-centric act of scholarship.  These are, as much as anything, the marks of “methodology.”

All of these are true.  They are also true of every other English Studies discipline and subdiscipline, each of which is tortuous, intersectional, and varied – and each of which is, in reality, based around the production and analysis of “texts” of some form or another.  The only argument against RAD specifically being a discipline, then, is that it is a methodology; however, the same can fairly be said of any discipline or ideology.  Nor can RAD be dismissed as purely methodological – as Barton and Haswell have demonstrated quite clearly, the execution of empiricism within the modern academy is also an extraordinarily ideological act.  And it cannot be that RAD studies rather than being studied, since it does both – Driscoll and Perdue, as well as Haswell, have treated RAD as a field of study for some time now.

(As an aside (within my aside), I considered titling this informal introduction “If RAD Ain’t a Discipline, Ain’t Nothin’ Is!”  I thought better of it.  But it’s true.  It’s a branch of knowledge, a field of study, a specialty, and – as my PAB authors have demonstrated – a subject area in which dedicated scholars can frequently and consistently contribute to and problematize a greater, academic discourse.  If those aren’t the marks of a discipline, I’m not sure we can know what words mean.  At which point I can go all Postmodern and just start declaring things disciplines willy-nilly, (which I’m about to do in three paragraphs.))

Point being, these lines are not so clear.  The restricted focus upon the peer writing relationship within Writing Center Studies, for example, is both highly ideological and methodological in its form, whether expressed qualitatively or quantitatively, whether empirically or not.  “Digital [anything]” as a field of study is most definitely both “purely” methodological and highly ideological – and yet recognized absolutely as a legitimate subdiscipline of the field.

If the Digital Humanities have taught us anything, it is that practitioners of a subdiscipline don’t even necessarily need to be aware of its existence to contribute meaningfully to the field.  In fact, typically the most important and foundational texts of disciplinary concerns come into being when disciplines are still nascent, under-defined, or even non-existent.  Who knows?  Maybe in three decades when state colleges are offering PhDs in “RAD Methods and Composition” instead of “Rhetoric and Composition,” I’ll be the first article the first years are required to read (a guy can dream, right?).

And it’s worth noting that the extension of this logic is not purely limited to English Studies, the Humanities, or even academia.  Marxism is certainly discipline, ideology, and methodology – as is Methodism, video game enthusiasm, or crypto-fascism.  The three elements are indissoluble facets of any advocacy and performativity, a “fire triangle” of knowledge.  And practitioners of each may be claimed by the groups in question without any recognition that they would ascribe to that particular “discipline” on their own (as any crypto-fascist would tell you, if you could find one).

All of that brings me to the question of what RAD research does as a discipline, rather than as a methodology.  As a methodology, it provides (as noted by Driscoll, Perdue, and Barton) a measure of the accuracy of our knowledge, a comparable and replicable metric for determining the real state of the academy.  As a discipline, it helps to develop best practices within the informational and economic realities of the modern academic space for students, instructors, content providers and consumers, and institutional administrators.

As an example of this, I’d like to start this PAB sequence by looking at Gene Thompson’s “Moving Online: Changing the Focus of a Writing Center” (2014) as an Object of Study for my own work, and to examine what Objects of Study he engages with in his own research and reporting.

Objects of Study

Thompson’s research in this article is particularly interesting for the specific situated concerns of his study, particularly as it relates to English language writing center services in a purely ESL English writing environment at a private Japanese university within a Bilingual Business (i.e., Japanese and English) program within the department of Global Business.  As noted, this focus on the role of English within the business concerns of students and curriculum leads to ideological clashes regarding the role of the writing center itself, as well as writing in general, within the program – the production/process dichotomy is especially complicated by the nature of the program, business faculty, facilities provided for the work at hand, staffing concerns, and the ideologies inherent to business culture within Japanese academia (128-29).

Thompson’s research relates student access, satisfaction, and outcomes to changes within organizational structure within the writing center, based on both qualitative and quantitative RAD empirical data generated through online surveys combining open-ended and closed-ended questions.  Changes were instituted according to responses for a first “cycle” of development (see Fig. 1), which led to a recurrent second cycle analyzing changes in access, efficacy, and outcomes following reflection, analysis, and the transition from a face-to-face peer-centric model to an online writing lab model.

Figure 1: The Action Research Spiral (130).

Service usage was determined by a first-cycle (semester one) survey.  Quantitative values for closed-end questions are found below in Table 1.

Table 1: First Year Student Responses for Cycle 1 – Quantitative only (132-33).

Analysis resulted in a determination that, given compounding access, budgetary, facilities, and staffing concerns, the writing center would move to an online-only model that focused on process-based writing support while providing a navigable archive of directive resources, retaining one on-site peer to maintain and develop resources for the online writing center.  Additional analysis continued based upon these changes, which I will document further in Paper #3.  Also, notably, the “action research cycles” laid out by Thompson are a continuous, self-revising process of research which are designed to perpetuate for as long as the writing lab exists – and thus are perpetually incomplete, but moving towards a more “perfect” model of a writing laboratory serving student needs.

However, at this time I would like to consider just the Cycle One survey as an Object of Study for my own research.  What fascinates me most about this survey is that even at first blush the responses feel remarkably standard to me, based upon previous Writing Center Studies research.  If I happened to engage with this article with an expectation that the cultural and economic exigencies of a Japanese English-language writing center oriented towards business communication development would yield different results than a traditional FYC-oriented native English-language writing center in terms of student service usage and attitudes, I certainly did not leave satisfied that there is any definite difference between these two student groups in terms of expectation and execution.

Perhaps there is an opportunity to seek out similar usage- and attitudes-oriented surveys for American, native-language student writers, and to compare these sets of results in order to make a more definite finding.  I have a feeling what I’d find would be strikingly (jarringly) similar despite entirely different environs, culture, educational philosophy, and resources.

On a final note, some not-insignificant number of less RAD-oriented scholars might view this data as a “negative result,” which is to say, it indicates no significant deviation from general, non-contingent values for general populations.  Given that interpretation, it would not be a “useful” result.  However, to view this data as a failure for not demonstrating specific differences for Japanese business students would miss the powerful message this data hints at: that students – regardless of language acquisition sequences, purposes, institutional ideologies, funding, facilities, or even educational culture – seem to share similar access and usage concerns which might be addressed through more universal solutions (or even shared resources and solutions, à la the Purdue OWL).

Anyhow, I’m going to talk more about “failure” and negative results, and their implications in RAD as both methodology and ideology, in PAB #6, which will deal with the Brigham Young University Department of English study on differences in student preparedness for college writing depending upon AP pass-through or FYC support.

Questions for consideration:

Boy, I don’t know.  Doesn’t a large amount of RAD research you’ve talked about so far seem very institutional/administrative, rather than student-centered?  Is that the road you want to go down?

If student access to services improved, and student learning outcomes improved along with it, does it matter?

But doesn’t this really feel like Thompson used this research methodology to support what the administration was going to do anyhow (i.e., cut back writing center services)? 

Look, I get it.  The cynic in me (i.e., 90% of my personality) wants to always position myself against the million-dollar-salary presidential fat cats sitting in their tuition-supported mansions, cutting student learning services to bring in more dining options, shinier floors, and petting zoos in order to increase enrollment dollars at the expense of actual, you know, learning.  I think that’s a healthy cynicism (and, frankly, accurate).  And (usually) the default position of folks in the humanities is that any administrative cut to services that provide learning, knowledge, access, etc. is inherently bad.  We’ve been burned a few times, and it’s certainly an understandable response.

But RAD’s real benefit here may be that it has the capability to step beyond the reactionary ideologies of gut-shot anti-authoritarianism and recognize that sometimes – just sometimes – the Office of the President is right, and something just isn’t working.  One of the great things RAD can do is provide provable, demonstrable data – including financial impacts, sure – to demonstrate to the muckity-mucks that services are worth perpetuating in some form and that problems are solvable, when the default position of the administration without that information is likelier than not to be “when in doubt, just get rid of it altogether.”  RAD speaks a language that the boys upstairs understand – and, frankly, one that most liberal arts folks can’t parse that well.  RAD practitioners can act as liaisons between departmental and administrative parties – both within their own institutions at the personal level, and within the academic discourse as a broader field (and, yes, methodology) of research.


Thompson, G. (2014). Moving Online: Changing the Focus of a Writing Center. SiSAL Journal, Vol. 5 No. 2, Jun. 2014 127-142.

Paper #2 – Major Questions: RAD Advocacy and Application, an Optimistic Outlook

“I am not here speaking of probability, but knowledge; and I think not only that it becomes the modesty of philosophy not to pronounce magisterially, where we want that evidence that can produce knowledge; but also, that it is of use to us to discern how far our knowledge does reach; for the state we are at present in, not being that of vision, we must in many things content ourselves with faith and probability: and in the present question, about the Immateriality of the Soul, if our faculties cannot arrive at demonstrative certainty, we need not think it strange.” (Locke, Essay IV iii 4, 26-28).

A Brief Note

Because my arguments in the previous PABs and history paper already dealt with the major questions of empirical RAD research in terms of its interaction with the academy as a whole and its role within scholarly discourse to some degree, I have chosen with this paper to take a slightly different (and possibly slightly off-the-rails) approach and consider two philosophical questions – one overt, and one implied – which are addressed by the scholars I’ve been investigating:

  1. How does the ethic of empiricism/skepticism intersect with and supplement discipline-specific scholarly discourse in other fields of English Studies?
  2. How do we ethically promote the use of empirical methods and ideology in English Studies research?

It is the intention of this paper to explore these questions through the affirmative passages of the more modern articles by Haswell, Driscoll, Perdue, and Barton.

The Major Questions

The Intersection of Disciplines

During my analysis of Barton’s “More Methodological Matters,” I noted her interest in studying the applications of “negative argumentation” in anti-positivist and non-empirical scholarship – those moments in which scholars and theorists validated the interpersonal and interpretive nature of their own research by obviating the value of empirical skepticism.  It’s easy to be pejorative and pessimistic about the treatment of RAD research in the modern humanities.  Haswell’s “NCTE/CCCC’s Recent War on Scholarship” is similarly pessimistic about this relationship between empirical researchers and their colleagues in the field/publishing industry.

However, I’d like to take this time to instead look at the inherent optimism that many scholars express when discussing the ethical value of their own work in order to demonstrate that one of the major questions of the RAD research “movement” is not “how do we end this attack?,” but rather “what value do we provide to disciplinary discourse throughout the field?”

In his introduction, Haswell notes that his mentor and friend Stephen Witte would have opposed the notion that scholarship is warlike, noting that “his belief in scholarship was at root ecumenical” (198).  It was Witte’s belief in this small-c catholic notion of the academy, he notes, that made Written Communication a welcoming and holistic publication.  Haswell shares this aesthetic, even if his interpretation of the current academic climate is less than hopeful; as he notes, “in the postsecondary teaching of written communication, as in every professional field, the value of RAD scholarship is its capacity for growth” (202).  Haswell’s praise for RAD is not merely its usability, nor its accuracy, but its interdisciplinarity—in RAD, we find a practice which supplements the value of “every professional field,” and in doing so contributes to the sum of human knowledge emanating from the academic life.

Barton is more disciplinarily optimistic than her few detractors give her credit for being – and more positively collaborative than I perhaps occasionally give her credit for being.  Although she believes that the field has restricted itself unduly through the lens of the socio-ethical turn to the detriment of its own progress, her first descriptions of the field of composition are that of one founded upon “theoretical and methodological diversity,” a space with no right answers and an ideological pursuit of the best practices from the best minds – and to the best result in the betterment of students’ lives as both thinkers and writers.  Her vision of the field in general is specifically and currently negative, but generally hopeful (399).  Her vision of detached RAD research is even more hopeful, and mirrors Haswell’s: “our field has the potential to bring important research questions like these to the investigations of many different discourses [bringing new] types and areas of research questions” (406-7).  The loss of empiricism, Barton argues, is not the loss of a discourse or discipline, but of an entire avenue of epistemology and knowledge acquisition.  These processes allow ideas to be “strengthened significantly by systematic analysis,” which can provide “significance and force” specifically because empirical findings do not stand on their own, but (without a specific ethic) supplement and support the ethics and inquiry of other discourses, resulting in discourse which is both “richer” and greater than the sum of its parts (408).  It is only when “composition combines its empirical and non-empirical approaches,” Barton argues, that “the field can contribute a full range of ethically-formulated questions, methods, analyses, and interpretations from a truly interdisciplinary methodological repertoire” (410).

RAD Outreach

Driscoll, too, is at her core an advocate for RAD because of the value she sees in its support role to other inquiry in the field.  In many ways she is metaphorically a missionary for empiricism, and the missionary act in itself raises ethical and philosophical questions which must be addressed.  To be a steward of specific values is to proselytize for their propagation, but it is also an act of cultural violence, the possible imposition of the “traditional, imperialistic hegemony” that Williams feared and Barton pushed back against in her article (401).  To profess and proffer personal ideology and values is to inherently devalue the ideology held by the recipient of the missionary act.  This raises a question for me, personally, that I think Driscoll also grapples with: “how do we act as ethical evangelists for RAD empiricism?”  How do we avoid the inherent interpretation of factual, data-driven research as morally and ethically superior to the interpretive, non-empirical findings of our colleagues, and how do we avoid coming across to those colleagues as self-righteous, smug, or even wrong-headed?  Clichés about hammers and nails notwithstanding, RAD advocacy is as much (more!) about collaborative re-education of peers as it is about the initial education of student writers.  Advocates of this form of research must provide both the right tools to support discourse and the right mindset about the value of those tools to others.

Driscoll’s approach to this question is, I think, both novel and correct.  In her “Composition Studies, Professional Writing, and Empirical Research: A Skeptical View,” she stresses three primary aspects of RAD to note in advocacy of its use:

  1. RAD is part of a pluralistic model of research and inquiry; of special note is that RAD support “does not in any way suggest that empirical research is superior to other forms of scholarship” (197),
  2. RAD/Empiricism is segregated from the “qualitative/quantitative” concern; RAD can be both qualitative and quantitative, and best practices dictate one chooses “methods of inquiry based on the context of the research question, not based on a political ideology neither on their […] histories” (198),
  3. and skepticism as both ethic and ideology is a viable context for both understanding and conveying the virtue of empiricism. Indeed, far from making one the arbiter of “truths” through the leveraging of data, “empiricism combined with skepticism paints a complex worldview in which one questions the nature of experience and in which one refrains from claiming to know objective truth” (200).

In conclusion, I would return to the missionary metaphor for the RAD academic.  The possession of what one believes to be a “moral truth” (in this case, the virtues of information access and accuracy) inherently includes an impetus to “spread the Gospel” of that truth.  However, the most socially and ethically conscious missionaries do not withhold the benefits of that truth from non-adherents, never enforce the adoption of that belief through threat or subterfuge, and always endeavor to tie the notion of “gospel truths” to the processes of education and outreach.  To be a truly moral (and effective) advocate for RAD research is not to zealously crusade for the adoption of one’s personal beliefs, but to create and foster an environment in which those beliefs bring about the best of all possible worlds – one where people are free to act, think, and feel as they desire, and what you wish they would do and believe is more apparently true and appealing to their uses, needs, and community.


Let’s take a moment to acknowledge explicitly something that I’ve been building into my arguments for the last five entries: this is not a question only of research protocols and philosophies, or epistemological approaches, but a highly political and ethical positioning.  As I noted in a previous post, “it is difficult to write the ‘history’ of this subdiscipline, because its treatment is unbalanced and its purpose and prestige inelastic since the middle of the twentieth century” (“Paper #1”).  That is to say, “research data is not sexy.”  It lacks the certain seductive allure of modern rhetorical theories which arbitrate meaning through personal and cultural engagement or identity narratives.  It does not contain the nebulous, meandering explorations of the interplay between Man and God found in early Christian philosophy and pedagogy.  It does not provide the nigh-impossible-to-probe depths of meaning and application found in classical, post-modern, and neoclassical rhetoric.  And because it is not sexy, and the modern academy tends towards (and here my cynicism shines through) a virtue ethics model that expresses a self-interest on the part of the instructor, I think my colleagues often lose sight of the value of knowing what, specifically, is happening with our students, our texts, and our language.  I am not purely or primarily a virtue ethicist.  I think that we should in many ways return to a more conservative pedagogy, and a more conservative research methodology for evaluating that pedagogy.  I also think that in many other ways we should not.  Education in the humanities has made massive, progressive leaps in the past decades, but at specific times and sites I believe we have done so at the cost of our own self-ideation and our own student-first values.

Empirical research, with a skeptical focus and an ability to segregate itself from the emotional and intellectual seductions of a specific discourse, provides tools and possibilities far beyond the simple data it creates.  It is the foundation of some of the most lasting and impactful scholarship in the field today, and with good reason.  But perhaps most importantly, it is one of the prime foundations of an academic model that produces earnest students who both become stewards of their own communities while also maintaining their own personal beliefs, ideologies, and identities.

Questions for Consideration

1.) You go on a lot about values and ethics in your analysis so far.  What’s your personal ethic/academic philosophy/etc.?

Here is what I believe, in short, as it pertains to the ethics of RAD and pedagogy:  1) Consequentialism has its weaknesses, but there is a value in results-driven inquiry that ignores or rejects postmodern notions of contingent truths.  2) Deontology is necessary in the pursuit of ethical research.  3) The provision of data is an ideological and interpretive act, but one separate from (and typically ethically superior to) the interpretation of contingent “truth.”  4) Free, open, honest data is a public and personal good, a consequentialist end to the pursuit of knowledge which allows for the development of new goods, both for the consequentialist and other normative ethical academics.  5) We should instill values in the academy, but not ideologies.  The love of learning is a value.  The desire to come as close to truth as we ever may is a value.  The love of the self as central to learning and truth is an ideology.

 2.) If there is an ethical or academic “virtue” to RAD research, shouldn’t that be self-evident?  That is to say, shouldn’t empiricism be surviving just fine without advocacy if it is as useful as these theorists say it is?

There are plenty of social and moral “goods” that do not see themselves fully expressed in the modern academy.  It is only in the last 30 years that we have finally come to anything approximating an (early) appreciation of accessibility issues, for instance.  And the fact that empiricism was once praised and is no longer may not inherently imply that its use value has declined – especially if its decline is a reaction to the temporal and political concerns regarding the corporatization of the academic “industry” (which is an entirely different paper).



Barton, E. (2000). More Methodological Matters: Against Negative Argumentation. College Composition and Communication, Vol. 51 No. 3, Feb. 2000 399-416.

Braddock, R. R. et al. (1963). Research in written composition. Champaign, Ill., National Council of Teachers of English, 1963.

Driscoll, D. Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39 No. 2, 2009 195-205.

Driscoll, D. and S. Perdue (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32 No. 1, 2012 11-39.

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22 No. 2, April 2005 198-223.

Hillocks, G. (1986). Research on Written Composition: New Directions for Teaching. Urbana, Ill.: National Conference on Research in English; ERIC Clearinghouse on Reading and Communication Skills, 1986.

Locke, J. (1689). “Of the Reality of Knowledge” An Essay Concerning Human Understanding.

Nielsen, A. (2015). “Paper #1 – Subdiscipline History: Empirical Research”  Alex C. Nielsen. 17 Sep. 2015.

Schriver, K. A. (1989). Theory Building in Rhetoric and Composition: The Role of Empirical Scholarship. Rhetoric Review, Vol. 7 No. 2, 1989 272-288.

Witte, S. P. and L. Faigley (1983). Evaluating College Writing Programs. Carbondale, Ill.: Southern Illinois University Press, 1983.