PAB #4 – Barton

Barton, E. (2000). More Methodological Matters: Against Negative Argumentation. College Composition and Communication, Vol. 51, No. 3, 399-416.


Ellen Barton (link for CV)

 “The ethics of all research demand that subjects participate with full consent and that researchers present data in its full complexity, and the way that these standards are met by empirical studies needs to be better known in the field of composition. With positive argumentation, ethics could become a common ground between empirical and non-empirical researchers, establishing an area of conflict resolution. With the continuation of negative argumentation, though, ethics will remain a point of contention, needlessly continuing the unproductive conflict over the value of empirical vs. non-empirical methodologies in composition research.  The contact zone between methodologies should no longer remain a war zone, but become a resolution zone, with empirical and non-empirical researchers making positive arguments for their methodological approaches without succumbing to the temptation of throw-away negative arguments, however rhetorically satisfying they may be” (405).  


The Question

In my previous PAB selections, I’ve endeavored to demonstrate moments and fields of study where RAD research and empirical thought in general are not only often dismissed or misunderstood, but at times even attacked by academics whose personal interests and research habits lead them to oppositional views of data-driven, replicable research.  Largely, these arguments from Haswell, Driscoll, and Perdue have been a lead-up to this article by Ellen Barton, which asks, quite simply, two questions: where is non-empirical research taking the field, and what happens if opponents of empiricism do manage to drive RAD research out of the scholarly mainstream?


Summary

At the core of Barton’s “More Methodological Matters” is an exploration of the ethics of research, the ethics of how we talk about research, and the methodologies that best produce ethical research.  Barton fears, at the most basic level, that the “ethical turn” in composition research is a philosophical movement away from the fundamental internal ethics of knowledge and data within the empirical model of research.  In its place, she sees an external ethic defined by the discipline according to teaching and research philosophies that value “participatory and collaborative relationships” between the researcher and participants and the “self-reflexive relationship” of the researcher with his/her own data and analysis (400).

Although this is her underlying fear, her immediate concern is with language implying negative assessment of empirical values within the research protocols of non-empirical research reporting.  Examples given in Barton’s text include authors describing their ethnographies in contrast to the “traditional, imperialistic hegemony” of empiricism (401), as well as various general implications that the humanistic, shared elements of qualitative research are somehow inherently more ethical than the cold, distant pragmatism assumed in RAD/empirical study.

While exploring the potential rift caused by the ethical turn, Barton attempts to establish the risk this turn poses for continued ethical empiricism in research, as systematic knowledge and data are devalued by the mainstream.  In particular, she argues that “negative argumentation” risks permanently limiting the field of compositional and rhetorical knowledge by narrowing the available methodologies, restricting access to purposeful data, and, under the guise of ethics, ignoring ethical protocols (403).

She is, at the core of her argument, “evoking the empirical-non-empirical dichotomy,” which she herself recognizes as “problematic” (412).  However, as she notes, the evocation is necessary, if only in response to the scholars who have already leveraged that dichotomy to argue for the opposite position, including heavy hitters in the field such as Lunsford, Berlin, Berkenkotter, and Dombrowski.

In the end, what Barton is calling for is an ethical diversity: research that is ethical because of its close, collaborative interaction with its subjects (qualitative non-empirical ethnographies, for example), alongside research that is ethical precisely because it maintains its distance and a degree of data hygiene, reducing noise and interference within the data to give the fairest and most accurate representation of interpreted reality.


Questions:

Isn’t this just the other side of the “I don’t understand it, so I hate and fear it” reaction you’ve been rejecting for two weeks?

Yep.  Nobody’s perfect.  Not me, and certainly not Barton.  We’re all human.  I’d like to say she’s more optimistic than Haswell, since she deals less in general doom-and-gloom and more specifically with the threat of negative argumentation in research philosophy statements.  I’d like to say that, but honestly, she may be even more pessimistic than I am.  At the core of this argument, however, Barton is an earnest ethicist in the field.  She’s not necessarily studying the how and why of what is “good” and “bad” but the social influence of specific behaviors—in this case, the negative assessments of empirical value present throughout non-empirical reporting in the discipline.

I thought you just said in your last PAB that “there aren’t any bad guys.”  Don’t academics devaluing others’ valid work for personal gain and professional security qualify as “bad guys?”

Barton has, in her moderately careful, conservative, academic language, certainly located some “bad guys” in her text.  But that doesn’t make her the “good guy,” and I don’t think she’d say it does.  She is roundly within the camp of “just a bunch of guys.”  I’m also not going to subscribe to the notion that those people who reject empirical philosophies of research are bad scholars, because I view them as—at worst—uninformed about the value that empirical thought and protocols can provide the composition researcher.

Nor am I going to vilify even those who outright attack empiricism as positivist malarkey (Burke, Berlin, Mortensen and Kirsch, Williams, etc.) – I can recognize self-perpetuating professional survival instincts in their actions, and I understand that fear.  Burke et al. simply believe there is a finite amount of attention available in the scholarly public for composition research, and they want their ethnographic studies to receive the full attention they believe they deserve.  It’s inherently selfish, but that doesn’t make it inherently evil.

And this brings me circuitously back to the qualitative vs. quantitative (take a shot!) debate.  Opponents of empirical thought don’t want to be labeled as “non-empirical.”  As Barton points out, they instead often embrace the label of “qualitative researchers” in their opposition to empiricism, failing to notice that nothing about RAD empirical methodology excludes either partially or completely qualitative research (410-411).  To Barton, Mortensen and Kirsch, as well as scholars like Williams, are definitely her “bad guys” (More like adversaries?  Rivals?  Antagonists?  Whatever it is, it’s undoubtedly pejorative, but these people aren’t evil.)  But that doesn’t mean she dislikes them.  Instead, she disapproves of the rhetorical moves they make to decentralize forms of inquiry that might falsify their own findings and approaches.  It’s a disingenuous approach to scholarship, but like I said – we’re all human.

Whether naming and targeting specific scholars for stepping on empiricists’ toes is helpful is another question – one I’ll be investigating more fully in a section of my paper on Epistemological Alignment (no. 5).

What does this mean for me as a scholar?

That’s a difficult question to answer.  Look out for examples of negative argumentation, and always be wary of the rhetorical move, I suppose, that posits the unethical nature of non-correlative research because of the ethical nature of correlative research.  Remember that just because we study rhetoric doesn’t mean we don’t use it against each other.  Consider the value of empirical thought and protocols in your own work, and be apprehensive of those folks who would rather see less information than more in the name of ideology or expedience.

Don’t lead us into a dark age of decaying moral truth and research ethics because you have math anxiety, is, I guess, what I’m saying.  Just don’t do that thing.  Don’t turn us into one of the other humanities disciplines that came before us and decided information ethics didn’t matter as much as feel-good ethics (may they all rest in unfunded peace.)


References

Barton, E. (2000). More Methodological Matters: Against Negative Argumentation. College Composition and Communication, Vol. 51, No. 3, 399-416.

Barton, E. (2015). [Personal photograph]. Retrieved from video still, http://clas.wayne.edu/ellen-barton

PAB #3 – Driscoll (Redux) and Perdue

Driscoll, D., & Perdue, S. W. (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32, No. 1, 11-39.


Dana Lynn Driscoll
(link for CV)


Sherry Wynn Perdue
(link for profile)

“Much more work is needed to understand the complex relationships between writing centers’ practitioners and how we produce and discuss our research. We need more research on the education, training, and support writing center directors receive to conduct RAD research. We need to understand the place of tutor-driven research and ask how tutors can contribute to these ongoing conversations about our practices. We also need more research on the research process: How do writing center researchers plan and undertake studies? How is research funded and/or sponsored? Asking and investigating such questions and re-envisioning our relationship to research will help us develop more RAD research-supported practices and move our field into the future.” (36).


A Quick Reintroduction

I’ve decided to return immediately to Driscoll (in collaboration with Sherry Wynn Perdue) for this third PAB entry.  This is not to imply that Driscoll is the only producer of RAD/empirical research in the field, nor its only vocal advocate.  Rather, this is due to her tendency to cut wide swaths in her writing research articles, covering significant ground while surveying the field with both pinpoint accuracy and a broad scope.  (Yikes.  Five clichés in a single sentence.  If I were a student in my own ENG 101 course, I would probably have to shoot me.)

The theme of these PAB entries (nos. 3 and 4) is “Major Questions” of the field, in this case empirical research in composition.  One of the questions which perpetuates itself in almost all RAD advocacy and writing is “where is all the RAD research?”  Frankly, this ends up producing many histories and meta-analyses similar to what I studied in PAB entries 1 and 2.  However, I think this is an important first question to ask, even if it may seem to prevent much immediate progress in terms of the articles selected.  The second question, which I will endeavor to begin investigating in later papers, and which is hinted at by Driscoll and Perdue’s focus on writing center studies here, is “what are the sites of scholarship and pedagogy that are most compatible with RAD research?”


Summary

Much as Driscoll’s “Composition Studies, Professional Writing and Empirical Research: A Skeptical View” did before it, Driscoll and Perdue’s “Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009” investigates the role of RAD research in composition communities and the philosophy behind empirical interpretations of writing outcomes.  However, this article is in many ways much more a reflection of Haswell’s “NCTE/CCCC’s Recent War on Scholarship,” investigating a specific publication (The Writing Center Journal, hereafter referred to as WCJ) over a specific period of time in order to determine the degree to which empiricism is present in the field of writing center studies specifically.  Yet where Haswell’s article was polemical and even pugilistic towards the NCTE at times, Driscoll and Perdue bring a much softer touch to their analysis and a much more hopeful lens for viewing the future of RAD research in writing center studies.

In general, Driscoll and Perdue are trying to do three things with this article, two of which are overt, and one of which is slightly incidental or even covert:

  • Advocate publication of—and provide potential research models of—empirical study in the writing center studies discipline.
  • Address the history of RAD research in writing center studies, both in terms of methodologies and methods of inquiry, throughout the lifetime of WCJ publications.
  • Provide a model through their own article of the relative simplicity or painlessness of converting to a RAD methodology for meta-analysis of writing center studies.

In reviewing every article in WCJ from its inception in 1980 through the end of the study in 2009, the authors identified 91 of a total of 270 articles as data-driven “research articles” (19-20).  Notably, the authors listed their metrics and rubric of determination for RAD status in their unabridged forms, providing a degree of empirical rigor not typical of WCJ articles.  Their methodology alone spans seven pages (17-24).

Unlike Haswell, Driscoll and Perdue praise the WCJ in their results and analysis for publishing a wide variety of articles and contributing to discourse in myriad forms.  They find that a praiseworthy 25.9% of WCJ articles include data-driven research, both RAD and non-RAD.  However, they also note that the significant majority of the research articles present in the publication are either non-replicable or not sufficiently empirical to be considered RAD research—only 15 studies, or 6% of the total articles in the history of the publication (24-26). (See Fig. 1)

Also unlike Haswell, Driscoll and Perdue discover trends that imply that RAD research is on the way up, with both mean and median RAD rubric scores trending upwards over time in the field of writing center studies (26-27).  This is a fascinating finding, because, while it does not obviate Haswell’s original claims regarding the overall treatment of RAD in Rhet/Comp for the same time period, it does demonstrate that RAD methodologies are becoming more and more accepted (and even embraced) in certain discourse and scholarship communities – and especially in research on writing center outcomes. (See Fig. 2)

The authors move through further discussion of methodologies and results in a discussion section that warrants more investigation and overview than I can provide here, and they conclude (see pull quote above) that significant additional work is needed, though there have been marked improvements in the general state of research in the field.

Driscoll/Perdue Fig 1 – Research Published in WCJ: Pie Chart by percentage, RAD (6%), Non-RAD (28%), Non-Research (66%)
Driscoll/Perdue Fig 2 – RAD Research score by Year: scatter chart with trend line – from 1980 to 2009, average RAD score for WCJ research increases from less than 2.0 to 8.0
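To underline just how low the barrier to entry is for this kind of meta-analysis, here is a minimal sketch of the tally-and-trend arithmetic involved.  It is my own toy example, not Driscoll and Perdue’s instrument: the categories, years, and rubric scores below are invented placeholders, and the trend is a plain least-squares slope standing in for whatever statistical procedure the authors actually used.

    # Hypothetical sketch of a Driscoll/Perdue-style tally. All values below are
    # invented placeholders, not the authors' data or rubric.
    from collections import Counter
    import statistics

    # One (category, year, rubric_score) entry per coded article.
    coded_articles = [
        ("RAD", 2008, 8.0),
        ("non-RAD research", 1995, 4.5),
        ("non-research", 1987, None),   # non-research pieces get no rubric score
        ("RAD", 2009, 7.5),
        # ... one entry per article reviewed
    ]

    # Percentage of articles in each category (the Fig. 1-style breakdown).
    totals = Counter(cat for cat, _, _ in coded_articles)
    n = len(coded_articles)
    for cat, count in totals.items():
        print(f"{cat}: {count} articles ({100 * count / n:.1f}%)")

    # Least-squares slope of rubric score over year (the Fig. 2-style trend line).
    scored = [(year, score) for _, year, score in coded_articles if score is not None]
    mean_y = statistics.mean(y for y, _ in scored)
    mean_s = statistics.mean(s for _, s in scored)
    slope = (sum((y - mean_y) * (s - mean_s) for y, s in scored)
             / sum((y - mean_y) ** 2 for y, _ in scored))
    print(f"Rubric-score trend: {slope:+.2f} points per year")

Even something this small would reproduce the general shape of Figs. 1 and 2 from a coded article list, which is rather the point of calling their article a model.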

Analysis

However, the authors also note that, despite these improvements, the vast majority of the more rigorous research in the field is still not being performed by writing center experts in composition, but rather by behavioral and educational psychologists and educational scholars.  In exploring the implications of this, Driscoll and Perdue note that “while one could infer that writing center scholars have abdicated the responsibility to document the efficacy of our practices […] historically, many writing program administrators and writing faculty (particularly in higher education) have been trained in humanistic inquiry with research concepts and methods that differ from those outlined in our RAD Research Rubric” (30).  I think this is a huge point, and reinforces my general conclusion from PABs 1 and 2 and Paper #1: Rhet/Comp and WPA studies are not preparing compositionalists with the tools necessary to meaningfully and fully evaluate outcomes and efficacy, nor with the appreciation of the virtue and value of those tools.  As a result, much of the work of the WPA in terms of programmatic assessment is falling instead to scholars within the various colleges of education, devaluing and decentralizing Rhetoric and Composition as the sole arbiter of WPA programmatic questions.

I think one of the most important things that Driscoll and Perdue are doing by raising these questions, however, is not raising awareness of the deficit in rigorous RAD research – these issues have been discussed and retread, with calls for more data-driven analysis of writing center outcomes spanning the last twenty years.  Rather, I believe a key to the importance of this specific article, at this specific time, in this specific publication, is that it provides a real-world model of how accessible, informative, and viable RAD research can be for WCJ contributors.


Questions:

What does this mean for the Rhet/Comp discipline as a whole?

All in all, I’m optimistic (in this specific case) that writing centers are moving in the best possible direction, both in terms of providing best practices and best value to students, and in terms of demonstrating their own value to the increasingly corporatized (satanic?) higher administrative offices of the modern university.

In my humble opinion, writing centers and WPA, as the “boots on the ground” of Rhet/Comp, have frequently dragged the “rhet” folks kicking and screaming into the modern era (see: Computers and Writing).  I’m hopeful this means, with the continued support (and haranguing) by dedicated RAD scholars like Driscoll and Perdue (and Haswell), further resources and models will be made available in order to accentuate the importance of and facilitate the production of empirical research protocols.

There, I was optimistic for once, in my own cynical way.  I hope you’re all happy.

At what point is it enough RAD research?

You know, I’ve spent the last two weeks or so ruminating on this.  I don’t think it’s possible to say “all research, if data-driven, should be RAD.”  I think that the answer is one we’ll come to in time, when we realize that we’ve hit a saturation point, and RAD research is utilized as often as possible to address demonstrable and replicable claims.

Nor do I think it’s in any way productive to get into the “qualitative vs. quantitative” squabble.  Both have value.  I considered making my “major questions” the qualitative vs. quantitative concern before I remembered two things:

  1. I don’t care, I’m just happy people are doing research.
  2. My god, the people having that fight are just… mean.

When it comes to qualitative vs. quantitative concerns, or RAD vs. Non-RAD research, or just academic squabbling in general, I often think back on a very important line of dialogue from Zero Effect (1998): “there aren’t any good guys.  You realize that, don’t you?  I mean, you realize there aren’t evil guys and innocent guys.  It’s just, it’s just… It’s just a bunch of guys.”

It’s easy to assume that because you are advocating for one methodology, it’s better than other methodologies.  But it’s not.  It’s just better at some things, and worse at others.

What should this mean for scholarly and course outcomes?

I’m honestly still working on tying the research concern back to the classroom (and alternate sites of pedagogy.)  At the end of the day, you have to understand something to improve it, and even though RAD research brings a lot to the table, it also isn’t capable of solving things overnight.  The biggest desired personal outcome for now is for me to influence my colleagues and future scholars to appreciate the role that statistical, replicable, empirical research can have in their development as thinkers, teachers, and administrators.

Also, I need to be clearer about the difference between data-driven and RAD/empirical categories, since I’m still having trouble articulating to some classmates the significance of replicability and falsifiability as research essentials.

References

Driscoll, D. (2015). [Personal photograph]. Retrieved from Academia.edu: http://oakland.academia.edu/DanaDriscoll

Driscoll, D., & Perdue, S. W. (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32, No. 1, 11-39.

Perdue, S. (2015). [Personal photograph]. Retrieved from https://sites.google.com/a/oakland.edu/write-space-resources/home/about-sherry-wynn-perdue

Paper #1 – Subdiscipline History: Empirical Research

The Virtue of Inquiry: recent history of empirical study
and the Rhet/Comp theory wars

“All empirical work is a subjective and social act, influenced by particular communities’ belief systems, work agendas, and assumptions about what is important to study […] Like other scholars in English studies, empirical scholars make subjective decisions about what is interesting to study, what evidence may be appropriate (or appropriated), how evidence may be evaluated, and what inferences to draw from evidence. […] Indeed, the discursive practices of empirical rhetoric position us in relation to other inquiry in the field. Moreover, empirical work is a complex rhetorical act in that we use evidence to convince each other of the plausibility of assertions about the experience” (Schriver 273).


 

A Question of Context

Depending on how one looks at it, the history of research empiricism in rhetoric and composition studies is either as old as (or older than) the field itself, or a recent development stretching back only a few scant decades.  This is in part because much of the data utilized in Rhet/Comp studies is functionally better described as belonging to generalized pedagogical theory and behavioral studies than to composition specifically; while empirical research protocols have held sway at various points throughout the twentieth century, much of the early empirical (positivist) work was designed for and applied across multiple fields of pedagogy and sociology, only to be accepted if and when it supported the generally preferred paradigms of current-traditional rhetoric (see Becker, 1958, Kaplan, 1964, Popper, 1959, etc.).  However, the vast majority of writings within the discipline on the issue came about as part of a general push towards empirical analysis and modeling during the mid-1970s to late-1980s (Schriver, 1989).

While there is a long history of specific empirical texts in the field, I am more intrigued by an interpretation of the historical influence of traditional empirical research in Rhet/Comp which preferences two texts, separated by two decades, as the foundations of a renewed empirical movement: the NCTE committee report Research in Written Composition (1963), by Braddock, Lloyd-Jones, and Schoer of The University of Iowa, and George Hillocks Jr.’s Research on Written Composition: New Directions for Teaching (1986).  Hillocks, writing out of the University of Chicago, referenced Braddock et al. heavily and catalogued the progress of empirical study in the years between the two.  Building upon the research protocols posited in Research in Written Composition, Hillocks created a text that stood as a monolithic body of evidence in support of empirical models of pedagogical study at the beginning of what would come to be known as the “Theory Wars” of the late ’80s and early ’90s (Olson, 3), a period during which cross-disciplinarity and intersectionality came to the forefront of a push for more solidified intellectual discourse, one that in many ways situated the compositional elements of Rhet/Comp squarely within the rhetorical concern.

In this expressionistic, ambiguous, postmodern, and social-epistemic environment, criticisms quickly arose against Hillocks’ indexing of formative research in the field, excluding as it did “research dealing only with oral language and pieces which were essentially anecdotal, hortatory, historical, curricular, or literary” (xviii).  However, empirical research rose to its station in the discipline largely to fill a vacuum, and many significant scholars came to support the subdiscipline’s contributions to the field as a whole, including expressionist pedagogues like Lil Brannon and C. H. Knoblauch, process theorists such as Lester Faigley, and many of the most influential scholars of the ’80s and ’90s.  Empirical, replicable, accessible research on student skills, as well as on the relationships between curricular and labor questions and course and programmatic outcomes, reconnected teachers to proven, current, best practices, and connected and validated the work of English departments and composition programs to the university as a whole.


“Hearts and Minds” – (un)structured opposition to RAD

However, the history of empirical research is not the history of a subdiscipline, but rather a cautionary tale of the wholesale rejection of a subdiscipline.  Inspired by Social-Epistemic ideologies and the “social turn” within the field in general while being bolstered by the strong positioning of Kenneth Burke’s earlier opposition to positivist empiricism, several prominent theorists and practitioners have argued against all empirical research as inherently produced by positivist and prescriptive thought—a position largely held because many postmodernist theorists have never been formally trained in empirical research philosophies and protocols.

Even when histories of research in the field of Rhetoric and Composition are provided, empiricists are usually nowhere to be found despite towering contributions to the general body of knowledge within the field.  When they are recognized, it seems to be lip service at best: McComiskey notes that “prominent among these modes of inquiry [within the field of rhetoric and composition] have been historical studies; theory building; empirical research (from qualitative studies like ethnographies to quantitative studies like experiments and meta-analyses); discourse analysis and interpretive studies; feminist and teacher research; and postmodern investigations” (132).  And yet significant bodies of research indicate that empirical research is not embraced within the discipline (and sees less and less support over time in preference to more subjective forms of study).  Richard Haswell’s “NCTE/CCCC’s Recent War on Scholarship” (2005) and Dana Lynn Driscoll’s “Composition Studies, Professional Writing, and Empirical Research: A Skeptical View” (2009), for instance, demonstrate that replicable empirical research has seen less and less peer-reviewed publication through NCTE and RSA since the 1980s.  In 1998, the lowest year on record for research publication at the time of Haswell’s writing, the NCTE compiled RAD entries at just 5.3% of the high-water mark reached since the publication of Hillocks’ Research on Written Composition in 1986.  Meanwhile, the total amount of published RAD research in the field (as compiled on CompPile) increased by 35.4% over the same period.

Figure 1: Comparison of CCCC bibliographic listings vs. CompPile listings for the same period (Haswell 216).

“CCCC’s Research Initiative speaks to our belief that bold, creative research furthers the organization’s mission to become a clear, trusted public voice on issues of writing and writing instruction. That voice has never been more needed as policy makers take up questions related to writing instruction and writers” (CCCC Research Initiative, 2015).

Clearly, many scholars see an inherent value in RAD research and empirical values in Rhet/Comp.  Equally clearly, many publishers, editors, and leaders in the field see past that value and—either consciously or through repeated accidental exclusion—neglect to allow spaces for this research to flourish and reach the audiences of theorists, practitioners, and administrators who need this information most.  At the same time that the CCCC continues to fail to meaningfully publish empirical research, it also continues to fund further RAD research in an acknowledgment of its necessity in the field.  It is difficult to write the “history” of this subdiscipline because its treatment has been unbalanced and its purpose and prestige inelastic since the middle of the twentieth century: empirical research still aspires to provide the best, most complete, and most informative data and analysis in forms that meet standards of justifiability, basic impartiality, replicability, honesty, careful and considered control, and experimental hygiene, and it is persistently underrepresented regardless of content or contribution.  The field as a whole continues, now more than ever, to either reject or ignore the standards of evidence set out by empiricists as further proof of the prescriptive positivism overruled and opposed by Burke and others.


An Advocacy of Rigor

One of my goals/outcomes for this course is to gain a stronger understanding of the role that research can have in the English studies field as a whole, and to work to become an effective advocate for RAD research protocols to my colleagues.  I think it is important, in achieving this, to recognize that the field is highly polarized, highly politicized, and at times plagued by ideological infighting—and that these challenges more often than not concern not how we engage students, but how we engage with each other and our institutions through knowledge and the sharing of information.

I think one of the reasons I most love empiricism as an academic research value and model is that it is highly democratized and much more interdisciplinary than people believe—demonstrable, replicable data knows no sex, no class, no prestige from tenure, institution, or even degree.  Honest-to-goodness data can be philosophical in its collection, even ideological in its presentation, but at the core of the data itself, truth is both universalized and devoid of creed and dogma. In many ways, research empiricism is a remarkably conservative philosophy within the discipline, demanding a degree of rigorous accountability which is decentralized in other philosophies; however, it is not the purview of the elitist or the ideologue—empiricists have come from every school of pedagogical thought, from the Department of Education, from Harvard, from two-year technical schools, and from middle-school English departments.  A respect for data, knowledge, information, truth—these principles, these values, can belong to everyone.



Questions for Consideration

1.) What role does philosophical empiricism play in modern understandings of empirical research?

I think one of the key challenges to achieving more widespread consideration of empirical values is to separate the notion of RAD/empirical research from the philosophy of empiricism as epistemology – a conflation frequently made by opponents of the form (such as Kenneth Burke in his A Grammar of Motives).  How the differences can be clearly delineated for lay researchers is an interesting and challenging question in the field.

2.) To what degree have stagnating publishing rates for empirical research in the publications of major Rhet/Comp professional organizations become a self-perpetuating concern?

I think this is also a huge concern in the field today: empirical research is not published because scholars are not well versed in RAD protocols. As a result, the scholars in question continue to be under-prepared to deal with, synthesize, and appreciate RAD data.

3.) What can we do as scholars (especially humanities scholars) to advocate for empirical values in a discipline and field that often preference anecdotes and a priori knowledge above demonstrable data?

I’m not sure I have an answer, but part of it is certainly to challenge even the most keystone assumptions about the purpose of the field as an agent of social and political change.  Data (and by extension those who obtain it or seek it out) should aspire to be as apolitical as possible.  As long as subjective truths are preferred over demonstrable objective knowledge, the field will never progress on the empirical research question.

REFERENCES

Braddock, R., Lloyd-Jones, R., & Schoer, L. (1963). Research in Written Composition. Champaign, IL: National Council of Teachers of English.

CCCC Research Initiative. (2015, August 18). Retrieved September 17, 2015.

Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39, No. 2, 195-205.

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22, No. 2, 198-223.

Hillocks, G. (1986). Research on Written Composition: New Directions for Teaching. Urbana, IL: ERIC Clearinghouse on Reading and Communication Skills / National Conference on Research in English.

McComiskey, B. (2006). English Studies: An Introduction to the Discipline(s). Urbana, IL: National Council of Teachers of English.

Olson, G. (2000). The Death of Composition as an Intellectual Discipline. Composition Studies, Vol. 28, No. 2, 33-41.

Schriver, K. A. (1989). Theory Building in Rhetoric and Composition: The Role of Empirical Scholarship. Rhetoric Review, Vol. 7, No. 2, 272-288.

Witte, S. P., & Faigley, L. (1983). Evaluating College Writing Programs. Conference on College Composition and Communication.

 

PAB #1 – Haswell

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22, No. 2, 198-223.

Richard H. Haswell
(link for CV and CompPile listings)

“They have been at scholarship for a long time.  Only in the past two decades have they been at war with it.  It might be more accurate to say that they have been at war with part of it, but if that part turns out to be vital to the whole, then with its defeat falls the whole.  The scholarship these organizations target goes by different names: empirical inquiry, laboratory studies, data gathering, experimental investigation, formal research, hard research, and sometimes just research” (200).


Summary

Richard Haswell’s (2005) “NCTE/CCCC’s Recent War on Scholarship” explores the history of support (and lack thereof) for RAD (Replicable, Aggregable, and Data-supported) research and empirical study in the two largest professional organizations for composition pedagogy, the National Council of Teachers of English and the Conference on College Composition and Communication.  The highly controversial article earned Haswell both scorn from many of his colleagues in the NCTE and a nomination for the CCCC Exemplar Award.

Calling on his audience to recall the ideology of Stephen Witte, founder of Written Communication, who had passed away in the previous year, Haswell notes that the giants of the field of composition had long appreciated the value of empirical knowledge—and had, when necessary, defended it and its practitioners as Witte did for Hillocks’ 1986 “Synthesis of Research on Teaching Writing.”

Leaning on this traditional support for empirical research from established scholars, Haswell proceeds to establish a present schism in empirical scholarship, one he argues has been decades in the making (Witte had hoped, on Hillocks’ behalf, that within twenty years he would witness “the marriage of discovery and validation in composition” (199); at Haswell’s writing, eighteen years later, this was not the case).  He likens the “war” of his title between the professional organizations and data-centric researchers to a “silent, internecine, self destructive [one], for instance the body’s attack against its own immune system.”

Following this brief history and association of his own argument with well-respected theorists and researchers within the organizations in question, Haswell argues that he doesn’t need to defend the value such research provides to composition professionals, pointing out that “a method of scholarship under attack by one academic discipline in the United States but currently healthy and supported by every other academic discipline in the world does not need defending” (200).

Following this introduction and positioning statement, Haswell proceeds to document the forms and functions of research in postsecondary writing instruction for the previous fifty years.  In part, his stated goal in doing so is to establish that empiricism was widely accepted and sponsored by the aforementioned professional organizations, only to later become “radically unsponsored.”  He limits his historical analysis to only those presentations and articles which fit the definition of RAD scholarship, using two specific bodies of research as case studies to help restrict, define, and map this form of scholarship, considering facets of RAD including sampling, recording, data analysis, as well as demonstrating what replicability does and does not look like in the field of composition theory.

Based on this refined definition and understanding, the author then proceeds to analyze the ways in which NCTE/CCCC has begun the process of excluding and isolating RAD research from the early, voguish social-epistemic thinkpieces that had begun to flood the field in 1972, creating categories of segregation including endurability and ephemerality, an entirely subjective (and prognosticatory) standard that disallows almost any analytic research based upon specific data values—as well as any data-based research which has not yet been replicated by independent scholars.  Haswell concludes this section of his article and argument by demonstrating that the standards set by the NCTE allow the NCTE/CCCC to claim a (comparatively paltry) percentage of their supported scholarship to be RAD-based—but the truth is that even this pittance is an over-representation of the real state of research in the field of composition.

Figure 1: Haswell compares truly “RAD” research with non-RAD research protocols that may ignore questions of replicability and aggregation.

Haswell concludes the article by tracking publishing trends within and without the NCTE/CCCC publications (College English, College Composition and Communication, and Research in the Teaching of English) for specific, typical forms of RAD research, including: bibliographic RAD studies of research papers, RAD analysis of skill gains from writing course participation, analytic commentary on results from peer writing critique, and bibliographic analytics.  Based on his mapping of the decline in RAD scholarship support, the author concludes from his research that “the overarching pattern in all of this is a severe decline in NCTE/CCCC’s support of data-supported, aggregable research into their own professional topics during the past two decades [~1980-2005]. But […] there are other patterns, perhaps more disturbing.  The most obvious is that the decline is not paralleled in other academic disciplines, even elsewhere in the social sciences. Rather, it is the opposite; for RAD, research publication in three of the activities close to the heart of college composition instruction—the research paper, course gain in writing skill, and peer evaluation—has continued to grow everywhere else. For 25 years now […] the theoretical scholars have argued that such research is outmoded. A look at the numbers asks for whom are the theorists speaking” (215).

Figure 2: Haswell maps the decline in CCCC publication of research bibliographic entries against the consistent growth in the same on CompPile.


Analysis and Questions

I’ll provide more analysis of my personal engagement with this text in my PAB post for the second, responding article from Dr. Driscoll.  That said, I think that Dr. Haswell raises a very important alarm in this article, and I’m saddened to say I don’t think it’s one that many people in the field in general have yet heeded.  The Technical Writing and Communication crowd have certainly latched onto many of Haswell’s arguments–but it’s safe to say that they were predominantly in his camp to begin with.  Fortunately, bibliographic research shows that Haswell’s “Recent War” is more oft-cited than many of his opponents, but those citing him are already converts.  Haswell doesn’t often appear alongside more popular fare by Berlin, Murray, Emig, and Lunsford in pedagogical texts, nor in disciplinary anthologies.  His article is found in research manuals, in TW-specific histories, in specialist texts much more frequently than in general disciplinary surveys.  His argument, while profound, is as much victim to the erasure of the academy as that research which he praises is.  He’s not searching for paradigm-shifting realizations that order universal chaos and throw the humdrum of the academy into turmoil.  He is, in other words, “not sexy.”

When I think about the theorists who worked so hard to redefine and refine the field in the early years, I feel like they couldn’t have wanted this for the discipline–I can’t imagine why any scholar would oppose more or better knowledge in their field.  I feel that scholars have a moral obligation to the truth, and as a graduate student I count myself in that obligation.  I don’t know everything, but (to horribly misquote Plato) at least I know that not knowing is a step in the process towards knowledge.  To reject empirical research because one fears the creep of positivism into one’s personal expressionist ideology seems no different (to me) than someone refusing to read new theory because they finally got a grasp on the old one.

I love data.  I have to love data, because otherwise how can I instill a love of data in my students?  I respect any argument for more information.  And I will never be able to understand the argument against providing that data.  Paper is cheap.  Ink is cheap.  Let’s have more knowledge.  I have a subscription to College Writing.  80% of what makes its pages is… fine.  But it’s not frequently contributing to any great confirmation or upset of knowledge or paradigms.  I would say the same of College Composition and Communication.  Pulling a random issue from the shelf (Sept. 2014 – “Locations of Writing”) I find 19 articles.  Two are (in any way) empirical (Rueker and Miller).  I can’t think of a special topic more inviting of empirical data, or less inviting to vignettes, than the analysis of the physical locations and conditions under which students write.  And yet, there it is.  Two articles with empirical data, and ten vignettes that don’t even include a single footnote, illustration, citation, or bibliographic entry.  Half aren’t even about college writing.  That was literally the first publication I had within reach.  I’m sure I could replicate the experiment ten dozen times with the same results in my apartment library alone.
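If I ever did formalize that shelf experiment, the bookkeeping would be trivial.  A minimal sketch follows; the only real count in it is the 2-of-19 tally from the September 2014 issue, while the other issue tallies are invented placeholders, and the rough normal-approximation interval stands in for anything more careful.

    # Hypothetical sketch of the "shelf experiment": tally empirical articles per
    # issue and estimate the overall proportion. Only the first tuple reflects a
    # real count (2 of 19 in the Sept. 2014 CCC issue); the rest are placeholders.
    import math

    issues = [
        ("CCC 66.1 (Sept. 2014)", 2, 19),   # (issue, empirical articles, total articles)
        ("hypothetical issue A", 1, 17),
        ("hypothetical issue B", 3, 20),
    ]

    empirical = sum(e for _, e, _ in issues)
    total = sum(t for _, _, t in issues)
    p = empirical / total

    # Rough 95% confidence interval (normal approximation) for the proportion.
    margin = 1.96 * math.sqrt(p * (1 - p) / total)
    print(f"{empirical}/{total} articles empirical = {p:.1%} (+/- {margin:.1%})")

Even a back-of-the-envelope interval like that would be more evidence than the vignettes themselves offer, which is roughly my complaint.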

Is this productive?

I can’t see how it is.  I mean, ten more scholars get a CV entry in CCC.  There’s a utility value there, I suppose.  I can’t imagine anybody read a self-indulgent vignette about the feeling of isolation the author senses writing in a crowd while paying too much for coffee and staying too long for free WiFi and thought “finally, the breakthrough that will let me reach out to my students and make them see the value of careful, academic discourse.”  I can’t imagine too many people read the self-indulgent vignette, period, but that’s another question in academic publishing entirely.

Is the field doomed?  

I can’t say that I know the answer to that question, but I know the answer is not “absolutely not.”  I know in a general sense what eventually happened to other humanities disciplines that walked away from formative research to engage in personal narratives as paramount to knowledge and the interpretation of disciplinary questions.  We’ve seen the declines in African American Studies and Women’s Studies departments following this model. It’s not a comfortable future, probably, at least for a while.

What are we doing here? 

Sometimes I wonder.  I wasn’t planning to go to CCCC 2016 this year because I was feeling burnt out by the absence of (or at least difficulty locating) any scholarship that felt real or substantial.  Last year I presented on a panel with three other scholars, dealing with questions of technical writing and textual production.  I presented on how print methodologies changed distribution and production of activist texts in inner-city print cooperatives.  I’d say of the four presentations, one was less empirical than my own, the other two significantly more so.  All said, it was a very data- and artifact-driven panel.  Three conference attendees showed up.  Even our moderator stepped out early.  I’ll be the first to admit that I’m bitter.  I’ll be the first to label myself a cynic.  I’ll be the first to point out that I’m a hypocrite, too–my proposal for this year was accepted; I’m going.  My new presentation’s not the slightest bit empirical.

I’m supposed to be writing about personal course outcomes.  Maybe I’m looking for proof that I’m not circling an intellectual drain.  Maybe I’m looking for proof that we all aren’t.

This was all so much easier for me when it was literature – at least literature is “just cultural,” admittedly canonically segregated.  Compositionalists are playing with other people’s core epistemological assumptions at the most formative periods in their lives – within writing, the most personal act they’ll ever engage in at the university – and the stakes are impossibly high.  Who wouldn’t want to know–not feel, or think, but know–as much as possible before engaging in that?  This is the language arts equivalent of brain surgery.  We’re tinkering with people’s minds.  I don’t think enough people pay attention to the moral imperative that entails, not only in the classroom, but in the discipline’s most essential scholarship.

Let’s call this the “Crisis-of-Faith PAB” and never speak of it again.  I don’t like history.  I especially don’t like history when it’s still happening.  I do like Haswell – he seems like a truly sincere scholar.

References

Locations of Writing. (2014). College Composition and Communication, Vol. 66, No. 1. Print.

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22, No. 2, 198-223.

Haswell, R. (2015). [Personal photograph]. Retrieved from CompPile. http://comppile.org/haswell/vita.htm

PAB #2 – Driscoll

Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39, No. 2, 195-205.

Dana Lynn Driscoll
(link for CV)

“Both professional writing and the larger field of rhetoric and composition have sustained a complex and troubled relationship with empirical research.  Since the field’s inception, the question of the place of empirical research as a whole and what specific types of methods (qualitative, quantitative) are acceptable has generated a significant amount of controversy. While Haswell has provided evidence as to what has been occurring within the field, he has not provided a consideration of why it has occurred or what the solution may be” (199).


Summary

Dana Lynn Driscoll’s (2009) “Composition Studies, Professional Writing and Empirical Research: A Skeptical View” picks up where Richard Haswell’s analysis in his 2005 “NCTE/CCCC’s Recent War on Scholarship” leaves off, while attempting to synthesize the benefits of RAD-based research with the current social constructivist ideology of contemporary Rhet/Comp scholarship.  Whereas Haswell’s scholarship worked to establish the fact of the NCTE/CCCC’s opposition to and segregation of meaningful, data-driven analysis and empirical research, Driscoll endeavors to position Haswell’s claims within the politics and ideology of the field in order to give a sense of why social constructivists (either intentionally or unintentionally) oppose and segregate RAD research from the emotional, anecdotal, and naturalistic publications at the forefront of the modern discipline.  Based upon this sense of where opposition to empiricism comes from within the discipline, Driscoll then attempts to resolve the apparent conflict through the ideal of empirical skepticism of both ancient and modern philosophy, arguing that this skeptical approach can demonstrate RAD’s value to some of the more subjective and personalized researchers of the modern NCTE/CCCC.

A large part of Driscoll’s argument stems from the fact that many modern scholars in Rhet/Comp simply lack the context to objectively define “empiricism” or understand its role in research—either because they were never trained in research philosophy or protocols, or because they do not consider empirical research to be “a means of making knowledge” (197).  Additionally, Driscoll’s argument hinges on establishing—both for her claims and for Haswell’s—that the argument is not for empirical research as a superior form of scholarship, but rather for the establishment of a pluralist model of scholarship that is capable of embracing empirical replicability as a valid and necessary facet of meaningful research.

Additionally, Driscoll argues that a subset of the ideological schism is the debate over the qualitative and quantitative inquiry, which divides even the opponents of “the influence of positivism,” some of whom claim that all empirical inquiry is positivistic, while others claim that only quantitative analysis is problematic.

In the end, Driscoll presents the epistemology of the skeptical school of antiquity as a scaffold for understanding and synthesizing the values of the naturalists and the empiricists, arguing that “although few composition researchers can ever claim to reach a state of quietude, research can promote a culture of skepticism about all of inquiry and scholarship, leading to the ability to critically question and reflect on our field’s knowledge base and assumptions” (200).  She provides a four-point model for applying skeptic ideals to the process of Rhet/Comp research:

  1. Skeptical researchers should be skeptical of everything, including their own findings.
  2. Empirical research never claims to prove but rather to provide evidence.
  3. Empirical research does not build itself upon that which has been assumed but rather that which has evidence.
  4. Empirical researchers are interested in gathering evidence from as many sources as possible—and hence, do not privilege any one data collection method.

Analysis and Questions

I remember Driscoll’s article being the first article to ever truly draw me into Rhet/Comp research as a field of interest.  Her response to Haswell is reasoned, equally empirical (though far more qualitative), and avoids either the alarm-sounding or the hand-wringing of most of the disciplinary navel-gazing for which the humanities in general (and English and Philosophy scholars in particular) are so renowned.

In the end, reading Driscoll, and reading Haswell as her primary text, I can’t stop thinking about Gary Olson.  At CCCC 2000 in Minneapolis, Olson famously announced the “death of composition as an intellectual discipline,” condemning “political-professional careerism” and predicting a “new theory war” to complement the “theory wars” of the past decade.  I was enamored when I first heard about his speech, assuming it would be a brilliant and incendiary screed against disciplinary in-fighting and shoddy, intentionalist, subjective scholarship. Imagine my horror when I realized the disingenuity at the core of his claims—what Olson really desires in his CCCC address is to place himself on the winning side of a battle over reigning hegemony, and he calls for adversarialism as a virtue.  He has no interest in defusing segregation, or in promoting more accurate scholarship, only in making sure that his restroom facilities are cleaner, his space on the bus more convenient, his friends treated with the respect they deserve as they trod over the withered corpses of their ideological (and research) adversaries.  The “death of composition” is not an event that came and went years ago with the death of meaningful, responsible scholarship, but a threat on his part if scholars don’t side with him in the upcoming social-epistemic schism.

Even as a strong believer in social-epistemic notions of expression, voice, argument, pedagogical structure, evaluation, and ideological positioning, I have always been disturbed by the disregard the average scholar in Rhet/Comp has for evidential research.  Perhaps it’s the ex-STEM in me, but I’m a firm believer, first and foremost, in teaching my students responsible information hygiene.  If arguments can’t stem from truth, knowledge, and evidence, what point can that argument possibly have?  If evidence is poorly analyzed, poorly collected, poorly managed, poorly represented, what value can that evidence have?  If truth is filtered through the intentionally subjective felt sense or personal narratives of the scholar without due consideration and recognition of the influence that perspective can have rhetorically and factually, what good is that truth?  How, in simpler terms, could anybody be a college student–or worse, instructor–and not be an informational skeptic?

In the end, it’s telling that Olson can call for war–and that his address can be praised as a foundational text on the disciplinary question.  Meanwhile, Haswell calls for equity and research hygiene, for an end to the war–he’s not (to be fair) relegated to the dustbin, but his work is largely celebrated only by technical writers and researchers.

I’ve been attending CCCC for several years now.  Subjectively, I’m sad to say that I see many more people banging the war-drums under Olson’s banner than I see calling for research hygiene (or research at all, really; I saw panels in Tampa last year where four speakers presented, and not one had a single source, citation, or piece of data for their easily researchable claims).  It seems, anecdotally, that the consensus model of theory/data is, for better or worse, still primary among many scholars in the NCTE/CCCC.

…perhaps I could do some kind of quantitative research to prove it.

These two articles are from ten and six years ago!  You can’t use them for historical context!

First of all, that’s not a question.  Second of all, I understand that concern.  However, I think there is real value in noticing that these articles (especially Haswell’s) mark the start of a renewed conversation, one that forwards empirical models as valuable and frames their absence in previous years as a detriment to the field.

Additionally, these may not be the formative history of the Rhet/Comp field, but they are the originating documents of my history in the field.  I came across Driscoll while reading about applications of philosophical skepticism in research as a mechanical engineering ethics student.  From there, I found the remainder of the field intriguing and began to study Comp Theory in earnest.  I also believe, for better or worse, that these defenses will mark a major turn in the historical record of the field.

Why does data-driven analysis even require a defense in English?

I’m not sure I’ll ever be able to really answer this question, but I do feel like Driscoll’s “fear of positivism” explanation is insufficient, and Haswell’s “it doesn’t” is technically correct but unhelpful.  In every other discipline outside of the creative arts (and even occasionally there), data is roundly acknowledged to be absolute in its value and contribution to knowledge – not in its perfect accuracy, mind you, which seems to be the grounds on which many Rhet/Comp scholars dismiss it as naive or biased.  The fact that data and language are inherently rhetorical does not change the fact that, on a sliding scale of objectivity, they are a better representation of the real state of things than pure theoretical analysis, personal narratives, or anecdotal lesson structures.  Many Rhet/Comp theorists fear empirical positivism.  People fear what they do not understand.  By the commutative property, perhaps Rhet/Comp theorists don’t understand empirical positivism.  Maybe we need to defend the need not for empiricism, but for teaching humanities students empirical values in the first place.

Do you feel prepared to engage with empirical data in English?

I do.  But I don’t feel like that preparation is a product of my liberal arts education, rather stemming (hah) from my STEM background.  I can’t speak for other students, either here at ODU or in my past collegiate endeavors – but I did have a sense in the past that students of capable intellect frequently avoided or outright rejected research-heavy and data-driven theory out of hand, almost as if it were a product of math anxiety without the presence of any necessary math.  I recall one classmate of mine several years ago rejecting Saussure’s structural linguistics simply because she could not interpret the basic diagrams the theorist used to explore ordered signification.

What can I do to promote empirical approaches in my work as a Rhet/Comp scholar?  

I think this is going to be one of my big personal outcome questions all semester.  I’m going to continue thinking on it.  I’ll let you know.

References

Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39, No. 2, 195-205.

Driscoll, D. (2015). [Personal photograph]. Retrieved from Academia.edu. http://oakland.academia.edu/DanaDriscoll

Olson, G. (2000). The Death of Composition as an Intellectual Discipline. Composition Studies, Vol. 28, No. 2, 33-41.