“Certainly, he that hath a satirical vein, as he maketh others afraid of his wit, so he had need be afraid of others’ memory. He that questioneth much, shall learn much, and content much; but especially, if he apply his questions to the skill of the persons whom he asketh; for he shall give them occasion, to please themselves in speaking, and himself shall continually gather knowledge. But let his questions not be troublesome; for that is fit for a poser. And let him be sure to leave other men their turns to speak. […] If you dissemble, sometimes, your knowledge of that you are thought to know, you shall be thought, another time, to know that you know not. Speech of a man’s self ought to be seldom, and well chosen. I knew one, was wont to say in scorn, He must needs be a wise man, he speaks so much of himself: and there is but one case, wherein a man may commend himself with good grace; and that is in commending virtue in another; especially if it be such a virtue, whereunto himself pretendeth” – “Of Discourse,” The Essays or Counsels Civil and Moral of Francis Bacon (1601).
In my previous posts, I have discussed the history, major questions, ideologies, and academic concerns raised by empirical/RAD researchers in the various subfields of English Studies. I have also argued for considering RAD research as more than simply a methodology: rather, as a specific field of study in its own right, and a scholarly ideology which allows for inquiry and discovery not always possible through other practices.
In this post, I will attempt to define some of the primary Objects of Study in RAD research, both in terms of quantitative and qualitative data, and, in describing these Objects, explore their role, their appeal, the challenges they pose, and – where applicable – their history. Following that, I will discuss the collaborative, supportive role of RAD Objects of Study (OoSs) in modern English Studies.
Objects of Study
What follows is a discussion and definition – in brief – of various types of Objects of Study typically useful to the RAD researcher in English Studies.
Data in general
It may seem obvious, and it is, but it warrants a brief reiteration: RAD researchers look at data (sometimes even data for its own sake), especially that data which can be replicated and aggregated to provide new insights and confirmation of previous contributions (the “D” in RAD, after all, does stand for “data-supported”). The focus on data analysis may be specific, as I will note in the following OoSs, but it may also be general, interpreting trends in research throughout the field.
Publications and Publishing Trends
As demonstrated by works from Stephen North, Richard Haswell, Dana Driscoll, Sherry Wynn Perdue, and others, RAD researchers frequently study research trends, including the nature, location, and tenor of publications that either preference RAD research, or tend to avoid it. Richard Haswell, in coordination with Glenn Blalock, was instrumental in the establishment of CompPile, a keyword-searchable bibliographic index of over 100,000 writing studies publications from 1939 to present, which focuses heavily upon data-driven indexing of research in various English Studies fields.
Part of this interest is certainly self-interest: if nobody is publishing RAD research, it is difficult to maintain an academic career as a RAD researcher. However, there is also a strong ideological and disciplinary interest in these questions, as limiting RAD research likewise limits the type and scope of inquiry possible within the general field of English Studies, and these restrictive publishing realities raise serious questions about the accessibility, ethics, and economics of the academy.
Program/Course/Assignment Objectives & Outcomes
Course and assignment objectives and outcomes, especially within the field of WPA, are a primary Object of Study for many RAD researchers. Because they allow for data-driven quantitative (and often qualitative) evaluation of teaching strategies, technologies, policies, programs, and assessment protocols, student outcomes are often one of the high-water marks of RAD research – and these outcomes can often be measured and reported through many discrete methodologies.
Whether looking at specific grading data to determine changes in student qualification over time relating to stated objectives, or examining set learning outcomes for programs, courses, and assignments in order to assess viability or measurability, RAD researchers tend to preference course outcomes as highly informative, measurable, and (when so designed) objective.
Survey Data
One of the more methodologically centered Objects of Study, survey data – whether of students, faculty, or the professional and civic communities – provides a meaningful combination of qualitative and quantitative responses. Surveys have several benefits, including set protocols for determining confidence intervals, selecting representative samples, and reporting results. Surveys are flexible, easily anonymized, and typically (especially in the modern digital environment) remarkably low-cost ways to collect data about outcomes, assessment, experiences, skills, and attitudes – and they are a commonly accepted and generally familiar form of research, which allows RAD researchers to communicate their results not only to academics but to administrators and the community at large.
Survey data has a strong historical foundation (along with assessment results – see below) in the origins of quantitative research in the humanities. Whereas experimental protocols can prove costly and prone to significant data-management challenges (and whereas, because writing comes from human subjects, practically all writing experimentation qualifies as human experimentation, facing significant ethical challenges generally and IRB review and approval specifically), survey data’s simpler anonymization and more straightforward ethical application have long made it a preferred tool and object of study for the humanities researcher. Its low costs have similarly long appealed to RAD researchers – who often struggle, along with their English department colleagues, to locate funding in the modern STEM-centered academy.
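Those “set protocols for determining confidence intervals” are one reason surveys travel well between academic and administrative audiences. As a minimal sketch (with entirely hypothetical numbers, not drawn from any study above), here is the standard normal-approximation interval for a survey proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a survey proportion
    (normal approximation; z=1.96 for 95% confidence)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical survey: 132 of 200 students report improved confidence in revision.
low, high = proportion_ci(132, 200)
print(f"66% agree, 95% CI: {low:.1%} to {high:.1%}")
# → 66% agree, 95% CI: 59.4% to 72.6%
```

The point of reporting the interval rather than the bare percentage is exactly the RAD concern with replicability: it tells a second researcher what range of results would be consistent with the first study.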
Assessment Results and Protocols
Writing skills assessment provides a serious challenge for the RAD researcher: as numerical, replicable, aggregable data which can be anonymized and compiled, assessment results in the forms of testing outcomes, graded writing, portfolio scoring, and so on appear at first blush to be the perfect Object of Study for the RAD researcher. Researchers approaching assessment data in recent decades, however, have been keenly aware of the qualitative challenges of assessment values. As noted by Cherry and Meyer in their 1993 “Reliability Issues in Holistic Assessment,” a heavily research-guided analysis of the challenges of unbiased and rational assessment, “like all things human, measurement is not a perfect business”; measurement is affected by subjective factors such as assessor biases and instrument (e.g., assignment prompt) quality (29-33).
As such, much research using assessment results is heavily qualified and restricted in terms of confidence, replicability, and applicability. However, it remains a significant point of research interest, as demonstrated by substantial, continuing RAD research about assessment (for examples, see Haswell and Haswell, “Gender Bias”; Freedman; Ball).
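The reliability concerns Cherry and Meyer raise are usually quantified with inter-rater agreement statistics. As an illustrative sketch (the scores below are invented, not taken from their study), Cohen’s kappa corrects raw agreement between two raters for the agreement they would reach by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' holistic scores:
    chance-corrected agreement (1.0 = perfect, 0.0 = chance-level)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical holistic scores (1-4 scale) from two raters on ten essays
a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
b = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]
print(round(cohens_kappa(a, b), 2))  # → 0.72
```

Here the raters agree on 8 of 10 essays (80%), but once chance agreement is subtracted the kappa drops to 0.72 – a small illustration of why “measurement is not a perfect business.”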
Metadata and Meta-Analysis
Perhaps one of the most valuable Objects of Study for RAD researchers (and often the least appreciated by non-RAD colleagues) is metadata and meta-analysis of the field of English Studies in general, through previous literature and research, or of specific subdisciplines, especially composition, technical communication, discourse analysis, and rhetoric. Metadata may include hundreds of different data sources, such as assessment scores; administrative data including transfer credits, retention rates, and funding; demographic data relating to learning communities and student bodies; technology availability and usage; or academic policies and curricula and their effects on learning outcomes.
Meta-analytic RAD research allows for the compilation and interpretation of a broad spectrum of both RAD and non-RAD qualitative and quantitative findings in order to provide specific interpretations of trends within the fields in question, to make institution- or program-specific research valuable and applicable to other institutions and communities, or to make novel discoveries based upon pre-existing scholarship. For examples of current meta-analytic research on various fields, see Koster, Tribushinina, de Jong, and Van den Bergh (2015); Clayson (2009); Roksa (2009); and Bangert-Drowns, Hurley, and Wilkinson (2004).
Meta-analysis is not, however, without its inherent pitfalls. Due to publication bias towards positive findings, metadata from existing literature tends to skew towards optimistic interpretations of policies and results. Also, because the originating research was not written with broad applicability to meta-analysis in mind, compiling this data is challenging and prone to bias on the part of well-intentioned RAD researchers – it is remarkably easy to shoehorn various and disparate research and literature into a specific, presupposed result through selective sampling of studies. Additionally, these selection issues and the need to express standard deviations for quantitative values, standard error, and observational error (especially pronounced because of the various qualitative methodologies used in English Studies to produce the originating literature) can lead to the rejection of valuable research in order to maintain good statistical controls.
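The mechanics behind those standard-error requirements are worth seeing once. In a basic fixed-effect meta-analysis, each study’s effect size is weighted by the inverse of its variance, which is why studies that never reported a standard error simply cannot enter the pool. A minimal sketch, with invented effect sizes rather than figures from any study cited above:

```python
def pooled_effect(effects, standard_errors):
    """Fixed-effect meta-analytic pooling: weight each study's
    effect size by the inverse of its variance (1/SE^2)."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical effect sizes (Cohen's d) from three writing-intervention studies
d, se = pooled_effect([0.45, 0.30, 0.62], [0.10, 0.15, 0.20])
```

Note how the most precise study (SE = 0.10) dominates the pooled estimate – which is also how a skewed sample of studies quietly skews the conclusion, the selection problem described above.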
In general, because RAD research originates in general research methodology, and because RAD as a disciplinary approach (or even subdiscipline of writing studies) is founded upon traditional empirical values, these Objects of Study are inherently reflective of the history of empirical research and the scientific method as a whole. Also, as noted in previous posts, the outsider status of RAD researchers in many literature/rhetoric-focused traditional English departments means that many RAD researchers are essentially interacting with these Objects of Study as outsiders, often from general Education Research programs and education colleges.
What this means, in part, is that the history of RAD research (and general empirical study as its precursor) is not necessarily the history of research within English Studies. This reflects back to the major questions I discussed previously – the natural track of RAD research is one of intersection with English Studies, rather than one of parallel development. Thus, we must examine these objects of study and their value to English Studies in terms of how they can support discipline-specific scholarly discourses, and we should advocate the transition of these Objects of Study into the fields and disciplines in question, while promoting the ethical use of their benefits in future studies.
I’ve done a lot of moralizing lately about virtue ethics and epistemology and the nature of truth and a million other things that probably don’t need to be rehashed further. Instead, I’d like to briefly speak towards the way in which these Objects of Study demonstrate the nuance and complexity of RAD research “as discipline.”
One of the ways we can tell a discipline is valid, vibrant, and productive is by studying the complexities inherent to its study and attempting to balance the claims of that discipline’s theory with the “boots on the ground” realities of execution. If this catalog of some of RAD’s Objects of Study (along with my previous posts) demonstrates anything, I hope it is that there are serious ethical, procedural, methodological, and ideological debates happening within RAD studies about best practices and beliefs, and also that RAD researchers are aware of these challenges and attempt to address them through discipline-guided discourse and formative scholarship.
It’s for this reason as much as any that I hope RAD research can one day be viewed as a subdiscipline of writing studies. The reflective nature of scholarly practice is a hallmark of a “true” discipline, and RAD is reflective, discursive, and multi-faceted. We have certainly granted that designation to study areas far less disciplined and self-reflective than empirical RAD research – but much more importantly, legitimizing RAD as “more-than” affords scholars who wish to do this kind of work, but who fear the repercussions of being viewed as positivist by their departments, the opportunity to claim that focus as a facet of their expertise and scholarship in a way that RAD-as-methodology likely never can.
Questions for Consideration
1.) So that’s what RAD does?
Not even close! If I’d listed all the primary, “methodological” Objects of Study alone that are available to RAD researchers, this would have gone 175,000 words over length instead of just 1750! Once you count the fact that almost every Object of Study available to any other English Studies scholar is also available for RAD researchers to support, supplement, analyze, problematize, or falsify, the possibilities are effectively infinite! I’m doing a thing with exclamation points here, I just noticed. I’ll stop now.
Anyhow, RAD “does” what “needs doing,” and that’s one of the things I love about it. In the same way that Gender Studies has become almost entirely intersectional at this point, RAD-for-RAD’s-sake has basically vanished as RAD research scholars and specialists find new ways to support their programs and create meaningful, analyzable data for their colleagues.
2.) Can [my Object of Study] be supplemented by RAD?
Well, I don’t know what you’re working on, but I’ll eat my hat if the answer is no! Ask me in the comments below; let me know what you’re working on and I’d be happy to take a swing at finding a RAD approach to supplement your scholarship.
REFERENCES AND ADDITIONAL READING
Bacon, F. (1601). Of Discourse. Renascence Editions. Luminarium.org. Accessed Oct 15, 2015.
Ball, A. (1997). Expanding the Dialogue on Culture as a Critical Component When Assessing Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 357-386.
Bangert-Drowns, R., Hurley, M., and Wilkinson, B. (2004). The Effects of School-Based Writing-to-Learn Interventions on Academic Achievement: A Meta-Analysis. Review of Educational Research, Vol. 74 No. 1, 29-58. Retrieved from comppile.org.
Cherry, R. and Meyer, P. (1993). Reliability Issues in Holistic Assessment. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 29-56.
Clayson, D.E. (2009). Student Evaluations of Teaching: Are They Related to What Students Learn?: A Meta-Analysis and Review of the Literature. Journal of Marketing Education, Vol. 31 No. 3, 16-30. Retrieved from comppile.org.
Blalock, G., and Haswell, R. (Eds.). (2004, May 1). CompPile. Retrieved October 15, 2015, from http://comppile.org/
Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39 No. 2, 195-205. Retrieved from comppile.org.
Driscoll, D. and S. Perdue (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32 No. 1, 11-39.
Freedman, S. (1981). Influences on Evaluators of Expository Essays: Beyond the Text. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 289-300.
Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship, Written Communication, Vol. 22 No. 2, 198-223. Retrieved from comppile.org.
Haswell, R. and Haswell, J. (1996). Gender Bias and Critique of Student Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 387-434.
Kaplan, B.E. (2005). If It Made Sense, That Would Be a Very Powerful Idea. The New Yorker, August 1, 2005.
Koster, M., Tribushinina, E., de Jong, P.F., and Van den Bergh, H. (2015). Teaching Children to Write: A Meta-Analysis of Writing Intervention Research. Journal of Writing Research, Vol. 7 No. 2, 249-274. Retrieved from comppile.org.
North, S.M. (1987). The Making of Knowledge in Composition: Portrait of an Emerging Field, Boynton/Cook, Upper Montclair, New Jersey.
Roksa, J. (2009). Building Bridges for Student Success: Are Higher Education Articulation Policies Effective? Teachers College Record, Vol. 111 No. 10, 2444-2478. Retrieved from comppile.org.
Vey, P.C. (2009). Sometimes I think the collaborative process would work better without you. The New Yorker, May 18, 2009, 65.