Paper #3 – Objects of Study

[Cartoon: two men talking in an office; one says, "Sometimes I think the collaborative process would work better without you."] A general sense of what the typical RAD researcher hears most days in the modern academy.  Illustration by Peter C. Vey, The New Yorker, May 18, 2009, p. 65.

“Certainly, he that hath a satirical vein, as he maketh others afraid of his wit, so he had need be afraid of others’ memory. He that questioneth much, shall learn much, and content much; but especially, if he apply his questions to the skill of the persons whom he asketh; for he shall give them occasion, to please themselves in speaking, and himself shall continually gather knowledge. But let his questions not be troublesome; for that is fit for a poser. And let him be sure to leave other men their turns to speak. […] If you dissemble, sometimes, your knowledge of that you are thought to know, you shall be thought, another time, to know that you know not. Speech of a man’s self ought to be seldom, and well chosen. I knew one, was wont to say in scorn, He must needs be a wise man, he speaks so much of himself: and there is but one case, wherein a man may commend himself with good grace; and that is in commending virtue in another; especially if it be such a virtue, whereunto himself pretendeth” – “Of Discourse,” The Essays or Counsels Civil and Moral of Francis Bacon (1601).


Introduction

In my previous posts, I have discussed the history, major questions, ideologies, and academic concerns raised by empirical/RAD researchers in the various subfields of English Studies.  I have also argued for considering RAD research as more than simply a methodology: as a specific field of study in its own right, and as a scholarly ideology which allows for inquiry and discovery not always possible through other practices.

In this post, I will attempt to define some of the primary Objects of Study (OoSs) in RAD research, both quantitative and qualitative, and in describing these Objects, explore their role, their appeal, their challenges, and – where applicable – their history.  Following that, I will discuss the collaborative, supportive role of RAD OoSs in modern English Studies.


Objects of Study

What follows is a brief discussion and definition of the various types of Objects of Study typically useful to the RAD researcher in English Studies.

Data in general

It may seem obvious, and it is, but it warrants a brief reiteration: RAD researchers look at data (sometimes even data for its own sake), especially that data which can be replicated and aggregated to provide new insights and confirmation of previous contributions (the “D” in RAD, after all, does stand for “data-supported”).  The focus on data analysis may be specific, as I will note in the following OoSs, but it may also be general, interpreting trends in research throughout the field.

Publications and Publishing Trends

As demonstrated by works from Stephen North, Richard Haswell, Dana Driscoll, Sherry Wynn Perdue, and others, RAD researchers frequently study research trends, including the nature, location, and tenor of publications that either privilege RAD research or tend to avoid it.  Richard Haswell, in coordination with Glenn Blalock, was instrumental in establishing CompPile, a keyword-searchable bibliographic index of over 100,000 writing studies publications from 1939 to the present, which focuses heavily upon data-driven indexing of research in the various English Studies fields.

Part of this interest is certainly self-interest: if nobody is publishing RAD research, it is difficult to maintain an academic career as a RAD researcher.  However, there is also a strong ideological and disciplinary interest in these questions, as the limitation of RAD research likewise limits the type and scope of inquiry possible within the general field of English Studies, and these restrictive publishing realities raise serious questions about the accessibility, ethics, and economics of the academy.

Program/Course/Assignment Objectives & Outcomes

Course and assignment objectives and outcomes, especially within the field of writing program administration (WPA), are a primary Object of Study for many RAD researchers.  Because they allow for data-driven quantitative (and often qualitative) evaluation of teaching strategies, technologies, policies, programs, and assessment protocols, student outcomes are often one of the high-water marks of RAD research – and they can be measured and reported through many discrete methodologies.

Whether looking at specific grading data to determine changes in student qualification over time relative to stated objectives, or examining set learning outcomes for programs, courses, and assignments in order to assess their viability or measurability, RAD researchers tend to privilege course outcomes as highly informative, measurable, and (when so designed) objective.
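
To make the first of those approaches concrete, here is a minimal sketch – with entirely invented numbers and a hypothetical six-point holistic scale – of fitting a least-squares trend line to per-term mean portfolio scores:

```python
# Minimal sketch: fit a least-squares trend line to hypothetical per-term
# mean portfolio scores to see whether outcomes shift over time.
# All numbers (and the 1-6 holistic scale) are invented for illustration.
import statistics

terms = [1, 2, 3, 4, 5, 6]                    # consecutive semesters
mean_scores = [3.1, 3.0, 3.3, 3.4, 3.3, 3.6]  # mean holistic score per term

t_bar = statistics.mean(terms)
s_bar = statistics.mean(mean_scores)
slope = (sum((t - t_bar) * (s - s_bar) for t, s in zip(terms, mean_scores))
         / sum((t - t_bar) ** 2 for t in terms))
intercept = s_bar - slope * t_bar

print(f"Estimated change per term: {slope:+.3f} points (baseline {intercept:.2f})")
```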

Survey Data

One of the more methodologically centered Objects of Study, survey data – whether of students, faculty, or the professional and civic communities – provides a meaningful combination of qualitative and quantitative responses.  Surveys have several benefits, including set protocols for determining confidence intervals, selecting representative samples, and reporting results.  They are also flexible, easily anonymized, and typically (especially in the modern digital environment) remarkably low-cost ways to collect data about outcomes, assessment, experiences, skills, and attitudes.  Finally, surveys are a commonly accepted and generally familiar form of research, which allows RAD researchers to communicate their results not only to academics, but also to administrators and the community at large.
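
As an illustration of those set protocols, here is a minimal sketch of one of the simplest – a 95% confidence interval for a survey proportion under the normal approximation.  The respondent counts are hypothetical:

```python
# Minimal sketch: 95% confidence interval for a survey proportion via the
# normal approximation. Sample size and response count are hypothetical.
import math

n = 240          # hypothetical number of survey respondents
successes = 156  # hypothetical respondents reporting the attitude of interest

p_hat = successes / n
z = 1.96  # critical value for a 95% confidence level
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"{p_hat:.1%} ± {margin:.1%} "
      f"(95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%})")
```

A result in this form ("65.0% ± 6.0%") is exactly the kind of finding that administrators and community stakeholders already know how to read.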

Survey data has a strong historical foundation (along with assessment results – see below) in the origins of quantitative research in the humanities.  Experimental protocols can prove costly and prone to significant data-management challenges; moreover, because writing always comes from human beings, practically all writing experimentation qualifies as human experimentation, with significant challenges in terms of ethics generally and in clearing IRB review and approval specifically.  Survey data, by contrast, tends towards simpler anonymization and more easily ethical application, which has long made it a preferred tool and Object of Study for the humanities researcher.  Its low costs have similarly long appealed to RAD researchers – who often struggle, along with their English department colleagues, to locate funding in the modern STEM-centered academy.

Assessment Results and Protocols

Writing skills assessment presents a serious challenge for the RAD researcher: as numerical, replicable, aggregable data which can be anonymized and compiled, assessment results in the form of testing outcomes, graded writing, portfolio scoring, and so on appear at first blush to be the perfect Object of Study.  Researchers approaching assessment data in recent decades, however, have been keenly aware of the qualitative challenges of assessment values.  As noted by Cherry and Meyer in their 1993 “Reliability Issues in Holistic Assessment,” a heavily research-guided analysis of the challenges of unbiased and rational assessment, “like all things human, measurement is not a perfect business,” affected as it is by subjective factors such as assessor biases and instrument (e.g., assignment prompt) quality (29-33).
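
One way that imperfection is routinely quantified is inter-rater reliability.  Here is a minimal sketch – with invented scores – of percent agreement and Cohen’s kappa for two hypothetical raters scoring the same ten essays on a four-point holistic scale:

```python
# Minimal sketch: percent agreement and Cohen's kappa for two hypothetical
# raters scoring the same ten essays on a 1-4 holistic scale.
from collections import Counter

rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 2]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal score frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[s] * freq_b[s] for s in set(rater_a) | set(rater_b)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Kappa discounts the agreement two raters would reach by chance alone, which is one way the “imperfect business” of measurement can be quantified rather than ignored.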

As such, much research using assessment results is heavily qualified and restricted in terms of confidence, replicability, and applicability.  Nevertheless, assessment remains a significant point of research interest, as demonstrated by continuing RAD research on the subject (for examples, see Haswell and Haswell, “Gender Bias”; Freedman; Ball).

Metadata

Perhaps the most valuable Objects of Study for RAD researchers (and often the least appreciated by non-RAD colleagues) are metadata and meta-analysis – whether of the field of English Studies in general, through its previous literature and research, or of specific subdisciplines, especially composition, technical communication, discourse analysis, and rhetoric.  Metadata may include hundreds of different data sources: assessment scores; administrative data including transfer credits, retention rates, and funding; demographic data relating to learning communities and student bodies; technology availability and usage; or academic policies and curricula and their effects on learning outcomes.

Meta-analytic RAD research allows for the compilation and interpretation of a broad spectrum of both RAD and non-RAD qualitative and quantitative findings in order to provide specific interpretations of trends within the fields in question, to make institution- or program-specific research valuable and applicable to other institutions and communities, or to make novel discoveries based upon pre-existing scholarship.  For examples of current meta-analytic research on various fields, see Koster, Tribushinina, de Jong, and Van den Bergh (2015); Clayson (2009); Roksa (2009); and Bangert-Drowns, Hurley, and Wilkinson (2004).

Meta-analysis is not, however, without its inherent pitfalls.  Because publication is biased towards positive findings, metadata drawn from existing literature tends to skew towards optimistic interpretations of policies and results.  And because the originating research was rarely written with an eye towards meta-analysis, compiling its data is challenging and prone to bias even on the part of well-intentioned RAD researchers – it is remarkably easy to shoehorn disparate research and literature into a specific, presupposed result through selective study sampling.  Additionally, these selection issues, together with the need to express standard deviations, standard errors, and observational error for quantitative values (especially pronounced given the various qualitative methodologies used in English Studies to produce the originating literature), can lead to the rejection of valuable research in order to maintain good statistical controls.
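
To ground that last point: a fixed-effect meta-analysis pools effect sizes by inverse-variance weighting, so a study reported without a usable standard error simply cannot enter the calculation – which is precisely how valuable research ends up rejected.  A minimal sketch, with invented numbers:

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of effect sizes
# from three studies. All (effect size, standard error) pairs are invented.
import math

studies = [(0.42, 0.15), (0.18, 0.10), (0.55, 0.22)]  # (d, SE) per study

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
```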

Overview

In general, because RAD research originates in general research methodology, and because RAD as a disciplinary approach (or even subdiscipline of writing studies) is founded upon traditional empirical values, these Objects of Study are inherently reflective of the history of empirical research and the scientific method as a whole.  Also, as noted in previous posts, the outsider status of RAD researchers in many literature/rhetoric-focused traditional English departments means that many of them approach these Objects of Study from outside the discipline, often from general education research programs and colleges of education.

What this means, in part, is that the history of RAD research (and of general empirical study as its precursor) is not necessarily the history of research within English Studies.  This reflects back to the major questions I discussed previously – the natural track of RAD research is one of intersection with English Studies, rather than one of parallel development.  Thus, we must examine these Objects of Study and their value to English Studies in terms of how they can support discipline-specific scholarly discourses, and we should advocate for the transition of these Objects of Study into the fields and disciplines in question, while promoting the ethical use of their benefits in future studies.


Analysis

I’ve done a lot of moralizing lately about virtue ethics and epistemology and the nature of truth and a million other things that probably don’t need to be rehashed further.  Instead, I’d like to speak briefly to the way in which these Objects of Study demonstrate the nuance and complexity of RAD research “as discipline.”

One of the ways we can tell a discipline is valid, vibrant, and productive is by studying the complexities inherent to its practice and attempting to balance the claims of its theory with the “boots on the ground” realities of execution.  If this catalog of some of RAD’s Objects of Study (along with my previous posts) demonstrates anything, I hope it is that I am cognizant of the serious ethical, procedural, methodological, and ideological debates happening within RAD studies about best practices and beliefs, and that RAD researchers are aware of these challenges and attempt to address them through discipline-guided discourse and formative scholarship.

It’s for this reason as much as any that I hope RAD research can one day be viewed as a subdiscipline of writing studies.  The reflective nature of scholarly practice is a hallmark of a “true” discipline, and RAD is reflective, discursive, and multi-faceted.  We have certainly granted that designation to areas of study far less disciplined and self-reflective than empirical RAD research – but much more importantly, legitimizing RAD as “more-than” gives scholars who wish to do this kind of work, but fear the repercussions of being viewed as positivist by their departments, the opportunity to claim that focus as a facet of their expertise and scholarship in a way that RAD-as-methodology likely never can.


Questions for Consideration

1.) So that’s what RAD does?

Not even close!  If I’d listed all the primary, “methodological” Objects of Study available to RAD researchers alone, this post would have gone 175,000 words over length instead of just 1,750!  Once you consider that almost every Object of Study available to any other English Studies scholar is also available for RAD researchers to support, supplement, analyze, problematize, or falsify, the possibilities are effectively infinite!  I’m doing a thing with exclamation points here, I just noticed.  I’ll stop now.

Anyhow, RAD “does” what “needs doing,” and that’s one of the things I love about it.  In the same way that Gender Studies has become almost entirely intersectional at this point, RAD-for-RAD’s-sake has basically vanished as RAD research scholars and specialists find new ways to support their programs and create meaningful, analyzable data for their colleagues.

2.) Can [my Object of Study] be supplemented by RAD?

Well, I don’t know what you’re working on, but I’ll eat my hat if the answer is no!  Ask me in the comments below; let me know what you’re working on and I’d be happy to take a swing at finding a RAD approach to supplement your scholarship.

[Cartoon: two men walking down the street; one says to the other, “If it made sense, that would be a very powerful idea.”] Consider the incisive, productive guidance that a RAD researcher can provide your scholarship! Illustration by Bruce Eric Kaplan, The New Yorker, August 1, 2005.

REFERENCES AND ADDITIONAL READING

Bacon, F. (1601). Of Discourse. The Essays or Counsels Civil and Moral. Renascence Editions, Luminarium.org. Accessed October 15, 2015.

Ball, A. (1997). Expanding the Dialogue on Culture as a Critical Component When Assessing Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 357-386.

Bangert-Drowns, R., Hurley, M., and Wilkinson, B. (2004). The Effects of School-Based Writing-to-Learn Interventions on Academic Achievement: A Meta-Analysis. Review of Educational Research, Vol. 74 No. 1, 29-58. Retrieved from comppile.org.

Cherry, R. and Meyer, P. (1993). Reliability Issues in Holistic Assessment. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 29-56.

Clayson, D.E. (2009). Student Evaluations of Teaching: Are They Related to What Students Learn?: A Meta-Analysis and Review of the Literature. Journal of Marketing Education, Vol. 31 No. 3, 16-30. Retrieved from comppile.org.

Blalock, G. and Haswell, R. (Eds.). (2004). CompPile. Retrieved October 15, 2015, from http://comppile.org/

Driscoll, D. (2009). Composition Studies, Professional Writing and Empirical Research: A Skeptical View. Journal of Technical Writing and Communication, Vol. 39 No. 2, 195-205. Retrieved from comppile.org.

Driscoll, D. and Perdue, S. (2012). Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009. The Writing Center Journal, Vol. 32 No. 1, 11-39.

Freedman, S. (1981). Influences on Evaluators of Expository Essays: Beyond the Text. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 289-300.

Haswell, R. (2005). NCTE/CCCC’s Recent War on Scholarship. Written Communication, Vol. 22 No. 2, 198-223. Retrieved from comppile.org.

Haswell, R. and Haswell, J. (1996). Gender Bias and Critique of Student Writing. Assessing Writing: A Critical Sourcebook. Eds. B. Huot and P. O’Neill, Bedford/St. Martin’s, Boston. 2009, 387-434.

Kaplan, B.E. (2005). If It Made Sense, That Would Be a Very Powerful Idea. The New Yorker, August 1, 2005.

Koster, M., Tribushinina, E., de Jong, P.F., and Van den Bergh, H. (2015). Teaching Children to Write: A Meta-Analysis of Writing Intervention Research. Journal of Writing Research, Vol. 7 No. 2, 249-274. Retrieved from comppile.org.

North, S.M. (1987). The Making of Knowledge in Composition: Portrait of an Emerging Field, Boynton/Cook, Upper Montclair, New Jersey.

Roksa, J. (2009). Building Bridges for Student Success: Are Higher Education Articulation Policies Effective? Teachers College Record, Vol. 111 No. 10, 2444-2478. Retrieved from comppile.org.

Vey, P.C. (2009). Sometimes I think the collaborative process would work better without you. The New Yorker, May 18, 2009, 65.


4 thoughts on “Paper #3 – Objects of Study”

  1. Alex, this is an interesting line of research. I work at an institution that prides itself on being “data-informed,” reporting the most current numbers on retention-related activities every week in a webinar-style conference (for which I was a presenter for over six months on academic support programs) and hosting a national Moving The Needle conference (see http://movingtheneedleconference.com). As such, I cannot help but be struck by your categorization of assessment scores and administrative data as metadata and part of RAD research. I see this type of work as part of the movement of big data and analytics, especially when we bring in third-party companies like Civitas Learning to help operationalize the institution. I admit I have not read much of your previous posts, but you suggest that you have done some serious contemplation about the ethics of RAD. In your thinking, I would be curious to know what you think about the ethics of this type of metadata filling future publications. I mean: with the ease of business intelligence models churning out internal data in a short time, and the blurred lines between scholarly and trade publications in the higher education market, I see more articles now that read like aggrandizement of an institution, with all of its numbers (as I said, especially now that data is much more accessible), than like real research articles. Is this unethical? Are we learning from this? Or maybe you see this in a less pessimistic way than I do, or maybe I am just off base altogether. I’d be curious to hear what you think. Anyway, thanks for listening.


    1. Matt, thanks so much for the comment.

      I agree that this is a very problematic trend. Additionally, I think in a lot of ways these questions you’ve raised get to the heart of why this topic interests me. And, as you point out with your MtN conference reference, there’s a lot out there to prove that – administratively, at the very least – these concerns demonstrate a disciplinary need for RAD ethics and RAD research across the board. These are not localized trends.

      Directly to your questions: assessment scores and administrative data can certainly be aggregated in metadata analytics and general meta-analysis. That said, I think you’re right that when it comes to “Big Data,” there is a commercial and corporate interest that usually overrides many ethical data-management approaches. We could take the end-all-be-alls of modern academic reporting at face value – retention rates, cost-per-credit, admission rates, student-to-faculty ratios, etc. – but we still require humanistic values and research ethics designed to contextualize those metrics in order to present ethical research. More importantly, we need these contexts in order to objectively identify why these metrics may be unethical in the first place.

      I don’t think I’d get much resistance if I claimed here that most of Big Data exists to perpetuate Big Data and the commercialization of the modern academy. As such, its purveyors (often) don’t like to approach ethical questions related to the nuances and contexts surrounding that data – if you have a low student:faculty ratio, but poorly trained, underpaid, overworked adjuncts… can you ethically present that ratio as proof of academic quality? If you retain students without providing quality educational services, can you claim that retention is an inherently ethical metric? We’ve all seen programs and institutions that parade big data about as proof of excellence – all while ignoring basic student needs like tutoring, instructor availability, professional development, career services, or even basic facilities maintenance.

      As long as the politicization of academic administration continues, we’ll see metadata about novel practices (e-advising comes immediately to mind) flooding the publications for a while. I do think it’s highly unethical, even if it were only for the point of grandstanding – but it’s not. It’s about recruiting more students regardless of context, inflating rankings regardless of the accuracy of those rankings, creating a business-first ethos that stands in the face of all the academy allegedly stands for, and generally being horribly, horribly capitalistic about the whole thing. It’s about turning students into customers, and scholars into stakeholders. I’m not sure which of those is worse.

      To put it another way – these aren’t academic publications, usually, but quarterly stockholder reports masquerading as scholarship.

      This is why I think it’s time – now – to start thinking about merging RAD/Empirical data practices with humanities-driven notions of ethics and people-first theories. We are, as you point out, at the forefront of another sea change in the incorporated academy – and currently many academics lack both the basic data tools necessary to articulate why that is problematic and the more advanced tools needed to communicate those concerns to students, colleagues, and administrators.

      Postmodernism can tell us we are losing power and influence, and it can even tell us how we are “supposed” to fight that loss of power and influence. But it lacks the tools to contribute to the extra-liberal arts discourse in meaningful, efficacious ways.


  2. You speak like a champion on this topic, Alex. Thanks for the thoughtful reply. We are connected on Academia.edu. I wrote a draft of an unpublished article looking at Big Data through a rhetorical lens. It mostly analyzes the woes of big data (e.g., data as meta-narrative, data as induction vs. deduction, data as intuition-less). However, I do offer, for the sake of our students, a call for data literacy within the composition classroom, as a way of mediated awareness. This, of course, pushes big data onto an audience different from the one you mention here, but it does force academics to grapple with big data and its place within RAD/Empirical research at a different level.
