“Higher education too can make a fetish out of ‘objectivity’ and ‘rationality,’” observes John Warner in “The Pitfalls of ‘Objectivity,’” which confronts that fetish specifically in the teaching of composition.
Warner’s argument, however, is a subset of the larger problem with the white men of academia, as I have examined recently. Concepts and terms such as “objectivity,” “scientific,” “valid,” “reliable,” and “rationality” prove extremely powerful in academia and scholarship, yet the great irony of that power is that these concepts and terms are a veneer for maintaining white male power ― inequity grounded in the racism and sexism that academics are prone to disavow in their rhetoric while maintaining in their practices.
“Objectivity,” for example, frames a white male subjectivity as the norm (thus “objective”), rendering racialized (non-white) and genderized (non-male) subjectivity as the “other,” as lacking credibility.
Explore the history of which research paradigms count, and you confront the bias in favor of quantitative (experimental and quasi-experimental) research (paradigms created by and maintained by men) over qualitative research (paradigms championed by women and racial minorities) ― the former is “hard,” “scientific,” and “objective” while the latter is “soft,” “personal,” and “(merely) anecdotal.”
Academia is an institution created by white men, one that claims everything is based on empirical evidence filtered through the rarefied lens of objectivity; we run into a real conflict, then, when unpacking that evidence. Consider, for example, the annual or biannual self-evaluation process linked to promotion, tenure, and merit, along with its embedded key element: student evaluations of faculty.
As a tenured full professor, I am currently drafting my biannual self-evaluation, a process I have been using as a political document to confront the inherent inequity in both the faculty evaluation process and the traditional use of student opinion surveys.
The self-evaluation is flush with traditional norms about what counts as excellence ― peer-reviewed publications, for example, but not public intellectual work. And as is the case at many colleges and universities, student evaluations are central evidence in the entire faculty evaluation process.
We are directed in our self-evaluations to “include numerical results from student opinion survey forms” ― a double whammy: quantitative data and the ubiquitous student evaluations. The narrative at my small selective liberal arts university is that, of the three areas of evaluation ― teaching, scholarship, and service ― teaching remains primary; therefore, I make my strongest advocacy case (nearly equaled by my argument for valuing my public intellectual writing) about how we determine faculty teaching quality.