Hybrid Pedagogies: Epistemology and Empiricism

“Not everything that counts can be counted, and not everything that can be counted counts.” – attributed to Albert Einstein


This week’s seminar picked up where we left off, revisiting the use of Twitter in the classroom with two instructor demonstrations of Twitter backchannels, including one for an in-class film screening. Another instructor demonstrated how the Piazza platform had stimulated classroom discussion in ways similar to Twitter, which led to the first of the night’s two main questions: What do we want from a hybrid pedagogy? Possibilities included greater student engagement and better quality of student work and responsiveness, but we also voiced the desire to create a classroom space with a sense of community somehow different from that created by face-to-face interaction. Once our goals were articulated, we faced the more difficult question of how to assess the extent to which we get what we want. It is this second question that this blog post will focus on.

While discussion mainly centered on practical and logistical questions, as well as anecdotal successes and failures, the underlying assumptions that shaped our inquiries are both methodological and epistemological, and the two are inextricably linked. The purpose of this post is to review why epistemology and methodology are so contentious among researchers in composition theory and technical writing, and to create a space for further discussion.

Epistemology is concerned with how we know what we know, whereas methodology is concerned with how we do what we do. It is no surprise, then, that epistemology has traditionally been discussed in the humanities while methodology has been discussed in the sciences, a split that maps loosely onto the two familiar modes of inquiry, qualitative and quantitative. But methodology as a practice does not belong to the sciences any more than epistemology as a theoretical question belongs to philosophy. Further, the reasons some scholars in the humanities seem reluctant to adopt quantitative methods are disciplinary and ideological.

Scholars like Richard Haswell have noted that over the past 60 years, fields like composition and technical writing have adopted quantitative methods, yet the motives for using scientific methods continue to be debated in professional scholarship, with empirical methods often characterized as problematic because of the motives behind their usage. So prevalent was this resistance that in 1996 Davida Charney could generalize in “Empiricism Is Not a Four-Letter Word” that “compositionists readily assume that disciplines that adopt scientific methods do so for reflected glory and access to institutional power” (576).

The selection of methodology is not apolitical, and, as Charney and others have noted, “the research methods we employ have important consequences for the intellectual authority of our field” (568). This traditionally political (and disciplinary) split between qualitative and quantitative methods, however, makes Charney’s assertion that “to promote the growth of a complex and inter-connected framework of knowledge and methods, we need both qualitative and quantitative empirical methods” (591) a difficult pill for some to swallow.
However, Charney and her empirical descendant Dana Driscoll provide us with an adisciplinary (as opposed to apolitical) framework for the use of mixed methodology in composition and technical writing research.

Charney asserts that:

no research method per se can deliver up authority or acceptance. Rather, credence–and provisional credence at that–emerges from day-to-day critical negotiations in which disciplines identify interesting questions, decide what kinds of answers to consider, and actively critique both methods and results. (569)

Similarly, Driscoll, whose unique contribution to this discussion is to position empiricism in the tradition of Greek and Roman skepticism, provides an outline of empiricism free of disciplinary constraint in “Composition Studies, Professional Writing, and Empirical Research: A Skeptical View”:

  1. Skeptical researchers should be skeptical of everything, including their own findings.
  2. Empirical research never claims to prove but rather provide evidence. (201)
  3. Empirical research does not build itself upon that which has been assumed but rather that which has evidence.
  4. Empirical researchers are interested in gathering evidence from as many sources as possible—and hence, do not privilege any one data collection method. (202)

This framework, if followed using mixed methodologies, “will promote a larger degree of self-skepticism and reflection (#1 above), will help minimize the bias inherent in a researcher (#2 above), and will provide more evidence in knowledge formation (#3 above)” (202).

If, as Charney, Driscoll, and others would have us believe, empiricism is an epistemology, it is indeed not the four-letter word that researchers in the humanities need fear. That said, it remains a methodology with both disciplinary and political connotations. The dilemma is further complicated by the fact that while hybrid pedagogy is not associated with any one discipline, the methodologies employed by researchers in the humanities will carry disciplinary (and therefore political) connotations as well.

With all these factors weighing on us, how do we assess our success in the hybrid classroom?


5 Comments

  1. Something that has stuck in my mind is Rebecca’s description of a phenomenon (the name for which I can’t remember) in which behavior changes simply because it is being studied. So, if I tell my students I’m going to do an experiment to see whether assignment X improves their writing, their writing might improve not because of my assignment but because they know I’m running an experiment. This, of course, is one of the potential stumbling blocks when we engage in assessment.

    Assessment becomes especially tricky in student-centered learning environments. Last semester, my students’ final project was a flash mob. They came up with it by themselves, and I was largely left out of the picture. I remember watching in wonderment and even feeling a bit left out as they negotiated, delegated, problem-solved, and scurried in and out of the room (creating their own hybrid classroom) on various missions. In their final assignment for the course, in which they reflected on their experience, they raved about how their communication skills had improved. They devised the assignment, they carried it out, and they performed their own assessment. I was left behind, scratching my head and asking questions: “How do I grade this?” “How on earth will this fit into the portfolio?” “How do I know they’re learning, and how they’re learning, when I can’t see what they’re doing?” Surely, the answer is not for me to insert myself into their process. If I assist with their negotiations, delegations, and troubleshooting, or devise assignments for each step in their process (which means I would be dictating the process as well), the classroom is no longer student-centered, and I’ve shut down valuable opportunities for communication. So, shall I become an observer? Sit in the middle of the negotiations, attend all of their meetings, trail them on their missions, maybe even practice their dance moves, all the while feverishly (and rather conspicuously) taking notes? I might learn something, but would they learn as much? Or would their activities become a sort of performance for my sake: would the audience for the flash mob be the general public on the day of the performance or Dr. Spann in the classroom each day?

    So, I’ll tack a question onto Kate’s: how do we assess success in the student-centered classroom? Hybrid classrooms lend themselves so well to student-centered learning, after all. My hunch is that, ultimately, the students need to play a role in determining what we assess, how we assess it, and how we define success.

  2. EFFECTS: Worthwhile Information about Research
    The short version of “effects” is that the perceptions of those being studied and those doing the studying affect the outcomes in both predictable and unpredictable ways. Here are three kinds of “effects” you hear mentioned in conversations about research: the Pygmalion effect is referred to largely in educational research; the Hawthorne effect largely in workplace research and sometimes in educational research; and the placebo effect largely in medical research.

    Pygmalion Effect
    Rosenthal & Jacobson (1968/1992) report and discuss at length an important effect, usually called the Pygmalion effect. “Basically, they showed that if teachers were led to expect enhanced performance from some children then they did indeed show that enhancement, which in some cases was about twice that showed by other children in the same class.” From Wikipedia: “The Pygmalion effect is a form of self-fulfilling prophecy, and, in this respect, people will internalize their negative label, and those with positive labels succeed accordingly.”

    Hawthorne Effect
    The term “Hawthorne effect” refers to experiments on managing factory workers started in 1924 in the Hawthorne works of the Western Electric Company near Chicago. From Wikipedia: “Hawthorne Works had commissioned a study to see if its workers would become more productive in higher or lower levels of light. The workers’ productivity seemed to improve when changes were made and slumped when the study was concluded. It was suggested that the productivity gain occurred due to the impact of the motivational effect on the workers as a result of the interest being shown in them.” References to the Hawthorne effect all concern effects on an experiment’s results of the awareness of participants that they are the subject of an intervention.

    Placebo Effect
    “Placebos are things like sugar pills, that look like real treatments but in fact have no physical effect. They are used to create ‘blind’ trials in which the participants do not know whether they are getting the active treatment or not, so that physical effects can be measured independently of the participants’ expectations” (http://www.psy.gla.ac.uk/~steve/hawth.htm). From Wikipedia: “The phenomenon is related to the perception and expectation that the patient has; if the substance is viewed as helpful, it can heal, but, if it is viewed as harmful, it can cause negative effects, which is known as the nocebo effect.”

    All three of these effects (and a number of others) are discussed in a useful way, with implications for research, at http://www.psy.gla.ac.uk/~steve/hawth.html

  3. I’m currently struggling with this issue in my Teaching Scholars initiative. I’m investigating how blogs create a sense of learning community in the classroom, and I’m using one of my three sections of 1102 as the “experiment” group. While the overall blog assignment is the same for all three sections, I’m doing additional blog-related assignments in the third section (including using the blog groups for all group activities, giving them specific prompts that promote conversations in the blog space, and asking them to review each other’s blogs on a semi-regular basis) in order to see if increasing the role of the blog in the course work changes the larger dynamic of the class.

    When I presented this initiative to the Teaching Scholars, many of the other professors from the science and engineering divisions were skeptical of both my research question (whether using blogs in the classroom promotes a learning community, which other research shows promotes student engagement) and my research methodology. How, they asked, would I measure the results of this study? What numbers could I crunch? How would I know if I’d succeeded? How would I know if the blog was the variable that had led to any change, or whether there were other variables at play? I admitted that I was very “untrained” in this type of methodological research and asked for their advice. Their first piece of advice was to keep it small: to ask a small question that I could answer by changing one small thing, so that it would be more manageable to implement and easier to see a correlation between the change and the result. Their other suggestion was to try to find several different ways to measure the initiative, both quantitative (looking at numbers of blog posts, comments, etc.) and qualitative (student feedback in reflection essays and interviews). Their suggestions made sense, and, having read the Driscoll article, the latter component (multiple ways of measuring and evaluating my results) speaks to the skeptical approach.

    All that being said, I’m still struggling to find meaningful ways to measure the changes I’m observing in my classroom, and as the mid-term approaches I’m trying to design some more quantitative ways to evaluate them. A sketch of what the counting might look like follows below. I’m looking forward to discussing this tonight and getting some feedback.
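
    For the number-crunching piece, even a small script could produce the kind of tallies the science folks suggested. Here is a minimal sketch in Python, assuming a hypothetical CSV export of blog activity (a file blog_activity.csv with section, student, and type columns; the file name and columns are my invention, not any platform’s actual export format):

        import csv
        from collections import defaultdict

        # Tally posts and comments per course section from a hypothetical CSV export.
        # Assumed columns: section (e.g., "1102-03"), student, type ("post" or "comment").
        counts = defaultdict(lambda: {"post": 0, "comment": 0, "students": set()})

        with open("blog_activity.csv", newline="") as f:
            for row in csv.DictReader(f):
                data = counts[row["section"]]
                data[row["type"]] += 1
                data["students"].add(row["student"])

        # Report raw counts plus a per-student rate, so sections of different
        # sizes can be compared on roughly equal footing.
        for section, data in sorted(counts.items()):
            per_student = (data["post"] + data["comment"]) / max(len(data["students"]), 1)
            print(f"{section}: {data['post']} posts, {data['comment']} comments, "
                  f"{per_student:.1f} items per student")

    Counts like these won’t settle whether a learning community exists, of course, but they would give the reflection essays and interviews some numbers to sit alongside.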

  4. This discussion reminds me of a recent scholarly debate regarding Wikipedia’s policies on expertise and consensus. It doesn’t have much to do with student assessment, but it has a lot to do with methods for evaluating research, the subject of Kate’s discussion above.

    I want to point us towards two articles, Tim Messer-Kruse’s piece from the Chronicle on truth and Wikipedia: http://chronicle.com/article/The-Undue-Weight-of-Truth-on/130704/ and our own Andy Famiglietti’s response: http://copyvillain.org/blog/2012/02/20/weighing-consensus-building-truth-on-wikipedia/

    If we take Kate’s explanation of Driscoll from above:

    1. Skeptical researchers should be skeptical of everything, including their own findings.
    2. Empirical research never claims to prove but rather provide evidence. (201)
    3. Empirical research does not build itself upon that which has been assumed but rather that which has evidence.
    4. Empirical researchers are interested in gathering evidence from as many sources as possible—and hence, do not privilege any one data collection method. (202)

    we can examine the unfolding Wikipedia debate in potentially productive ways.

    1. Skeptical editors should be skeptical of everything, including expert testimony.
    2. Claims on Wikipedia seek to provide evidence from a variety of sources that are published according to specific rules.
    3. Claims on Wikipedia require scholarly consensus as evidence, not the assumptions of individuals.
    4. Claims on Wikipedia look for a broad set of supporting documents that help illustrate the scholarly consensus that offers a certain truth value to the encyclopedia. (Naturally, with this one, there are potential pitfalls with Wikipedia’s data collection methods, which Andy acknowledges in his blog post.)

    I find the conversation unfolding on the Chronicle website to be an often frustrating reiteration of the negatives of both online publications and collaborative pursuits of knowledge. I believe there are important connections to be made here with hybrid pedagogy, as wikis are often assigned in the sorts of classes that we teach. Are we (as professors in general) to encourage our students to engage in collaborative knowledge production, but then refuse to acknowledge the work that other editors and authors have put into something like Wikipedia? There’s a disconnect here that needs to be evaluated in terms of what kinds of knowledge are produced and legitimated, and by whom.

  5. Where do you begin?

    Don’t begin with the “how?” Begin with “what?” or “why?” Ask yourself what you want to know…and then consider the problem, the gap in knowledge, the conundrum, the conflict, the inadequacy of current information.
    Methodology is the collection of strategies you use to answer your question(s). So first you need to have a question (or at least some curiosity) about the idea or artifact or situation or location. You may begin with a felt sense that leads to dozens of questions, so you pick one as a place to begin: not a yes-no question, but one with the opportunity for rich detail that you can dive into, one whose answer may influence your thinking and your actions.

    Britta’s comment reminds me that you need to consider your role — observer or participant-observer. Both are legitimate, but they have different parameters. If you are engaging in research about/with your own classes, you’ll necessarily be a participant-observer.
