Critique of Garrison, Anderson & Archer: Cognitive Presence
Summary
In their pilot study, “Critical thinking, cognitive presence, and computer conferencing in distance education,” Garrison, Anderson, and Archer (2001) describe a tool to measure the “nature and quality of critical discourse” (p. 17) that takes place in online courses. Within their Community of Inquiry (CoI) framework, they propose a model in which the cognitive processes of participants are observable and subject to study through analysis of the text of asynchronous online discussions (Garrison, Anderson, & Archer, 2000). The tool employs trained raters to categorize student message postings into one of four phases of their model of critical thinking and practical inquiry: (1) triggering event, (2) exploration, (3) integration, and (4) resolution. While the authors acknowledge that their sample size is quite small, their results indicate the tool’s potential usefulness, since they demonstrate high reliability between trained raters. Using this tool, the authors find that most of the discussion in the classes they examined placed participants in the ‘exploration’ phase of practical inquiry, with very little discussion centered on ‘integration’ or ‘resolution’. Presumably, this tool could be used to evaluate new pedagogical strategies in terms of their effectiveness at evoking higher-order cognitive processes among participants.
Critique
Despite the advantages that this kind of tool would offer educational researchers, there are a few weaknesses in the methodology that I would like to discuss. Specifically, these weaknesses are related to problems associated with quantitative measurement of cognitive processes and the validity of the data analysis. I offer suggestions for further study that may address some of these problems.
Compared to the other two elements of the Community of Inquiry model, teaching presence and social presence, cognitive presence poses an inherent difficulty: it requires measuring latent thought processes indirectly through text analysis of an online discourse. Discussions, whether online or face-to-face, are by definition social processes and are subject to the customs and forces that moderate all social activities. The frequency of markers within the different categories of cognitive processes may therefore be more a function of social pressures than of the participants’ actual cognitive processes. For example, during an undergraduate lecture there are undoubtedly a variety of cognitive processes taking place in the minds of audience members, yet social norms ensure that very little evidence of this would surface; the type of discourse considered acceptable is likewise predetermined. Despite the tool’s convenience and statistical reliability, this difficulty of indirectly measuring cognitive processes in a social situation is the study’s greatest weakness.
Garrison et al. (2001) chose to limit their analysis to quantitative methods. The advantage of this approach is that it is reliable and simple, and it generates data amenable to rigorous statistical analysis. The practical difficulties of qualitative analysis notwithstanding, it might be useful for the authors to combine their quantitative analyses with a series of qualitative studies. For example, such studies might ask the participants to share their insights into the choices they made during the online discussions. This would provide a deeper view of the participants’ cognitive processes and more information about how social pressures and individual differences might contribute to the results. Qualitative studies would also provide a parallel set of analyses to help validate the proposed tool: since the present article is a pilot, the tool’s validity should be tested in combination with other, more traditional methods of analysis. If those approaches supported the validity of the tool, its advantages in convenience and reliability over other methods would be obvious.
In the background section of their paper, Garrison et al. (2001) refute the once-held notion that the medium of communication ultimately has no effect on the quality of education, citing numerous studies that point to the unique nature of asynchronous online communication. It is unfortunate, therefore, that they chose not to use their tool directly to explore differences in cognitive presence between face-to-face (f2f) and online discussions among students in the same classes. Since the article was published in The American Journal of Distance Education, it is understandably focused on the nature of communication and cognitive processes within computer-mediated communication (CMC). On its surface, however, the practical inquiry model does not differentiate between f2f and CMC educational contexts (Garrison, Anderson, & Archer, 2000). If the text-analysis tool could be used to analyze all of the class discourse in a blended learning environment, it might strengthen the study in two ways. First, the tool could use additional validation: previous research has reported that f2f discourse is qualitatively different from that which occurs in the online discussion format (Garrison et al., 2001). If the text-analysis tool confirmed these findings, that would help establish the tool’s accuracy. Second, this type of analysis could shed light on which aspects of critical thinking are more characteristic of f2f versus asynchronous CMC discussions.
Reflection
Reading this paper caused me to reflect on a number of pedagogical issues that I will need to keep in mind in my own teaching and that might make interesting foci for further research. Although these issues were not the focus of Garrison et al. (2001), I reflected on the following issues: 1) the individual characteristics of the students participating in online discussions, 2) the type of class in which these discussions take place, and 3) the role that teaching presence would have on cognitive presence in online discussions.
The authors ignore the characteristics of the individual participants in this study. Particularly with such a small sample size, I would expect the background and experience of individuals to have a large impact on the results. For example, students with extensive training in research, logic, debate, or philosophy might more closely resemble the idealized participant in the model of critical inquiry. However, individuals who are unaccustomed to the value of ‘contradicting’ their peers in an academic context may spend more time in the ‘exploration’ phase: not for cognitive reasons, but for social reasons, as I stated above. These issues become even more complicated when participants from different language or cultural backgrounds form part of the class; such individuals may hold differing expectations regarding the process of discourse and their role in it.
Garrison et al. (2001) do describe the type of graduate class from which their samples were taken, but it is not clear to me what the context of the discussions was or what roles the participants played. Are the students contributing simply to ‘participate,’ or is there some clear objective in mind? Is there any incentive for participants to integrate the discussion and reach a resolution? If the participants have no reason to force a resolution to their discussions, why would they risk the harmony of the group? Additionally, it is not clear to what extent relationships among the participants had been fostered. Had they known each other from many other classes? Had they worked together? Were they complete strangers before taking this course? Even adults with a strong self-image require either an element of trust or a basic set of guidelines before they can feel comfortable making (or receiving) tough suggestions to (or from) their classmates. Nor is it clear from this paper how much training or ‘socialization’ in online discussion each individual had received, or to what extent the participants were ready to communicate freely with other members before the data used in this study were collected. Finally, the classes from which the samples were taken are not typical of most distance or blended classes, so it might be interesting to explore how course design and teacher intervention might affect undergraduate or K-12 classes.
References
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2), 87-105.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. The American Journal of Distance Education, 15(1), 17-23.
4 comments:
Hi Geoff,
In contrast to your cat, I did not find your posting at all snore-inducing.
You brought forward a number of valid criticisms of the study on which our article was based, and flagged some areas for interesting further study. Some of these were questions that had also occurred to us, and which we started to pursue but then abandoned because of too many other things going on in our lives. I'm thinking, in particular, of the interesting comparisons between interaction using written and spoken (especially f2f) communication that you touched on in your last paragraph. That was an interest of mine which I started to pursue but did not follow up.
One area in which at least some of us are extending the original research is in looking at the circumstances around the triggering events that lead off the online discussion. Ellen mentioned that point, so I'll post something about this new direction into her blog.
Walter Archer
Hi Geoff,
I enjoyed reading your review. I agree with your analysis that the sample size used is smaller than one would want for the results to be reliable; nonetheless, the results seem reasonable and I would think similar results would be found in a larger sample. You mention the ‘difficulty in trying to measure latent thought processes indirectly by text analysis of an online discourse’. I agree, there is little brought forth by the authors about preceding participant experiences or the ultimate goal of the discussions. I wonder if (in combination with instructor intervention) online discourse could be used to measure cognitive presence. Do you think with teacher presence that online discourse could be used to measure cognitive presence? Do you think the fact that the discourse was online would expose the students’ cognitive processes more than in a face-to-face setting?
Do you think that the tool would be more useful if the authors gave more information about the participants or the specific expectations and course objectives? Or do the authors expect that the tool be manipulated by further researchers to fit the future groups of learners?
Lisa
Lisa and Dr. Archer,
Thank you both for taking the time to read and comment on my critique.
Dr. Archer - I have never before had the opportunity to get feedback from an author of a paper I have reviewed for a course. I was pleasantly surprised to say the least!
In the two weeks that have passed since I wrote my critique, I have found out that this series of papers has had a significant impact on this field of research. Many other groups have employed and/or modified your method to measure cognitive presence.
In Katrina Meyer's paper (2003, JALN 7:3 55-65), for example, qualitative methods are used to gain insight into the differences between the online and f2f discussions, but unfortunately, she didn't do a content analysis of f2f transcripts.
Lisa,
I agree with you that my draft critique was split in its focus: the main purpose of the paper was not, of course, to study the cognitive presence of the participants of this study, per se. This study was published only to describe a tool that could be used for such studies. In my final draft, I will definitely clean up the text to make it more consistent.
take care,
Geoff