Sunday, October 21, 2007

Redesign Proposal for an ESL class

EAP Course Redesign Prototype Project (draft)

Course Title: ELF Level 3 - Reading & Writing

Background
English Language Foundations (ELF) is a fast-track English for Academic Purposes (EAP) program at SAIT Polytechnic. The ELF program has four levels and two course types: 1) reading and writing, and 2) speaking and listening. The curriculum focuses on developing the core academic skills required for success in regular academic programs. While the development of grammatical competence (grammar and vocabulary) is ultimately a requirement for general proficiency in English, traditional methods of instruction in these areas are not part of the course syllabi. Instead, students are required to develop cognitive and meta-cognitive strategies that allow them to direct their own learning in these areas. The ELF program is growing rapidly, along with other SAIT programs, which has put classroom space at a premium on campus.
Purpose
The basic pedagogical framework under which the ELF program currently operates is conducive to the application of blended learning modalities. Meta-cognitive strategy development is enhanced by providing online diagnostic, presentation, and practice materials with which learners can engage more effectively. Particularly for developing competence in vocabulary and grammar, online activities allow students to customize their studies to suit their own needs. Listening and reading activities, which often do not make effective use of class time, can be completed as homework by moving readings, audio files, and accompanying activities to the online LMS. Discussions generated by these texts can take place in class, online, or both. Self-tests and practice tests, marked automatically, can provide instant and regular formative feedback to learners, while instructors can concentrate their feedback on more difficult areas, such as academic writing. This would create more opportunities for peer review and group discussion and would increase cognitive, social, and teaching presence.
Scope
• Replacement model: diagnostics/assessment, receptive skill practice, & discussions loaded to WebCT.
• Reduced class time will focus on:
o productive-skill development (writing and peer review)
o issues that arise requiring more specialized attention from the instructor.
• Open lab times to help students with computer-literacy and technical issues.
• Reduced class times will relieve the program somewhat from scheduling pressures.
Assessment
Success will be measured and analyzed in the following ways. First, a nationally recognized standardized test of English proficiency, the Canadian Language Benchmarks Assessment (CLBA), will be used to measure actual gains in language proficiency. Second, learners will take learning-styles inventory surveys to determine whether individual learning styles affect success rates in traditional versus blended classes. Third, pass/fail/dropout rates will be compared with past records. Finally, surveys and interviews will be conducted to determine the attitudes of students, instructors, and support staff toward the blended learning format and to gain insight into areas that could be improved.

(I look forward to your feedback on this project - PD, Finance/scheduling, access/literacy issues, etc.)

Tuesday, September 25, 2007

Critique of Garrison et al - Cognitive Presence

Critique of Garrison, Anderson & Archer: Cognitive Presence

Summary

In their pilot study, “Critical thinking, cognitive presence, and computer conferencing in distance education,” Garrison, Anderson and Archer (2001) describe a tool to measure the “nature and quality of critical discourse” (p. 17) that takes place in online courses. Within the framework of their Community of Inquiry (CoI), they propose a model in which the cognitive processes of participants are observable and subject to study by analyzing the text of asynchronous online discussions (Garrison, Anderson, & Archer, 2000). The tool developed by Garrison et al. (2000) employs raters to categorize student message postings into one of the four phases of their model of critical thinking and practical inquiry: 1) triggering event, 2) exploration, 3) integration, and 4) resolution. While the authors admit that their sample size is quite small, their results indicate the potential of this tool, since they demonstrate that it is highly reliable between trained raters. Using this tool, the authors reveal that in the discussions they examined, most of the discussion showed the participants to be in the ‘exploration’ phase of practical inquiry, while very little of the discussion was centered in ‘integration’ or ‘resolution’. Presumably, this tool could be used to evaluate new pedagogical strategies in terms of their effectiveness at evoking higher cognitive processes among the participants.
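To make the reliability claim concrete: agreement between two raters is typically chance-corrected. Here is a minimal Python sketch computing Cohen's kappa, one common inter-rater agreement statistic, over invented phase codings from two hypothetical raters. The phase labels follow the practical inquiry model, but the ratings themselves and the choice of kappa are my own illustrative assumptions, not Garrison et al.'s data or necessarily their exact statistic.

```python
# Illustrative sketch: two hypothetical raters code the same ten messages
# into the four practical-inquiry phases; we compute Cohen's kappa,
# a chance-corrected measure of inter-rater agreement.
from collections import Counter

PHASES = ["trigger", "exploration", "integration", "resolution"]

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - expected-by-chance) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal phase frequencies.
    expected = sum(counts_a[p] * counts_b[p] for p in PHASES) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented codings (most messages in 'exploration', echoing the paper's finding).
a = ["trigger", "exploration", "exploration", "exploration", "integration",
     "exploration", "exploration", "resolution", "exploration", "integration"]
b = ["trigger", "exploration", "exploration", "integration", "integration",
     "exploration", "exploration", "resolution", "exploration", "exploration"]

print(round(cohens_kappa(a, b), 3))  # → 0.655
```

A kappa near 1 indicates near-perfect agreement; values derived from raw percent agreement alone would overstate reliability, since a coder who always guessed 'exploration' would agree often by chance.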

Critique

Despite the advantages that this kind of tool would offer educational researchers, there are a few weaknesses in the methodology that I would like to discuss. Specifically, these weaknesses are related to problems associated with quantitative measurement of cognitive processes and the validity of the data analysis. I offer suggestions for further study that may address some of these problems.

Compared to the other two elements of the community of inquiry model, Teaching Presence and Social Presence, there is an inherent difficulty in trying to measure latent thought processes indirectly through text analysis of an online discourse. Discussions, whether online or face-to-face, are by definition social processes and are subject to the customs and forces by which all social activities are moderated. The frequency of markers within the different categories of cognitive processes may be more a function of social pressures than of actual individual cognitive processes. For example, during an undergraduate lecture there are undoubtedly a variety of cognitive processes taking place within the minds of individuals in the audience, but there would be very little evidence of this, for reasons related to social norms. Additionally, the type of discourse that would be acceptable would also be pre-determined. Despite the potential of this tool in terms of its convenience and statistical reliability, the difficulty of indirectly measuring cognitive processes in a social situation is the greatest weakness of the study.

Garrison et al. (2001) chose to limit their analysis to quantitative methods. The advantage of this approach is that it is reliable and simple, and it generates data that lends itself to rigorous statistical analysis. The practical difficulties involved with qualitative analyses notwithstanding, it might be useful for the authors to combine their quantitative analyses with a series of qualitative studies. For example, qualitative studies might ask the participants of the discussions to provide their insights into the choices they made during the online discussions. These types of studies might provide deeper insight into the cognitive processes of the participants and more information about how social pressures and individual differences might contribute to the results of the study. Qualitative studies would also provide a parallel set of analyses that would help validate the proposed tool: since the present article is a pilot of the tool, its validity should be tested in combination with other, more traditional methods of analysis. If these other approaches support the validity of the tool, its advantages in terms of convenience and reliability over other methods are obvious.

In the background section of their paper, Garrison et al. (2001) refute the past-held notion that the medium by which communication takes place has no effect, ultimately, on the quality of education. They cite numerous studies that point to the unique nature of online asynchronous communication. It is unfortunate, therefore, that they chose not to use their tool directly to explore differences in cognitive presence in face-to-face (f2f) discussions as compared to online discussions among students in the same classes. Since this article was published in the American Journal of Distance Education, its focus is on the analysis of the nature of communication and cognitive processes that take place within computer-mediated communication (CMC). However, on its surface, the model of practical inquiry doesn’t differentiate between f2f and CMC-mediated educational contexts (Garrison, Anderson, & Archer, 2000). If the textual-analysis tool developed by Garrison et al. (2001) could be used to analyze all of the class discourse in a blended learning environment, it might strengthen the study in two different ways. First, the textual-analysis tool could use some additional validation: previous research has reported that f2f discourse is qualitatively different from that which occurs in the online discussion format (Garrison et al., 2001). If the text-analysis tool confirmed these findings, it would help validate the accuracy of the tool. Second, this type of analysis could shed light on specifically which aspects of critical thinking are more characteristic of f2f versus asynchronous-CMC discussions.

Reflection

Reading this paper caused me to reflect on a number of pedagogical issues that I will need to keep in mind in my own teaching and that might make interesting foci for further research. Although these issues were not the focus of Garrison et al. (2001), I reflected on the following issues: 1) the individual characteristics of the students participating in online discussions, 2) the type of class in which these discussions take place, and 3) the role that teaching presence would have on cognitive presence in online discussions.

The characteristics of the individual participants within this study were ignored by the authors. Particularly in a study with such a small sample size, I would expect the background and experience of individuals to have a huge impact on the results. For example, students who have extensive training in research, logic, debate, and/or philosophy might more closely resemble the idealized participant within the model of critical inquiry. However, individuals who have not grown accustomed to the value of ‘contradicting’ their peers in an academic context may spend more time in the ‘exploration’ phase of the process: not for cognitive reasons, but for social reasons, as I stated above. These issues become even more complicated when participants from different language and/or cultural backgrounds form part of the class: such individuals may have differing expectations regarding the process of discourse and their role in it.

Garrison et al. (2001) do describe the type of graduate class from which their samples are taken, but it is not clear to me what the context of the discussions was or what the roles of the participants were. Are the students contributing simply to ‘participate’, or is there some clear objective in mind? Is there any incentive for the participants to integrate the discussion and find some resolution? If the participants have no reason to force a resolution to their discussions, then why would they risk harmony within the group? Additionally, it is not clear to what extent relationships between the participants had been fostered. Had the participants known each other from many other classes? Had they worked together? Were they complete strangers before taking this course? Even adults with a strong self-image require either an element of trust or a basic set of guidelines before they can feel comfortable making (or receiving) tough suggestions to (or from) their classmates. Additionally, it is not clear from this paper how much training and/or ‘socialization’ in participating in online discussions each individual received, or to what extent the participants were ready to communicate freely with other members prior to the collection of the data used in this study. Finally, the classes from which the samples were taken are not typical of most distance and/or blended classes, and therefore it might be interesting to explore how course design and teacher intervention might affect undergraduate and/or K-12 classes.

References

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2), 87-105.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. The American Journal of Distance Education, 15(1), 17-23.

Thursday, September 20, 2007

test

(In the spirit of the new wave of micro-blogging:)

I am sitting next to our cat, Momi, who is snoring.

Thanks for reading.
comments? suggestions?

Geoff