Instead of "throwing technology" at educational problems, consider a systematic approach to evaluating effectiveness and cost/benefit ratios.
Bransford et al., writing for the National Research Council, asserted: "The committee recommends extensive evaluation research be conducted . . . to determine the goals, assumptions, and uses of technologies in classrooms and the match or mismatch of these uses with the principles of learning and the transfer of learning."
The Boyer Commission echoed Bransford's assertion: "It is incumbent upon the faculties of research universities to think carefully and systematically not only about how to make the most effective use of existing technologies but also how to create new ones that will enhance their own teaching and that of their colleagues. . . . However, as innovations multiply, so do dangers: in many circumstances, casual over-use of technological aids already increases the real and psychological distance between living faculty members and living students. Technological devices cannot substitute for direct contact."
Since the 1960s, computer-based instructional materials have played an ever-increasing role in education -- both in the classroom and outside of it, both on campus and for distance learners. The professional organizations of many scientific disciplines have strongly encouraged the use of computer-based instructional techniques in undergraduate education. Significant instructional value has been attributed to factors such as increased interactivity, self-pacing, simulations, and the ubiquitous availability of materials -- all of which engage students as active participants in their own learning.
However, these benefits come at a high cost. Early creators of computer-based instructional materials found that their teams (discipline and technical specialists) needed at least one hundred to two hundred hours of development time for each hour of instructional time, and one to five years to complete an entire course. For today's far more complex and sophisticated instructional materials and advanced human-computer interfaces, the ratio is at least as high, if not higher, even with the most modern development hardware and software tools. Ever-larger development teams are being employed to produce the complex multimedia materials and advanced interfaces of computer-based education.
Furthermore, the costs do not stop with production. The delivery of complex instructional materials requires considerable bandwidth; high-speed processors; large, high-resolution monitors; and, sometimes, other special peripherals. In addition, compared to more traditional instructional techniques, instructors often must spend significantly more time preparing for the use and delivery of complex instructional materials.
Instructional time also increases. Generally, the more complex the instructional materials, the longer they take for a learner to complete. Unfortunately, sometimes this additional time is spent more on the complex effects than on actual productive learning. For example, viewing an animation may take longer than reading one or two descriptive passages on the same subject. The obvious question is whether these increased costs of instruction increase the effectiveness of the instruction sufficiently to be justifiable. Clearly, large development costs may be acceptable, if they are amortized over a sufficient number of students. Large delivery and instructional time costs cannot be amortized in the same manner.
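The asymmetry described above -- development costs amortize over enrollment while delivery and time costs recur per student -- can be made concrete with a small sketch. All figures and parameter names below are illustrative assumptions, not data from this text:

```python
def per_student_cost(development_cost, enrolled_students,
                     delivery_cost_per_student,
                     extra_hours_per_student, hourly_time_value):
    """Rough per-student cost of a computer-based instructional unit.

    Development cost is spread over every student who ever takes the
    course; delivery and instructional-time costs recur for each student
    and therefore do not shrink as enrollment grows.
    """
    amortized_development = development_cost / enrolled_students
    recurring_time_cost = extra_hours_per_student * hourly_time_value
    return amortized_development + delivery_cost_per_student + recurring_time_cost

# With assumed figures: $100,000 development, $5 delivery per student,
# 2 extra learner-hours valued at $20/hour.
small_course = per_student_cost(100_000, 1_000, 5.0, 2.0, 20.0)   # 145.0
large_course = per_student_cost(100_000, 10_000, 5.0, 2.0, 20.0)  # 55.0
```

Tenfold growth in enrollment cuts the amortized share tenfold, but the $45 recurring component is untouched -- which is precisely why large delivery and instructional-time costs "cannot be amortized in the same manner."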
Shelton and Lane have decried the too-often-expressed attitude of "Hey, cool technology, let's use it." Implicit in a wide variety of recent instructional materials is the assumption, apparently held by some developers, that the more complex the interaction, the interface, and the production, the better the instruction. What is needed is a test of this assumption -- an assessment of the effectiveness of 1) the various levels of learner interaction with the materials and 2) production complexities, weighed against their costs (in development, delivery, and learner time and effort).
Background and Previous Research
There is little or no previous research on the specific question of the appropriate complexity of learner interaction and interface complexity for particular instructional materials in given educational situations. For example, in teaching anatomy and physiology, the use of a three-dimensional "virtual temporal bone" has been reported. The development of this complex simulation involved a team of experts from both medical and computer sciences. Delivering the materials requires considerable bandwidth and an ImmersaDesk (a device costing approximately $200,000). Valuable and effective as this simulation unquestionably is, the authors have reported no existing studies, nor are they currently conducting any of their own, that measure its learning efficiency or cost/benefit.
Another, similar example involves the use of the Internet2 to deliver high-resolution video over the network to classrooms, as reported by Olsen in the Chronicle of Higher Education:
"At the Spring 2000 Internet2 meeting [a demonstration] showed just how far Internet2 researchers have come in transforming digital video, which produces a disappointingly small and jerky picture on the regular Internet but can appear as a large and vivid movie when transmitted on a high-speed digital research network."
The proposed classroom applications of the digital video (live interactions with distant persons, and rapid search and retrieval from large video databanks, such as the digital archive of C-Span's broadcasts) appear to be of potential instructional value. However, the implication that a "large vivid movie" provides more effective or more cost-beneficial instruction than a "small jerky picture" remains untested.
Although there are no prior studies specifically investigating the appropriate levels of learner interactivity and production complexity, there is related research in computer-human interfacing, communication science, consumer psychology, visualization, instructional design, and multimedia development that may be quite valuable as a basis for considering these issues. Shneiderman has developed a taxonomy of interaction styles that can provide an organized basis for categorizing the nature of the interactions included in instructional modules. For categorizing multimodal interfaces, the work of Martin, Veldman, and Béroule can be used as a starting point. Numerous authors have provided guides to design principles.
Theoretical perspectives and research relevant to the possible positive impact of perceptual enhancement of computer-based instructional materials are found in three principal areas:
1) Redundancy and the advantage of multiple messengers
2) More elaborate presentations (e.g., text plus illustrations rather than text alone) may lead to more elaborate and accessible memory representations
3) Enriched presentations may provide a greater variety of cues that the designer can use to engage, maintain, and redirect the learner's attention
There are also theoretical perspectives and research relevant to the possible negative impact of perceptual enhancement of computer-based instructional materials:
1) Perceptual enhancements may increase cognitive load
2) Perceptual enhancements may decrease elaborative rehearsal
3) Enhanced realism may lead to inappropriate attentional strategies, which lead to less efficient learning, and can sometimes lead to perceptual illusions and ambiguities
Studies by social psychology teams may provide useful perspectives on the impact on the effectiveness of a message of what are termed "peripheral cues," which are analogous to (and sometimes identical to) production complexities.
How Might These Issues Be Addressed?
The fundamental question of the relative effectiveness and efficiency of increasingly complex learner interactions and production interfaces (peripheral cues) could be studied by preparing, employing, and then comparing and contrasting several versions of computer-based instructional materials designed for the same learning objectives. It would be best if the materials could be presented in a double-blind manner, with neither the student nor the professor knowing which version was employed.
To address this issue effectively, instructional materials should be differentiated along three dimensions. The first dimension is categorical: the discipline being studied. It is possible that different disciplines may require different degrees of interactivity and/or design complexity. The second and third dimensions are continuous, but they may be artificially categorized.
The second dimension, the various instructional objectives for each individual set of instructional materials within each course, can be categorized along the continuum of increasingly complex intellectual tasks as described by such theoreticians as Bloom and Gagné and the many who have built on their work over the last 40 years. It seems likely that the complexity of the intellectual task will dictate different levels of interactivity and production values.
The third dimension, complexity of the learner interactivity and resulting production effects and interfaces, can be classified categorically also, for example as either "minimal," "moderate," or "high." A "minimally" complex production might have only still images and no interaction; a "moderate" one might have motion graphics or animations together with some learner inputs and responses; and a "high" level production might include three-dimensional, interactive models.
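The three classification dimensions just described can be encoded as a small data structure. The sketch below is one possible encoding under stated assumptions; the type names, the particular Bloom levels, and the example discipline are illustrative, not part of the original scheme:

```python
from dataclasses import dataclass
from enum import Enum

class Complexity(Enum):
    """Third dimension: learner interactivity / production complexity."""
    MINIMAL = 1   # still images only, no interaction
    MODERATE = 2  # motion graphics or animations, some learner input
    HIGH = 3      # three-dimensional, interactive models

class BloomLevel(Enum):
    """Second dimension: intellectual-task complexity (Bloom's taxonomy)."""
    KNOWLEDGE = 1
    COMPREHENSION = 2
    APPLICATION = 3
    ANALYSIS = 4
    SYNTHESIS = 5
    EVALUATION = 6

@dataclass(frozen=True)
class MaterialVersion:
    discipline: str         # first dimension: categorical (e.g., "anatomy")
    objective: BloomLevel   # continuous, artificially categorized
    complexity: Complexity  # continuous, artificially categorized

# One cell of the resulting design space:
version = MaterialVersion("anatomy", BloomLevel.APPLICATION, Complexity.HIGH)
```

Enumerating the cross-product of these three fields yields the set of material versions a study would need to prepare and compare.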
Analyses of the relationships among these dimensions may help to provide general guidance to developers of computer-based materials regarding the most effective and cost-beneficial levels of interactivity and production complexity. However, much more detailed analyses should also be conducted based on gender, because various studies suggest a relationship between gender and the effectiveness of different forms of learning, and between gender and attitudes toward and experience with computers. Furthermore, preparation, motivation, and learning styles have been shown to be significant factors in the effectiveness of persuasive messages and in many forms of instruction. Therefore, measures of student motivation, such as valuing of subject matter and intention to enroll in further courses in the discipline, should be considered in preparing guidelines. In addition, it would be appropriate to study whether there is a difference between minority and non-minority groups in the effectiveness of various complexities of instructional materials.
One possible method for studying this problem would entail the following steps:
1. Define a taxonomy of levels of complexity in learner interactivity and computer-based instructional materials, and a categorization of increasingly complex intellectual tasks.
2. Develop clear, concise, explicit, detailed instructional objectives and lesson plans for each of a number of courses.
3. Use the lesson plans and the taxonomy to plan several versions of instruction for each of the major portions of each course. The first version of each unit would have no or almost no computer-based instructional materials. The other versions would each have increasingly complex computer-based materials.
4. Develop instruments for measuring the effectiveness of the instruction in each unit of each course.
5. Develop instruments for measuring the various costs associated with the development, delivery, and faculty and student use of the computer-based instructional materials.
6. Create computer-based instructional materials at each of the levels of complexity. The several versions of the materials for each portion of each course should be as similar as possible in terms of their content and objectives; they should vary only in terms of the degree of complexity of learner interactions and production elements employed, as defined by the taxonomy.
7. Pretest students on the learning objectives. Offer the courses, randomly assigning students to the various levels of computer-based materials, using a double-blind methodology.
8. Assess the faculty activities and the students' learning as the courses progress.
9. Evaluate the effectiveness of the various levels of computer-based materials versus their expense.
10. Develop guidelines for developers of computer-based materials that indicate appropriate levels of production complexity and learner interactivity given the nature of the subject matter and the characteristics of the learners.
It is clear that such a research program would require several years and a considerable investment. However, completion of these steps could result in considerable increases in the efficiency and effectiveness of both the development and the delivery of computer-based instructional materials. The ultimate outcome of such research would be to produce broad, widely applicable guidelines and a rationale to assist future instructional designers and developers of computer-based instructional materials in deciding the appropriate degree of complexity and sophistication to employ.
Summary

The human-computer interface for instructional materials can range from extremely simple to extraordinarily complex. Standard practice is to include the most complex learner interactions and production effects affordable in computer-based instructional material, on the assumption that these will produce the most effective learning. However, the effectiveness and cost/benefit of different levels of complexity in such materials are rarely tested. What is needed is a systematic examination of both the effectiveness and the cost/benefit of achieving various types of instructional objectives using different levels of complexity of computer-based materials. The costs of development, delivery, and instructional time should all enter into the cost/benefit determination.
References

Bloom, Benjamin S. (editor). Taxonomy of Educational Objectives. Handbook I: Cognitive Domain. New York: David McKay Company, Inc., 1956.
Bloom, Benjamin S. "Mastery Learning." In Block, J. H. (editor). Mastery Learning: Theory and Practice. New York: Holt, Rinehart and Winston. 1971. pp. 47-63.
The Boyer Commission on Educating Undergraduates in the Research University. Reinventing Undergraduate Education: A Blueprint for America's Research Universities. http://notes.cc.sunysb.edu/Pres/boyer.nsf/. 1998.
Bransford, John D., Ann L. Brown, and Rodney R. Cocking (editors). How People Learn: Brain, Mind, Experience, and School. (Committee on Developments in the Science of Learning, Commission on Behavioral and Social Sciences and Education, National Research Council). Washington, DC: National Academy Press, 1999.
Gagné, Robert M. The Conditions of Learning (second edition). New York: Holt, Rinehart and Winston, Inc., 1970.
Gagné, Robert M. and Marcy Perkins Driscoll. Essentials of Learning for Instruction (second edition). Englewood Cliffs, NJ: Prentice Hall, 1988.
Gagné, Robert M., Leslie J. Briggs, and Walter W. Wager. Principles of Instructional Design (fourth edition). Fort Worth: Harcourt Brace Jovanovich College Publishers, 1992.
Martin, Jean-Claude, Remko Veldman, and Dominique Béroule. "Developing Multimodal Interfaces: A Theoretical Framework and Guided Propagation Networks." In Bunt, Harry, Robbert-Jan Beun, and Tijn Borghuis (editors). Multimodal Human-Computer Communication: Systems, Techniques, and Experiments. Berlin: Springer-Verlag, 1998. pp. 158-187.
Olsen, Florence. "Much-Improved Digital Video Is Demonstrated at Internet2 Meeting." Chronicle of Higher Education (on-line edition). March 30, 2000.
Shneiderman, Ben. "A Taxonomy and Rule Base for the Selection of Interaction Styles." In Shackel, B. and S. Richardson (editors). Human Factors for Informatics Usability. Cambridge: Cambridge University Press, 1991. pp. 325-342.
Shelton, Michael W. and Derek R. Lane. "The Centrality of Communication Education in Classroom Computer-Mediated-Communication: Toward a Practical and Evaluative Pedagogy." Paper under review by Communication Education, 2000.