This is a video project illustrating the concept of using “The Multimedia Principle” to encode one’s learning of webpage coding more efficiently. The video is 4 minutes and 50 seconds long and demonstrates how to create a webpage from scratch in a text editor, using both pictorial and verbal encoding. The video title, “The Multimedia Principle: multimedia encoding through multimedia coding,” was intended as a play on words at the time the video was created. This video project was included among the other portfolio artifacts because it fits the larger theme of this blog: how people experience themselves learning.
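For readers who have not seen the video, building a webpage from scratch in a text editor typically begins with a skeleton like the one below. This is a minimal sketch, not the video’s actual code: the tags are standard HTML, but the title, heading text, and image filename are placeholders invented for illustration. It also shows the multimedia principle in miniature, pairing on-screen words with a graphic that carries the same meaning:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <!-- The title appears in the browser tab, not on the page itself -->
    <title>My First Webpage</title>
  </head>
  <body>
    <h1>Hello, world!</h1>
    <!-- Verbal channel: on-screen words -->
    <p>A paragraph of text, paired with a graphic illustrating the same idea.</p>
    <!-- Pictorial channel: a graphic; "example.png" is a placeholder filename -->
    <img src="example.png" alt="A graphic illustrating the same idea as the text above">
  </body>
</html>
```

Saving this as a `.html` file and opening it in any browser is all it takes; no special tools beyond a plain text editor are required, which is the point the video makes.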
The multimedia principle states that people construct more meaning from what they perceive when it is not on-screen text or words alone, but on-screen words and graphics illustrating a common meaning through the two channels, or modes, of pictorial and verbal encoding. Such perceptions are designed to convert the meaning that the pictorial and verbal presentations share into mental structures that model the meaning to be learned. In a perfect world, where theory and its application coincide, the learner’s mental structures would be shaped by the kinds of perceptions designed to present that meaning. Attending to words on a display, transforming them from visual perceptions into inner speech by reading, is the verbal mode of encoding information into memory. Perceiving images, by contrast, is the act of perceiving visual patterns whose meaning we recognize and construct from past associations in visual memory. Encoding the visual representation of words as a verbal perception is a longer process for the brain than simply hearing words and encoding them verbally, or simply encoding the visual perception of images. We can often see more information in one image than we can represent or describe with one word, or with many. To ignore graphics is to ignore both how we learned as pre-linguistic infants and the impact of our visual cortex on our evolution as a species.
The multimedia principle argues for more efficient information processing, placing the least restriction on the bandwidth of working memory, by using multi-modal or dual-channel (pictorial/visual and word/verbal) perceptions to get information encoded in long-term memory. The contiguity principle concerns the proximity in space and time between pictorial and verbal perceptions: when the two coincide, our learning of the information they present best matches the intent of the instruction. When they are misplaced, cognitive processing is wasted on making sense of the irrelevant separation, in both space and time, between the pictorial and verbal perceptions in the presentation. The modality principle picks up where the multimedia principle stops, carrying the multi-modal or dual-channel concept of cognition to its logical conclusion: pictorial and verbal encoding works best when graphics and words are presented in their natural sensory forms, which restrict our cognitive processing of them least. This means that contiguous information presented through relevant graphical perceptions (visual encoding) and relevant spoken-language perceptions (verbal encoding) will be less restrictive on our cognitive processing as sense-makers than combinations of graphics and on-screen words and text.
Narrated text is processed through our auditory system differently from visual text, so the presented material is easier to learn through visual graphics and narrated audio alone. I remember the martial arts movies that did not synchronize the movement of the actors’ and actresses’ lips with the English narration representing what they were supposed to be saying. One could recognize that the visual action of the lips did not match the kinesthetic action required to pronounce the words one heard in the narration. One processes the visual channel to see the pictorial information in the actors’ and actresses’ faces as they speak; one processes the somatosensory or kinesthetic channel to compare the visual orientation of others’ lips to one’s own; and finally, one hears the narration of English words. By comparing these three sensory modalities, one can discern that the others’ speech is not timed correctly to produce what one is hearing in the narration. This integration of oral kinesthetic perception would not be possible without a common neurological base for language in both visual and auditory perception, further illustrating the cohesiveness of cognitive learning through those channels simultaneously.