Cognitive Information Processing and Memory

As a learning theory, behaviorism dominated American psychology for half a century, but it could not adequately explain the processes involved in information recall.  Cognitive Information Processing (CIP) was not new to psychology, but its use as a learning theory allows us to address issues that behaviorism cannot.

The cognitive information processing model portrays the mind as possessing a structure consisting of components for processing (storing, retrieving, transforming, using) information and procedures for using the components. Like behaviorism, the cognitive information processing model holds that learning consists partially of the formation of associations between new and stored information.

In the CIP model, learning occurs when information is input from the environment, processed, stored in memory, and then output in the form of some learned capability.  The task is to understand how the environment modifies human behavior, bearing in mind that there is an intervening variable between the environment and the subsequent behavior: the information processing system of the learner.

CIP proposes a three-stage memory system:  sensory memory, short-term memory, and long-term memory.
Sensory Memory
Sensory memory is associated with the senses, including vision, hearing, touch (haptics), etc.  Sensory memory functions to hold this sensory information in memory very briefly, just long enough for the information to be processed further. There is a separate sensory memory corresponding with each of the five senses, but all are assumed to operate in essentially the same way.

When dealing with visual stimuli, it appears sensory memory is temporally, rather than visually, limited. That means that a great deal of visual information registers, but it decays very rapidly without further processing.
Relatively little is known about sensory memories corresponding to the other senses, but they are presumed to function in a similar way.  The visual sensory trace is known as the icon.

Unlike visual information, auditory information remains in sensory memory longer.  This is presumed to be because of the time it takes for speech processing to occur.  The auditory sensory trace is known as the echo.

Working or Short-term Memory

In this memory, further processing is carried out to make the information ready either for long-term storage or for response and action at that time.  Working memory is generally thought to have independent processors for each sensory modality.  Additionally, working memory has been likened to consciousness: if you are actively thinking about ideas, they are in working memory.  One important aspect of working memory is the relatively small amount of information it can contain and the short period of time it is available, typically 15 to 30 seconds.  (Have you ever wondered why you can walk from one room to another and forget why you went to the other room?)  This limited storage and short access time leads to practices like chunking, which break complex tasks into simpler ones that can be more easily processed through sensory and working memory.

Information selected for further processing comes from sensory memory to working memory.  At this stage, concepts from long-term memory will be activated for use in making sense of the incoming information.  However, there are limits as to how much information can be held in working memory at one time, and for how long that information can be retained.  It is believed that the effective capacity of working memory can be increased by grouping individual pieces of information into larger meaningful units, known as chunks.  The process of creating chunks is known as chunking.
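The idea of chunking can be sketched in a few lines of Python. This is only an illustration: the phone-number digits and the chunk size of three are invented values, not claims about actual memory capacity.

```python
# Hypothetical illustration of chunking: grouping ten separate digits
# into a few larger units reduces the number of items working memory
# must hold at once.

def chunk(items, size):
    """Group a flat sequence into consecutive chunks of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = list("4155551234")                   # ten individual items
chunks = ["".join(c) for c in chunk(digits, 3)]
print(chunks)                                 # ['415', '555', '123', '4']
```

Ten digits become four chunks, which fits comfortably within the limits of working memory described above.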

Consequently, learning should be organized to allow activities to be easily chunked by the learner.  The current hypothesis is that as new chunks come into memory, they push out chunks that were previously occupying the available spaces in working memory. This is the now accepted explanation for the serial position effect known as recency. This is why people can remember with a higher degree of certainty the things that they heard most recently.
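The displacement hypothesis can be sketched as a fixed-capacity buffer in which each new chunk pushes out the oldest. The capacity of four chunks and the word list are assumptions chosen purely for illustration.

```python
# A minimal sketch of the displacement account of recency: working
# memory is modeled as a fixed-capacity buffer (capacity 4 is an
# assumed value), and each new chunk evicts the oldest one.
from collections import deque

working_memory = deque(maxlen=4)
for word in ["cat", "rope", "lamp", "fish", "door", "tree"]:
    working_memory.append(word)

# Only the most recently heard items survive -- the recency effect.
print(list(working_memory))   # ['lamp', 'fish', 'door', 'tree']
```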

Research has shown that unrehearsed information, that is information which has not been selected for additional processing or storage into long term memory, will be lost from working memory in about 15 to 30 seconds.  To prevent the loss of information from working memory and to ensure its transfer to long-term storage, two processes are necessary: rehearsal and encoding.

Long-term Memory
We consider long-term memory the permanent record, or information storehouse.  It is currently assumed that once information has been processed into long-term memory, it is never truly lost.  As far as we know, long-term memory is capable of retaining an unlimited amount and variety of information.  On this view, we forget things not because the information is lost, but because the association or pathway to the information has deteriorated.

Episodic memory is memory for specific events, such as when you remember the circumstances surrounding how you learned to read a weather map, or perform some other task. Semantic memory refers to all the general information stored in memory that can be recalled independently of how it was learned.   Sometimes we may not be able to remember how we learned something, because the circumstances surrounding the event were not particularly memorable. As far as educators are concerned, the emphasis is on semantic memory.

Representation of Information Storage as a Network
One way to conceive of long-term memory is to think of it as a sort of mental dictionary where concepts are represented according to their associations to one another.  A network model assumes the existence of nodes in memory, which correspond to concepts.  These nodes are thought to be interconnected in a vast network structure, representing the learned relationships among concepts.
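A toy version of such a network can be sketched as a dictionary of nodes with labeled links. The concepts and relations below are invented for illustration, not claims about actual memory contents.

```python
# A hypothetical semantic network: each node (concept) carries labeled
# associations to other concepts, forming a small network structure.

network = {
    "canary": {"is-a": "bird", "can": "sing"},
    "bird":   {"is-a": "animal", "has": "wings"},
    "animal": {"can": "breathe"},
}

def related(concept):
    """Return the concepts directly linked to the given node."""
    return set(network.get(concept, {}).values())

def can(concept, ability):
    """Follow 'is-a' links upward to see whether an ability applies."""
    while concept in network:
        if network[concept].get("can") == ability:
            return True
        concept = network[concept].get("is-a")
        if concept is None:
            return False
    return False

print(sorted(related("canary")))    # ['bird', 'sing']
print(can("canary", "breathe"))     # True
```

Notice that "a canary can breathe" is never stored directly; it is recovered by traversing the learned relationships between nodes, which is the essential idea of a network model.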

Feature Comparison Models of Long-term Memory
In this model, concepts in memory are stored with sets of defining features.  The association to other concepts is then accomplished through a comparison of overlapping features.  The defining features are those an object must have in order for it to be classified in a category.  Characteristic features are those that are usually associated with typical members of the category.  One challenge with the feature comparison model is that it does not take into account the issue of context as it relates to a specific concept.
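Feature comparison can be sketched by storing each concept as a set of features and measuring similarity as the proportion of overlapping features. The feature lists below are invented for illustration.

```python
# A hedged sketch of the feature comparison model: concepts are sets of
# features, and association strength is computed from feature overlap.

features = {
    "robin":   {"has_wings", "flies", "lays_eggs", "sings"},
    "penguin": {"has_wings", "swims", "lays_eggs"},
    "bat":     {"has_wings", "flies", "nocturnal"},
}

def overlap(a, b):
    """Proportion of shared features between two concepts (Jaccard style)."""
    shared = features[a] & features[b]
    total = features[a] | features[b]
    return len(shared) / len(total)

print(round(overlap("robin", "penguin"), 2))   # 0.4
```

Note how the sketch exhibits the model's weakness described above: the overlap score is fixed regardless of context, so "penguin" is never judged more bird-like in one situation than another.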

Attention and Memory
Invariably, some information is lost simply because the individual was not paying attention. Attention has been conceptualized in a number of ways. Attention is not an all-or-none proposition; rather, it serves to attenuate, or tune out, stimulation. We can see examples of this, such as when we are attending a party and are involved in one conversation, and hear our name or a topic of interest elsewhere, and our attention shifts. This means enough information was being processed about the other conversation to prompt us to react.

Ongoing research regarding attention suggests it is a resource with limited capacity to be allocated and shared among competing activities. This suggests learners have some control over the process, and can selectively focus attention to meet certain ends. It also suggests that tasks requiring relatively little attention may be accomplished effortlessly or automatically.

Propositional Models of Long-term Memory
A proposition is a combination of concepts that has a subject and a predicate. In this model, instead of concept nodes comprising the basic unit of knowledge, the basic unit is taken to be the proposition. Because memory recall is often structured around propositions, propositions have been used in many recall experiments.
The propositional model is also a network model, like the representational network model we saw earlier.
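Propositions as basic units can be sketched as simple subject-predicate tuples that memory can be queried against. The example propositions are invented for illustration.

```python
# A hypothetical sketch of a propositional memory store: each unit is a
# (subject, predicate) pair or (subject, predicate, object) triple, and
# retrieval queries memory by component rather than by single concepts.

propositions = [
    ("clouds", "gather"),
    ("rain", "falls"),
    ("barometer", "measures", "pressure"),
]

def about(subject):
    """Retrieve every proposition whose subject matches."""
    return [p for p in propositions if p[0] == subject]

print(about("rain"))   # [('rain', 'falls')]
```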

Parallel Distributed Processing Models of Long-term Memory
Parallel processing is distinguished from serial processing in that multiple cognitive operations occur simultaneously as opposed to sequentially.  Network memory models have come to include the assumption of parallel processing, but this assumption is at the very core of this model.  This model is also known as a connectionist model.

The model proposes that the building blocks of memory are in fact connections. These connections are sub-symbolic in nature, which means they do not correspond to meaningful bits of information, like concepts or propositions. Instead the units are simple processing devices and connections describe how the units interact with each other.  Consequently, this forms a vast network, across which processing is assumed to be distributed.  The parallel distributed processing model seems to account for the incremental nature of human learning.
This model also allows for incorporating goals into the dynamics of the information processing system.  There has been limited evidence supporting the parallel distributed processing model as a mirror of neural processes in the brain.
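A minimal connectionist sketch follows, assuming a single-layer delta-rule learner; the training data and learning rate are invented for illustration. The point is that the units carry no meaning individually, and learning happens as small incremental adjustments to connections.

```python
# A hedged sketch of parallel distributed processing: sub-symbolic
# units interact only through weighted connections, and learning is a
# series of small incremental weight adjustments (a delta-rule update;
# the patterns and learning rate of 0.1 are assumptions).

weights = [0.0, 0.0]
rate = 0.1

def output(inputs):
    """A unit's activation is a weighted sum of its inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def learn(inputs, target):
    """Nudge every connection in parallel toward reducing the error."""
    error = target - output(inputs)
    for i, x in enumerate(inputs):
        weights[i] += rate * error * x

for _ in range(50):                 # learning is gradual, not one-shot
    learn([1.0, 0.0], 1.0)
    learn([0.0, 1.0], 0.0)

print(round(output([1.0, 0.0]), 2))   # approaches 1.0
```

Notice that no single weight "stores" the learned association; the behavior emerges from the whole set of connections, and it is acquired incrementally, which is what the model offers as an account of gradual human learning.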

Dual Code Models of Long-term Memory
Imagery is often described as “images in my mind”.  Imagery can be tactile, auditory, or visual, or olfactory or kinesthetic in nature.  Words that are more abstract in nature are harder to remember than words that are more concrete. For example, people find it much easier to remember words like sailboat, apple, and zebra than words like liberty and justice.

According to the dual code or dual systems view, there are two systems of memory representation, one for verbal information and the other for nonverbal information. The theory currently suggests that mental images are not exact copies of visual images.  Images tend to be imprecise representations, with many details omitted, incomplete, or in some cases inaccurately recorded.

Retrieval of Learned Information
After information has been stored in long-term memory, it must be retrieved for later use.  Previously learned information is brought back into working memory, either for the purpose of understanding some new input or for making a response.  To recall information, learners must both generate an answer and then determine whether it correctly answers the question.  In recognition, however, potential answers are already provided, and the learner must only recognize which one is correct.

In free recall situations, learners must retrieve previously stored information with no clues or hints to help them remember.  Because there are no cues to potentially bias the retrieval of information, the output of free recall is assumed to accurately represent what is in memory.  However, providing learners with cues raises the overall amount the individual is able to remember.  Cued recall tasks are those in which a hint or cue is provided to help the learner remember the desired information.

Unlike free recall, recognition involves a set of pre-generated stimuli presented to learners for a decision or judgment.  One factor affecting recognition is the strength of the memory: stronger memories will be more accurately recognized than weaker memories. Another factor affecting recognition is the context surrounding the recognition task. High-risk conditions lead to a more stringent criterion than low-risk conditions, even though the memory trace in both situations is equivalent in strength and matches the test stimulus.

The encoding specificity principle states “that whatever cues are used by a learner to facilitate encoding will also serve as the best retrieval cues for that information at test time”.  Information retrieval is very much influenced by the context of encoding the information into long-term memory. This suggests for instruction that many different contexts or examples may be important to discuss during the presentation of concepts. In this way, students will have many cues available to assist in encoding.  These cues can be used later for recall.

Encoding failure means the information was never successfully stored, so the information sought during retrieval cannot be found.  The concept of encoding failure emphasizes once again the importance of activating relevant prior knowledge in learning.  Retrieval failure, a second cause of forgetting, refers to the inability to access information that has been encoded in memory.

There are methods to support or inhibit encoding.  One factor supporting encoding is note taking.  Taking notes provides an external retrieval mechanism, as it provides memory storage which is external to the learner.  Students who elaborate on their notes also tend to perform better than those who simply reread them.  The process of taking the notes and elaborating upon them forces the learner to recall what they have learned and supports associating the new information with other knowledge.

Interference occurs when other events or competing information get in the way of effective retrieval of the desired information.
Interference can also occur from information that was learned either before or after the desired information affecting the recall of the desired information.  Retroactive interference occurs when newer information interferes with the retrieval of previously learned information.  Proactive interference occurs when previous learning interferes with the recall of later learning.

Using the Cognitive Processing Model for Instruction
If learners are supposed to understand new information in particular ways, then the instruction must be organized to help them.  Instructional tactics, such as signaling what information is important and drawing learners’ attention to specific features of that information, can facilitate selective attention and appropriate pattern recognition.

Using imagery and representing information in multiple ways can help encoding and retrieval, as well as counteract the effects of interference.  Additionally, arranging extensive and variable practice is important.  The saying “practice makes perfect” is not exactly accurate.  While automaticity is a desirable educational goal, it is not just the amount of practice that makes things perfect; it is also the type of practice. So the dictum should really be “perfect practice makes perfect”.
