Memory is the ability to encode, store and recall information. Memories give an organism the capability to learn and adapt from previous experiences as well as to build relationships. Encoding allows a perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from short-term or long-term memory. Working memory stores information for immediate use or manipulation.
Types
Visual, acoustic, and semantic encoding are the most intensively used, though other forms of encoding also occur.
Visual encoding
Visual encoding is the process of encoding images and visual sensory information. Visual sensory information is temporarily stored within our iconic memory and working memory before being encoded into permanent long-term storage. Baddeley’s model of working memory states that visual information is stored in the visuo-spatial sketchpad.
The amygdala is a complex structure that has an important role in visual encoding. It accepts visual input in addition to input from other systems and encodes the positive or negative values of conditioned stimuli.
Acoustic encoding
Acoustic encoding is the processing and encoding of sound, words, and other auditory input for storage and later retrieval. According to Baddeley, processing of auditory information is aided by the phonological loop, which allows input within our echoic memory to be subvocally rehearsed in order to facilitate remembering.
Studies indicate that lexical, semantic and phonological factors interact in verbal working memory. The phonological similarity effect (PSE), for example, is modified by word concreteness. This emphasizes that verbal working memory performance cannot be attributed exclusively to phonological or acoustic representation but also involves an interaction with linguistic representation. What remains to be seen is whether linguistic representations are expressed at the time of recall or whether they play a more fundamental role in encoding and preservation.
Other senses
Tactile encoding is the processing and encoding of how something feels, normally through touch. Neurons in the primary somatosensory cortex (S1) react to vibrotactile stimuli by activating in synchronisation with each series of vibrations. Odors and tastes may also lead to encoding.
In general, encoding for short-term storage (STS) in the brain relies primarily on acoustic rather than semantic encoding.
Semantic encoding
Semantic encoding is the processing and encoding of sensory input that has particular meaning or can be applied to a context. Various strategies, such as chunking and mnemonics, can be applied to aid encoding and, in some cases, to allow deep processing and optimize retrieval.
Words studied in semantic or deep encoding conditions are better recalled than those studied in both easy and hard groupings of nonsemantic or shallow encoding conditions, with response time being the deciding variable. Brodmann’s areas 45, 46, and 47 (the left inferior prefrontal cortex or LIPC) showed significantly more activation during semantic encoding conditions than during nonsemantic encoding conditions, regardless of the difficulty of the nonsemantic encoding task presented. The same area that shows increased activation during initial semantic encoding also displays decreasing activation with repetitive semantic encoding of the same words. This suggests that the decrease in activation with repetition is process-specific, occurring when words are semantically reprocessed but not when they are nonsemantically reprocessed.
Long term potentiation
Encoding is a biological event that begins with perception. All perceived and striking sensations travel to the brain’s hippocampus, where they are combined into a single experience. The hippocampus is responsible for analyzing these inputs and ultimately deciding whether they will be committed to long-term memory; the various threads of information are then stored in various parts of the brain. However, the exact way in which these pieces are identified and recalled later remains unknown.
Encoding is achieved using a combination of chemicals and electricity. Neurotransmitters are released when an electrical pulse crosses the synapse, which serves as a connection from nerve cells to other cells. The dendrites receive these impulses with their feathery extensions. A phenomenon called long-term potentiation allows a synapse to increase in strength as the number of signals transmitted between the two neurons increases. These cells also organise themselves into groups specializing in different kinds of information processing. Thus, with new experiences the brain creates more connections and may ‘rewire’ itself. The brain organizes and reorganizes itself in response to experience, creating new memories prompted by experience, education, or training. Therefore, the way a brain is organised reflects how it has been used. This ability to reorganize is especially important if a part of the brain ever becomes damaged. Scientists are unsure whether the stimuli we do not recall are filtered out at the sensory phase or whether they are filtered out after the brain examines their significance.
Mapping activity
Positron emission tomography (PET) demonstrates a consistent functional anatomical blueprint of hippocampal activation during episodic encoding and retrieval. Activation in the hippocampal region associated with episodic memory encoding occurs in the rostral portion of the region, whereas activation associated with episodic memory retrieval occurs in the caudal portions. This is referred to as the Hippocampal Encoding/Retrieval (HIPER) model.
One study used PET to measure cerebral blood flow during encoding and recognition of faces in both young and older participants. Young people displayed increased cerebral blood flow in the right hippocampus and the left prefrontal and temporal cortices during encoding, and in the right prefrontal and parietal cortex during recognition. Elderly people showed no significant activation in the areas activated in young people during encoding; however, they did show right prefrontal activation during recognition. It may thus be concluded that, as we age, failing memories may be the consequence of a failure to adequately encode stimuli, as demonstrated by the lack of cortical and hippocampal activation during the encoding process.
Recent findings in studies focusing on patients with post-traumatic stress disorder demonstrate that the amino acid transmitters glutamate and GABA are intimately implicated in the process of factual memory registration, and suggest that the amine neurotransmitters norepinephrine and serotonin are involved in encoding emotional memory.
Molecular perspective
The process of encoding is not yet well understood; however, key advances have shed light on the nature of its mechanisms. Encoding begins with any novel situation, as the brain interacts with it and draws conclusions from the results of that interaction. Such learning experiences are known to trigger a cascade of molecular events leading to the formation of memories. These changes include the modification of neural synapses, modification of proteins, creation of new synapses, activation of gene expression and new protein synthesis. However, encoding can occur on different levels. The first step is short-term memory formation, followed by the conversion to a long-term memory, and then a long-term memory consolidation process.
Synaptic plasticity
Synaptic plasticity is the ability of the brain to strengthen, weaken, destroy and create neural synapses and is the basis for learning. These molecular distinctions will identify and indicate the strength of each neural connection. The effect of a learning experience depends on the content of such an experience. Reactions that are favoured will be reinforced and those that are deemed unfavourable will be weakened. This shows that the synaptic modifications that occur can operate either way, in order to be able to make changes over time depending on the current situation of the organism. In the short term, synaptic changes may include the strengthening or weakening of a connection by modifying the preexisting proteins leading to a modification in synapse connection strength. In the long term, entirely new connections may form or the number of synapses at a connection may be increased, or reduced.
The encoding process
A significant short-term biochemical change is the covalent modification of pre-existing proteins in order to modify synaptic connections that are already active. This allows data to be conveyed in the short term, without consolidating anything for permanent storage. From here a memory or an association may be chosen to become a long-term memory, or forgotten as the synaptic connections eventually weaken. The switch from short-term to long-term memory is the same for both implicit and explicit memory. This process is regulated by a number of inhibitory constraints, primarily the balance between protein phosphorylation and dephosphorylation. Finally, long-term changes occur that allow consolidation of the target memory. These changes include new protein synthesis, the formation of new synaptic connections and, finally, the activation of gene expression in accordance with the new neural configuration. The encoding process has been found to be partially mediated by serotonergic interneurons, specifically in regard to sensitization, as blocking these interneurons prevented sensitization entirely. However, the ultimate consequences of these discoveries have yet to be identified. Furthermore, the learning process is known to recruit a variety of modulatory transmitters in order to create and consolidate memories. These transmitters cause the nucleus to initiate processes required for neuronal growth and long-term memory, mark specific synapses for the capture of long-term processes, regulate local protein synthesis, and even appear to mediate attentional processes required for the formation and recall of memories.
Encoding and genetics
Human memory, including the process of encoding, is known to be a heritable trait that is controlled by more than one gene. Twin studies suggest that genetic differences are responsible for as much as 50% of the variance seen in memory tasks. Proteins identified in animal studies have been linked directly to a molecular cascade of reactions leading to memory formation, and a sizeable number of these proteins are encoded by genes that are expressed in humans as well. Indeed, variations within these genes appear to be associated with memory capacity and have been identified in recent human genetic studies.
Complementary processes
The idea that the brain is separated into two complementary processing networks (task-positive and task-negative) has recently become an area of increasing interest. The task-positive network deals with externally oriented processing, whereas the task-negative network deals with internally oriented processing. Research indicates that these networks are not exclusive and that some tasks overlap in their activation. A 2009 study showed that encoding-success and novelty-detection activity within the task-positive network overlap significantly, which has been taken to reflect a common association with externally oriented processing. It also demonstrated that encoding failure and retrieval success share significant overlap within the task-negative network, indicating a common association with internally oriented processing. Finally, the low level of overlap between encoding-success and retrieval-success activity, and between encoding-failure and novelty-detection activity, respectively indicates opposing modes of processing. In sum, task-positive and task-negative networks can have common associations during the performance of different tasks.
Depth of processing
Different levels of processing influence how well information is remembered. These levels of processing can be illustrated by maintenance and elaborative rehearsal.
Maintenance and elaborative rehearsal
Maintenance rehearsal is a shallow form of processing information which involves focusing on an object without thought to its meaning or its association with other objects. For example, the repetition of a series of numbers is a form of maintenance rehearsal. In contrast, elaborative or relational rehearsal is a deep form of processing information and involves thinking about the object's meaning as well as making connections between the object, past experiences and the other objects of focus. Using the example of numbers, one might associate them with dates that are personally significant, such as a parent's birthday (past experiences), or one might see a pattern in the numbers that helps in remembering them.
Due to the deeper level of processing that occurs with elaborative rehearsal, it is more effective than maintenance rehearsal in creating new memories. This has been demonstrated in people's lack of knowledge of the details of everyday objects. For example, in one study in which Americans were asked about the orientation of the face on their country's penny, few recalled this with any degree of certainty. Despite being a detail that is seen often, it is not remembered because there is no need to remember it: the color alone discriminates the penny from other coins. The ineffectiveness of maintenance rehearsal, that is, simply being repeatedly exposed to an item, in creating memories has also been found in people's lack of memory for the layout of the digits 0–9 on calculators and telephones.
Maintenance rehearsal has been shown to be important in learning, but its effects can only be demonstrated using indirect methods, such as lexical decision tasks and word-stem completion, which are used to assess implicit learning. In general, however, previous learning by maintenance rehearsal is not apparent when memory is tested directly or explicitly with questions such as “Is this the word you were shown earlier?”
Intention to learn
Studies have shown that the intention to learn has no direct effect on memory encoding. Instead, memory encoding depends on how deeply each item is encoded, which can be affected by intention to learn, but not exclusively. That is, intention to learn can lead to more effective learning strategies and, consequently, better memory encoding, but if something is learned incidentally (i.e. without intention to learn) yet still processed and learned effectively, it will be encoded just as well as something learned with intention.
The effects of elaborative rehearsal or deep processing can be attributed to the number of connections made while encoding that increase the number of pathways available for retrieval.
Optimal encoding
Organization can be seen as the key to better memory. As demonstrated in the above section on levels of processing, the connections that are made between the to-be-remembered item, other to-be-remembered items, previous experiences and context generate retrieval paths for the to-be-remembered item. These connections impose organization on the to-be-remembered item, making it more memorable.
Mnemonics
For simple material, such as lists of words, mnemonics are the best strategy. Mnemonic strategies are an example of how finding organization within a set of items helps those items to be remembered. In the absence of any apparent organization within a group, organization can be imposed with the same memory-enhancing results. An example of a mnemonic strategy that imposes organization is the peg-word system, which associates the to-be-remembered items with a list of easily remembered items. Another commonly used mnemonic device is the first-letter-of-every-word system, or acronyms. When learning the colours in a rainbow, most students learn the first letter of every colour and impose their own meaning by associating it with a name such as Roy G. Biv, which stands for red, orange, yellow, green, blue, indigo, violet (a brief sketch of this first-letter procedure is given below). In this way mnemonic devices not only help the encoding of specific items but also their sequence. For more complex concepts, understanding is the key to remembering. In a study done by Wiseman and Neisser in 1974, participants were presented with a picture (the picture was of a Dalmatian in the style of pointillism, making it difficult to see the image). They found that memory for the picture was better if the participants understood what was depicted.
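As an illustrative aside (not drawn from any cited study), the first-letter procedure mentioned above can be expressed as a short sketch; the function name and example list are purely hypothetical:

```python
def first_letter_mnemonic(words):
    """Build a first-letter mnemonic from a list of words (illustrative sketch)."""
    return "".join(word[0].upper() for word in words)

# The rainbow colours collapse into the familiar "ROYGBIV".
print(first_letter_mnemonic(
    ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]))
```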
Chunking
Another way understanding may aid memory is by reducing the amount that has to be remembered via chunking. Chunking is the process of organizing objects into meaningful wholes. These wholes are then remembered as a unit rather than separate objects. Words are an example of chunking, where instead of simply perceiving letters we perceive and remember their meaningful wholes: words. The use of chunking increases the number of items we are able to remember by creating meaningful “packets” in which many related items are stored as one.
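A minimal sketch of this idea, assuming a flat digit string is simply grouped into fixed-size packets (the function name and grouping size are illustrative), might look like the following:

```python
def chunk(digits, size=3):
    """Group a flat digit string into fixed-size "packets" (illustrative only)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# Ten separate digits become four packets, much like the grouping of a phone number.
print(chunk("4165550137"))  # ['416', '555', '013', '7']
```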
State-dependent learning
For optimal encoding, connections are formed not only between the items themselves and past experiences, but also between the internal state or mood of the encoder and the situation they are in. The connections that are formed between the encoder's internal state or the situation and the items to be remembered are state-dependent. In a 1975 study by Godden and Baddeley, the effects of state-dependent learning were shown. They asked deep-sea divers to learn various materials while either under water or at the side of the pool. They found that those who were tested in the same condition in which they had learned the information were better able to recall it; i.e., those who learned the material under water did better when tested on that material under water than when tested on land. Context had become associated with the material they were trying to recall and was therefore serving as a retrieval cue. Similar results have also been found when certain smells are present at encoding.
However, although the external environment is important at the time of encoding in creating multiple pathways for retrieval, other studies have shown that simply recreating the internal state one was in at the time of encoding is sufficient to serve as a retrieval cue. Therefore, putting yourself in the same mindset that you were in at the time of encoding will help recall in the same way that being in the same situation helps recall. This effect, called context reinstatement, was demonstrated by Fisher and Craik in 1977 when they matched retrieval cues with the way information was memorized.
Encoding specificity
The context of learning shapes how information is encoded. For instance, Kanizsa in 1979 showed a picture that could be interpreted either as a white vase on a black background or as two faces facing each other on a white background. The participants were primed to see the vase. Later they were shown the picture again, but this time they were primed to see the black faces on the white background. Although this was the same picture they had seen before, when asked if they had seen it before, they said no. The reason was that they had been primed to see the vase the first time the picture was presented, and it was therefore unrecognizable the second time as two faces. This demonstrates that a stimulus is understood within the context in which it is learned, as well as the general rule that what really constitutes good learning are tests that test what has been learned in the same way that it was learned. Therefore, to truly be efficient at remembering information, one must consider the demands that future recall will place on this information and study in a way that matches those demands.
Computational models of memory encoding
Computational models of memory encoding have been developed in order to better understand and simulate the mostly expected, yet sometimes wildly unpredictable, behaviors of human memory. Different models have been developed for different memory tasks, which include item recognition, cued recall, free recall, and sequence memory, in an attempt to accurately explain experimentally observed behaviors.
Item recognition
In item recognition, one is asked whether or not a given probe item has been seen before. It is important to note that the recognition of an item can include context. That is, one can be asked whether an item has been seen in a study list. So even though one may have seen the word “apple” at some point in their life, if it was not on the study list, it should not be reported as recognized.
Item recognition can be modeled using multiple trace theory and the attribute-similarity model. In brief, every item that one sees can be represented as a vector of the item's attributes, which is extended by a vector representing the context at the time of encoding, and is stored in a memory matrix of all items ever seen. When a probe item is presented, the sum of its similarities to each item in the matrix (which is inversely related to the sum of the distances between the probe vector and each item in the memory matrix) is computed. If the summed similarity is above a threshold value, one responds, “Yes, I recognize that item.” Given that context continually drifts by nature of a random walk, more recently seen items, which each share a similar context vector to the context vector at the time of the recognition task, are more likely to be recognized than items seen longer ago.
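A minimal sketch of this summed-similarity decision rule, assuming items and contexts are represented as NumPy vectors and that similarity falls off exponentially with Euclidean distance (the decay form, threshold value and function names are illustrative assumptions rather than a specific published model), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def recognize(probe, context, memory, threshold=1.5, tau=1.0):
    """Summed-similarity rule: compare [probe, context] against every stored trace.

    memory -- matrix whose rows are stored [item-attributes, context] vectors.
    Similarity is assumed to decay exponentially with Euclidean distance.
    """
    trace = np.concatenate([probe, context])
    distances = np.linalg.norm(memory - trace, axis=1)
    summed_similarity = np.exp(-distances / tau).sum()
    return summed_similarity > threshold   # "Yes, I recognize that item."

def drift_context(context, step=0.1):
    """Random-walk drift of the context vector between study and test."""
    return context + step * rng.standard_normal(context.size)
```

Because the stored context vectors of recently studied items lie close to the drifted test context, their similarity terms dominate the sum, which reproduces the recency advantage described above.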
Cued recall
In cued recall, one is asked to recall the item that was paired with a given probe item. For example, one can be given a list of name-face pairs, and later be asked to recall the associated name given a face.
Cued recall can be explained by extending the attribute-similarity model used for item recognition. Because a wrong response can be given to a probe item in cued recall, the model has to be extended to account for that. This can be achieved by adding noise to the item vectors when they are stored in the memory matrix. Furthermore, cued recall can be modeled in a probabilistic manner such that, for every item stored in the memory matrix, the more similar it is to the probe item, the more likely it is to be recalled. Because the items in the memory matrix contain noise in their values, this model can account for incorrect recalls, such as mistakenly calling a person by the wrong name.
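Continuing the same kind of sketch, a hedged illustration of noisy storage and similarity-weighted sampling (the noise level and the exponential sampling rule are simplifying assumptions chosen for clarity) could be:

```python
import numpy as np

rng = np.random.default_rng(1)

def store_pairs(cues, targets, noise=0.2):
    """Store cue-target pairs as rows of a memory matrix, adding Gaussian noise at encoding."""
    pairs = np.hstack([cues, targets])
    return pairs + noise * rng.standard_normal(pairs.shape)

def cued_recall(probe_cue, memory, n_cue_dims):
    """Sample a stored pair with probability proportional to its similarity to the probe cue."""
    distances = np.linalg.norm(memory[:, :n_cue_dims] - probe_cue, axis=1)
    similarities = np.exp(-distances)
    probabilities = similarities / similarities.sum()
    chosen = rng.choice(len(memory), p=probabilities)
    return memory[chosen, n_cue_dims:]   # may belong to the wrong pair: a misrecall
```

Because encoding noise can make the wrong stored cue resemble the probe, the sampled target is occasionally the wrong one, mirroring errors such as calling a person by the wrong name.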
Free recall
In free recall, one is allowed to recall the learned items in any order. For example, one could be asked to name as many countries in Europe as possible. Free recall can be modeled using SAM (Search of Associative Memory), which is based on the dual-store model first proposed by Atkinson and Shiffrin in 1968. SAM consists of two main components: a short-term store (STS) and a long-term store (LTS). In brief, when an item is seen, it is pushed into the STS, where it resides with other items until it is displaced and put into the LTS. The longer an item has been in the STS, the more likely it is to be displaced by a new item. When items co-reside in the STS, the links between those items are strengthened. Furthermore, SAM assumes that items in the STS are always available for immediate recall.
SAM explains both primacy and recency effects. Probabilistically, items at the beginning of the list are more likely to remain in the STS, and thus have more opportunities to strengthen their links to other items. As a result, items at the beginning of the list are more likely to be recalled in a free-recall task (the primacy effect). Because of the assumption that items in the STS are always available for immediate recall, items at the end of the list are recalled very well, provided there were no significant distractors between learning and recall (the recency effect).
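A highly simplified sketch of the dual-store idea (not the full SAM model: inter-item associations are collapsed into a single long-term strength per item, and displacement is random, both simplifying assumptions) reproduces the primacy and recency pattern just described:

```python
import random

def present_list(items, buffer_size=4):
    """Toy dual-store pass over a study list of distinct items."""
    sts, lts_strength = [], {item: 0.0 for item in items}
    for item in items:
        if len(sts) == buffer_size:
            sts.pop(random.randrange(buffer_size))   # displacement from the rehearsal buffer
        sts.append(item)
        for resident in sts:
            lts_strength[resident] += 1.0            # longer STS residence -> stronger LTS trace
    return sts, lts_strength

def free_recall(sts, lts_strength):
    """Report STS contents first (recency), then LTS items in order of strength (primacy)."""
    remaining = sorted((i for i in lts_strength if i not in sts),
                       key=lts_strength.get, reverse=True)
    return list(sts) + remaining
```

Early list items accumulate strength because, on average, they stay in the buffer longest, while the final items are still in the buffer at test, so both ends of the list are favoured in recall.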
Incidentally, the idea of STS and LTS was motivated by the architecture of computers, which contain short-term (cache) and long-term storage.
Sequence memory
Sequence memory is responsible for how we remember lists of things in which ordering matters. For example, telephone numbers are an ordered list of single-digit numbers. There are currently two main computational memory models that can be applied to sequence encoding: associative chaining and positional coding.
Associative chaining theory states that every item in a list is linked to its forward and backward neighbors, with forward links being stronger than backward links, and links to closer neighbors being stronger than links to farther neighbors. For example, associative chaining predicts the tendency of transposition errors to occur most often between items in nearby positions. An example of a transposition error would be recalling the sequence “apple, orange, banana” instead of “apple, banana, orange.”
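A minimal sketch of such a chained representation, with forward links stronger than backward links and strength decaying with positional distance (the specific weights are illustrative assumptions), could be:

```python
def chain_strengths(items, forward=1.0, backward=0.6, decay=0.5):
    """Link every pair of list items; forward links beat backward links,
    and strength falls off geometrically with positional distance."""
    links = {}
    for i, a in enumerate(items):
        for j, b in enumerate(items):
            if i != j:
                base = forward if j > i else backward
                links[(a, b)] = base * decay ** (abs(j - i) - 1)
    return links

links = chain_strengths(["apple", "banana", "orange"])
# links[("apple", "banana")] > links[("banana", "apple")] > links[("apple", "orange")]
# i.e. the strongest cues point to nearby items, so neighbouring items are the
# most likely to swap places, producing transposition errors.
```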
Positional coding theory suggests that every item in a list is associated with its position in the list. For example, if the list is “apple, banana, orange, mango”, apple will be associated with list position 1, banana with 2, orange with 3, and mango with 4. Furthermore, each item is also associated, albeit more weakly, with its index ±1, even more weakly with ±2, and so forth. So banana is associated not only with its actual index 2, but also with 1, 3, and 4, with varying degrees of strength. Positional coding can also be used to explain the effects of recency and primacy: because items at the beginning and end of a list have fewer close neighbors than items in the middle of the list, they face less competition for correct recall.
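A comparable sketch of positional coding, where each item is bound most strongly to its own position and progressively more weakly to neighbouring positions (the exponential fall-off is again an illustrative assumption), might look like this:

```python
def positional_strengths(items, decay=0.5):
    """Bind each item to every list position, most strongly to its own index."""
    strengths = {}
    for i, item in enumerate(items):
        for pos in range(len(items)):
            strengths[(item, pos)] = decay ** abs(pos - i)
    return strengths

def recall_at(pos, items, strengths):
    """Recall by position: the item most strongly bound to that position wins."""
    return max(items, key=lambda item: strengths[(item, pos)])

s = positional_strengths(["apple", "banana", "orange", "mango"])
# "banana" is bound with strength 1.0 to position 1 and with 0.5, 0.5 and 0.25
# to positions 0, 2 and 3, so middle items face more competition than end items.
```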
Although the models of associative chaining and positional coding are able to explain a great deal of the behavior seen in sequence memory, they are far from perfect. For example, neither chaining nor positional coding can properly capture the details of the Ranschburg effect, which reports that sequences containing repeated items are harder to reproduce than sequences of unrepeated items. Associative chaining predicts that recall of lists containing repeated items is impaired because recall of any repeated item would cue not only its true successor but also the successors of all other instances of the item. However, experimental data have shown that spaced repetition of items resulted in impaired recall of the second occurrence of the repeated item, yet had no measurable effect on the recall of the items that followed the repeated items, contradicting the prediction of associative chaining. Positional coding predicts that repeated items will have no effect on recall, since the positions of the items in the list act as independent cues for the items, including the repeated ones; that is, repeated items are treated no differently from any other pair of items. This, again, is not consistent with the data.
Because no comprehensive model of sequence memory has yet been defined, it remains an interesting area of research.
History
Encoding is still relatively new and unexplored, but its origins date back to ancient philosophers such as Aristotle and Plato. A major figure in the history of encoding is Hermann Ebbinghaus (1850–1909), a pioneer in the field of memory research. Using himself as a subject, he studied how we learn and forget information by repeating lists of nonsense syllables to the rhythm of a metronome until they were committed to memory. These experiments led him to propose the learning curve. He used these relatively meaningless words so that prior associations between meaningful words would not influence learning. He found that lists that allowed associations to be made, and in which semantic meaning was apparent, were easier to recall. Ebbinghaus' results paved the way for experimental psychology in memory and other mental processes.
During the 1900s, further progress in memory research was made. Ivan Pavlov began research into classical conditioning; his work demonstrated the ability to create a semantic relationship between two unrelated items. In 1932, Bartlett proposed the idea of mental schemas. This model proposed that whether new information would be encoded depends on its consistency with prior knowledge (mental schemas). It also suggested that information not present at the time of encoding could be added to memory if it was based on schematic knowledge of the world. In this way, encoding was found to be influenced by prior knowledge. With the advance of Gestalt theory came the realisation that memory for encoded information is often perceived as different from the stimuli that triggered it, and that it is also influenced by the context in which the stimuli are embedded.
With advances in technology, the field of neuropsychology emerged, and with it a biological basis for theories of encoding. In 1949, Hebb looked at the neuroscience aspect of encoding and stated that “neurons that fire together wire together,” implying that encoding occurs as connections between neurons are established through repeated use. The 1950s and 1960s saw a shift to the information-processing approach to memory, based on the invention of computers, followed by the initial suggestion that encoding is the process by which information is entered into memory. In 1956, George Armitage Miller wrote his paper The Magical Number Seven, Plus or Minus Two, on how short-term memory is limited to seven items, plus or minus two. This number was amended when studies on chunking revealed that seven, plus or minus two, could also refer to seven “packets of information.” In 1974, Alan Baddeley and Graham Hitch proposed their model of working memory, which consists of the central executive, visuo-spatial sketchpad, and phonological loop as a method of encoding. In 2000, Baddeley added the episodic buffer. Meanwhile, Endel Tulving (1983) proposed the idea of encoding specificity, whereby context was again noted as an influence on encoding.