Sensation is the process by which we detect physical energy from our environment and encode it as neural signals. Perception is the process of organizing and interpreting sensory information, enabling us to recognize meaningful objects and events.
The task of each sense is to receive stimulus energy, transform it into neural signals, and send those neural messages to the brain. In vision, light waves are converted into neural impulses by the retina; after being coded, these impulses travel up the optic nerve to the brain’s visual cortex, where they are interpreted. The Young-Helmholtz and opponent-process theories together help
explain color vision.
In hearing, sound waves are transmitted to the fluid-filled cochlea, where they are converted to neural messages and sent to the brain. We locate sounds by differences in the timing and loudness of the sounds received by each ear. Together, the place and frequency theories explain how we hear both high-pitched and low-pitched sounds.
The sense of touch is actually four senses—pressure, warmth, cold, and pain—that combine to produce other sensations, such as “hot.” A kinesthetic sense and a vestibular sense together enable us to detect body position and movement. Taste, a chemical sense, is a composite of sweet, sour, salty, bitter, and umami sensations and of the aromas that interact with information from the taste buds. Smell, also a chemical sense, does not have basic sensations as there are for touch and taste.
In organizing sensory data into whole perceptions, our first task is to discriminate figure from ground. We then organize the figure into meaningful form by following certain rules for grouping stimuli. We transform two-dimensional retinal images into three-dimensional perceptions by using binocular cues, such as retinal disparity, and monocular cues, such as the relative sizes of objects.
In perceiving motion, we assume that shrinking objects are moving away from us and growing objects are moving toward us. The perceptual constancies enable us to perceive objects as enduring in shape, size, lightness, and color, regardless of viewing angle, distance, and illumination. The constancies explain several well-known illusions. Studies of sensory deprivation reveal that, for many species, infancy is a critical period during which experience must activate the brain’s innate visual mechanisms. For example, when cataracts
are removed from adults who have been blind from birth, these persons can distinguish figure and ground and can perceive color but are unable to distinguish shapes and forms.
At the same time, human vision is remarkably adaptable. Given glasses that turn the world upside down, people manage to adapt and move about with ease. Clear evidence that perception is influenced by our experience comes from the many demonstrations of perceptual set and context effects. Because perceptions vary, they may not be what the designer of a machine assumes. Human factors psychologists study how people perceive and use machines and how machines and physical environments can be better suited to their use.
Try this quick experiment before moving on:
Press play for an overview of the first two sections:
Sensing the World: Some Basic Principles
Sensation is the process by which our sensory receptors and nervous system receive and represent stimulus energies from our environment. Bottom-up processing is analysis that begins with the sense receptors and works up to the brain’s integration of sensory information. Perception is the process of organizing and interpreting sensory information, enabling us to recognize meaningful
objects and events. Top-down processing is information processing guided by our experience and expectations.
Selective attention means that at any moment, awareness focuses on only a limited aspect of all that we experience. The cocktail party effect refers to our ability to attend to only one voice among many. When talking on the phone while driving, our selective attention shifts back and forth from the road to the phone. The process of shifting attentional gears can entail a fatal delay in coping.
One analysis of phone records for the moments before a car crash found cellphone users were four times more at risk. Selective attention limits our perception, as many stimuli will pass by unnoticed. This lack of awareness is evident in studies of inattentional blindness. Forms of this include change blindness, choice blindness, and even choice-blindness blindness. Selective attention even extends to our sleep when we are oblivious to most but not all of what is happening around us.
Press play to see some experiments on change blindness:
In studying the relationship between physical energy and psychological experience, researchers in psychophysics identified an absolute threshold as the minimum stimulation needed to detect a particular stimulus 50 percent of the time. Signal detection theory predicts how and when we detect the presence of a faint stimulus, assuming that our individual absolute thresholds vary with our experiences, expectations, motivation, and level of fatigue.
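Signal detection theory quantifies an observer's sensitivity with the statistic d′ (d-prime), computed from the hit rate and the false-alarm rate. As a minimal sketch (the function name is ours, and the example rates are illustrative), using only Python's standard library:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: the separation (in z-score units) between
    the signal and noise distributions implied by the two rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A vigilant observer (many hits, few false alarms) shows high sensitivity:
print(round(d_prime(0.90, 0.10), 2))  # 2.56
# An observer who merely guesses (hits equal false alarms) shows none:
print(d_prime(0.50, 0.50))            # 0.0
```

Because d′ separates sensitivity from response bias, it captures the theory's point that detection depends on more than the stimulus alone.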
The priming effect, as shown in experiments, reveals that we can process some information from stimuli too weak to recognize, indicating that much of our information processing occurs automatically, unconsciously. But the effect is too fleeting to enable advertisers to exploit us with subliminal messages.
A difference threshold is the minimum difference between two stimuli that a person can detect 50 percent of the time. In humans, difference thresholds (experienced as a just noticeable difference [jnd]) increase in proportion to the size of the stimulus—a principle known as Weber’s law.
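Weber's law can be written as a simple proportion: the just noticeable difference equals a constant fraction (the Weber fraction) of the baseline stimulus. A minimal sketch, assuming a roughly 2 percent Weber fraction for lifted weights (a commonly cited approximate value):

```python
def just_noticeable_difference(intensity, weber_fraction):
    """Weber's law: the smallest detectable change grows in
    direct proportion to the baseline stimulus intensity."""
    return weber_fraction * intensity

# With a ~2% Weber fraction, a 100 g weight needs about a 2 g change
# to be noticed, while a 1000 g weight needs about 20 g.
print(just_noticeable_difference(100, 0.02))   # 2.0
print(just_noticeable_difference(1000, 0.02))  # 20.0
```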
Sensory adaptation refers to diminished sensitivity as a consequence of constant stimulation. Constant, unchanging images on the eye’s inner surface fade and then reappear. The phenomenon of sensory adaptation enables us to focus our attention on informative changes in our environment without being distracted by uninformative background stimulation.
Press play for an audio/visual summary:
When finished with this section, go to http://edmodo.com to take the Unit 4:1 Quiz
We all have the ability to convert one sort of energy to another. Our eyes, for example, receive light energy and transduce (transform) it into neural messages that our brain then processes into what we consciously see. The energies we experience as visible light are a thin slice from the broad spectrum of electromagnetic energy. Our sensory experience of light is determined largely by the light energy’s wavelength, which determines the hue of a color, and its intensity, which influences brightness.
After light enters the eye through the pupil, whose size is regulated by the iris, a cameralike lens focuses the rays on the retina by changing its curvature, a process called accommodation. This light-sensitive surface contains receptors that begin the processing of visual information. The retina’s rods and cones (most of which are clustered around the fovea) transform the light energy into neural signals. These signals activate the neighboring bipolar cells, which in turn activate the neighboring ganglion cells, whose axons converge to form the optic nerve that carries information via the thalamus to the brain. Where the optic nerve leaves the eye, there are no receptor cells—creating a blind spot. The cones, which are located mostly in the fovea, enable vision of color and fine detail. The rods enable black-and-white vision, remain sensitive in dim light, and are necessary for peripheral vision.
Press play for a quick video on the structure and function of the eye:
Click on link below to take a practice quiz on the anatomy of the eye:
We process information at progressively more abstract levels. The information from the retina’s 130 million rods and cones is received and transmitted by the million or so ganglion cells whose axons make up the optic nerve. When individual ganglion cells register information in their region of the visual field, they send signals to the occipital lobe’s visual cortex. In the cortex, individual neurons (feature detectors) respond to specific features of a visual stimulus. The visual cortex passes this information along to other areas of the cortex where teams of cells (supercell clusters) respond to more complex patterns.
Subdimensions of vision (color, movement, depth, and form) are processed by neural teams working separately and simultaneously, illustrating our brain’s capacity for parallel processing. Other teams collaborate in integrating the results, comparing them with stored information and enabling perceptions. This contrasts sharply with the step-by-step serial processing of most computers and of conscious problem solving. Some people who have lost part of their visual cortex experience blindsight.
Press play to learn more about the phenomenon of blindsight:
The Young-Helmholtz trichromatic (three-color) theory states that the retina has three types of color receptors, each especially sensitive to red, green, or blue. When we stimulate combinations of these cones, we see other colors. For example, when both red- and green-sensitive cones are stimulated, we see yellow.
Hering’s opponent-process theory states that color vision depends on three opposing processes: one responsible for red-versus-green perception, one for yellow-versus-blue, and a third for black-versus-white. Subsequent research has confirmed that after leaving the receptor cells, visual information is analyzed in terms of the opponent colors red and green, blue and yellow, and also black and white. Thus, in the retina and in the thalamus, some neurons are turned “on” by red but turned “off” by green. Others are turned on by green but off by red. These opponent processes help explain afterimages.
Press play for a brief explanation of the trichromatic theory:
Press play for an audio/visual summary:
When finished with this section, go to http://edmodo.com to take the Unit 4:2 Quiz
Press play for an overview of sections 3 & 4:
Audition, or hearing, is highly adaptive. The pressure waves we experience as sound vary in amplitude and frequency and correspondingly in perceived loudness and pitch. Decibels are the measuring unit for sound energy.
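The decibel scale is logarithmic: each tenfold increase in physical sound intensity adds 10 dB. A minimal sketch of that relationship, using the standard reference intensity for the threshold of hearing (the example intensities are illustrative):

```python
import math

I0 = 1e-12  # reference intensity in W/m^2 (approximate threshold of hearing)

def intensity_to_db(intensity):
    """Sound intensity level in decibels: 10 * log10 of the ratio
    between a sound's intensity and the hearing-threshold reference."""
    return 10 * math.log10(intensity / I0)

print(intensity_to_db(1e-12))  # 0.0   -> threshold of hearing
print(intensity_to_db(1e-2))   # 100.0 -> a very loud, potentially damaging sound
```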
The visible outer ear channels the sound waves through the auditory canal to the eardrum, a tight membrane that vibrates with the waves. Transmitted via the bones of the middle ear (the hammer, anvil, and stirrup) to the fluid-filled cochlea in the inner ear, these vibrations cause the oval window to vibrate, causing ripples in the basilar membrane, which bends the hair cells that line its
surface. This movement triggers neural messages to be sent (via the thalamus) to the temporal lobe’s auditory cortex. Damage to the hair cells accounts for most hearing loss.
Press play for a 3D presentation on the anatomy of the ear:
Click on link below to practice labeling the parts of the ear:
Place theory presumes that we hear different pitches because different sound waves trigger activity at different places along the cochlea’s basilar membrane. Thus, the brain can determine a sound’s pitch by recognizing the place on the membrane from which it receives neural signals.
Frequency theory states that the rate of nerve impulses traveling up the auditory nerve matches the frequency of a tone, thus enabling us to sense its pitch. Because an individual neuron cannot fire faster than about 1000 times per second, the volley principle explains how we hear sounds with frequencies above 1000 waves per second: neural cells alternate firing, and their combined volleys can match higher frequencies. Place theory best explains how we sense high-pitched sounds, and frequency theory best explains how we sense low-pitched sounds. Some combination of the two theories explains sounds in between.
Press play to learn more about how we perceive pitch:
Sound waves strike one ear sooner and more intensely than the other ear. We localize sounds by detecting the minute differences in the intensity and timing of the sounds received by each ear. Problems with the mechanical system that conducts sound waves to the cochlea cause conduction hearing loss. If the eardrum is punctured or if the tiny bones of the middle ear lose their ability to vibrate, the ear’s ability to conduct vibrations diminishes. Damage to the cochlea’s hair cell receptors or their associated nerves can cause the more common sensorineural hearing loss. Once destroyed, these tissues remain dead. Disease, biological changes linked with aging, or prolonged
exposure to ear-splitting noise or music may cause sensorineural hearing loss.
Those who live with hearing loss face social challenges. Cochlear implants are wired into various sites on the auditory nerve, allowing them to transmit electrical impulses to the brain. They help children to become proficient in oral communication. The latest cochlear implants also can help restore hearing for most adults. Deaf culture advocates object to using the implants on children who were deaf before developing language. They note that deafness is not a disability because sign language is a complete language. Some also argue that sensory compensation, which enhances other senses, gives deaf people advantages that the hearing do not have.
Press play for a quick look at how cochlear implants work:
Press play for an audio/visual summary:
When finished with this section, go to http://edmodo.com to take the Unit 4:3 Quiz
Our sense of touch is actually four senses—pressure, warmth, cold, and pain—that combine to produce other sensations, such as “hot.” There is no simple relationship between what we feel at a given spot and the type of specialized nerve ending found there. Only pressure has identifiable receptors. The rubber-hand illusion illustrates how touch is not only a bottom-up property of our senses but also a top-down product of our brain and expectations.
Press play to see the rubber hand illusion:
Kinesthesis is the system for sensing the position and movement of individual body parts. Sensors in the tendons, joints, bones, and ears as well as skin sensors are continually providing our brain with information. A companion vestibular sense monitors the head’s (and thus the body’s) position and movement. The biological gyroscopes for this sense of equilibrium are in the semicircular canals and vestibular sacs in the inner ear.
Pain is an alarm system that draws our attention to some physical problem. Without the ability to experience pain, people may die before early adulthood. There is no one type of stimulus that triggers pain, and there are no special receptors for pain. Instead there are different nociceptors— sensory receptors that detect hurtful temperatures, pressure, or chemicals. The gate-control theory of pain is that small fibers in the spinal cord open a “gate” to permit pain signals to travel up to the brain, or large fibers close the “gate” to prevent their passage.
The biopsychosocial approach views pain not only as a product of biological influences, for example, of injured nerves sending impulses to the brain, but also as a result of psychological influences such as our expectations, and social influences such as the presence of others. Pain is controlled through a combination of medical and psychological treatments.
Press play to learn more about how we experience pain:
Taste, a chemical sense, is a composite of sweet, sour, salty, bitter, and umami sensations and of the aromas that interact with information from the taste buds. Taste buds on the top and sides of the tongue contain taste receptor cells, which send information to an area of the brain’s temporal lobe. Taste receptors reproduce themselves every week or two. As we grow older, the number of taste buds and taste sensitivity decrease.
Sensory interaction refers to the principle that one sense may influence another, as when the smell of food influences its taste. In a few individuals, the senses become joined in a phenomenon called synaesthesia, where one kind of sensation such as hearing sound produces another such as seeing color.
Smell (olfaction) is also a chemical sense, but without any basic sensations. The 5 million or more olfactory receptor cells, with their approximately 350 different receptor proteins, recognize individual odor molecules, with some odors triggering a combination of receptors. The receptor cells send messages to the olfactory bulb, then to the temporal lobe and to parts of the limbic system. An odor’s ability to spontaneously evoke memories is due in part to the close connections between brain areas that process smell and those involved in memory storage.
Press play to see more on the sensory interaction of taste and smell:
Press play for an audio/visual summary:
When finished with this section, go to http://edmodo.com to take the Unit 4:4 Quiz
Press play for a preview of sections 5 & 6:
Gestalt psychologists described principles by which we organize our sensations into perceptions. They provided many compelling demonstrations of how, given a cluster of sensations, the human perceiver organizes them into a gestalt, a German word meaning a “form” or a “whole.” They further demonstrated that the whole may differ from the sum of its parts. Clearly, our brains do more than merely register information about the world. We are always filtering sensory information and inferring perceptions in ways that make sense to us.
Our first task in perception is to perceive any object, called the figure, as distinct from its surroundings, called the ground. We must also organize the figure into a meaningful form. Gestalt principles for grouping that describe this process include proximity (we group nearby figures together), similarity (we group similar figures together), continuity (we perceive smooth, continuous patterns rather than discontinuous ones), connectedness (we perceive spots, lines, or areas as a single unit when uniform and linked), and closure (we fill in gaps to create a whole object).
Press play for a video lesson on Gestalt principles:
Depth perception is the ability to see objects in three dimensions, although the images that strike the eye are two-dimensional. It enables us to judge distance. Research on the visual cliff (a miniature cliff with a drop-off covered by sturdy glass) reveals that depth perception is, in part, innate. Many species perceive the world in three dimensions at, or very soon after, birth.
Binocular cues require information from both eyes. In the retinal disparity cue, the brain computes the relative distance of an object by comparing the slightly different images an object casts on our two retinas. The greater the disparity between the two images, the closer the object.
Monocular cues enable us to judge depth using information from only one eye. The monocular cues include relative size (the smaller image of two objects of the same size appears more distant), interposition (nearby objects partially obstruct our view of more distant objects), relative height (higher objects are farther away), relative motion (as we move, objects at different distances change their relative positions in our visual image, with those closest moving most), linear perspective (the converging of parallel lines indicates greater distance), and light and shadow (dimmer objects seem more distant).
Press play for a video lesson on visual cues for depth:
Our basic assumption is that shrinking objects are retreating and enlarging objects are approaching. The brain will also interpret a rapid series of slightly varying images as continuous movement, a phenomenon called stroboscopic movement. By flashing 24 still pictures a second, a motion picture creates perceived movement. The phi phenomenon, another illusion of movement, is created when two or more adjacent lights blink on and off in succession. Lighted signs exploit the effect with a succession of lights that create the impression of, say, a moving arrow.
Perceptual constancy is necessary to recognize an object. It enables us to see an object as unchanging (having consistent shape, size, lightness, and color) even as illumination and retinal images change. Shape constancy is our ability to perceive familiar objects (for example, an opening door) as unchanging in shape, and size constancy is perceiving objects as unchanging in size, despite the changing images they cast on our retinas. Given the perceived distance of an object, we instantly and unconsciously infer the object’s size. The perceived relationship between distance and size is generally valid but, under special circumstances, can lead us astray. For example, one reason for the Moon illusion is that cues to objects’ distances at the horizon make the Moon behind them seem farther away. Thus, the Moon on the horizon seems larger. In the distorted (trapezoidal) room designed by Adelbert Ames, we perceive both corners as being the same distance away. Thus, anything in the near corner appears disproportionately large compared with anything in the far corner.
Lightness constancy enables us to perceive an object as having a constant lightness even when the light that falls on it changes. Perceived lightness depends on relative luminance, which is the amount of light an object reflects relative to its surroundings. Color constancy refers to our perceiving familiar objects as having consistent color, even if changing illumination alters the wavelengths reflected by the object. We see color as a result of our brain’s computations of the light reflected by any object relative to its surroundings.
Press play for a video lesson on constancy:
Press play for an audio/visual summary:
When finished with this section, go to http://edmodo.com to take the Unit 4:5 Reading Quiz
In the classic version of the nature-nurture debate, the German philosopher Immanuel Kant maintained that knowledge comes from our inborn ways of organizing sensory experiences. On the other side, the British philosopher John Locke argued that we learn to perceive the world through our experiences of it. It’s now clear that different aspects of perception depend more or less on nature’s endowments and on the experiences that influence what we make of our sensations.
When cataracts are removed from adults who have been blind from birth, these people remain unable to perceive the world normally. Generally, they can distinguish figure from ground and perceive colors, but they are unable to recognize shapes, forms, and complete faces. In controlled experiments, infant kittens and monkeys have been reared with severely restricted visual input. When their visual exposure is returned to normal, they, too, suffer enduring visual handicaps. For many species, infancy is a critical period during which experience must activate the brain’s innate visual mechanisms.
Clear evidence that perception is influenced by our experiences—our learned assumptions and beliefs—as well as by sensory input comes from the many demonstrations of perceptual set, a mental predisposition to perceive one thing and not another. Through experience, we also form concepts, or schemas, which organize and interpret unfamiliar information, a fact that helps explain why some of us “see” monsters, faces, and UFOs that others do not.
Click on link below if you want to learn more about perceptual set:
A given stimulus may trigger radically different perceptions, partly because of our different schemas, but also because of the immediate context. For example, we discern whether a speaker said “morning” or “mourning” or “dye” or “die” from the surrounding words.
Perceptions are influenced, top-down, not only by our expectations and by the context but also by our motivations and emotions.
Is There Extrasensory Perception?
Claims are made by parapsychologists for three varieties of extrasensory perception (ESP): telepathy (mind-to-mind communication), clairvoyance (perceiving remote events), and precognition (perceiving future events). Closely linked with these are claims of psychokinesis
(PK), or “mind over matter.” Research psychologists remain skeptical because the forecasts of “leading psychics” reveal meager
accuracy, because checks of psychic visions have been no more accurate than guesses made by others, and because sheer chance guarantees that some stunning coincidences are sure to occur. An important reason for their skepticism, however, is the absence of a reproducible ESP result.
Press play for an audio/visual summary of sections 6&7:
When finished with this section, go to http://edmodo.com to take the Unit 4:6&7 Reading Quiz