Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, [...] "information" is thought of as a set of possible messages, where the goal is to send these messages over a noisy channel, and then to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity that depends only on the statistics of the channel over which the messages are sent.
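To make the idea of channel capacity concrete, here is a minimal sketch (not from the text) computing the capacity of the textbook binary symmetric channel, which flips each transmitted bit with crossover probability p; its capacity is C = 1 − H(p), where H is the binary entropy function:

```python
import math

def binary_entropy(p):
    """Entropy H(p), in bits, of a binary source with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity, in bits per channel use, of a binary symmetric
    channel with crossover probability p: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))  # noiseless channel: 1 bit per use
print(bsc_capacity(0.5))  # every bit equally likely flipped: capacity 0
```

Note how the capacity depends only on the channel statistic p, not on the messages themselves, which is exactly the point of Shannon's theorem.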
The elements of communication in Shannon's theory are as follows:
- An information source that produces a message
- A transmitter that operates on the message to create a signal which can be sent through a channel
- A channel, which is the medium over which the signal, carrying the information that composes the message, is sent
- A receiver, which transforms the signal back into the message intended for delivery
- A destination, which can be a person or a machine, for whom or which the message is intended
Most cognitive scientists think about the brain and behavior within an information-processing framework: Stimuli acting on sensory receptors provide information about the state of the world. The sensory receptors transduce the stimuli into neural signals, streams of action potentials (aka spikes). The spike trains transmit the information contained in the stimuli from the receptors to the brain, which processes the sensory signals in order to extract from them the information that they convey. The extracted information may be used immediately to inform ongoing behavior, or it may be kept in memory to be used in shaping behavior at some later time. Cognitive scientists seek to understand the stages of processing by which information is extracted, the representations that result, the motor planning processes through which the information enters into the direction of behavior, the memory processes that organize and preserve the information, and the retrieval processes that find the information in memory when it is needed.
But not all information is important, and some may even be harmful. Daniel Dennett, who advocates for the use of thinking tools to become better thinkers, addressed the relationship between Shannon's breakthrough and semantic information in his book From Bacteria to Bach and Back. He defines semantic information as "information in general as that which justifies representational activity." He also claims:
- Semantic information is valuable—misinformation and disinformation are either pathologies or parasitic perversions of the default cases.
- The value of semantic information is receiver-relative and not measurable in any nonarbitrary way but can be confirmed by empirical testing.
- The amount of semantic information carried or contained in any delimited episode or item is also not usefully measurable in units but roughly comparable in local circumstances.
- Semantic information need not be encoded to be transmitted or saved.
Dennett adds that even though semantic information falls outside Shannon's information theory and may not be digitized, it can still be stored, transmitted, processed, and useful. An important implication, especially under uncertainty, is the distinction between signal and noise in semantic information, which, if neglected, can be detrimental to an organism. Dennett explains,
If information is a distinction that makes a difference, we are invited to ask: A difference to whom? Cui bono? Who benefits?—the question that should always be on the lips of the adaptationist since the answer is often surprising. It is this that ties together economic information in our everyday human lives with biological information and unites them under the umbrella of semantic information. And it is this that permits us to characterize misinformation and disinformation as not just kinds of information but dependent or even parasitic kinds of information as well. Something emerges as misinformation only in the context of a system that is designed to deliver—and rely on—useful information. An organism that simply ignores distinctions that might mislead (damage the design of) another organism has not been misinformed, even if the distinction registers somehow (ineffectually) on the organism’s nervous system. In Stevie Smith’s poem, “Not Waving but Drowning” (1972), the onlookers on the beach who waved back were misinformed but not the seagulls wheeling overhead. We can’t be misinformed by distinctions we are not equipped to make. Understanding what disinformation is benefits doubly from our asking cui bono? Disinformation is the designed exploitation (for the benefit of one agent) of another agent’s systems of discrimination, which themselves are designed to pick up useful information and use it. This is what makes the Ebola virus’s design an instance of camouflage.
With that caution in hand, we can address the idea that is now sweeping through cognitive science as a very promising answer to how the brain picks up, and uses, the available semantic information: Bayesian hierarchical predictive coding. (For excellent accounts see Hinton 2007; Clark 2013; and the commentary on Clark, Hohwy 2013.) The basic idea is delicious. The Reverend Thomas Bayes (1701–1761) developed a method of calculating probabilities based on one’s prior expectations. Each problem is couched thus: Given that your expectations based on past experience (including, we may add, the experience of your ancestors as passed down to you) are such and such (expressed as probabilities for each alternative), what effect on your future expectations should the following new data have? What adjustments in your probabilities would it be rational for you to make? Bayesian statistics, then, is a normative discipline, purportedly prescribing the right way to think about probabilities. So it is a good candidate for a competence model of the brain: it works as an expectation-generating organ, creating new affordances on the fly.
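The Bayesian updating step Dennett describes can be sketched in a few lines. This is a hedged illustration, not anything from the book: the hypotheses, priors, and likelihoods below are invented for the example.

```python
def bayes_update(prior, likelihood):
    """Update a discrete prior {hypothesis: P(h)} with the likelihoods
    {hypothesis: P(data | h)} of newly observed data, returning the
    posterior P(h | data) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())  # P(data), the normalizer
    return {h: v / evidence for h, v in unnormalized.items()}

# Prior expectation: a noise in the night is far more likely wind
# than an intruder (hypothetical numbers).
prior = {"wind": 0.9, "intruder": 0.1}
# New datum, a creaking floorboard, is better explained by an intruder.
likelihood = {"wind": 0.2, "intruder": 0.8}
posterior = bayes_update(prior, likelihood)
```

The posterior shifts probability toward "intruder" without abandoning the prior, which is the sense in which the brain, on this view, continually revises its expectations as new sensory data arrive.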
The fundamental architecture of animal brains (including human brains) is probably composed of Bayesian networks that are highly competent expectation-generators that don’t have to comprehend what they are doing. Comprehension—our kind of comprehension—is only made possible by the arrival on the scene quite recently of a new kind of evolutionary replicator—culturally transmitted informational entities: memes.
Our (mis)understanding of how the brain functions and how it interacts with its environment has many implications in our everyday lives.