Based on this idea, in the following sections we present some “energetic design principles” for presynaptic terminals and postsynaptic spines. First, we estimate how much ATP is needed to transmit information across a single synapse, as a prelude to explaining how the information transmitted can be maximized at minimum energy cost. The input to a synapse can be considered over

a sequence of time intervals, Δt, in which an action potential either does or does not arrive along the axon, e.g., signifying the presence or absence of some stimulus (Figure 3A; Δt is the smallest interval over which the neuron can represent information, set by the refractory period of the action potential). If the mean spike firing rate is S, the probability of an action potential arriving in any given interval is s = SΔt (with 0 < s < 1), and we assume no correlation between the occurrence of different action potentials, the rate at which information arrives in the input train is (Shannon, 1948; Dayan and Abbott, 2001, Equation 4.4; Levy and Baxter, 1996, Equation 2.1)

Iinput(s) = −s·log₂(s) − (1 − s)·log₂(1 − s) bits  (Equation 1)

per Δt (Figure 3A). This is maximized at s = 0.5, or S = 1/(2Δt), i.e., with the neuron firing at half its maximum rate. This is ∼200 Hz for a refractory period of Δt = 2.5 ms, yet in practice the mean firing rate of neurons in vivo is much lower, around 4 Hz (
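The point that information per interval peaks at s = 0.5 can be checked directly: Equation 1 is the binary entropy of the spike probability. A minimal sketch (function and variable names are our own, not from the text):

```python
import math

def input_info_bits(s):
    """Information per interval Δt (Equation 1): the binary entropy
    of spike probability s, in bits."""
    if s <= 0.0 or s >= 1.0:
        return 0.0
    return -s * math.log2(s) - (1 - s) * math.log2(1 - s)

# Scan spike probabilities: information peaks at s = 0.5 (1 bit per
# interval), i.e., firing at half the maximum rate.
probs = [i / 1000 for i in range(1, 1000)]
best = max(probs, key=input_info_bits)

dt = 0.0025                    # 2.5 ms refractory period
max_rate_hz = best / dt        # S = s / Δt
print(best, input_info_bits(best), max_rate_hz)  # 0.5, 1.0, 200.0
```

For Δt = 2.5 ms this gives the ∼200 Hz capacity-maximizing rate quoted above, far above the ∼4 Hz typically observed in vivo.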

Attwell and Laughlin, 2001; Perge et al., 2009). To explain this difference, Levy and Baxter (1996) suggested that the nervous system in fact maximizes the ratio of information transmitted to energy consumed (rather than maximizing coding capacity). They showed that, if the energy use of a neuron (and associated glia) is r-fold higher when producing a spike than when inactive, then the spike probability (s∗) that maximizes the information transmitted per unit energy consumed is much lower than that which would maximize information coding capacity. Their analysis implies that the factor, r, by which spiking increases energy use is related to s∗ via

r = log₂(s∗)/log₂(1 − s∗)  (Equation 2)

which we use below. Applying similar principles to the transmission of information through a synapse leads to the surprising conclusion that the energetic design of synapses is optimized if presynaptic release of transmitter fails often, just as is seen at most synapses. To understand this we need to consider information flow through synapses and the energy it consumes. For a synapse with a single release site (e.g., to the orange cell in Figure 3), if each time a presynaptic action potential arrives a vesicle is released with probability p, then for p < 1 information is lost during synaptic transmission.
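Equation 2 can be verified numerically: with the resting cost of an interval normalized to 1 and a spiking interval costing r, the s that maximizes bits per unit energy satisfies r = log₂(s∗)/log₂(1 − s∗). A sketch, using an illustrative value r = 50 (not taken from the text):

```python
import math

def info_bits(s):
    """Equation 1: bits per interval at spike probability s."""
    return -s * math.log2(s) - (1 - s) * math.log2(1 - s)

def bits_per_energy(s, r):
    """Information per unit energy when a spiking interval costs r
    times a silent one (silent cost normalized to 1)."""
    return info_bits(s) / (1 + s * (r - 1))

r = 50.0  # illustrative energy ratio, chosen for this example
s_grid = [i / 100000 for i in range(1, 100000)]
s_star = max(s_grid, key=lambda s: bits_per_energy(s, r))

# Equation 2 should recover r from the numerically found optimum:
r_back = math.log2(s_star) / math.log2(1 - s_star)
print(s_star, r_back)  # s* well below the capacity-maximizing 0.5
```

The numerically found s∗ (here around 0.06) is far below 0.5, illustrating why a costly spike pushes the energy-optimal firing probability toward sparse firing.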
