The question to ponder now is what the brain does with its plasticity: how the changes it undergoes are put to use for understanding the world. This is where the concept of "computation" comes into play, at the level of particular cells and particular dendrites, i.e. the functional side of learning and plasticity: using them to compute aspects of the world in order to behave appropriately.

**The brain computes**

How do the neuronal ingredients (synapses, neurons, their electrical and chemical signals, and the distributed, interacting networks they form) represent and process information, i.e. compute? We accept that there is incoming information (visual, auditory, etc.) which the system **utilizes (encoding/decoding)** in order to behave. So we can think of three main problems that relate to the brain, or to any computational system:

1. What **is** the problem to be solved? (Move my arm, grasp a cup, cross the road, recognize a face, etc.)

2. What is the **algorithm** for solving the problem?

3. Finally, how are these algorithms **implemented** by the various brain regions?

Each brain region has a specific computational role, with specific problem-solving duties and capabilities (movement, touch, vision, coordination, hearing, associations). These areas are not isolated; they interact with each other. For example, movement involves computing the location of an object, its distance, and the direction and speed of motion. These are computations. Another example: the visual system performs figure-ground separation, the process that separates an object from its background (pattern recognition). So we use our cells, spikes, synapses and the whole anatomy of this network in order to compute and behave.

*Screenshot from the lesson's material*

The fact that the brain uses algorithms to solve problems (e.g. face recognition) is shown in the image to the left, which depicts how recorded eye movements reveal the "scanning" procedure our eyes/brain enter into in order to recognize a face. The black dots highlight where our eyes tend to stop or focus during the recognition process. We see that the movement is neither random nor exhaustively systematic (not all spots are scanned). It is a specific algorithm dictated and performed by the brain (there are "hot" spots that are important for recognition and others that are not). So the solution of the computation is the fact that we recognized a face (*actually, we categorized, after computation, a visual input into the cognitive category of "faces"; the solution is the act of categorization, i.e. the fact that we are able to slice the world into taxonomies*). Motion is also an absolutely fundamental computational aspect of any brain. Computation is the most fundamental thing any behaving machine has to do, and the question is how this particular network implements it.

**Computation at the level of single neurons**

The main mission of the brain is to compute. Now let's see examples of single cells in the behaving brain. The most important example is that of Hubel and Wiesel, for which they received the Nobel prize. They recorded from a particular region of the visual cortex of a cat, implanting an electrode and trying to find out which parameters are computed by a single cell in the visual cortex. They found that the cell fired on a particular occasion: when a **line** was moving.

They chose one cell by chance and found that when a line crosses the screen the cat is watching, this cell suddenly starts to fire; when the line's angle changes, the cell stops firing. So the spikes are responding to (or coding) the stimulus. Such a cell is called an orientation-selective cell and is tuned to respond to specific angles. This was a breakthrough because computational cells were found deep in the brain (not in the retina or thalamus). Apparently, early on in the visual system, we decompose the world into lines.
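Orientation selectivity is often summarized as a tuning curve: firing rate as a function of the bar's angle, peaking at the cell's preferred orientation. A minimal sketch in Python (the preferred angle, tuning width and peak rate are illustrative parameters, not Hubel and Wiesel's measured values):

```python
import math

def firing_rate(angle_deg, preferred_deg=45.0, width_deg=20.0, max_rate=50.0):
    """Gaussian tuning curve: spikes/s as a function of stimulus orientation.

    Orientation is circular with period 180 degrees (a bar at 0 and 180
    degrees looks the same), so the angular difference is wrapped.
    """
    d = (angle_deg - preferred_deg + 90.0) % 180.0 - 90.0
    return max_rate * math.exp(-(d ** 2) / (2.0 * width_deg ** 2))

# The cell fires strongly at its preferred angle and falls off sharply
# as the bar is rotated away from it.
print(firing_rate(45.0))   # peak response at the preferred orientation
print(firing_rate(90.0))   # much weaker response 45 degrees away
```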

**Fundamentals of dendritic cable theory**

The most fundamental, direct and early paper in this field, attempting to understand the neuron as a micro-chip and computational device, is the McCulloch and Pitts (M&P) point-neuron (1943) ("A logical calculus of the ideas immanent in nervous activity"). This paper was inspired by two properties of neurons: a) the all-or-none property of the spike and b) the excitatory and inhibitory nature of synapses.

What they suggested is the following: assume a point-neuron (a single neuron, abstracted away from any network; see the image to the left) that receives synapses e1, e2, e3 and i (e for excitatory, i for inhibitory). Any single active excitatory synapse depolarizes the membrane enough to reach the threshold for an output spike, while the inhibitory synapse tries to veto this activity. The logical expression (a logical statement) can be written as: OUTPUT is generated if (e1 OR e2 OR e3) AND NOT i. So they proposed looking at the neuron as a logical device (an idea that inspired modern computer science).
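The logical statement above translates directly into code. A minimal sketch of the M&P point-neuron with binary (all-or-none) inputs and output:

```python
def mp_neuron(e1, e2, e3, i):
    """McCulloch-Pitts point neuron for the synapse pattern in the text:
    fire if any excitatory input is active AND the inhibitory input is not.
    Inputs and output are binary (0/1), mirroring the all-or-none property."""
    return int((e1 or e2 or e3) and not i)

# Any single excitatory spike suffices; inhibition vetoes everything.
print(mp_neuron(1, 0, 0, 0))  # 1 (fires)
print(mp_neuron(1, 1, 1, 1))  # 0 (vetoed by inhibition)
print(mp_neuron(0, 0, 0, 0))  # 0 (no excitation)
```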

But of course neurons are not "isopotential" and not just simple points. They have axons and dendrites, they're part of networks and they're rather considered to be distributed electrical systems. So the question is: what are the computational implications of this structure? Does this structure add to the computational capability of the nervous system?

**Reasons to model mathematically (in detail)**

1. Correctly interpret experimental results (and provide predictions).

2. Gain insights into **key** biophysical parameters (enables a compact description of the physiological behavior).

3. Suggest possible computational (functional) role for the modeled system.

**Ramón y Cajal on Theorists** (*food for thought!!*)

**Rall Cable Theory for Dendrites**

This theory attempts to model mathematically the impact of (remote) dendritic synapses (the input) on the soma/axon (output) region. Rall thought that over-simplified representations of neurons (like the M&P neuron) may have negative implications (things go missing) for our overall understanding of the system; he insisted on the contrast between the schematic neuron and the real, histological neuron. He discovered that when injecting current into the cell soma, most of this current flows into the dendrites (away from the soma) instead of flowing out through the somatic membrane.

So in a real neuron the soma is not isopotential, in anatomical and physiological terms. This also means that a) the dendrites are not isopotential electrical devices, b) the voltage attenuates from synapse to soma, c) it takes time (a delay) for the PSP to reach the soma (because we have a distributed electrical system), i.e. it takes time to see the effect of the synapse at the cell body, and d) the somatic EPSP/IPSP shape is expected to change with synaptic location. Rall looked at a dendritic tree as a set of connected cylinders:

He tried to work out what it means to have a distributed electrical system like the one above, in terms of the flow of electrical current. Following Rall's cable theory, suppose you have a cylinder wrapped in a membrane (remember, the cylinder stands for a dendrite), with some axial resistivity inside: the cylinder's contents behave like a resistance. Suppose also that a synapse is activated (see the image below) and that ion channels open, so that current flows from the outside to the inside at the synapse (the origin of the current). This current starts to flow to the right or to the left, and some of it leaks out through the membrane (which behaves as an RC circuit). This loss of current makes the membrane voltage attenuate with distance and in time. This is the origin of cable theory.

Eventually, this means that if you inject current at some place, you get the maximum voltage locally, at that specific point, and the voltage then attenuates along the structure. So the question is how to describe mathematically a passive cable (the membrane being a passive RC membrane), or the bifurcation of current in a passive dendritic tree.

Intuitively, we have an axial current (coming from the left, symbolized by the right-pointing arrows) which is proportional to the derivative of voltage with respect to distance (i.e. to ∂V/∂x). Some of this axial current is lost through the membrane (the downward arrows). But we know how to describe the membrane current (resistive current + capacitive current). So the axial current either becomes membrane current or continues to the right. The cable equation says that the change in axial current with distance (which is proportional to the second derivative of voltage with respect to distance) is equal to the membrane current; i.e. the sum of the change in axial current and the membrane current is zero (unless you inject more current).
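In symbols, this current-conservation argument yields the standard passive cable equation (a sketch in conventional notation, where r_m, r_i and c_m are the membrane resistance, axial resistance and membrane capacitance per unit length of cable):

```latex
% Conservation of current in a passive cable: the spatial change in axial
% current (left side) equals the membrane current (capacitive + resistive,
% right side). With no injected current their sum is zero, as in the text.
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  \;=\; \tau_{m}\,\frac{\partial V}{\partial t} \;+\; V,
\qquad
\lambda = \sqrt{\frac{r_{m}}{r_{i}}},
\qquad
\tau_{m} = r_{m}\, c_{m}
```

Here λ is the space constant, governing how steeply the voltage attenuates with distance, and τ_m is the membrane time constant, governing how the voltage evolves in time.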

So the voltage changes with distance and with time. The slope of the attenuation depends very strongly on the boundary condition at the end of the cable (i.e. where the cable ends and how long it is, or whether it is infinite). In the case of dendrites we talk about "short cylinders", structures with a sealed-end boundary at the end of the cable, so the voltage there attenuates less steeply.

Now suppose we have a dendritic tree branch like the one above and we inject current at the most distal tip of the tree. We start with a certain voltage (right side of the diagram) and see a very steep attenuation until the first branch point. There the current bifurcates into the sibling branch (S, to the right as we look at it) and into the "father" branch. At this point the attenuation is very shallow, and the same pattern repeats until the current reaches the soma. This tells us that the voltage builds up locally, at the synaptic point, and then, as the current bifurcates, we observe steep attenuation because the end of the first "cable" (i.e. branch) is a "leaky end" (the current leaks either into the sibling or into the father branch). But the fact that the sibling branch is sealed means there is almost no attenuation along it (we saw that when a branch is very short there is almost no attenuation; check the diagram below the equation @ 19:01 and on). So there is a big asymmetry in the dendritic tree, even though the two branches (I and S, the top ones in the diagram above) are similar. This is explained by the boundary conditions not being identical in the two branches: in I, the current has the option to go either to the right or to continue into the main cell structure, while S has a short sealed end. So this is a highly non-isopotential system, with a very large voltage at the synaptic point and eventually a very low voltage at the soma (and this is a property of dendrites).

Another property of dendrites: if you take the same amount of synaptic current and inject it directly into the soma, not that much voltage is **gained** (not a typo). You still have loss (attenuation); this loss is smaller than the loss from a distal dendritic synapse, BUT the total gain in voltage for a somatic synapse compared to a dendritic one is not that large: the efficiency of a somatic synapse is not significantly bigger than that of a dendritic one. So now we can think of dendrites as built of sub-regions, where each synapse has a "neighborhood" (a synaptic territory/subunit/subregion). This is the notion of functional subunits. As we move away from the synaptic point, the voltage attenuates and the EPSP becomes broader.

A theoretical result from cable theory that proved extremely useful for experimentalists is that the somatic EPSPs of distal synapses are broader and more delayed than those of proximal synapses.

The synapse at compartment no. 10 is the most delayed and is broader than one found near the soma (say, no. 2); notice that all recordings are made at the soma (i.e. to the left of compartment no. 2) and that the peaks are normalized. For the distal synapse the time-to-peak is delayed (it takes longer to reach the peak), while the opposite holds for the proximal synapse. So one can now predict the origin of a synapse by examining the shape of the EPSP recorded at the cell body (soma). In other words, by just "sitting" at the cell body and recording incoming EPSPs, you can judge from their shape how distant or close they are, i.e. where exactly the synapse is located.
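This "shape index" idea can be illustrated with a toy model: treat the somatic EPSP from a synapse at a given electrotonic distance as an alpha function whose time constant grows with distance, mimicking dendritic filtering. The growth law and all parameters here are illustrative assumptions, not Rall's exact solution:

```python
import math

def epsp_at_soma(t, distance, tau0=1.0, k=2.0):
    """Toy somatic EPSP from a synapse at electrotonic `distance`:
    a normalized alpha function whose time constant grows with distance.
    tau0 and k are made-up illustrative parameters."""
    tau = tau0 + k * distance
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def time_to_peak(distance, dt=0.01, t_max=50.0):
    """Locate the EPSP peak by brute-force sampling of the waveform."""
    ts = [i * dt for i in range(int(t_max / dt))]
    return max(ts, key=lambda t: epsp_at_soma(t, distance))

# A distal synapse produces a later (and broader) somatic EPSP than a
# proximal one -- the cue for inferring where the synapse is located.
print(time_to_peak(0.1))  # proximal: early peak
print(time_to_peak(1.0))  # distal: delayed peak
```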

**Computing with neurons**

1. Dendrites enable neurons to act as multiple functional subunits (first locally, then globally, at soma).

2. Dendrites can classify inputs.

3. They can compute direction of motion.

4. They can improve sound localization (in the auditory system).

5. They help to sharpen the tuning of cortical neurons making them more accurate.

Just the fact that we have a distributed system with various synapses (EPSP, IPSP, EPSP, IPSP, and so on) enables us to perform more extended logical operations.

This is because the location of inhibition now becomes important, once we take into account the distal and proximal properties of synaptic potentials, excitatory or (now) inhibitory. An IPSP proximal to the soma can veto the sum of distal EPSPs. So any inhibition found on the path from the synapse to the soma is, in a sense, "harmful" to the success of generating an output: the inhibition vetoes the EPSPs.
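The location-dependent veto can be added to the logical-neuron sketch. Here `ipsp_on_path` is a hypothetical flag standing for inhibition located on the path from the active synapses to the soma, and the threshold value is illustrative:

```python
def soma_output(epsps, ipsp_on_path, threshold=1.0):
    """Toy extension of the logical neuron: distal EPSPs sum toward the
    spike threshold, but an IPSP located on the path to the soma (e.g.
    proximal shunting inhibition) vetoes them entirely."""
    if ipsp_on_path:
        return 0  # on-path inhibition shunts the excitatory current
    return int(sum(epsps) >= threshold)

print(soma_output([0.6, 0.6], ipsp_on_path=False))  # 1: EPSPs sum past threshold
print(soma_output([0.6, 0.6], ipsp_on_path=True))   # 0: proximal IPSP vetoes
```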

Another idea is that of input classification with dendritic neurons. Clustering synapses into sub-regions, where each sub-region performs a local non-linear operation, lets the soma sum all the sub-region outputs. This idea was used to show that a neuron can act as a classifier. Example: you take an image, say a face, and project it onto different regions of the dendritic tree, so that the nose is projected onto one cluster, the mouth onto another, etc. The sum of all this becomes very sensitive to a particular face (mine, not yours), meaning that the neuron can act as a classifier.
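A toy two-layer sketch of this clustering idea (in the spirit of later subunit models; the thresholded sigmoidal non-linearity, its parameters, and the input values are all illustrative assumptions):

```python
import math

def subunit(x_sum, theta=3.0):
    """Local dendritic non-linearity: a sigmoid with threshold theta,
    loosely analogous to a local dendritic spike. theta is illustrative."""
    return 1.0 / (1.0 + math.exp(-(x_sum - theta)))

def two_layer_neuron(clusters, soma_threshold=0.7):
    """Each cluster of synaptic inputs drives one subunit; the soma sums
    the subunit outputs and thresholds the total."""
    return int(sum(subunit(sum(c)) for c in clusters) >= soma_threshold)

# The same total synaptic drive classifies differently depending on how
# it is clustered on the tree -- the basis of input classification.
clustered = [[2.0, 2.0], [0.0, 0.0]]   # both synapses on one branch
spread    = [[2.0, 0.0], [2.0, 0.0]]   # same total drive, split across branches
print(two_layer_neuron(clustered))  # 1: the local non-linearity is crossed
print(two_layer_neuron(spread))     # 0: neither subunit crosses its threshold
```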

But the most influential idea of all (Rall, 1964) is that the neuron can act as a direction-selective computational device.

The A-B-C-D curve shows the somatic voltage profile when the synapses are activated in sequence starting close to the soma and moving away (remember, we record at the soma). In this case there is temporal summation: the voltage reaches a certain level and then decays. The D-C-B-A curve shows the voltage profile when activation starts far from the soma and moves toward it. There is still summation, but the direction of movement affects the amount of voltage generated: in this case we get a (delayed) bigger and briefer voltage. So what does this mean?

Suppose that somewhere on the graph (on the left axis of the diagram, at a point between the values 0.1 and 0.15) there is a threshold above which spikes are generated. We see that only **in one direction** of synapse activation do we get a spike (the D-C-B-A sequence). The other (A-B-C-D) produces a sub-threshold potential, so no... BOOM! :-). So we are talking about a direction-selective neuron, and we conclude that something about the structure is responsible for certain functions emerging. Rall suggested, in a deep sense, that it is crucial to decide on the "granularity" of models, i.e. their descriptive power in explaining various phenomena (contrasting his theory with the point-neuron of McCulloch and Pitts). At any given time, the level of detail should match our descriptive needs.
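Rall's thought experiment can be mimicked with a toy simulation: each of four synapses (proximal "A" = 0 up to distal "D" = 3) contributes an alpha-shaped EPSP at the soma, delayed in proportion to its distance, and we activate the synapses in one order or the other. All constants are illustrative, not fitted to any real neuron:

```python
import math

def epsp(t, tau=2.0):
    """Normalized alpha-shaped EPSP (illustrative time constant, ms)."""
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def peak_soma_voltage(order, dt=0.05, t_max=40.0, interval=2.0, delay_per_step=2.0):
    """Peak somatic voltage when synapses fire in `order`, one every
    `interval` ms; each synapse's EPSP reaches the soma only after a
    dendritic delay proportional to its distance (index 0..3)."""
    onsets = {syn: k * interval + syn * delay_per_step
              for k, syn in enumerate(order)}
    peak, t = 0.0, 0.0
    while t < t_max:
        peak = max(peak, sum(epsp(t - onset) for onset in onsets.values()))
        t += dt
    return peak

# Distal-to-proximal activation lines up the delayed EPSPs at the soma,
# giving a larger, briefer compound potential than the reverse order.
print(peak_soma_voltage([3, 2, 1, 0]))  # D->C->B->A: large peak
print(peak_soma_voltage([0, 1, 2, 3]))  # A->B->C->D: much smaller peak
```

With these numbers the dendritic delays in the D-C-B-A order exactly compensate the activation intervals, so all four EPSPs peak together at the soma; only this direction would cross a spike threshold set between the two peak values.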