The predictive brain (Stimulus-Specific Prediction Error Neurons)

In summary, "The predictive brain" concept revolves around stimulus-specific prediction error neurons, which play a crucial role in how the brain anticipates and responds to stimuli. These neurons are responsible for detecting discrepancies between expected and actual sensory input, allowing the brain to adjust its predictions for future events. This predictive coding framework suggests that the brain continuously generates hypotheses about incoming information and updates them based on sensory feedback, ultimately enhancing perception and adaptive behavior.
  • #1
Fra
I don't post a lot in this subforum and don't know which neuroscientists lurk here, but as someone trying to understand the foundations of physical law and interactions, and how those are "encoded" in the makeup of matter, where trying to understand what happens during the first fractions of a second of the Big Bang is entirely theoretical, studying analogous problems in a more complex but hands-on system, like the human brain, has always held my attention.

Just adding these interesting ideas here to see if anyone here is into these details.

The analogous problem is to try to explain/understand the phenomenology of human behaviour, and here the "predictive brain hypothesis" from an evolutionary perspective is right in line with this, and there are many intriguing analogies. From this there is a plethora of theories also for how emotions are rather "constructed" in relation to expectations of the future state, but starting from basic affects such as arousal or valence. And these can be categorized. The core affects are easily associated with simple "good/bad" judgements, betting amounts and motivation, plausible precursors.

One idea is that the brain entertains (in some way which is not clear) an "internal model" that predicts not only the environment but also its own internal future state, and "errors" are somehow detected and drive corrections, and also improved predictive models that predict the errors of previous models. In this way dimensionality and complexity can be "emergent" in the evolving picture. It's hard not to associate to holography as well, supposing we have equilibrium.
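Here is a minimal toy sketch of what I mean (my own illustration, not taken from the paper quoted below; the variable names and learning rates are made up): a first-level estimate is corrected by its prediction error, while a second-level model tries to predict the first level's remaining errors, in the spirit of "models that predict the errors of previous models".

```python
import random

# Toy two-level predictive loop (illustrative only):
# level 1 predicts the sensory signal, level 2 predicts level 1's error.
def run_toy_predictive_loop(steps=200, lr1=0.1, lr2=0.05, seed=0):
    rng = random.Random(seed)
    true_value = 1.0        # the hidden quantity the environment generates
    prediction = 0.0        # level-1 prediction of the sensory input
    error_model = 0.0       # level-2 prediction of level-1's error
    for _ in range(steps):
        sensed = true_value + rng.gauss(0.0, 0.1)  # noisy sensory sample
        error = sensed - prediction                # level-1 prediction error
        surprise = error - error_model             # the part of the error level 2 did not expect
        prediction += lr1 * error                  # level 1 corrects toward the input
        error_model += lr2 * surprise              # level 2 learns the structure of the residual errors
    return prediction, error_model

print(run_toy_predictive_loop())
```

Nothing in this toy is meant to be biologically realistic; it only shows how "compare, then correct" can be stacked.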

"Comparing expectation with experience is an important neural computation performed throughout the brain and is a hallmark of predictive processing
...
Together, these findings reveal that cortical predictions about self-generated sounds have specificity in multiple simultaneous dimensions and that cortical prediction error neurons encode specific violations from expectation."
-- https://www.jneurosci.org/content/43/43/7119

All the above is part of a larger Bayesian-like understanding of the brain as a "predictive coding" system.
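To make the Bayesian reading concrete with a textbook example (a standard scalar Gaussian update, nothing specific to the paper quoted above): if the prior belief about a quantity is Gaussian with mean mu and variance s_p^2, and a noisy observation x arrives with noise variance s_o^2, the updated mean is mu' = mu + K(x - mu), with gain K = s_p^2 / (s_p^2 + s_o^2). The update is literally a gain times the prediction error, which is exactly the "comparing expectation with experience" computation in the quote above.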

And this is very analogous to how one can also think about the QBist-derived interpretations and emergent laws of physics. It seems to me that the same "scale-invariant" principles are hinted at here?

Or is it just me making these associations?

/Fredrik
 
  • #2
The neurosciences are plagued by a whole range of problems in trying to make sense of how the brain works. The general methods are typically analytic, in that the focus is usually on specific activities and/or specific areas of the brain in order to develop predictive theories, and the tools used are unfortunately simply not up to the task. The fact that we can identify individual neural activity doesn't tell us what that neurone is engaged with, if anything, whether it is inhibitory or excitatory, or its specific targets (each neurone can interconnect with up to 10,000 others and in different ways). I was tempted to try and answer this question when it was first posted, but decided it was too complicated and gave up; I've come back to it now and hope my comments make sense.

I thought this article was specific to the issue and gives a reasonable overview of part of the neurology, but there are a range of other areas that inform this question. These include motivation, affect, attention, memory, cognitive processing, skills and learning, environmental effects and biology.

https://www.sciencedirect.com/science/article/pii/S0378595524000078

I can't say I'm sold on the predictive brain hypothesis as a model for understanding brain function; it suggests that some sort of central processing is occurring and is in some way altering the sensory experience even before the sensory stimulus occurs. While it's easy to see how this does occur, because of biological constraints and learning, we need to consider that stimulus-response patterns occur in single-celled animals with no brain, and that in some animals the initial processing occurs within the sensory apparatus outside of the brain.

The real problem for us is in the fact that we can identify patterns of activity that show how sensory experiences are distributed in the brain (bottom-up processing), but at the same time this information is processed in different ways and at different levels of complexity, with the results also being distributed in both directions, which facilitates further processing while also modifying the nature of the sensory experience (top-down processing). It does seem that sensory experiences are relatively easy to manipulate, to the point that they can be created without any external stimulus. There is in fact a large range of factors that can be brought into play; this isn't a process that is in any way linear. As soon as the sensory information enters the CNS (and there is no single point of entry) it is distributed to a wide network of areas, and every part of this network will have some control functions over perception and processing. Then, just to show that nature has a sense of humour, the sensory information being processed will change the way in which the whole network is functioning, and we have to consider that the brain is always engaged in multiple activities; the brain is embodied.

I think there are clearly some stimulus-response functions that we are biologically predisposed towards; among these are unpredictable or unexpected events, but this is usually within a range of acceptable stimulus conditions and is rarely very specific. How sudden it is also plays a part, as do the stimulus intensity and clarity. These probably represent the broad rules we apply for threat detection and also evoke rapid safety responses, but again these are not fixed; there are various ways we can use to remove threats. Having said that, it has been suggested that even in these apparently automatic responses that occur before conscious awareness, the behaviour chosen (fight, flight, freeze) might not be random. It also appears that very high levels of emotional arousal use the same resources available for information processing, so high levels of arousal reduce processing capacity and complexity.
 
  • #3
Laroxe said:
The neurosciences are plagued by a whole range of problems in trying to make sense of how the brain works. The general methods are typically analytic, in that the focus is usually on specific activities and/or specific areas of the brain in order to develop predictive theories, and the tools used are unfortunately simply not up to the task. The fact that we can identify individual neural activity doesn't tell us what that neurone is engaged with, if anything, whether it is inhibitory or excitatory, or its specific targets (each neurone can interconnect with up to 10,000 others and in different ways). I was tempted to try and answer this question when it was first posted, but decided it was too complicated and gave up; I've come back to it now and hope my comments make sense.
Thanks for the comments! Yes, this is all extremely complex, too complex, which is exactly why it is interesting.

My main association here was to see common abstractions of problems from different domains:
- the human brain, unification of its "phenomenology"
- unification of fundamental forces in physics, unification of its phenomenology

Several subproblems seem similar and suggest that they may both make sense in a more general and abstract evolutionary inference system.
Laroxe said:
it suggests that some sort of central processing is occurring and is in some way altering the sensory experience even before the sensory stimulus occurs. While it's easy to see how this does occur, because of biological constraints and learning, we need to consider that stimulus-response patterns occur in single-celled animals with no brain, and that in some animals the initial processing occurs within the sensory apparatus outside of the brain.
This seems analogous to the objection that physical interactions take place even without human observers. But just like physical particles or objects can be considered "observers", "information processing" can probably take place inside single cells, if you generalize the concepts of "information processing" and "information encoding" to use any microstructure as the memory base, and ANY manipulation of a microstate or microstructure as a form of information processing.

So for me the interesting part is to see how the abstraction can be "scaled" down to systems that do not have a brain. Then I think we may also understand more.
Laroxe said:
The real problem for us is in the fact that we can identify patterns of activity that show how sensory experiences are distributed in the brain (bottom-up processing), but at the same time this information is processed in different ways and at different levels of complexity, with the results also being distributed in both directions, which facilitates further processing while also modifying the nature of the sensory experience (top-down processing).
Yes, this hits the nail on the head. Same in physics: the "reductionist approach" also uses the bottom-up approach, i.e. we isolate things and try to understand the whole from its parts. But this approach, seen from the perspective of the inference system (the observer), simply does not work when the complexity gets over the head of the observer.

So we need a top-down approach as well, to tame things. In physics we usually throw in constraints, but often in ad hoc ways that require an unexplained fine-tuning.

I think in both neuroscience and in physics, we NEED the evolutionary context and perspective to get a hold of the top-down perspective.

Laroxe said:
It also appears that very high levels of emotional arousal use the same resources available for information processing, so high levels of arousal reduce processing capacity and complexity.
This is interesting and possibly a logical consequence of the fact that any system has a limited information processing capacity, so multitasking will one way or another have to share it. And it is interesting to consider what happens if you abstractly "scale" the information capacity to zero. Then we should expect some unification.

I think this idea makes sense both for biological systems and for physics, and these two domains, which are rich in insights, are interesting to contemplate in parallel.

Analogy:

We scale the "information capacity" of biological systems from say HUMAN brains, all the way down, with simple organisms, and then single cells, and then ultimatrely just complex molecules.

We scale the "information capcity" of "basic inside observers" (really meaning elementary particles etc) all the way to energies so high that they break up and no large massive particles are stable.

What "scale invariant" phenomenolgoy and top down principles rule the "inferences and actions" made by these particles, molecules, single cells and humans?

/Fredrik
 
  • #4
Laroxe said:
The fact that we can identify individual neural activity doesn't tell us what that neurone is engaged with, if anything, whether it is inhibitory or excitatory, or its specific targets (each neurone can interconnect with up to 10,000 others and in different ways).
Fra said:
Thanks for the comments! Yes, this is all extremely complex, too complex, which is exactly why it is interesting.

There are, of course, experimental systems researchers use where a lot of this can be done. They are obscure or not appreciated by those not in the field.
There are many neurologically simple organisms (worms, arthropods, slugs, ...) that can be analyzed in ways not possible in most vertebrates. There are specific individual cells in each organism that can be identified with a specific neuron in another animal. This is great for replicating things. Some of these animals have a limited number of neurons, which can all be named (repeatedly identified). The basic difference is that where medically relevant vertebrates have nuclei of hundreds or thousands of neurons, animals with simpler nervous systems like Drosophila, C. elegans, or Aplysia have single or very few neurons of particular neuronal types.
Because a lot of labs have worked on these specific species for a long time, they have built up an extensive knowledge base and set of tools for biological research. Thus, for a small number of simple (and to many people boring) species, a lot is known about their nervous systems and what underlies specific behaviors.
By working in embryonic zebrafish you get the best of two research worlds: the simplicity of an invertebrate nervous system with some individually identifiable neurons, but they are also vertebrates, so everything basic to the vertebrate CNS is present in them.


WRT the information processing capacities of biological entities, there are several things I find interesting:
  1. how many logic gates can be manifested in a cell's molecules?
  2. what is the effect of having them in so many copies (numbers of molecules) in a cell?
  3. how many logic gates does it take to make something with the agent-like property of being able to independently choose from a list of options? These kinds of relationships can happen at different levels of organization from the molecular level up to the brain. (A toy sketch of this follows below the list.)
  4. One of the big differences between large vertebrate nervous systems and those of simple invertebrates is the number of neurons. A neural processing pathway in a simple invertebrate might use a single neuron at each step. In a large vertebrate brain the equivalent step might be processed by a nucleus (a group of similar CNS neurons grouped together) that could contain hundreds or thousands of neurons. This has to have an effect on the processing, but I don't know what.
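Regarding point 3, here is about the smallest gate-level "chooser" I can write down, purely as a toy (my own construction with made-up names; it claims nothing about real molecular implementations): two boolean inputs pass through two gates, and the result routes the system to one of two mutually exclusive outputs.

```python
# Toy gate-level "chooser": two boolean inputs, a couple of gates,
# and a selector that picks one of two outputs. Illustrative only.

def and_gate(a: bool, b: bool) -> bool:
    return a and b

def not_gate(a: bool) -> bool:
    return not a

def toy_agent(nutrient_present: bool, toxin_present: bool) -> str:
    # "decision" signal: approach only if food is sensed and no toxin is sensed
    go_signal = and_gate(nutrient_present, not_gate(toxin_present))
    # selector: route the signal to one of two mutually exclusive outputs
    return "approach" if go_signal else "avoid"

for nutrient in (False, True):
    for toxin in (False, True):
        print(f"nutrient={nutrient}, toxin={toxin} -> {toy_agent(nutrient, toxin)}")
```

Even this caricature needs sensing inputs, a couple of gates and an output selector as separate parts, which fits the intuition that agent-like choice takes several interacting components.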
 
  • #5
Thank you, an excellent and informative response.
 
  • #6
I suppose that one of the problems in biological research is the tendency for cell types to be conserved as complexity has increased in evolution, while the way in which the cells function may not be. When you change the cellular environment that the neurone operates in, the activity within the neurone can mean very different things. You're right, of course, that we have developed a lot of information, though this is usually not in the context of the complex networks that neurones operate in. In fact, recent discoveries in how neurones function have redefined, and continue to redefine, how we understand their functioning. A similar situation developed with our understanding of genetics, with our prior knowledge potentially acting as a roadblock to discovery.
Now, I have been called something of a sceptic in some areas of biological research (in fact I've been called a number of other things as well), but I think the neurosciences do have particular problems, problems that many neuroscientists also recognise. I think trying to generalize findings from simpler organisms can be a particular problem: while we can track neural activation along particular pathways in the more complex animals, these pathways are simply not fixed, and this is particularly true when it comes to information processing.

I suspect we would all agree that making sense of how our brain works isn't easy; the methods we have give us information about specific elements within the system, but when we try to put these elements together we often find the system itself has changed. I agree that an evolutionary understanding is needed, but I don't think ideas like the predictive brain come anywhere near a working model. I think there is a very high level of motivation to develop a reasonable explanatory model, but it's also easy to see how all sorts of other areas of science and philosophy have been drawn into this problem.
 
  • #7
I see analogous deep questions even to these in the general evolutionary inference model I think we need. My thinking about this is that in the general case (not just neuroscience) the very concept of "counting", of what is true and what is false, needs to be constructed evolutionarily, simply because there is no "neutral" measuring stick and no "external observers".

My hunch, after thinking about this for quite some time, is that the natural precursor to both "true/false" and core affects such as valence is whether it has a survival or stabilisation value for the "agent/lifeform" in the actual context/environment. So something can be "effectively true" in the sense that it's preferred in the local environment.

This suggests that the relevant evolutionary perspective must also consider that the MEASURES themselves, as encoded in primordial structures, are thus emergent in an evolutionary context, as no external handles are allowed (which we are tempted to use). This can, I think, actually SOLVE the counting problem (~the renormalization and fine-tuning problems in physics), as the counting context is itself reduced. So the question becomes: how far can a molecule "count" another molecule's states as they interact?

It is tempting to associate some primordial boolean state with a core affect/valence for a life form, such as "is this good or bad for me?", or alternatively true or false. But such a state can probably flip randomly in a noisy environment. So how can we build higher cognition, structure and phenomenology from such a simple boolean state? This is a key part of pondering how to build dimensionality and higher phenomenology from simple starting points. And the more I think about this, the stronger the abstractions of the evolution of lifeforms in general appear. "Regulatory problems" are solved by simple cells in an amazingly clever way, given what they have to work with, while "regulatory problems" in a human body naturally have a more complex phenomenology; but it seems that at any scale, wherever you look, one can see similar "problems".
BillTre said:
WRT the information processing capacities of biological entities, there are several things I find interesting:
  1. how many logic gates can be manifested in a cell's molecules?
  2. what is the effect of having them in so many copies (numbers of molecules) in a cell?
  3. how many logic gates does it take to make something with the agent-like property of being able to independently choose from a list of options? These kinds of relationships can happen at different levels of organization from the molecular level up to the brain.
  4. One of the big differences between large vertebrate nervous systems and those of simple invertebrates is the number of neurons. A neural processing pathway in a simple invertebrate might use a single neuron at each step. In a large vertebrate brain the equivalent step might be processed by a nucleus (a group of similar CNS neurons grouped together) that could contain hundreds or thousands of neurons. This has to have an effect on the processing, but I don't know what.
A general question is: what is the number of distinguishable microstates for a given microstructure? One problem, which leads to divergences in many models, is that as you describe the microstructure there is always a background context, and this embedding influences the answers. The problem is when the background is non-physical. How do real numbers exist in nature? At what point have we lost track of the intrinsic vs extrinsic measures?

So counting is always in the eyes of the beholder, strange as it may seem.

I think the "predictive system" behind the predictive brain ideas are conceptually very similar to how I think of how a particle constantly "predicts it's own state, and it's environment". In both cases their own existence and integrity is at stake. And both cases are a "game of life". Where the evolved surviving "predictive system" will be abundant. So why we have ceratain cells, molecules and life forms in nature are in perfect paralellt to why we have the set of elementary particles and forces we have.

Once we see the similarity, I think additional clues for both sides can come. But we need to release ourselves from purely "reductionist" or bottom-up thinking.

/Fredrik
 
  • #8
Fra said:
It is tempting to associate some primordial boolean state with a core affect/valence for a life form, such as "is this good or bad for me?", or alternatively true or false. But such a state can probably flip randomly in a noisy environment. So how can we build higher cognition, structure and phenomenology from such a simple boolean state? This is a key part of pondering how to build dimensionality and higher phenomenology from simple starting points. And the more I think about this, the stronger the abstractions of the evolution of lifeforms in general appear. "Regulatory problems" are solved by simple cells in an amazingly clever way, given what they have to work with, while "regulatory problems" in a human body naturally have a more complex phenomenology; but it seems that at any scale, wherever you look, one can see similar "problems".

Here is how I think about these things going on in biological entities:
(I am discussing this at the level of the simplest of biological entities, like single cells. Whole organisms and nervous systems can have decision making structures involving many cells, but I am not talking about them.)

I think of most living things as autopoietic chemical systems. Autopoietic chemical systems are a collection of chemicals that are able to make (or obtain) all of their parts (their component chemicals) from environmental resources.

Such systems will only have an extended existence (through time) if they can successfully grow and divide (reproduction based on making excess system parts). This is a "naturally defined" goal of such an entity or system, in that it underlies their continued persistence. These are all emergent properties derived from the formation of a chemical system set apart from its environment. The system is dependent upon those properties if it is to prosper. They are not properties of single molecules.

Success (through growth and division) in these systems will be naturally defined (short term) by their ability to accumulate excess parts and (long term) by their environmental persistence. Their environmental persistence amounts to the effects of natural selection upon them which will select effective systems for the next "generation". Since growth and division are dependent on the accumulation of excess parts, this will be of paramount importance to the system's success (as is naturally defined by their persistence).
None of this control functioning seems materially substantial until selection finds those systems that do these things right. Those systems (and their specific molecular assemblies) will be selected (by surviving). The structure of their molecular components can be built upon later. This will in some way be the physical manifestation of early biological control structures.
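A toy way to see "accumulate excess parts, divide, persist" acting as the whole selection criterion (my own sketch; the uptake rates, thresholds and population cap are arbitrary): each system is just a quantity of parts plus an inherited uptake efficiency, it divides when it has accumulated enough parts, and it is dropped when it runs too low. Nothing else is specified, yet more efficient variants come to dominate.

```python
import random

# Toy autopoietic systems: each is (parts, uptake_efficiency).
# They take up parts from the environment, divide when large enough,
# and fall apart when too depleted. All parameters are arbitrary.
def simulate(generations=100, seed=1):
    rng = random.Random(seed)
    systems = [(10.0, rng.uniform(0.5, 1.5)) for _ in range(20)]
    for _ in range(generations):
        survivors = []
        for parts, efficiency in systems:
            parts += efficiency * rng.uniform(0.0, 2.0)        # uptake from the environment
            parts -= 1.0                                       # maintenance cost
            if parts >= 20.0:                                  # enough excess parts: divide
                child_eff = efficiency + rng.gauss(0.0, 0.05)  # heritable variation
                survivors.append((parts / 2, efficiency))
                survivors.append((parts / 2, child_eff))
            elif parts > 2.0:                                  # persists without dividing
                survivors.append((parts, efficiency))
            # else: too depleted, not carried forward (no persistence)
        systems = survivors[:200]                              # crude resource limit
    return sum(e for _, e in systems) / max(len(systems), 1)

print("mean uptake efficiency after selection:", round(simulate(), 2))
```

The point is only that "success" is never defined here by anything except accumulating parts and sticking around.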

Single molecules are not going to make any decisions that will differ depending on what is optimal for them. These decisions require more complexity of response than a single molecule can produce. They are higher properties of systems and their many interacting components. This won't happen without a system's functional structure guiding the molecules (and system's) behaviors.

The particular molecular structures initially involved in making adaptive decisions will be gradually assembled in ways that will not necessarily follow standard IT approaches. If it works and is easy to assemble from existing components, it will be used.
 
  • #9
BillTre said:
Single molecules are not going to make any decisions that will differ depending on what is optimal for them. These decisions require more complexity of response than a single molecule can produce. They are higher properties of systems and their many interacting components.
This is a key issue. And I superficially agree, but it can be elaborated. People often object to this with "a particle does not 'think' or 'make decisions'". So how does decision making emerge?

I have given that a lot of thought as well, but in the context of emergent physics, and the reasonable resolution I have come up with is to treat all "decision making" as itself emergent from "random actions".

So the ultimate primordial decision is simply a random walk. No higher structure is needed for this. Instead, higher structures (and later more "cognitive" or complex decision making, which actually reflects upon options in terms of internal modelling of the environment) are spontaneously emergent, because they are more fit and more competitive. This is why the unification is important. Unification in biology and physics thus sort of rests on the same principles.

So what is the simplest possible unit you can imagine? A boolean state? Or is there a better answer? Just for the sake of discussion, consider it a boolean state. Then consider what happens when you combine them. Can we construct complexity from a game of interacting boolean agents? Would they spontaneously cooperate (grow)? Why? Why not?
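To make the "game of interacting boolean agents" at least pokeable, here is a deliberately crude sketch (entirely my own toy, not a claim about physics or biology): each agent is one boolean state on a ring, updated by a noisy majority vote over itself and its neighbours. With low noise, neighbouring agents tend to lock into growing aligned clusters out of a random start; with high noise they don't, which is one cartoon of the "would they cooperate?" question.

```python
import random

# Boolean agents on a ring, each copying the majority of its two
# neighbours and itself, with probability `noise` of flipping randomly.
def step(states, noise, rng):
    n = len(states)
    new_states = []
    for i in range(n):
        votes = states[i - 1] + states[i] + states[(i + 1) % n]
        majority = votes >= 2
        flipped = rng.random() < noise
        new_states.append((not majority) if flipped else majority)
    return new_states

def largest_aligned_cluster(states):
    # longest run of identical neighbouring states (ignoring wrap-around)
    best = run = 1
    for a, b in zip(states, states[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

rng = random.Random(0)
states = [rng.random() < 0.5 for _ in range(200)]
for _ in range(200):
    states = step(states, noise=0.02, rng=rng)
print("largest aligned cluster after 200 steps:", largest_aligned_cluster(states))
```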

BillTre said:
This won't happen without a system's functional structure guiding the molecules (and system's) behaviors.

The particular molecular structures initially involved in making adaptive decisions will be gradually assembled in ways that will not necessarily follow standard IT approaches. If it works and is easy to assemble from existing components, it will be used.
Yes, many computer science descriptions have an external "absolute" context, and this is what we need to release ourselves from to get further. And this artificial context leaves imprints on the conclusions from such models.

/Fredrik
 
  • #10
Fra said:
So the ultimate primordial decision is simply a random walk. No higher structure is needed for this.
I am not clear on what you mean here.
A random walk of a behavior seems to me to not involve a decision.
Directing the random walk could involve decision making.
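A tiny sketch of the distinction being drawn here (mine, with made-up numbers): an undirected random walk needs no machinery at all, whereas "directing" it takes exactly one extra ingredient, a rule that reads something from the environment and biases the step probabilities.

```python
import random

# Undirected walk: each step is a coin flip, no machinery needed.
def undirected_walk(steps, rng):
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < 0.5 else -1
    return x

# Directed walk: a one-line "decision rule" biases steps toward a target,
# e.g. a gradient read from the environment.
def directed_walk(steps, rng, target=50):
    x = 0
    for _ in range(steps):
        p_right = 0.6 if x < target else 0.4   # the entire "decision maker"
        x += 1 if rng.random() < p_right else -1
    return x

rng = random.Random(0)
print("undirected:", undirected_walk(1000, rng))
print("directed:  ", directed_walk(1000, rng))
```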

Fra said:
So what is the simplest possible unit you can imagine? A boolean state? Or is there a better answer? Just for the sake of discussion, consider it a boolean state.
WRT living things, I think of logic gates that are manifested in the molecules of the chemical system.
To generate an agent, I would expect a combination of more logic gates interacting in some manner useful to the chemical system.
They would have to have one or more inputs, outputs (more than one), and a decision-maker part that chooses (or produces) different outputs under different conditions. It seems like it would take several parts to do this.

Fra said:
Can we construct complexity from a game of interacting boolean agents?
Yes but nature would select from among naturally occurring variants based on those making successful chemical systems.
Fra said:
Would they spontaneously cooperate (grow)? Why? Why not?
In the sense that randomly generated system variants could be considered spontaneous: some of those variants, those better able to deal with the variability of their environment, would grow.
 
  • #11
BillTre said:
I am not clear on what you mean here.
A random walk of a behavior seems to me to not involve a decision.
Exactly! Which is why it is a good starting point in that it requires no further explanation.
BillTre said:
Directing the random walk could involve decision making.
Yes, and one idea is that this guidance or bias that "looks like decision making" is spontaneous self organisation, driven by evolution and selection.

The challenge is to explain the correspondence to "replication" in primordial systems that obviously can't replicate the way we are used to in higher biology or RNA replication. How do "evolutionary mechanisms" work in this primordial domain? Where systems are so simple that there is seemingly only trivial information carried? (i.e. no RNA/DNA, just a hot plasma soup, or maybe more extreme, the first split seconds of the Big Bang)

https://arxiv.org/abs/1205.3707 contains grains of ideas. I think what replaces replication may be for an agent to induce a bias in the environment, and this bias makes new similar agents more probable. So the agent's contribution is leaving a permanent bias in the environment. Baby steps indeed.

A human analogy of this is: suppose you don't get your own kids; you can still contribute to making the world a better place for future generations! That is just as important.

BillTre said:
WRT living things, I think of logic gates that are manifested in the molecules of the chemical system.
To generate an agent, I would expect a combination of more logic gates interacting in some manner useful to the chemical system.
They would have to have one or more inputs, outputs (more than one), and a decision-maker part that chooses (or produces) different outputs under different conditions. It seems like it would take several parts to do this.

Yes but nature would select from among naturally occurring variants based on those making successful chemical systems.

In the sense that randomly generated system variants could be considered spontaneous: some of those variants, those better able to deal with the variability of their environment, would grow.
This is indeed complex, and I don't think there are any final answers yet, but more subquestions appear, such as: how does complexity (say more than one input, and dimensionality) spontaneously appear? But it was interesting to reflect upon this against your perspective, which was all I hoped for in this thread. Thanks for responding!

Discussing in more detail models of potential solutions to the subquestions posed here would quickly get us into explicit speculation, so I stop there.

/Fredrik
 
  • #12
Fra said:
The challenge is to explain the correspondence to "replication" in primordial systems that obviously can't replicate the way we are used to in higher biology or RNA replication. How do "evolutionary mechanisms" work in this primordial domain?
I don't think this is so difficult.
A simple bounded system need merely divide itself into two. This would be like a cell dividing.
Each daughter system will have a set of components derived from the mother system.
If the organization in the daughters recapitulates the functional organization of the mother system, they will have somewhat similar functioning. This would be expected if the parts self-assemble into their former relationships and if there are many copies of each component part in the mother system (which would be expected from how cells are made).
Evolution would then work on each system and its variants that arise as those that operate more successfully stick around longer and generate more daughters.

Fra said:
Where systems are so simple that there is seemingly only trivial information carried? (i.e. no RNA/DNA, just a hot plasma soup, or maybe more extreme, the first split seconds of the Big Bang)
Some think that anything not genetic is trivial and not important.
I disagree with this.
The envelopment of a set of molecules by a lipid membrane forms something with the potential to be an autopoietic chemical system. If that entity should divide, its basic enveloped organization is inherited by its daughters, just like in a dividing cell. The envelopment is a very simple (but important, in that it defines the system) trait that is inherited in a binary manner (enveloped or not).
Another simple but important inherited trait is having a functioning chemical reaction network (a bunch of small chemicals that can react with each other). Some of these may be usefully productive for the system and underlie metabolism. Nucleic acids (genetics) alone can not make metabolism.

Fra said:
how does complexity (say more than one input, and dimensionality) spontaneously appear
Adding molecules to a system will make it more complex in a simple way. More intricate increases in complexity can be generated by a series of many such additions.

To me this stuff is all rather mechanical (in a biological way) and does not require pre-existing agents or biases.
 
  • #13
I'm about to attempt to put some rather ill-formed ideas into words that I hope will make some sort of sense. I'm aware we started with the idea of the “predictive brain” and its potential functioning in biological systems, along with the problems of trying to make links between the variety of complex nervous organisations. I tend to think of these things as reflecting broad motivational states, which can range from simple approach-avoidance behaviours in single-celled animals to the complexities of virtually all of the information processes that guide human behaviour. We can reasonably suggest that in humans virtually all cognitive functions have a predictive element and are motivated, and we need to consider the very wide range of mental functions that drive and facilitate human learning and behaviour. This becomes incredibly complex, as every point in the explanation may have very different functional significance, even when the mental representations used are exactly the same. In fact, for many of these, including our perceptions of the environment, we may have very limited access to the mental processes being used.

We are, however, attempting to make some biological sense of what might be going on by using evolutionary principles and the principle that specific effects in the brain must reflect an identifiable biochemical cause under genetic control. At this point it all just seems to fall apart for me; there is a whole raft of unanswered issues, for example:

We have discussed the neurone as the basic unit of processing, but single-celled animals like amoebae behave in ways that suggest some level of environmental awareness that guides their behavioural choices. The behaviour is also modified by some stimulus qualities like intensity, and even in humans we know that astrocytes (non-neurones) can store memories and influence memory storage.

We continue to identify significant differences in the way neurones in complex organisms function, which make investigations into the functioning of simple organisms as a model for understanding highly suspect. The persistence of a biological structure in our evolutionary history is not really a good predictor that it functions in the same way.

Perhaps one of the problems is the fact that accumulation of mutations is in fact a feature of life; over time this means that there can be considerable variation in the genes of a population, and to compensate for some of the problems this might cause, we often see multiple genetic pathways to achieve the same biological functions. We have to be careful about how we understand natural selection, which occurs because of the increased fitness of an organism in its current environment; it can't select for the next generation, and this is one example where there is no prediction. I'm also unclear about what “accumulation of excess parts” means; it implies increased complexity, but evolution is not directional. Natural selection is at best chaotic: what is selected is a genetic trait, from the large number of pre-existing accumulated mutations, that gives some sort of advantage in reproduction in the current environment. This must then be heritable, and the advantage needs to be maintained in offspring; for a change to be seen in the population at large really requires a huge amount of luck. Even then we need to consider that 99% of all new species that successfully evolved are now extinct; biology is a messy business. We then have to consider that in the human genome only around 20,000 genes are responsible for the variety of protein products that we use to control our physiology, which is less than 2% of our total genome, and genetics is another area of science that's changing.

I can't help thinking that at our current state of knowledge our attempts to pin down the cause-and-effect relationships are futile. I suspect that no one has told old Mother Nature that our understanding of how life works should, like other sciences, be predictable and consistent. I suspect, as a mother, she is now serving a long prison sentence for her persistent abuse of her children, particularly those who have gone into science.
 
  • #14
Laroxe said:
I'm also unclear about what “accumulation of excess parts” means; it implies increased complexity, but evolution is not directional.
Accumulation of excess parts leads to growth in a bounded chemical system. More parts means it takes up more space, thus getting larger.
If it is going to go through a series of divisions, it is going to have to add parts or it will run out of parts to form the increasing number of daughter systems. The parts in the systems will get diluted among the many daughter systems until there are not enough in a given daughter to sustain activity.
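To put arbitrary numbers on the dilution: a system that starts with 1,000 copies of some part and goes through 10 divisions without making more ends up with 1000/2^10, i.e. roughly one copy per daughter lineage, and after 20 divisions essentially none; a system that at least doubles its parts before each division keeps about 1,000 copies per daughter indefinitely.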

Adding parts to a chemical system will increase its complexity at that time. It will go down after a division.
Accumulating more parts over many divisions could lead to an evolutionary increase in complexity. However, this does not have to happen and there are well known exceptions, like simplified parasites.

-------------------------------------------------
Natural selection is not the only process driving evolutionary change.
Random genetic drift can have big effects on evolving genomes. Mutations can be "silent", like changes in nucleotide sequences that produce no change in the protein they encode, due to the redundancy of the genetic code.
Some genetic differences have such small effects on fitness that no selective pressure is felt on the genome. I think this is kind of what you are saying.
Here are links to two references on how natural selection can get flummoxed by the complexities of biology:
Evolutionary layering and the limits to cellular perfection
The Nature of Limits to Natural Selection
 
  • #15
As my own interest in this does not START at the level of chemistry or primordial "life" but at the Big Bang, my general framework for understanding this is that the emergence (of elementary particles, then chemistry, then life) is like a kind of evolutionary reinforcement learning, which hopefully can be understood prior to biological systems...

I think we need a mix of "regular reinforcement learning" at the level of the individual system, and an "evolutionary reinforcement" at the population level. But a "population" for me, in abstract terms, can be a population of cells or rats, or it can be a "population" of electrons or quarks. I expect a common explanatory model that scales all the way.

So I don't think we should get stuck on concepts that are only well defined in terms of higher structure; we need to find the generalization that works for the smaller parts.

/Fredrik
 
  • #16
Laroxe said:
what is selected is a genetic trait, from the large number of pre-existing accumulated mutations, that gives some sort of advantage in reproduction in the current environment. This must then be heritable, and the advantage needs to be maintained in offspring; for a change to be seen in the population at large really requires a huge amount of luck.
I think for simpler systems evolution needs to be understood in a different way than the explicit reproduction in cell biology. Instead of thinking about "generations", one can maybe simply think about how the environment gets more friendly for a certain system. This is a form of "evolution" without explicit reproduction such as division. Maybe we can see this as a form of non-evolutionary reinforcement learning, where the reinforcement involves a "cooperation" with the environment.

For example, the probability for a certain "system" to appear spontaneously from the preexisting parts can be modified by changing the environment. Then "reproduction" might be indirect, by increasing the probability of spontaneous formation. This seems to me the only "type" of reproduction mechanism that is available at the Big Bang as well.
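Purely as a cartoon of this speculation (my own toy, not something from the arXiv paper linked earlier; every rate here is invented): suppose each existing structure slightly raises the probability that another one forms spontaneously from the surrounding parts, because it leaves a "bias" in the shared environment. The count can then grow without anything ever copying itself.

```python
import random

# Cartoon of "reproduction without division": each existing structure
# raises the probability that a similar one forms spontaneously nearby.
def simulate(steps=200, base_rate=0.01, bias_per_structure=0.02,
             decay=0.005, seed=3):
    rng = random.Random(seed)
    count = 1
    for _ in range(steps):
        formation_prob = min(1.0, base_rate + bias_per_structure * count)
        if rng.random() < formation_prob:           # a new structure forms spontaneously
            count += 1
        if rng.random() < min(1.0, decay * count):  # occasionally one falls apart
            count = max(count - 1, 0)
    return count

print("structures after 200 steps:", simulate())
```

Whether anything like this survives contact with real physics is of course exactly the open question.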

At each time of the evolution, the environment also changes, of course; thus the most fit systems that dominate at any time will vary as well. As we increase the temperature and get closer to the Big Bang, there are only a few fit systems that can preserve their integrity in the chaotic, aggressive environment.

I think one very fundamental, physical and indirect way of "reproduction" is the generation of mass: what drives particles to acquire mass? Can we understand this in an evolutionary way? It seems gaining mass makes you more resistant against disturbances, so acquiring a certain mass may be required for stability. This is not too different from "taking control of the environment", as in social contexts: if you get the environment to be "on your side" and support you, you have better luck. So making friends, not enemies, is a survival strategy. I believe this is easily abstracted to apply to very simple information processing systems.

There is this talk about the "selfish gene", but I think any part of the universe likely has some spontaneous "selfish" self-preservation. I think even elementary particles must be selfish, or none of this would make sense.

Edit: The "generalisation" of "predictive brain hypothesis, would this be the hypothesis that every part of the system, for "self preservation" adopts a kind state is is maximally stable as per some primordal "environment predictive" way. Maybe the makeup of a cell, with its internal structure and cellmembrane ina way is a manifestation of a "prediction" of a given environment. After all, cell structures seems "cleverly" designed to be able to survive and even divide. Can we call this a primordal form of "predictive cell"? Seems reasonable to me; and we then have a scale invariant logic.

/Fredrik
 
